Autoregressive Speech-To-Text Alignment is a Critical Component of Neural Text-To-Speech (TTS) Models
DOI: https://doi.org/10.32628/IJSRSET229643

Keywords: Neural Text-To-Speech, RAD-TTS, TTS models, Artificial intelligence (AI), RecSLAM

Abstract
Autoregressive speech-to-text alignment is a critical component of neural text-to-speech (TTS) models. Commonly, autoregressive TTS models rely on an attention mechanism to learn these alignments online, but such alignments are often brittle and fail to generalize to long utterances or out-of-domain text, leading to missing or repeated words. Non-autoregressive end-to-end TTS models usually rely on durations extracted from external sources. Our work exploits the alignment mechanism proposed in RAD-TTS, which can be applied to a variety of neural TTS architectures. In our experiments, the proposed alignment learning framework improves all tested TTS architectures, both autoregressive (Flowtron, Tacotron 2) and non-autoregressive (FastPitch, FastSpeech 2, RAD-TTS). Specifically, it improves the alignment convergence speed of existing attention-based mechanisms, simplifies the training pipeline, and makes models more robust to errors on long utterances. Most importantly, it also improves perceived speech synthesis quality under expert human evaluation.
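The core of the RAD-TTS alignment mechanism described above can be sketched as a soft text-to-mel alignment combined with a CTC-style forward-sum loss over monotonic alignment paths. The sketch below is a minimal numpy illustration, assuming L2 distances between generic text-token and mel-frame encodings; the function names, shapes, and distance choice are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def soft_alignment(text_enc, mel_enc):
    """Log-probabilities aligning each mel frame to each text token.

    text_enc: (N, d) text-token encodings; mel_enc: (T, d) mel-frame encodings.
    Uses a softmax over negative pairwise L2 distances (an illustrative choice).
    """
    dists = ((mel_enc[:, None, :] - text_enc[None, :, :]) ** 2).sum(-1)  # (T, N)
    # Numerically stable log-softmax over the text-token axis.
    m = (-dists).max(axis=1, keepdims=True)
    log_p = -dists - (m + np.log(np.exp(-dists - m).sum(axis=1, keepdims=True)))
    return log_p  # (T, N): log P(token n | frame t)

def forward_sum_loss(log_p):
    """CTC-style forward algorithm restricted to monotonic, no-skip paths.

    Each frame attends either to the same token as the previous frame or to
    the next token; the loss is the negative log-likelihood of all such paths.
    """
    T, N = log_p.shape
    alpha = np.full((T, N), -np.inf)
    alpha[0, 0] = log_p[0, 0]  # first frame must align to the first token
    for t in range(1, T):
        for n in range(N):
            stay = alpha[t - 1, n]                            # repeat token
            move = alpha[t - 1, n - 1] if n > 0 else -np.inf  # advance token
            alpha[t, n] = np.logaddexp(stay, move) + log_p[t, n]
    return -alpha[-1, -1]  # last frame must align to the last token
```

Minimizing this loss pushes the learned soft alignment toward a valid monotonic path through the utterance, which is what makes the same objective reusable across autoregressive and non-autoregressive architectures.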
References
- Nvidia, https://www.nvidia.com/en-in/data-center/solutions/accelerated-computing/
- Simulations, learning and the metaverse: changing cultures in legal education Paul Maharg (Glasgow Graduate School of Law) Martin Owen, (Futurelab)
- Edge Robotics: Edge-Computing-Accelerated Multi-Robot Simultaneous Localization and Mapping, Liekang Zeng, Xu Chen, Ke Luo, Zhi Zhou, Shuai Yu
- Why High-Performance Modelling and Simulation for Big Data Applications Matters, Clemens Grelck, Ewa Niewiadomska-Szynkiewicz, Marco Aldinucci, Andrea Bracciali & Elisabeth Larsson
- Y. Wang, R. J. Skerry-Ryan, D. Stanton, Y. Wu, R. J. Weiss, N. Jaitly, Z. Yang, Y. Xiao, Z. Chen, S. Bengio, Q. V. Le, Y. Agiomyrgiannakis, R. Clark, and R. A. Saurous, “Tacotron: A fully end-to-end text-to-speech synthesis model,” CoRR, vol. abs/1703.10135, 2017. [Online]. Available: http://arxiv.org/abs/1703.10135
- J. Shen, R. Pang, R. J. Weiss, M. Schuster, N. Jaitly, Z. Yang, Z. Chen, Y. Zhang, Y. Wang, R. J. Skerry-Ryan, R. A. Saurous, Y. Agiomyrgiannakis, and Y. Wu, “Natural TTS synthesis by conditioning wavenet on mel spectrogram predictions,” CoRR, vol. abs/1712.05884, 2017. [Online]. Available: http://arxiv.org/abs/1712.05884
- R. Valle, K. Shih, R. Prenger, and B. Catanzaro, “Flowtron: an autoregressive flow-based generative network for text-to-speech synthesis,” 2020.
- Y. Ren, C. Hu, T. Qin, S. Zhao, Z. Zhao, and T.-Y. Liu, “Fastspeech 2: Fast and high-quality end-to-end text-to-speech,” arXiv preprint arXiv:2006.04558, 2020.
- Y. Ren, Y. Ruan, X. Tan, T. Qin, S. Zhao, Z. Zhao, and T.-Y. Liu, “Fastspeech: Fast, robust and controllable text to speech,” in Advances in Neural Information Processing Systems, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, Eds., vol. 32. Curran Associates, Inc., 2019, pp. 3171–3180.
License
Copyright (c) IJSRSET

This work is licensed under a Creative Commons Attribution 4.0 International License.