Sony AI Advances Speech Synthesis: Unleashes Power of Neural Networks with Revolutionary BigVSAN Vocoder
Speech synthesis technology has come a long way, and neural networks have served as the catalyst for much of that progress. Building on elementary text-to-speech systems, researchers are progressively pushing the limits of what artificial speech can accomplish, in large part through deep generative models such as autoregressive, generative adversarial network (GAN)-based, flow-based, and diffusion-based models.
Getting into the details, most speech synthesis systems follow a traditional two-stage approach. First, an acoustic model predicts an intermediate representation, typically a mel-spectrogram, from the input text; then a second component, the vocoder, converts that representation into an audio waveform.
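To make this two-stage structure concrete, here is a minimal sketch in Python. The `neural_vocoder` callable is a hypothetical stand-in for any trained vocoder, and the spectrogram settings (22,050 Hz sample rate, 80 mel bins) are illustrative assumptions rather than the configuration used by BigVGAN or BigVSAN.

```python
# Minimal two-stage sketch: mel-spectrogram in, waveform out.
import librosa
import numpy as np

def synthesize(reference_wav_path, neural_vocoder):
    # Stage 1 (normally an acoustic model predicts this from text): obtain a
    # mel-spectrogram. Here it is derived from reference audio for brevity.
    audio, sr = librosa.load(reference_wav_path, sr=22050)
    mel = librosa.feature.melspectrogram(
        y=audio, sr=sr, n_fft=1024, hop_length=256, n_mels=80
    )
    log_mel = np.log(np.clip(mel, a_min=1e-5, a_max=None))

    # Stage 2: the vocoder converts the mel-spectrogram into an audio waveform.
    waveform = neural_vocoder(log_mel)
    return waveform, sr
```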
The vocoder thus holds the key to the final output of the speech synthesis process. Because it converts intermediate acoustic features into an audible waveform, its behavior directly shapes the quality of the synthesized speech, which is why research and development efforts continue to push for better vocoder output quality and efficiency.
In recent years, GANs have gained traction in the realm of vocoders. Why? Their ability to generate high-quality waveforms quickly outshines many other models, attracting a great deal of interest from innovators in the field. However, a major challenge with GAN-based vocoders is that the discriminator may fail to learn an optimal projection of the data into a feature space where real and generated samples can be told apart, and this ultimately limits the audio quality the generator can achieve.
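To illustrate what "feature space projection" refers to, consider the following toy waveform discriminator, which is not the BigVGAN or BigVSAN architecture but shows the general shape: a feature extractor maps audio into a feature space, and a final linear layer projects those features onto a direction that produces the real/fake score.

```python
# Toy discriminator sketch (PyTorch), for illustration only.
import torch.nn as nn

class ToyWaveDiscriminator(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # Feature extractor h(x): strided 1-D convolutions over raw audio.
        self.features = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=15, stride=4, padding=7),
            nn.LeakyReLU(0.2),
            nn.Conv1d(channels, channels, kernel_size=15, stride=4, padding=7),
            nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool1d(1),
        )
        # Final projection: maps the feature vector to a scalar score. How well
        # this projection separates real from generated audio is exactly the
        # issue discussed above.
        self.projection = nn.Linear(channels, 1)

    def forward(self, waveform):                    # waveform: (batch, 1, samples)
        h = self.features(waveform).squeeze(-1)     # (batch, channels)
        return self.projection(h)                   # (batch, 1) real/fake score
```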
To overcome this hurdle, a research team from Sony AI, Tokyo, and Sony Group Corporation, Tokyo, has explored an improved GAN training framework, aptly named the Slicing Adversarial Network (SAN), which was initially proposed for image generation tasks.
They found that SAN can be adapted to improve GAN-based vocoders. Starting from the least-squares GAN objective, they enabled the discriminator to find a more suitable feature space projection, improving the audio quality produced by GAN-based vocoders. The outcome: an upgraded model named BigVSAN.
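For reference, the least-squares GAN objective mentioned above can be written as follows. The names `disc`, `gen`, `mel`, and `real_audio` are placeholder assumptions, not identifiers from any released codebase; the discriminator is pushed to score real audio near 1 and generated audio near 0, while the generator tries to drive the scores of its outputs toward 1.

```python
# Standard least-squares GAN losses (Mao et al., 2017), sketched in PyTorch.
def lsgan_discriminator_loss(disc, gen, mel, real_audio):
    fake_audio = gen(mel).detach()                  # do not update the generator here
    real_score = disc(real_audio)
    fake_score = disc(fake_audio)
    # Real scores are pulled toward 1, fake scores toward 0.
    return 0.5 * ((real_score - 1.0) ** 2).mean() + 0.5 * (fake_score ** 2).mean()

def lsgan_generator_loss(disc, gen, mel):
    fake_score = disc(gen(mel))
    # Generated audio is pushed to score like real audio (target 1).
    return 0.5 * ((fake_score - 1.0) ** 2).mean()
```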
BigVSAN, trained with the SAN framework, demonstrates enhanced performance compared to its predecessor, BigVGAN. A key contribution of the work is a “soft monotonization” scheme, a technique devised specifically to convert least-squares GANs into SANs.
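For intuition only, here is a heavily simplified SAN-style sketch. It shows the general "slicing" idea: the discriminator's last linear layer is treated as a unit-length direction with its own objective, while the rest of the network keeps the least-squares objective. The names (`features`, `last_layer_weight`) and the simple Wasserstein-style direction term are assumptions made for illustration; the actual soft monotonization scheme is specified in the BigVSAN paper and differs in detail.

```python
# SAN-flavored discriminator loss sketch (PyTorch), illustrative only.
import torch.nn.functional as F

def san_style_discriminator_loss(features, last_layer_weight, real_audio, fake_audio):
    # Treat the last-layer weight as a unit-length direction w (the "slice").
    w = F.normalize(last_layer_weight, dim=-1)       # (channels,)

    h_real = features(real_audio)                    # (batch, channels)
    h_fake = features(fake_audio)                    # (batch, channels)

    # Feature-extractor loss: least-squares objective with the direction
    # held fixed (detached), so only the features h are updated by it.
    score_real = h_real @ w.detach()
    score_fake = h_fake @ w.detach()
    loss_h = 0.5 * ((score_real - 1.0) ** 2).mean() + 0.5 * (score_fake ** 2).mean()

    # Direction loss: a simple monotone surrogate that pushes w to separate
    # real from generated features (features detached, so only w is updated).
    loss_w = -(h_real.detach() @ w).mean() + (h_fake.detach() @ w).mean()

    return loss_h + loss_w
```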
Neural networks and models like BigVSAN are critical agents in the drive for innovation in speech synthesis, unlocking the potential of vocoders to attain ever-higher audio quality. From daily interactions with smartphone assistants to language learning, these advancements signal an exciting transformation in the realm of synthesized speech.
Overall, the development of Sony AI’s BigVSAN is a promising stride that broadens the potential applications and benefits of speech synthesis, breathing life into what was once thought to be a purely mechanical domain.