Less Is More: Accurate Speech Recognition & Translation Without Web-scale Data

Puvvada Krishna C., Żelasko Piotr, Huang He, Hrinchuk Oleksii, Koluguri Nithin Rao, Dhawan Kunal, Majumdar Somshubra, Rastorgueva Elena, Chen Zhehuai, Lavrukhin Vitaly, Balam Jagadeesh, Ginsburg Boris. arXiv 2024

Recent advances in speech recognition and translation rely on hundreds of thousands of hours of Internet speech data. We argue that state-of-the-art accuracy can be reached without relying on web-scale data. Canary, a multilingual ASR and speech translation model, outperforms the current state-of-the-art models Whisper, OWSM, and Seamless-M4T on English, French, Spanish, and German, while being trained on an order of magnitude less data than these models. Three key factors enable such a data-efficient model: (1) a FastConformer-based attention encoder-decoder architecture, (2) training on synthetic data generated with machine translation, and (3) advanced training techniques: data balancing, dynamic data blending, dynamic bucketing, and noise-robust fine-tuning. The model, weights, and training code will be open-sourced.
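
The abstract names dynamic bucketing among the training techniques but does not describe it here. As an illustration of the general idea, the sketch below groups utterances into duration buckets and emits batches under a fixed total-audio budget, so short clips batch densely and long clips batch sparsely. This is a minimal, hypothetical sketch, not the paper's actual implementation; all names (`Utterance`, `dynamic_bucketing`, `bucket_edges`, `max_batch_seconds`) are illustrative assumptions.

```python
import random
from dataclasses import dataclass
from typing import Iterator


@dataclass
class Utterance:
    audio_id: str
    duration: float  # seconds


def dynamic_bucketing(
    utterances: list[Utterance],
    bucket_edges: list[float],         # e.g. [5.0, 10.0, 20.0] -> 4 buckets
    max_batch_seconds: float = 120.0,  # total audio budget per batch
    seed: int = 0,
) -> Iterator[list[Utterance]]:
    """Group utterances by duration, then yield batches whose total
    duration fits a fixed budget, so batch size shrinks as clips grow."""
    # Assign each utterance to the bucket whose duration range contains it.
    buckets: list[list[Utterance]] = [[] for _ in range(len(bucket_edges) + 1)]
    for utt in utterances:
        idx = sum(utt.duration > edge for edge in bucket_edges)
        buckets[idx].append(utt)

    rng = random.Random(seed)
    for bucket in buckets:
        rng.shuffle(bucket)  # shuffle within a bucket, not across buckets
        batch, total = [], 0.0
        for utt in bucket:
            # Emit the batch once adding this clip would exceed the budget.
            if batch and total + utt.duration > max_batch_seconds:
                yield batch
                batch, total = [], 0.0
            batch.append(utt)
            total += utt.duration
        if batch:
            yield batch
```

Because every batch is filled to roughly the same number of audio seconds rather than a fixed number of examples, padding waste and per-step memory stay nearly constant across short and long utterances.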