Effect And Analysis Of Large-scale Language Model Rescoring On Competitive ASR Systems

Udagawa Takuma, Suzuki Masayuki, Kurata Gakuto, Itoh Nobuyasu, Saon George. arXiv 2022

[Paper]    
BERT, GPT, Model Architecture, Pretraining Methods, Training Techniques

Large-scale language models (LLMs) such as GPT-2, BERT and RoBERTa have been successfully applied to ASR N-best rescoring. However, whether or how they can benefit competitive, near state-of-the-art ASR systems remains unexplored. In this study, we incorporate LLM rescoring into one of the most competitive ASR baselines: the Conformer-Transducer model. We demonstrate that consistent improvement is achieved by the LLM’s bidirectionality, pretraining, in-domain finetuning and context augmentation. Furthermore, our lexical analysis sheds light on how each of these components may be contributing to the ASR performance.
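The rescoring setup the abstract refers to is the standard N-best pipeline: the ASR system emits N candidate transcripts with scores, an external LM scores each candidate, and an interpolated score selects the final output. Below is a minimal sketch of that pipeline using GPT-2 through Hugging Face `transformers`; the hypotheses, acoustic scores, and interpolation weight are hypothetical placeholders, not values from the paper, and the paper's bidirectional (BERT-style) scoring would substitute a pseudo-log-likelihood for `lm_log_prob`.

```python
# Minimal sketch of N-best rescoring with a pretrained causal LM (GPT-2 via
# Hugging Face transformers). Hypotheses, acoustic scores, and lm_weight are
# illustrative placeholders, not values from the paper.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def lm_log_prob(text: str) -> float:
    """Total log-probability of `text` under GPT-2."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean per-token cross-entropy; scale by the number of
    # predicted tokens to recover the total negative log-likelihood.
    return -out.loss.item() * (ids.size(1) - 1)

# Hypothetical N-best list: (hypothesis text, first-pass ASR score).
nbest = [
    ("the cat sat on the mat", -12.3),
    ("the cat sad on the mat", -12.1),
    ("a cat sat on the mat",   -12.8),
]

lm_weight = 0.5  # assumed interpolation weight; tuned on held-out data in practice
rescored = [(hyp, asr + lm_weight * lm_log_prob(hyp)) for hyp, asr in nbest]
best_hyp, best_score = max(rescored, key=lambda x: x[1])
print(best_hyp, best_score)
```

In practice the LM weight is tuned on a development set, and the paper's in-domain finetuning and context augmentation would change how the LM score is computed, not the interpolation step itself.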
