Enhancing Conversational Quality In Language Learning Chatbots: An Evaluation Of GPT4 For ASR Error Correction

Long Mai, Julie Carson-Berndsen. arXiv 2023

[Paper]    
Applications, GPT, Model Architecture, Reinforcement Learning, Training Techniques

The integration of natural language processing (NLP) technologies into educational applications has shown promising results, particularly in the language learning domain. Recently, many spoken open-domain chatbots have been used as speaking partners, helping language learners improve their language skills. However, one of the significant challenges is the high word-error-rate (WER) when recognizing non-native/non-fluent speech, which interrupts conversation flow and leads to disappointment for learners. This paper explores the use of GPT4 for ASR error correction in conversational settings. In addition to WER, we propose to use semantic textual similarity (STS) and next response sensibility (NRS) metrics to evaluate the impact of error correction models on the quality of the conversation. We find that transcriptions corrected by GPT4 lead to higher conversation quality, despite an increase in WER. GPT4 also outperforms standard error correction methods without the need for in-domain training data.
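As a minimal, illustrative sketch of the evaluation idea described above (not the authors' code): prompt an LLM to correct an ASR transcript, then score the output with WER and semantic textual similarity. The prompt wording, model identifier, embedding model, and example sentences below are assumptions for illustration; the NRS metric is not shown.

```python
# Hypothetical sketch: LLM-based ASR error correction scored with WER and STS.
from jiwer import wer
from openai import OpenAI
from sentence_transformers import SentenceTransformer, util

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
sts_model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def correct_transcript(asr_hypothesis: str) -> str:
    """Ask the LLM to repair likely ASR errors without changing the meaning."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name for illustration
        messages=[
            {"role": "system",
             "content": "Correct speech-recognition errors in the user's sentence. "
                        "Return only the corrected sentence."},
            {"role": "user", "content": asr_hypothesis},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

def evaluate(reference: str, hypothesis: str) -> dict:
    """Word error rate plus cosine similarity of sentence embeddings (STS)."""
    emb_ref, emb_hyp = sts_model.encode([reference, hypothesis], convert_to_tensor=True)
    return {
        "wer": wer(reference, hypothesis),
        "sts": util.cos_sim(emb_ref, emb_hyp).item(),
    }

if __name__ == "__main__":
    # Made-up example of a learner utterance and a plausible ASR hypothesis.
    reference = "I would like to practice ordering food at a restaurant"
    asr_output = "i would like to practise ordering food at a rest or rant"
    corrected = correct_transcript(asr_output)
    print("before correction:", evaluate(reference, asr_output))
    print("after correction: ", evaluate(reference, corrected))
```

A setup like this makes the paper's headline observation concrete: an LLM rewrite can raise semantic similarity to the intended utterance (and keep the conversation sensible) even when its surface edits increase WER relative to the reference transcript.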
