Knowledge Distillation For Improved Accuracy In Spoken Question Answering

Chenyu You, Nuo Chen, Yuexian Zou. arXiv 2020 – 19 citations

[Paper]    
Efficiency and Optimization, Distillation, Tools, Training Techniques

Spoken question answering (SQA) is a challenging task that requires the machine to fully understand complex spoken documents. Automatic speech recognition (ASR) plays a significant role in the development of QA systems. However, recent work shows that ASR systems generate highly noisy transcripts, which critically limit machine comprehension on the SQA task. To address this issue, we present a novel distillation framework. Specifically, we devise a training strategy that performs knowledge distillation (KD) from spoken documents and their written counterparts. Our work takes a step towards distilling knowledge from the language model as a supervision signal, leading to better student accuracy by reducing the misalignment between automatic and manual transcriptions. Experiments demonstrate that our approach outperforms several state-of-the-art language models on the Spoken-SQuAD dataset.
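
The abstract describes distilling from a teacher that reads the clean manual transcript into a student that reads the noisy ASR transcript. Below is a minimal sketch of a standard soft-label distillation loss of that general kind; the temperature, loss weighting, and answer-span-start formulation are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, gold_labels,
                      temperature=2.0, alpha=0.5):
    """Blend KL divergence against the teacher's softened distribution
    with standard cross-entropy against the gold answer positions."""
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, gold_labels)
    return alpha * kd + (1.0 - alpha) * ce

# Dummy example: in practice the teacher would score the manual transcript
# while the student scores the ASR transcript of the same spoken document.
batch, seq_len = 4, 128
teacher_logits = torch.randn(batch, seq_len)                       # teacher span-start logits
student_logits = torch.randn(batch, seq_len, requires_grad=True)   # student span-start logits
gold_starts = torch.randint(0, seq_len, (batch,))                  # gold answer start positions
loss = distillation_loss(student_logits, teacher_logits, gold_starts)
loss.backward()
```

The key design choice is that the supervision signal comes from a model reading the clean text, so the student is pushed toward predictions that are robust to ASR errors rather than fitting the noisy transcript alone.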

Similar Work