Leveraging Linguistic Coordination In Reranking N-best Candidates For End-to-end Response Selection Using BERT

Mingzhi Yu (University of Pittsburgh) and Diane Litman (University of Pittsburgh). arXiv 2021

Retrieval-based dialogue systems select the best response from a set of candidates. Although many state-of-the-art models show promising performance on dialogue response selection tasks, a substantial gap remains between R@1 and R@10, i.e., recall when the correct response must be ranked first versus anywhere in the top ten. To address this, we propose to leverage linguistic coordination (the phenomenon that individuals tend to develop similar linguistic behaviors in conversation) to rerank the N-best candidates produced by BERT, a state-of-the-art pre-trained language model. Our results show an improvement in R@1 over BERT baselines, demonstrating the utility of repairing machine-generated outputs by leveraging a linguistic theory.
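The abstract describes the reranking step only at a high level. Below is a minimal sketch of how such a reranker might look, assuming BERT's N-best candidates and their scores are already available. The marker categories, the `coordination_score` function, and the interpolation weight `lam` are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical coordination-based reranking sketch. The marker inventory,
# scoring function, and interpolation weight are assumptions for illustration.

# A few function-word categories often used to measure linguistic
# coordination; the exact inventory here is an assumption.
MARKER_CATEGORIES = {
    "article": {"a", "an", "the"},
    "conjunction": {"and", "but", "or", "so", "because"},
    "pronoun": {"i", "you", "we", "they", "he", "she", "it"},
    "preposition": {"in", "on", "at", "for", "with", "to", "of"},
}


def categories_used(text: str) -> set[str]:
    """Return the marker categories whose words appear in `text`
    (naive whitespace tokenization, for illustration only)."""
    tokens = set(text.lower().split())
    return {cat for cat, words in MARKER_CATEGORIES.items() if tokens & words}


def coordination_score(context: str, candidate: str) -> float:
    """Fraction of the context's marker categories echoed by the candidate."""
    ctx_cats = categories_used(context)
    if not ctx_cats:
        return 0.0
    return len(ctx_cats & categories_used(candidate)) / len(ctx_cats)


def rerank(context: str, nbest: list[tuple[str, float]], lam: float = 0.7):
    """Rerank BERT's N-best (candidate, score) pairs by interpolating the
    model score with the coordination score; `lam` is an assumed weight."""
    return sorted(
        nbest,
        key=lambda cs: lam * cs[1] + (1 - lam) * coordination_score(context, cs[0]),
        reverse=True,
    )


# Usage: the candidates and scores would come from a fine-tuned BERT selector.
context = "So do you think we should take the train to the airport?"
nbest = [("Maybe later.", 0.62), ("Yes, the train would be faster for us.", 0.58)]
print(rerank(context, nbest)[0][0])  # the coordinating candidate now ranks first
```

In practice, the coordination markers would follow an established inventory (e.g., LIWC-style function-word categories) and `lam` would be tuned on validation data rather than fixed.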

Similar Work