
Evaluating Pretrained Transformer Models For Entity Linking In Task-oriented Dialog

Sai Muralidhar Jayanthi, Varsha Embar, Karthik Raghunathan. arXiv 2021

[Paper] [Code]
Tags: Has Code, Model Architecture, Pretraining Methods, Transformer

The wide applicability of pretrained transformer models (PTMs) to natural language tasks is well demonstrated, but their ability to comprehend short phrases of text is less explored. To this end, we evaluate different PTMs through the lens of unsupervised Entity Linking in task-oriented dialog across five characteristics: syntactic, semantic, short-form, numeric, and phonetic. Our results demonstrate that several of the PTMs produce sub-par results when compared to traditional techniques, albeit competitive with other neural baselines. We find that some of their shortcomings can be addressed by using PTMs fine-tuned for text-similarity tasks, which show an improved ability to comprehend semantic and syntactic correspondences, as well as some improvement on short-form, numeric, and phonetic variations in entity mentions. We perform a qualitative analysis to understand the nuances in their predictions and discuss the scope for further improvements. Code can be found at https://github.com/murali1996/el_tod
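To make the evaluation setup concrete, below is a minimal sketch of unsupervised entity linking via embedding similarity: encode the entity mention and the knowledge-base candidates with a PTM fine-tuned for text similarity, then link to the highest-scoring candidate. This is an illustration of the general technique, not the paper's exact pipeline; the model name (`all-MiniLM-L6-v2`), the mention, and the candidate list are assumptions chosen for the example.

```python
# Unsupervised entity linking sketch: rank candidate entities by cosine
# similarity between the mention embedding and each candidate embedding.
from sentence_transformers import SentenceTransformer, util

# A sentence-transformers model fine-tuned for text similarity (illustrative choice).
model = SentenceTransformer("all-MiniLM-L6-v2")

candidates = ["San Francisco", "San Jose", "Santa Fe"]  # toy knowledge-base entries
mention = "SF"  # short-form entity mention from a dialog turn

# Encode mention and candidates into the same embedding space.
mention_emb = model.encode(mention, convert_to_tensor=True)
cand_embs = model.encode(candidates, convert_to_tensor=True)

# Score every candidate and link to the best one.
scores = util.cos_sim(mention_emb, cand_embs)[0]
best = int(scores.argmax())
print(f"Linked '{mention}' -> '{candidates[best]}' (score={float(scores[best]):.3f})")
```

A similarity-tuned PTM is used here because, as the abstract notes, such models handle semantic and syntactic correspondences (and, to a lesser degree, short-form variations like "SF") better than vanilla PTM embeddings.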
