
MALTO At Semeval-2024 Task 6: Leveraging Synthetic Data For LLM Hallucination Detection

Borra Federico, Savelli Claudio, Rosso Giacomo, Koudounas Alkis, Giobergia Flavio. arXiv 2024

[Paper]

In Natural Language Generation (NLG), contemporary Large Language Models (LLMs) face several challenges: they can produce fluent yet factually inaccurate outputs, and evaluation often relies on fluency-centric metrics, so these "hallucinations" go undetected. The SHROOM challenge focuses on automatically identifying such hallucinations in generated text. To tackle this problem, we introduce two key components: a data augmentation pipeline that combines LLM-assisted pseudo-labelling with sentence rephrasing, and a voting ensemble of three models pre-trained on Natural Language Inference (NLI) tasks and fine-tuned on diverse datasets.
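The voting ensemble described above can be sketched as a simple majority vote over the labels produced by the individual NLI-based classifiers. This is a minimal illustration, not the paper's implementation: the model names and the two-way label set (`"Hallucination"` / `"Not Hallucination"`, as used in SHROOM) are assumptions for the example.

```python
from collections import Counter

def majority_vote(predictions):
    """Return the label predicted by most models (ties broken by first occurrence)."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-model labels for one generated sentence,
# e.g. from three NLI models fine-tuned on different datasets:
votes = ["Hallucination", "Not Hallucination", "Hallucination"]
print(majority_vote(votes))  # → Hallucination
```

A hard majority vote like this requires only each model's discrete label; a soft variant could instead average the models' class probabilities before taking the argmax.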

Similar Work