iREL at SemEval-2024 Task 9: Improving Conventional Prompting Methods for Brain Teasers

Harshit Gupta, Manav Chaudhary, Tathagata Raha, Shivansh Subramanian, Vasudeva Varma. arXiv 2024

[Paper]    
Applications, Few Shot, In Context Learning, Prompting

This paper describes our approach for SemEval-2024 Task 9: BRAINTEASER: A Novel Task Defying Common Sense. The BRAINTEASER task is a multiple-choice question-answering benchmark designed to evaluate models' lateral-thinking capabilities. It consists of Sentence Puzzle and Word Puzzle subtasks that require models to defy default commonsense associations and exhibit unconventional thinking. We propose a unique strategy to improve the performance of pre-trained language models, notably the Gemini 1.0 Pro model, on both subtasks. We employ static and dynamic few-shot prompting techniques and introduce a model-generated reasoning strategy that uses the LLM's own reasoning capabilities to improve performance. Our approach outperformed the baseline models by a considerable margin but fell short of the human annotators, highlighting the efficacy of the proposed strategies.
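
The entry carries no code, so the sketch below is illustrative only: it shows one plausible reading of dynamic few-shot prompting (exemplars retrieved per query by embedding similarity, here via the sentence-transformers library) and of eliciting model-generated rationales. The embedding model, prompt wording, and the `generate` stand-in for the Gemini 1.0 Pro API are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch of dynamic few-shot prompting for BRAINTEASER-style
# multiple-choice puzzles. Exemplar selection via sentence embeddings is an
# assumption about the method, not taken from the paper.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice


def build_dynamic_prompt(query: str, train_examples: list[dict], k: int = 3) -> str:
    """Pick the k training puzzles most similar to `query` and format them
    as few-shot exemplars ahead of the query itself."""
    questions = [ex["question"] for ex in train_examples]
    query_emb = embedder.encode(query, convert_to_tensor=True)
    train_embs = embedder.encode(questions, convert_to_tensor=True)
    # Cosine similarity between the query and every training question,
    # then take the indices of the k best matches.
    top_idx = util.cos_sim(query_emb, train_embs)[0].topk(k).indices.tolist()

    shots = []
    for i in top_idx:
        ex = train_examples[i]
        choices = "\n".join(f"- {c}" for c in ex["choices"])
        shots.append(f"Question: {ex['question']}\n{choices}\nAnswer: {ex['answer']}")
    return "\n\n".join(shots) + f"\n\nQuestion: {query}\nAnswer:"


def generate(prompt: str) -> str:
    """Stand-in for a Gemini 1.0 Pro completion call; swap in the real client."""
    raise NotImplementedError


def with_model_reasoning(example: dict) -> dict:
    """One reading of the model-generated reasoning strategy (an assumption):
    elicit a rationale for each exemplar so it can appear in the prompt
    alongside the answer."""
    rationale = generate(
        f"Question: {example['question']}\nAnswer: {example['answer']}\n"
        "Explain briefly why this answer is correct."
    )
    return {**example, "rationale": rationale}
```

In this reading, static few-shot prompting would simply fix the exemplar set once, while the dynamic variant retrieves fresh exemplars per test query.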

Similar Work