AILS-NTUA At Semeval-2024 Task 9: Cracking Brain Teasers: Transformer Models For Lateral Thinking Puzzles

Ioannis Panagiotopoulos, Giorgos Filandrianos, Maria Lymperaiou, Giorgos Stamou. arXiv 2024

[Paper]

Tags: Fine Tuning, GPT, Model Architecture, Pretraining Methods, RAG, Security, Training Techniques, Transformer

In this paper, we outline our submission to the SemEval-2024 Task 9 competition: ‘BRAINTEASER: A Novel Task Defying Common Sense’. We participate in both sub-tasks: Sub-task A (Sentence Puzzle) and Sub-task B (Word Puzzle). We fine-tune and evaluate a wide range of pre-trained transformer-based language models of different sizes, and we analyze their scores and responses to help future researchers understand and use these models effectively. Our top-performing approaches secured competitive positions on the competition leaderboard across both sub-tasks. In the evaluation phase, our best submission attained an average accuracy score of 81.7% on the Sentence Puzzle and 85.4% on the Word Puzzle, outperforming the best neural baseline (ChatGPT) by more than 20 and 30 percentage points respectively.

Similar Work