
Cleared For Takeoff? Compositional & Conditional Reasoning May Be The Achilles Heel To (flight-booking) Language Agents

Kohli Harsh, Sun Huan. arXiv 2024

[Paper]    
Agentic, Applications, GPT, Model Architecture, Prompting, Reinforcement Learning, Tools

The rapid progress of large language models (LLMs) has seen them excel and frequently surpass human performance on standard benchmarks. This has enabled many downstream applications, such as LLM agents, to rely on their sophisticated reasoning to navigate complex task requirements. However, LLMs are known to falter unexpectedly on simple tasks and under seemingly straightforward circumstances, underscoring the need for better and more diverse evaluation setups to measure their true capabilities. To this end, we study compositional and conditional reasoning, two cornerstones of human cognition, and introduce GroundCocoa, a lexically diverse benchmark connecting these reasoning skills to the real-world problem of flight booking. Our task involves aligning detailed user preferences with available flight options presented in a multiple-choice format. Results indicate a significant disparity in performance among current state-of-the-art LLMs, with even the best-performing model, GPT-4 Turbo, not exceeding 67% accuracy despite advanced prompting techniques.
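The task format described in the abstract, matching stated user preferences against a small set of candidate flight options presented as a multiple-choice question, can be sketched as follows. This is a minimal illustration only: the option fields, prompt wording, and the sample constraints and flights are assumptions for demonstration and do not reproduce the actual GroundCocoa item schema or evaluation harness.

```python
from dataclasses import dataclass

@dataclass
class FlightOption:
    """Hypothetical flight option; fields are illustrative, not GroundCocoa's schema."""
    airline: str
    stops: int
    depart: str       # local departure time, "HH:MM"
    price_usd: int
    refundable: bool

def build_mc_prompt(preferences: str, options: list[FlightOption]) -> str:
    """Render user preferences and candidate flights as a multiple-choice question."""
    lines = [
        f"User preferences: {preferences}",
        "",
        "Which option satisfies all of the preferences?",
    ]
    for label, opt in zip("ABCD", options):
        lines.append(
            f"{label}. {opt.airline}, {opt.stops} stop(s), departs {opt.depart}, "
            f"${opt.price_usd}, {'refundable' if opt.refundable else 'non-refundable'}"
        )
    lines.append("Answer with a single letter.")
    return "\n".join(lines)

# Made-up item combining a conditional constraint with simple compositional ones.
prefs = ("Non-stop only if the price is under $400; otherwise at most one stop. "
         "Must depart after 09:00 and be refundable.")
options = [
    FlightOption("AirA", 0, "08:30", 350, True),   # departs too early
    FlightOption("AirB", 1, "10:15", 320, True),   # satisfies every constraint
    FlightOption("AirC", 0, "11:00", 450, True),   # non-stop but not under $400
    FlightOption("AirD", 1, "12:45", 300, False),  # not refundable
]
print(build_mc_prompt(prefs, options))
```

Even in this toy form, the model must compose several independent constraints and resolve the conditional one before selecting an answer, which is the kind of reasoning the benchmark is designed to stress.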
