
LatEval: An Interactive LLMs Evaluation Benchmark with Incomplete Information from Lateral Thinking Puzzles

Huang Shulin, Ma Shirong, Li Yinghui, Huang Mengzuo, Zou Wuhe, Zhang Weidong, Zheng Hai-tao. arXiv 2023

[Paper]    
Tags: GPT, Model Architecture, Reinforcement Learning, Tools

With the continuous evolution and refinement of LLMs, they are endowed with impressive logical reasoning, or vertical thinking, capabilities. But can they think outside the box? Do they possess proficient lateral thinking abilities? Following the setup of lateral thinking puzzles, we propose a novel evaluation benchmark, LatEval, which assesses a model's lateral thinking within an interactive framework. Our benchmark challenges LLMs on two aspects: the quality of the questions posed by the model and the model's capability to integrate information for problem-solving. We find that nearly all LLMs struggle to employ lateral thinking during interactions. For example, even the most advanced model, GPT-4, shows an advantage to some extent, yet still exhibits a noticeable gap compared to humans. This evaluation benchmark provides LLMs with a highly challenging and distinctive task, one that is crucial for an effective AI assistant.
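
To make the interactive setup concrete, the sketch below outlines one possible LatEval-style session loop in Python. It is a minimal, hypothetical illustration rather than the authors' released code: the function names (`run_session`, `ask_player`, `ask_host`), the turn limit, and the "final answer:" convention are all assumptions; in practice the player and host callables would wrap LLM API calls or a human host holding the hidden solution.

```python
# Hypothetical sketch of a LatEval-style interaction loop (not the authors' code).
# A "player" model sees only the puzzle surface and asks yes/no questions;
# a "host" (here a stub) answers "yes", "no", or "irrelevant". The two evaluated
# aspects map to (1) the quality of the questions asked and (2) whether the
# final guess integrates the gathered answers into the hidden solution.

from typing import Callable, List, Tuple

def run_session(
    surface: str,                                              # puzzle text shown to the player
    ask_player: Callable[[str, List[Tuple[str, str]]], str],   # returns next question or final guess
    ask_host: Callable[[str], str],                            # returns "yes" / "no" / "irrelevant"
    max_turns: int = 10,
) -> Tuple[List[Tuple[str, str]], str]:
    """Run one interactive puzzle session and return (Q&A history, final guess)."""
    history: List[Tuple[str, str]] = []
    for _ in range(max_turns):
        question = ask_player(surface, history)
        if question.lower().startswith("final answer:"):
            return history, question
        answer = ask_host(question)
        history.append((question, answer))
    # Out of turns: force a final guess from the accumulated history.
    return history, ask_player(surface + "\n(Please give your final answer.)", history)

# Minimal stubs so the sketch runs; real use would call LLM APIs here.
def demo_player(surface: str, history: List[Tuple[str, str]]) -> str:
    return "Was anyone else involved?" if len(history) < 2 else "Final answer: he acted alone."

def demo_host(question: str) -> str:
    return "no"

if __name__ == "__main__":
    qa, guess = run_session("A man walks into a bar and asks for a glass of water...", demo_player, demo_host)
    print(qa, guess)
```

Under this framing, question quality can be scored over `history` (e.g., relevance and novelty of each question) and integration ability by comparing the final guess against the hidden solution, though the exact scoring used in the paper is not reproduced here.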

Similar Work