
Decomposition for Enhancing Attention: Improving LLM-based Text-to-SQL through Workflow Paradigm

Xie Yuanzhen, Jin Xinzhou, Xie Tao, Lin Mingxiong, Chen Liang, Yu Chenyun, Cheng Lei, Zhuo Chengxiang, Hu Bo, Li Zang. arXiv 2024

Tags: Attention Mechanism, Has Code, In-Context Learning, Merging, Model Architecture, Prompting, Reinforcement Learning

In-context learning with large language models (LLMs) has achieved remarkable success in natural language processing, yet extensive case studies reveal that single-step chain-of-thought prompting suffers from attention diffusion and inadequate performance on complex tasks such as text-to-SQL. To improve the in-context learning capabilities of LLMs on text-to-SQL, a workflow paradigm is proposed that enhances both the attention and the problem-solving scope of LLMs through decomposition. Specifically, an information determination module that eliminates redundant information and a new prompt structure based on problem classification greatly sharpen the model's attention. In addition, self-correction and active learning modules substantially expand the problem-solving scope of LLMs, raising the upper limit of LLM-based approaches. Extensive experiments on three datasets demonstrate that the approach outperforms other methods by a significant margin: it gains roughly 2-3 percentage points over the existing baseline on the Spider Dev, Spider-Realistic, and Bird Dev datasets and sets new SOTA results on the Spider Test set. The code is available on GitHub: https://github.com/FlyingFeather/DEA-SQL.
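To make the decomposition concrete, the sketch below shows what such a staged workflow could look like in Python. It is an illustrative reconstruction, not the authors' implementation (their code is in the linked repository): the `LLM` callable, the function names, and the prompt wordings are all assumptions, and the active learning module is omitted for brevity.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical interface: any callable that maps a prompt string to a
# completion string (e.g., a thin wrapper around an LLM API client).
LLM = Callable[[str], str]


@dataclass
class WorkflowResult:
    question_type: str
    pruned_schema: str
    sql: str


def determine_information(llm: LLM, question: str, full_schema: str) -> str:
    """Information determination: keep only the tables and columns the
    question needs, so later prompts carry no redundant schema text."""
    prompt = (
        "List only the tables and columns required to answer the question.\n"
        f"Question: {question}\nSchema:\n{full_schema}"
    )
    return llm(prompt)


def classify_question(llm: LLM, question: str) -> str:
    """Problem classification: route the question to a category so that
    generation can use a category-specific prompt structure."""
    prompt = (
        "Classify this question as one of: simple, join, nested, other. "
        f"Answer with the label only.\nQuestion: {question}"
    )
    return llm(prompt).strip().lower()


def generate_sql(llm: LLM, question: str, schema: str, qtype: str) -> str:
    """Generate a draft query from the pruned schema and the category."""
    prompt = (
        f"Write a SQL query (expected pattern: {qtype}) using only this "
        f"schema.\nSchema:\n{schema}\nQuestion: {question}\nSQL:"
    )
    return llm(prompt)


def self_correct(llm: LLM, question: str, schema: str, sql: str) -> str:
    """Self-correction: have the model review and repair its own draft."""
    prompt = (
        "Check the SQL below against the schema and question, then return "
        f"a corrected query.\nSchema:\n{schema}\nQuestion: {question}\n"
        f"SQL: {sql}"
    )
    return llm(prompt)


def text_to_sql(llm: LLM, question: str, full_schema: str) -> WorkflowResult:
    """Run the decomposed workflow: prune -> classify -> generate -> correct."""
    pruned = determine_information(llm, question, full_schema)
    qtype = classify_question(llm, question)
    draft = generate_sql(llm, question, pruned, qtype)
    final = self_correct(llm, question, pruned, draft)
    return WorkflowResult(question_type=qtype, pruned_schema=pruned, sql=final)
```

Because each stage sees only the output of the previous one, the generation prompt carries the pruned schema rather than the full database description, which reflects the abstract's claim that eliminating redundant information focuses the model's attention.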
