Synthesizing Text-to-SQL Data from Weak and Strong LLMs

Yang Jiaxi, Hui Binyuan, Yang Min, Yang Jian, Lin Junyang, Zhou Chang. arXiv 2024

[Paper]    
Prompting

The capability gap between open-source and closed-source large language models (LLMs) remains a challenge in text-to-SQL tasks. In this paper, we introduce a synthetic data approach that combines data produced by larger, more powerful models (strong models) with error-information data generated by smaller, less well-aligned models (weak models). The method not only enhances the domain generalization of text-to-SQL models but also explores the potential of error-data supervision through preference learning. Furthermore, we apply the synthetic data approach to instruction tuning on open-source LLMs, resulting in SENSE, a specialized text-to-SQL model. The effectiveness of SENSE is demonstrated through state-of-the-art results on the SPIDER and BIRD benchmarks, bridging the performance gap between open-source models and methods prompted by closed-source models.
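The error-supervision idea lends itself to a short illustration. Below is a minimal Python sketch of how preference pairs might be built from weak-model outputs: execute each candidate query against the database and treat those whose results diverge from the gold query as "rejected" examples paired against the gold SQL. The function names, the SQLite setup, and the pairing scheme are assumptions for illustration, not the paper's actual pipeline.

```python
import sqlite3
from typing import Optional

def execute_sql(db_path: str, sql: str) -> Optional[frozenset]:
    """Run a query against a SQLite database; return the result rows as a
    set (order- and duplicate-insensitive), or None if the query errors."""
    try:
        with sqlite3.connect(db_path) as conn:
            rows = conn.execute(sql).fetchall()
        return frozenset(rows)
    except sqlite3.Error:
        return None

def build_preference_pairs(db_path: str, question: str, gold_sql: str,
                           weak_candidates: list[str]) -> list[dict]:
    """Hypothetical helper: pair each failing weak-model query with the
    gold query, yielding (prompt, chosen, rejected) preference examples."""
    gold_result = execute_sql(db_path, gold_sql)
    pairs = []
    for candidate in weak_candidates:
        # A candidate is "rejected" if it errors out or returns a
        # different result set than the gold query.
        if execute_sql(db_path, candidate) != gold_result:
            pairs.append({
                "prompt": question,
                "chosen": gold_sql,     # executable, matches gold result
                "rejected": candidate,  # error signal from the weak model
            })
    return pairs
```

Pairs in this (prompt, chosen, rejected) form are the standard input to preference-learning objectives such as DPO; the paper's exact objective, filtering, and execution feedback may differ.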

Similar Work