
Open-domain Implicit Format Control For Large Language Model Generation

Yao Yiqun, Ma Wenjia, Fang Xuezhi, Jiang Xin, Li Xiang, Meng Xuying, Han Peng, Li Jing, Sun Aixin, Wang Yequan. arXiv 2024

Tags: Applications, Fine-Tuning, Has Code, Pretraining Methods, RAG, Tools, Training Techniques

Controlling the format of outputs generated by large language models (LLMs) is a critical functionality in various applications. Current methods typically employ constrained decoding with rule-based automata or fine-tuning with manually crafted format instructions, both of which struggle with open-domain format requirements. To address this limitation, we introduce a novel framework for controlled generation in LLMs, leveraging user-provided, one-shot QA pairs. This study investigates LLMs’ capabilities to follow open-domain, one-shot constraints and replicate the format of the example answers. We observe that this is a non-trivial problem for current LLMs. We also develop a dataset collection methodology for supervised fine-tuning that enhances the open-domain format control of LLMs without degrading output quality, as well as a benchmark on which we evaluate both the helpfulness and format correctness of LLM outputs. The resulting datasets, named OIFC-SFT, along with the related code, will be made publicly available at https://github.com/cofe-ai/OIFC.
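The core setting described above can be sketched in code: the user supplies a single QA pair whose answer implicitly defines the target format, and the system must both elicit a format-matching answer and judge format correctness. The snippet below is a minimal, hypothetical illustration of that setting; the function names and the crude line-type heuristic are assumptions for exposition, not the paper's actual prompt template or evaluation metric.

```python
# Hypothetical sketch of open-domain implicit format control
# (names and heuristics are illustrative, not from the OIFC paper).

def build_one_shot_prompt(example_q: str, example_a: str, new_q: str) -> str:
    """Compose a prompt whose only format signal is the one-shot example answer."""
    return (
        "Answer the second question in the same format as the example answer.\n\n"
        f"Q: {example_q}\nA: {example_a}\n\n"
        f"Q: {new_q}\nA:"
    )

def format_signature(text: str) -> list:
    """Crude structural signature: classify each line by its leading marker.

    A real format-correctness benchmark would use a far richer notion of
    format; this only distinguishes bullets, numbering, headers, and plain text.
    """
    sig = []
    for line in text.strip().splitlines():
        stripped = line.lstrip()
        if stripped.startswith(("-", "*")):
            sig.append("bullet")
        elif stripped[:2].rstrip(".").isdigit():
            sig.append("numbered")
        elif stripped.endswith(":"):
            sig.append("header")
        else:
            sig.append("plain")
    return sig

def formats_match(example_a: str, model_a: str) -> bool:
    """Treat two answers as format-matching when their line-type sequences agree."""
    return format_signature(example_a) == format_signature(model_a)
```

For instance, if the example answer is a two-item bullet list, a model answer is counted as format-correct only when it is also a bullet list of the same shape; helpfulness of the content would be judged separately, as in the benchmark the abstract describes.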

Similar Work