
FireAct: Toward Language Agent Fine-tuning

Chen Baian, Shu Chang, Shareghi Ehsan, Collier Nigel, Narasimhan Karthik, Yao Shunyu. arXiv 2023

Tags: Agentic, Applications, Efficiency And Optimization, Few Shot, Fine Tuning, GPT, In Context Learning, Model Architecture, Pretraining Methods, Prompting, Reinforcement Learning, Security, Tools, Training Techniques

Recent efforts have augmented language models (LMs) with external tools or environments, leading to the development of language agents that can reason and act. However, most of these agents rely on few-shot prompting techniques with off-the-shelf LMs. In this paper, we investigate and argue for the overlooked direction of fine-tuning LMs to obtain language agents. Using a setup of question answering (QA) with a Google search API, we explore a variety of base LMs, prompting methods, fine-tuning data, and QA tasks, and find that language agents are consistently improved after fine-tuning their backbone LMs. For example, fine-tuning Llama2-7B with 500 agent trajectories generated by GPT-4 leads to a 77% HotpotQA performance increase. Furthermore, we propose FireAct, a novel approach to fine-tuning LMs with trajectories from multiple tasks and prompting methods, and show that more diverse fine-tuning data can further improve agents. Along with other findings regarding scaling effects, robustness, generalization, efficiency, and cost, our work establishes comprehensive benefits of fine-tuning LMs for agents, and provides an initial set of experimental designs, insights, and open questions toward language agent fine-tuning.
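The trajectory-to-data recipe described in the abstract can be illustrated with a small serialization helper. The sketch below is an assumption-laden illustration, not the authors' released format: the function name `trajectory_to_example`, the `search[...]` and `finish[...]` action strings, and the prompt/completion field names are hypothetical choices in the spirit of ReAct-style QA trajectories distilled from a stronger model.

```python
# Minimal sketch (hypothetical format, not FireAct's released pipeline) of
# turning one ReAct-style agent trajectory into a fine-tuning example for a
# smaller backbone LM such as Llama2-7B.

import json


def trajectory_to_example(question, steps, answer):
    """Serialize a single trajectory into a prompt/completion pair.

    `steps` is a list of (thought, action, observation) tuples, where each
    action is a tool call such as `search[query]` against a search API and
    each observation is the returned snippet.
    """
    prompt = f"Question: {question}"
    completion_lines = []
    for i, (thought, action, observation) in enumerate(steps, start=1):
        completion_lines.append(f"Thought {i}: {thought}")
        completion_lines.append(f"Action {i}: {action}")
        completion_lines.append(f"Observation {i}: {observation}")
    # Close the trajectory with a final answer action.
    completion_lines.append(f"Action {len(steps) + 1}: finish[{answer}]")
    return {"prompt": prompt, "completion": "\n".join(completion_lines)}


# Example: one HotpotQA-style trajectory distilled from a stronger model.
example = trajectory_to_example(
    question="Which city hosted the 1992 Summer Olympics?",
    steps=[(
        "I should look up the host city of the 1992 Summer Olympics.",
        "search[1992 Summer Olympics host city]",
        "The 1992 Summer Olympics were held in Barcelona, Spain.",
    )],
    answer="Barcelona",
)
print(json.dumps(example, indent=2))
```

Collecting such examples across several tasks and prompting methods, as the paper proposes, yields the more diverse fine-tuning mixture that FireAct finds beneficial.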

Similar Work