ToolACE: Winning The Points Of LLM Function Calling

Liu Weiwen, Huang Xu, Zeng Xingshan, Hao Xinlong, Yu Shuai, Li Dexun, Wang Shuai, Gan Weinan, Liu Zhengying, Yu Yuanqing, Wang Zezhong, Wang Yuxian, Ning Wu, Hou Yutai, Wang Bin, Wu Chuhan, Wang Xinzhi, Liu Yong, Wang Yasheng, Tang Duyu, Tu Dandan, Shang Lifeng, Jiang Xin, Tang Ruiming, Lian Defu, Liu Qun, Chen Enhong. arXiv 2024

[Paper]    
Tags: Agentic, GPT, Model Architecture, RAG, Tools, Training Techniques, Uncategorized

Function calling significantly extends the application boundary of large language models, where high-quality and diverse training data is critical for unlocking this capability. However, real function-calling data is quite challenging to collect and annotate, while synthetic data generated by existing pipelines tends to lack coverage and accuracy. In this paper, we present ToolACE, an automatic agentic pipeline designed to generate accurate, complex, and diverse tool-learning data. ToolACE leverages a novel self-evolution synthesis process to curate a comprehensive API pool of 26,507 diverse APIs. Dialogs are further generated through the interplay among multiple agents, guided by a formalized thinking process. To ensure data accuracy, we implement a dual-layer verification system combining rule-based and model-based checks. We demonstrate that models trained on our synthesized data, even with only 8B parameters, achieve state-of-the-art performance on the Berkeley Function-Calling Leaderboard, rivaling the latest GPT-4 models. Our model and a subset of the data are publicly available at https://huggingface.co/Team-ACE.
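To make the dual-layer verification idea concrete, below is a minimal Python sketch of how such a filter over synthetic function-calling samples could look: a cheap rule-based structural check followed by a model-based semantic check. The record fields, API-pool structure, and judge prompt are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a dual-layer verifier for synthetic function-calling
# data (rule-based check + model-based check). Field names and the judge
# prompt are assumptions for illustration, not ToolACE's real pipeline.
import json

# Example API spec from a synthetic API pool (structure assumed).
API_POOL = {
    "get_weather": {
        "parameters": {
            "city": {"type": "string", "required": True},
            "unit": {"type": "string", "required": False},
        }
    }
}


def rule_based_check(record: dict) -> bool:
    """Layer 1: structural checks -- the call must parse as JSON, reference a
    known API, use only declared parameters, and supply all required ones."""
    try:
        call = json.loads(record["tool_call"])
    except (KeyError, json.JSONDecodeError):
        return False
    spec = API_POOL.get(call.get("name"))
    if spec is None:
        return False
    args = call.get("arguments", {})
    declared = spec["parameters"]
    if any(p not in declared for p in args):
        return False
    return all(name in args for name, meta in declared.items() if meta["required"])


def model_based_check(record: dict) -> bool:
    """Layer 2: an LLM judge scores semantic consistency between the user
    query and the generated call. Stubbed here; a real pipeline would send
    `prompt` to a judge model and parse its verdict."""
    prompt = (
        "Does the tool call faithfully satisfy the user request?\n"
        f"Request: {record['query']}\nCall: {record['tool_call']}\nAnswer yes or no."
    )
    _ = prompt  # placeholder for e.g. judge_model.generate(prompt)
    return True


def verify(record: dict) -> bool:
    # A sample is kept only if it passes both layers.
    return rule_based_check(record) and model_based_check(record)


sample = {
    "query": "What's the weather in Paris in celsius?",
    "tool_call": json.dumps(
        {"name": "get_weather", "arguments": {"city": "Paris", "unit": "celsius"}}
    ),
}
print(verify(sample))  # True for this well-formed example
```

In this sketch the rule-based layer rejects malformed or hallucinated calls cheaply, so the more expensive model-based judge only sees structurally valid samples.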

Similar Work