Agentsims: An Open-source Sandbox For Large Language Model Evaluation

Lin Jiaju, Zhao Haoran, Zhang Aochi, Wu Yiting, Ping Huqiuyue, Chen Qin. Arxiv 2023

[Paper] [Code]    
Tags: Agentic, GPT, Has Code, Model Architecture, Reinforcement Learning

With ChatGPT-like large language models (LLMs) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from the following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to the above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI, or deploy and test new support mechanisms, i.e., memory, planning, and tool-use systems, with a few lines of code. Our demo is available at https://agentsims.com .
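The abstract's claim that new support mechanisms can be plugged in "with a few lines of code" can be illustrated with a small sketch. The snippet below is not the actual AgentSims API; all class and method names (`KeywordMemory`, `Agent`, `store`, `retrieve`, `act`) are hypothetical, and it only shows the general pattern of swapping a custom memory module into an agent.

```python
# Hypothetical sketch, NOT the AgentSims API: shows how a researcher might
# plug a custom memory system into an agent with a few lines of code.
from dataclasses import dataclass, field
from typing import List


@dataclass
class KeywordMemory:
    """Toy memory module: stores observations, retrieves by keyword overlap."""
    records: List[str] = field(default_factory=list)

    def store(self, observation: str) -> None:
        self.records.append(observation)

    def retrieve(self, query: str, top_k: int = 3) -> List[str]:
        # Naive keyword overlap; a real system might use embeddings instead.
        query_words = set(query.lower().split())
        scored = [(len(query_words & set(r.lower().split())), r) for r in self.records]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [r for score, r in scored[:top_k] if score > 0]


@dataclass
class Agent:
    """Minimal agent that consults its memory module before acting."""
    name: str
    memory: KeywordMemory

    def observe(self, event: str) -> None:
        self.memory.store(event)

    def act(self, task: str) -> str:
        context = self.memory.retrieve(task)
        # In a full system, this context would be fed into an LLM prompt.
        return f"{self.name} acts on '{task}' using context: {context}"


if __name__ == "__main__":
    agent = Agent(name="barista", memory=KeywordMemory())
    agent.observe("A customer ordered a latte at 9am.")
    agent.observe("The espresso machine was cleaned.")
    print(agent.act("prepare the latte order"))
```

The design point is that the memory (or planning, or tool-use) component is an interchangeable object the agent delegates to, so swapping in a new mechanism only requires implementing that component's interface rather than modifying the simulation itself.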

Similar Work