Chainforge: A Visual Toolkit For Prompt Engineering And LLM Hypothesis Testing

Ian Arawjo, Chelse Swoopes, Priyan Vaithilingam, Martin Wattenberg, Elena Glassman. arXiv 2023 – 53 citations

[Paper]
Fine-Tuning · Tools · Reinforcement Learning · Prompting

Evaluating outputs of large language models (LLMs) is challenging: it requires generating, and making sense of, many responses. Yet tools that go beyond basic prompting tend to require knowledge of programming APIs, focus on narrow domains, or are closed-source. We present ChainForge, an open-source visual toolkit for prompt engineering and on-demand hypothesis testing of text-generating LLMs. ChainForge provides a graphical interface for comparing responses across models and prompt variations. The system was designed to support three tasks: model selection, prompt template design, and hypothesis testing (e.g., auditing). We released ChainForge early in its development and iterated on its design with academics and online users. Through in-lab and interview studies, we find that a range of people could use ChainForge to investigate hypotheses that matter to them, including in real-world settings. We identify three modes of prompt engineering and LLM hypothesis testing: opportunistic exploration, limited evaluation, and iterative refinement.
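To make the cross-model prompt-variation idea concrete, the sketch below shows the general pattern the abstract describes: fill a prompt template with every combination of variable values, send each filled prompt to several models, and collect the responses for side-by-side comparison. This is a minimal illustration of the concept, not ChainForge's actual API; the model names and the `query_model` stub are hypothetical placeholders for real LLM API calls.

```python
from itertools import product

# A prompt template with two variables, in Python str.format syntax.
TEMPLATE = "Summarize the following as a {style}: {text}"

# Values to substitute into each template variable.
variables = {
    "style": ["tweet", "haiku"],
    "text": ["LLM evaluation is hard.", "Templates enable comparison."],
}

# Hypothetical model identifiers; a real run would target actual LLM endpoints.
models = ["model-a", "model-b"]

def query_model(model: str, prompt: str) -> str:
    # Placeholder stub: a real implementation would call the model's API here.
    return f"[{model}] response to: {prompt!r}"

# Cartesian product of variable values yields one filled prompt per combination.
keys = list(variables)
results = []
for combo in product(*(variables[k] for k in keys)):
    prompt = TEMPLATE.format(**dict(zip(keys, combo)))
    for model in models:
        results.append({
            "model": model,
            "prompt": prompt,
            "response": query_model(model, prompt),
        })

# Print a flat comparison table: one row per (model, prompt) pair.
for row in results:
    print(f"{row['model']:8} | {row['prompt']}")
```

With 2 styles, 2 texts, and 2 models, this produces 2 × 2 × 2 = 8 responses; ChainForge's contribution is making this combinatorial fan-out, and the inspection of its results, visual and interactive rather than code-driven.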

Similar Work