
ChainForge: A Visual Toolkit For Prompt Engineering And LLM Hypothesis Testing

Arawjo Ian, Swoopes Chelse, Vaithilingam Priyan, Wattenberg Martin, Glassman Elena. arXiv 2023

[Paper]    
Fine Tuning, Prompting, Reinforcement Learning, Tools

Evaluating outputs of large language models (LLMs) is challenging, requiring making – and making sense of – many responses. Yet tools that go beyond basic prompting tend to require knowledge of programming APIs, focus on narrow domains, or are closed-source. We present ChainForge, an open-source visual toolkit for prompt engineering and on-demand hypothesis testing of text generation LLMs. ChainForge provides a graphical interface for comparison of responses across models and prompt variations. Our system was designed to support three tasks: model selection, prompt template design, and hypothesis testing (e.g., auditing). We released ChainForge early in its development and iterated on its design with academics and online users. Through in-lab and interview studies, we find that a range of people could use ChainForge to investigate hypotheses that matter to them, including in real-world settings. We identify three modes of prompt engineering and LLM hypothesis testing: opportunistic exploration, limited evaluation, and iterative refinement.
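The core mechanic the abstract describes, fanning a prompt template out over variable values and sending each filled prompt to several models for side-by-side comparison, can be sketched in a few lines. The sketch below is illustrative only and does not use ChainForge's actual API; `query_model`, the template text, and the model names are all hypothetical stand-ins.

```python
from itertools import product

def query_model(model: str, prompt: str) -> str:
    """Hypothetical stand-in for a real LLM client call (not ChainForge's API)."""
    return f"<{model} response to: {prompt!r}>"

# A prompt template with one variable, plus the values to substitute.
template = "Translate '{phrase}' into formal English."
phrases = ["gonna do it", "no way, dude"]
models = ["model-a", "model-b"]  # hypothetical model names

# Cross every model with every filled-in template, mirroring the kind of
# model x prompt-variation comparison ChainForge exposes graphically.
for model, phrase in product(models, phrases):
    prompt = template.format(phrase=phrase)
    print(model, "|", prompt, "=>", query_model(model, prompt))
```

Each (model, variation) cell in this cross-product corresponds to one response ChainForge would display for comparison; evaluating or auditing then amounts to inspecting or scoring those cells.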

Similar Work