Chain-of-thought Prompting Under Streaming Batch: A Case Study

Yuxin Tang. arXiv 2023

[Paper]    
Prompting

Recently, Large Language Models (LLMs) have demonstrated remarkable capabilities. Chain-of-Thought (CoT) has been proposed as a way of assisting LLMs in performing complex reasoning. However, developing effective prompts can be a challenging and labor-intensive task. Many studies have proposed methods to automatically construct CoT prompts from test data. Most of them assume that all test data is visible before testing and select only a small subset to generate rationales, an assumption that is unrealistic in practice. In this paper, we present a case study on how to construct and optimize chain-of-thought prompting using batch data in streaming settings.
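The abstract does not spell out the construction procedure, but a minimal sketch of the general idea of building CoT prompts from streaming batches might look like the following. The class `StreamingCoTPromptPool`, the `generate` callable, and the strategy of recycling zero-shot rationales as demonstrations are illustrative assumptions, not the paper's actual method.

```python
from collections import deque
from typing import Callable, Deque, List, Tuple

class StreamingCoTPromptPool:
    """Hypothetical sketch: keep a small pool of CoT demonstrations that is
    updated as each streaming batch arrives, instead of selecting
    demonstrations from the full test set up front."""

    def __init__(self, max_demos: int = 8):
        # Keep only the most recent rationales, up to a fixed budget.
        self.demos: Deque[Tuple[str, str]] = deque(maxlen=max_demos)

    def build_prompt(self, question: str) -> str:
        # Prepend the current demonstrations (question + rationale) to the query.
        parts = [f"Q: {q}\nA: {r}" for q, r in self.demos]
        parts.append(f"Q: {question}\nA: Let's think step by step.")
        return "\n\n".join(parts)

    def update(self, question: str, rationale: str) -> None:
        # Add a newly generated rationale so later batches can reuse it.
        self.demos.append((question, rationale))


def process_batch(
    pool: StreamingCoTPromptPool,
    batch: List[str],
    generate: Callable[[str], str],
) -> List[str]:
    """Answer one streaming batch, growing the demonstration pool as we go."""
    outputs = []
    for question in batch:
        prompt = pool.build_prompt(question)
        rationale = generate(prompt)      # call the LLM (stubbed below)
        outputs.append(rationale)
        pool.update(question, rationale)  # generated rationale becomes a demo
    return outputs


if __name__ == "__main__":
    # Stub LLM used only to make the sketch runnable.
    fake_llm = lambda prompt: "Step 1: 6 * 7 = 42. Answer: 42."
    pool = StreamingCoTPromptPool(max_demos=4)
    print(process_batch(pool, ["What is 6 * 7?"], fake_llm)[0])
```

The key design point this sketch illustrates is that no test data needs to be visible in advance: demonstrations accumulate from the batches already processed.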

Similar Work