
Refining The Responses Of LLMs By Themselves

Yan Tianqiang, Xu Tiansheng. arXiv 2023

[Paper]    
Tags: Efficiency And Optimization, GPT, Model Architecture, Prompting, RAG, Tools, Uncategorized

In this paper, we propose a simple yet efficient approach based on prompt engineering that leverages the large language model itself to optimize its answers, without relying on auxiliary models. We introduce an iterative self-evaluating optimization mechanism with the potential for improved output quality as iterations progress, removing the need for manual intervention. Our experimental findings indicate that applying our response refinement framework to the GPT-3.5 model yields results that are on par with, or even surpass, those generated by the cutting-edge GPT-4 model. Detailed implementation strategies and illustrative examples are provided to demonstrate the superiority of our proposed solution.
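The abstract describes an iterative loop in which the same model first answers, then critiques its own answer, then rewrites it using that critique. Below is a minimal sketch of such a loop; the prompts, the stopping rule, and the `call_llm` wrapper around a GPT-3.5 chat-completion call are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of an iterative self-evaluation / self-refinement loop.
# Prompts, stopping rule, and call_llm are illustrative assumptions,
# not the paper's exact implementation.
from openai import OpenAI  # assumes the openai>=1.0 Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def call_llm(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Single chat-completion call to the model being refined."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def refine(question: str, max_iters: int = 3) -> str:
    """Answer, self-critique, and rewrite until the model is satisfied."""
    answer = call_llm(question)
    for _ in range(max_iters):
        # Self-evaluation step: the same model reviews its own answer.
        critique = call_llm(
            f"Question: {question}\nAnswer: {answer}\n"
            "Evaluate this answer. List any errors or omissions, "
            "or reply with exactly 'OK' if no changes are needed."
        )
        if critique.strip() == "OK":
            break  # the model judges its own answer satisfactory
        # Refinement step: rewrite the answer using the self-critique.
        answer = call_llm(
            f"Question: {question}\nPrevious answer: {answer}\n"
            f"Critique: {critique}\n"
            "Rewrite the answer, fixing the issues listed in the critique."
        )
    return answer


if __name__ == "__main__":
    print(refine("Explain why the sky is blue in two sentences."))
```

Because only prompts change between iterations, the same loop can be pointed at any chat model; no auxiliary evaluator or fine-tuning is required, which is the core of the approach described above.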

Similar Work