APEER: Automatic Prompt Engineering Enhances Large Language Model Reranking

Can Jin, Hongwu Peng, Shiyu Zhao, Zhenting Wang, Wujiang Xu, Ligong Han, Jiahui Zhao, Kai Zhong, Sanguthevar Rajasekaran, Dimitris N. Metaxas. arXiv 2024

[Paper] [Code]    
Efficiency And Optimization, Has Code, Language Modeling, Prompting, Reinforcement Learning

Large Language Models (LLMs) have significantly enhanced Information Retrieval (IR) across various modules, such as reranking. Despite impressive performance, current zero-shot relevance ranking with LLMs heavily relies on human prompt engineering. Existing automatic prompt engineering algorithms primarily focus on language modeling and classification tasks, leaving the domain of IR, particularly reranking, underexplored. Directly applying current prompt engineering algorithms to relevance ranking is challenging due to the integration of query and long passage pairs in the input, where the ranking complexity surpasses that of classification tasks. To reduce human effort and unlock the potential of prompt optimization in reranking, we introduce a novel automatic prompt engineering algorithm named APEER. APEER iteratively generates refined prompts through feedback and preference optimization. Extensive experiments with four LLMs and ten datasets demonstrate the substantial performance improvement of APEER over existing state-of-the-art (SoTA) manual prompts. Furthermore, we find that the prompts generated by APEER exhibit better transferability across diverse tasks and LLMs. Code is available at https://github.com/jincan333/APEER.
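The core loop described in the abstract, iterative prompt refinement through feedback and preference optimization, could look roughly like the sketch below. This is an illustration only: the `llm` completion callable, the `evaluate` reranking metric (e.g., NDCG@10 on a held-out set), and all prompt templates are hypothetical stand-ins, not the authors' actual implementation; see the linked repository for that.

```python
# A minimal sketch of APEER-style iterative prompt refinement.
# Assumptions (not from the paper's code): `llm` is any text-completion
# function, and `evaluate` scores a reranking prompt, e.g., via NDCG@10.

from typing import Callable, List, Tuple


def refine_prompt(
    llm: Callable[[str], str],
    evaluate: Callable[[str], float],
    seed_prompt: str,
    iterations: int = 10,
) -> str:
    """Iteratively improve a reranking prompt via feedback and preference pairs."""
    best_prompt, best_score = seed_prompt, evaluate(seed_prompt)
    history: List[Tuple[str, float]] = [(best_prompt, best_score)]

    for _ in range(iterations):
        # Feedback step: ask the LLM to critique the current best prompt,
        # given its score on a held-out reranking set.
        feedback = llm(
            f"The following reranking prompt scored {best_score:.4f}:\n"
            f"{best_prompt}\n"
            "Give concise feedback on how to improve it."
        )
        candidate = llm(
            "Rewrite the prompt below using this feedback.\n"
            f"Feedback: {feedback}\nPrompt: {best_prompt}"
        )

        # Preference step: show the worst/best prompts seen so far so the
        # LLM rewrites the candidate in the direction of improvement.
        # (Degenerate on the first iteration, when only one prompt exists.)
        history.sort(key=lambda pair: pair[1])
        worse, better = history[0][0], history[-1][0]
        candidate = llm(
            f"Worse prompt:\n{worse}\nBetter prompt:\n{better}\n"
            f"Improve this candidate in the same direction:\n{candidate}"
        )

        score = evaluate(candidate)
        history.append((candidate, score))
        if score > best_score:
            best_prompt, best_score = candidate, score

    return best_prompt
```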

Similar Work