
Prompting LLMs to Compose Meta-Review Drafts from Peer-Review Narratives of Scholarly Manuscripts

Shubhra Kanti Karmaker Santu, Sanjeev Kumar Sinha, Naman Bansal, Alex Knipper, Souvika Sarkar, John Salvador, Yash Mahajan, Sri Guttikonda, Mousumi Akter, Matthew Freestone, Matthew C. Williams Jr. · arXiv 2024

[Paper]    
Tags: GPT, Model Architecture, Prompting, Reinforcement Learning, Survey Paper, Uncategorized

One of the most important yet onerous tasks in the academic peer-reviewing process is composing meta-reviews: understanding the core contributions, strengths, and weaknesses of a scholarly manuscript based on peer-review narratives from multiple experts, and then summarizing those experts' perspectives into a concise, holistic overview. Given the latest major developments in generative AI, especially Large Language Models (LLMs), it is compelling to rigorously study the utility of LLMs for generating such meta-reviews in an academic peer-review setting. In this paper, we perform a case study with three popular LLMs, i.e., GPT-3.5, LLaMA2, and PaLM2, to automatically generate meta-reviews by prompting them with prompts of different types and levels of detail based on the recently proposed TELeR taxonomy. Finally, we perform a detailed qualitative study of the meta-reviews generated by the LLMs and summarize our findings and recommendations for prompting LLMs for this complex task.
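
The TELeR taxonomy referenced above grades prompts by how much detail the directive carries. As a rough illustration only, and not the paper's actual prompts, the sketch below builds meta-review prompts at three increasing levels of directive detail; the `build_prompt` function, the level wording, and the word-count constraint are all hypothetical assumptions, and the resulting prompt string would be sent to whichever model (GPT-3.5, LLaMA2, or PaLM2) is under study.

```python
from typing import List


def build_prompt(reviews: List[str], level: int) -> str:
    """Assemble a meta-review prompt whose directive grows more detailed
    with `level` (a rough approximation of TELeR-style prompt levels)."""
    joined = "\n\n".join(f"Review {i + 1}:\n{r}" for i, r in enumerate(reviews))

    if level <= 1:
        # Low detail: a single high-level directive.
        directive = "Write a meta-review summarizing the reviews below."
    elif level == 2:
        # Medium detail: the directive names the sub-tasks.
        directive = (
            "Write a meta-review of the manuscript. Summarize its core "
            "contributions, common strengths, and common weaknesses as "
            "raised in the reviews below."
        )
    else:
        # High detail: sub-tasks plus an explicit role and output constraints.
        directive = (
            "You are an area chair. Using only the reviews below, draft a "
            "concise meta-review that (1) states the manuscript's core "
            "contributions, (2) lists strengths the reviewers agree on, "
            "(3) lists weaknesses and points of disagreement, and "
            "(4) stays under 250 words without adding opinions absent "
            "from the reviews."
        )
    return f"{directive}\n\n{joined}"


if __name__ == "__main__":
    reviews = [
        "The proposed method is novel, but the evaluation is limited ...",
        "Well written; however, ablation studies are missing ...",
        "Strong baselines, though reproducibility details are unclear ...",
    ]
    for level in (1, 2, 3):
        prompt = build_prompt(reviews, level)
        print(f"--- Level {level} prompt ({len(prompt)} chars) ---")
        print(prompt[:200], "...\n")
```

In an actual run, each prompt string would be passed to the model's chat or completion API and the returned drafts compared across prompt levels and models, which is the kind of comparison the qualitative study performs.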

Similar Work