AdvisorQA: Towards Helpful and Harmless Advice-Seeking Question Answering With Collective Intelligence

Minbeom Kim, Hwanhee Lee, Joonsuk Park, Hwaran Lee, Kyomin Jung. arXiv 2024

[Paper]    
Applications GPT Model Architecture RAG Tools

As large language models become increasingly integrated into daily life, there is a clear gap in benchmarks for advising on subjective and personal dilemmas. To address this, we introduce AdvisorQA, the first benchmark developed to assess LLMs’ capability to offer advice on deeply personal concerns, built from the LifeProTips subreddit forum. This forum features a dynamic interaction pattern in which users post advice-seeking questions and receive an average of 8.9 pieces of advice per question, with an average of 164.2 upvotes from hundreds of users, embodying a collective intelligence framework. The resulting benchmark comprises daily-life questions, diverse corresponding responses, and majority-vote rankings used to train our helpfulness metric. Baseline experiments validate the efficacy of AdvisorQA through our helpfulness metric, GPT-4, and human evaluation, analyzing phenomena beyond the trade-off between helpfulness and harmlessness. AdvisorQA marks a significant step toward QA systems that provide personalized, empathetic advice, showcasing LLMs’ improved understanding of human subjectivity.
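The abstract mentions training a helpfulness metric from majority-vote (upvote) rankings. Below is a minimal, hedged sketch of one common way such a metric can be learned: a scalar scorer trained with a pairwise Bradley-Terry ranking loss, so that higher-upvoted advice receives a higher score. The model architecture, tokenization, and data here are placeholder assumptions for illustration, not the paper’s actual implementation.

```python
# Hedged sketch: learning a scalar "helpfulness" score from ranked advice pairs.
# The toy encoder and random token ids are placeholders, not the paper's setup.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HelpfulnessScorer(nn.Module):
    """Toy encoder mapping a token-id sequence to a scalar helpfulness score."""

    def __init__(self, vocab_size: int = 30522, dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, 1)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # Mean-pool token embeddings, then project to a single score.
        pooled = self.embed(token_ids).mean(dim=1)
        return self.head(pooled).squeeze(-1)


def pairwise_ranking_loss(score_preferred: torch.Tensor,
                          score_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry style objective: the advice with more upvotes
    # should receive the higher score.
    return -F.logsigmoid(score_preferred - score_rejected).mean()


# Dummy batch: each row pairs a higher-upvoted piece of advice with a
# lower-upvoted one for the same question (token ids are random placeholders).
preferred = torch.randint(0, 30522, (4, 32))
rejected = torch.randint(0, 30522, (4, 32))

model = HelpfulnessScorer()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

loss = pairwise_ranking_loss(model(preferred), model(rejected))
loss.backward()
optimizer.step()
print(f"pairwise ranking loss: {loss.item():.4f}")
```

In practice, the toy encoder would be replaced by a pretrained transformer, and the pairs would be derived from the benchmark’s majority-vote rankings; the pairwise objective shown here is a standard choice for reward-style metrics, assumed rather than confirmed by the paper.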
