
Lamsum: Creating Extractive Summaries Of User Generated Content Using Llms

Chhikara Garima, Sharma Anurag, Gurucharan V., Ghosh Kripabandhu, Chakraborty Abhijnan. arXiv 2024

[Paper]
Tags: Applications, GPT, Model Architecture, RAG, Tools

Large Language Models (LLMs) have demonstrated impressive performance across a wide range of NLP tasks, including summarization. LLMs inherently produce abstractive summaries by paraphrasing the original text, while the generation of extractive summaries (selecting specific subsets of the original text) remains largely unexplored. Moreover, LLMs have a limited context window, restricting the amount of text that can be processed at once. We tackle this challenge by introducing LaMSUM, a novel multi-level framework designed to generate extractive summaries from large collections of user-generated text using LLMs. LaMSUM combines summarization with different voting methods to produce robust summaries. Extensive evaluation with four popular LLMs (Llama 3, Mixtral, Gemini, GPT-4o) demonstrates that LaMSUM outperforms state-of-the-art extractive summarization methods. Overall, this work represents one of the first attempts at extractive summarization leveraging the power of LLMs, and is likely to spark further interest within the research community.
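The abstract's multi-level idea, chunking a large input so it fits in a limited context window, selecting sentences per chunk, and aggregating repeated selections by voting, can be sketched roughly as follows. This is a minimal illustration, not the paper's actual algorithm: the `select` callable stands in for an LLM prompt that returns `k` extracted sentences, and the chunk size, budget, and majority-vote aggregation are assumptions made for the sketch.

```python
from collections import Counter
from typing import Callable, List

def chunk(items: List[str], size: int) -> List[List[str]]:
    """Split a sentence list into consecutive chunks of at most `size`."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def lamsum_sketch(
    sentences: List[str],
    select: Callable[[List[str], int], List[str]],  # stand-in for an LLM call
    chunk_size: int = 3,   # sentences per "context window" (assumed)
    budget: int = 2,       # sentences kept per chunk / final summary size
    runs: int = 3,         # repeated LLM calls aggregated by voting
) -> List[str]:
    """Multi-level extractive summarization with majority voting (sketch).

    Each level chunks the surviving sentences, queries `select` several
    times per chunk, and keeps the most frequently chosen sentences,
    until at most `budget` sentences remain.
    """
    level = sentences
    while len(level) > budget:
        next_level: List[str] = []
        for c in chunk(level, chunk_size):
            k = min(budget, len(c))
            votes: Counter = Counter()
            for _ in range(runs):          # vote across repeated selections
                for s in select(c, k):
                    votes[s] += 1
            next_level.extend(s for s, _ in votes.most_common(k))
        level = next_level
    return level
```

Because the output is always a subset of the input sentences, the summary is extractive by construction; the voting step smooths over the run-to-run variability of LLM outputs.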

Similar Work