Enhancing Incremental Summarization With Structured Representations

Eunjeong Hwang, Yichao Zhou, James Bradley Wendt, Beliz Gunel, Nguyen Vo, Jing Xie, Sandeep Tata. arXiv 2024

[Paper]
Tags: Applications, Reinforcement Learning

Large language models (LLMs) often struggle with processing extensive input contexts, which can lead to redundant, inaccurate, or incoherent summaries. Recent methods have used unstructured memory to incrementally process these contexts, but they still suffer from information overload due to the sheer volume of unstructured data they must handle. In our study, we introduce structured knowledge representations (\(GU_{json}\)), which significantly improve summarization performance, by 40% and 14% across two public datasets. Most notably, we propose the Chain-of-Key strategy (\(CoK_{json}\)), which dynamically updates or augments these representations with new information, rather than recreating the structured memory for each new source. This method further improves performance by 7% and 4% on the same datasets.
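The abstract does not spell out implementation details, but the described loop is straightforward to sketch: maintain a JSON memory, and on each new source chunk ask the model to update or extend existing keys rather than regenerate the memory from scratch. Below is a minimal illustration of that Chain-of-Key-style update step. The `call_llm` wrapper, the prompt wording, and the flat-dict schema are all assumptions for illustration, not the authors' actual prompts or code.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; in practice this would wrap any
    chat-completion API that can return a JSON-formatted string."""
    raise NotImplementedError("plug in your LLM client here")

def update_structured_memory(memory: dict, new_chunk: str) -> dict:
    """One incremental update step in the spirit of Chain-of-Key:
    the model updates or augments the existing JSON memory with
    information from the new chunk, instead of rebuilding it."""
    prompt = (
        "You maintain a JSON knowledge representation of a document.\n"
        f"Current memory:\n{json.dumps(memory, indent=2)}\n\n"
        f"New source text:\n{new_chunk}\n\n"
        "Update existing keys or add new ones to capture any new "
        "information. Return only the updated JSON object."
    )
    return json.loads(call_llm(prompt))

def incremental_summarize(chunks: list[str]) -> str:
    """Process a long context chunk by chunk, then summarize from the
    compact structured memory rather than the raw text."""
    memory: dict = {}
    for chunk in chunks:
        memory = update_structured_memory(memory, chunk)
    return call_llm(
        "Write a coherent summary from this JSON memory:\n"
        + json.dumps(memory, indent=2)
    )
```

The point of the structured memory is that each update touches named keys, so the model sees a compact, organized state instead of an ever-growing blob of unstructured notes, which is what the reported gains over unstructured incremental baselines are attributed to.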
