Learning To Plan Long-term For Language Modeling

Florian Mai, Nathan Cornille, Marie-Francine Moens. arXiv 2024

Tags: Attention Mechanism, Language Modeling, Model Architecture

Modern language models predict the next token in a sequence by conditioning on the preceding text through a powerful function such as attention. However, they have no explicit mechanism for spending computation time on planning long-distance future text, which leads to suboptimal next-token prediction. In this paper, we propose a planner that predicts a latent plan covering many sentences into the future. By sampling multiple plans at once, we condition the language model on an accurate approximation of the distribution of text continuations, which improves next-token prediction accuracy. In effect, this allows trading computation time for prediction accuracy.
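
The core mechanism described in the abstract, marginalizing the next-token distribution over several sampled latent plans, can be sketched as follows. This is a minimal illustration under assumed interfaces, not the authors' implementation: the `planner` and `lm` modules, their signatures, and the `num_plans` parameter are all hypothetical.

```python
import torch


def next_token_distribution(lm, planner, context_ids, num_plans=8):
    """Approximate p(x_t | x_<t) by averaging over sampled latent plans.

    Hypothetical interfaces (sketch only, not the paper's code):
      planner(context_ids) -> a torch.distributions object over latent plans
      lm(context_ids, plan) -> next-token logits conditioned on one plan
    """
    plan_dist = planner(context_ids)          # distribution over latent plans
    per_plan_probs = []
    for _ in range(num_plans):
        plan = plan_dist.sample()             # one latent plan z_k
        logits = lm(context_ids, plan)        # plan-conditioned logits
        per_plan_probs.append(torch.softmax(logits, dim=-1))
    # Monte Carlo estimate: p(x_t | x_<t) ~ (1/K) * sum_k p(x_t | x_<t, z_k)
    return torch.stack(per_plan_probs).mean(dim=0)
```

Averaging the plan-conditioned distributions is a Monte Carlo approximation of the marginal next-token distribution; raising `num_plans` spends more computation for a tighter estimate, which is the compute-for-accuracy trade-off the abstract describes.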

Similar Work