
Batch Universal Prediction

Marco Bondaschi, Michael Gastpar. arXiv 2024

[Paper]    

Large language models (LLMs) have recently gained much popularity due to their surprising ability to generate human-like English sentences. LLMs are essentially predictors, estimating the probability of a sequence of words given the past. Therefore, it is natural to evaluate their performance from a universal prediction perspective. In order to do that fairly, we introduce the notion of batch regret as a modification of the classical average regret, and we study its asymptotic value for add-constant predictors, in the case of memoryless sources and first-order Markov sources.
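To make the objects in the abstract concrete, the sketch below implements a classical add-constant (add-β) predictor over a finite alphabet and its regret against the best fixed i.i.d. (maximum-likelihood) distribution for a given sequence. This is an illustration of the standard add-constant predictor and classical regret only; the paper's batch regret is a modification of this quantity, and its exact definition is not reproduced here. The function names and the choice β = 1 (Laplace's rule) are ours, not from the paper.

```python
import math
from collections import Counter

def add_constant_probs(history, alphabet, beta):
    """Add-constant (add-beta) estimate of the next-symbol distribution:
    P(a | history) = (N_a + beta) / (n + beta * |alphabet|)."""
    counts = Counter(history)
    denom = len(history) + beta * len(alphabet)
    return {a: (counts[a] + beta) / denom for a in alphabet}

def sequence_log_loss(seq, alphabet, beta):
    """Cumulative log-loss (bits) of the sequential add-constant predictor."""
    loss = 0.0
    for i, x in enumerate(seq):
        p = add_constant_probs(seq[:i], alphabet, beta)
        loss += -math.log2(p[x])
    return loss

def classical_regret(seq, alphabet, beta):
    """Classical (per-sequence) regret: predictor log-loss minus the
    log-loss of the best fixed i.i.d. distribution for this sequence,
    i.e. the empirical (maximum-likelihood) distribution."""
    counts = Counter(seq)
    n = len(seq)
    best_loss = sum(-c * math.log2(c / n) for c in counts.values())
    return sequence_log_loss(seq, alphabet, beta) - best_loss
```

Because an add-constant predictor is a Dirichlet mixture over i.i.d. sources, the probability it assigns to any sequence is at most the maximum-likelihood probability, so this regret is always nonnegative; universal-prediction analyses study how it grows with the sequence length.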

Similar Work