Efficient LLM Context Distillation

Upadhayayaya Rajesh, Smith Zachary, Kottmyer Chritopher, Osti Manish Raj. Arxiv 2024

[Paper]
Distillation Efficiency And Optimization

This paper investigates context distillation, a method that extends the utility of task-specific examples by internalizing them into the model itself, thereby augmenting the set of examples effectively accessible at inference time without spending prompt tokens on them.
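As a rough illustration of the general context distillation recipe (a sketch of the common technique, not the paper's specific training setup), the snippet below fine-tunes a context-free student to match the next-token distribution of a teacher that sees the in-context examples. The model id, prompts, and hyperparameters are placeholders.

```python
# Sketch of context distillation with a HuggingFace causal LM.
# Model name, prompts, and learning rate are illustrative assumptions.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper does not prescribe a model
tokenizer = AutoTokenizer.from_pretrained(model_name)
teacher = AutoModelForCausalLM.from_pretrained(model_name).eval()
student = AutoModelForCausalLM.from_pretrained(model_name)

# Task-specific examples the student should internalize (hypothetical).
few_shot_context = (
    "Review: great film -> positive\n"
    "Review: dull plot -> negative\n"
)
query = "Review: loved every minute -> "

optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

# The teacher sees context + query; the student sees only the query.
teacher_ids = tokenizer(few_shot_context + query, return_tensors="pt").input_ids
student_ids = tokenizer(query, return_tensors="pt").input_ids

with torch.no_grad():
    # Teacher's distribution over the next token after the full prompt.
    teacher_logits = teacher(teacher_ids).logits[:, -1, :]

student_logits = student(student_ids).logits[:, -1, :]

# KL divergence pulls the context-free student toward the in-context
# teacher, "internalizing" the examples into the student's weights.
loss = F.kl_div(
    F.log_softmax(student_logits, dim=-1),
    F.softmax(teacher_logits, dim=-1),
    reduction="batchmean",
)
loss.backward()
optimizer.step()
```

In practice this distillation step would be repeated over many queries drawn from the task distribution, so the student learns the behavior the examples induce rather than memorizing a single prompt.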

Similar Work