Language Models Are Few-shot Butlers

Micheli Vincent, Fleuret François. arXiv 2021

[Paper]    
Agentic, Few Shot, GPT, Pretraining Methods, Reinforcement Learning

Pretrained language models demonstrate strong performance in most NLP tasks when fine-tuned on small task-specific datasets. Hence, these autoregressive models constitute ideal agents to operate in text-based environments where language understanding and generative capabilities are essential. Nonetheless, collecting expert demonstrations in such environments is a time-consuming endeavour. We introduce a two-stage procedure to learn from a small set of demonstrations and further improve by interacting with an environment. We show that language models fine-tuned with only 1.2% of the expert demonstrations and a simple reinforcement learning algorithm achieve a 51% absolute improvement in success rate over existing methods in the ALFWorld environment.
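The abstract describes a two-stage procedure: supervised fine-tuning on a small set of expert demonstrations, followed by further improvement through interaction with the environment using a simple reinforcement learning algorithm. The sketch below illustrates that general pattern with a Hugging Face causal LM and a REINFORCE-style update; the model choice (`gpt2`), hyperparameters, and the `env` interface (`reset()`/`step()` returning text observations and rewards) are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a two-stage "fine-tune on demonstrations, then RL" loop.
# Assumptions (not from the paper): gpt2 as the policy, an ALFWorld-style
# text environment exposing reset()/step(action) -> (obs, reward, done).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)


def behavioral_cloning_step(contexts, expert_actions):
    """Stage 1: supervised fine-tuning on expert demonstrations.

    Each example is the environment context concatenated with the expert
    action; the language-modeling loss teaches the model to emit the action.
    """
    texts = [c + " " + a + tokenizer.eos_token
             for c, a in zip(contexts, expert_actions)]
    batch = tokenizer(texts, return_tensors="pt", padding=True)
    labels = batch["input_ids"].masked_fill(batch["attention_mask"] == 0, -100)
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()


def reinforce_episode(env, max_steps=50, gamma=0.99):
    """Stage 2: one REINFORCE-style policy-gradient update from one episode."""
    obs = env.reset()
    log_probs, rewards = [], []
    for _ in range(max_steps):
        inputs = tokenizer(obs, return_tensors="pt")
        with torch.no_grad():  # sample an action (a short text command)
            generated = model.generate(
                **inputs, max_new_tokens=10, do_sample=True,
                pad_token_id=tokenizer.eos_token_id)
        ctx_len = inputs["input_ids"].shape[1]
        action_ids = generated[0, ctx_len:]
        # Recompute the log-probability of the sampled action with gradients.
        full = generated[0].unsqueeze(0)
        logits = model(full).logits[0, :-1].log_softmax(-1)
        step_log_prob = logits[
            torch.arange(ctx_len - 1, full.shape[1] - 1), action_ids].sum()
        action = tokenizer.decode(action_ids, skip_special_tokens=True)
        obs, reward, done = env.step(action)
        log_probs.append(step_log_prob)
        rewards.append(reward)
        if done:
            break
    # Discounted returns, then the REINFORCE loss.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    loss = -(torch.stack(log_probs) * torch.tensor(returns)).sum()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return sum(rewards)
```

In this reading, stage 1 gives the model a usable action prior from very few demonstrations, and stage 2 only has to refine that prior from environment reward rather than learn language-conditioned control from scratch.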

Similar Work