Embedding-aligned Language Models

Guy Tennenholtz, Yinlam Chow, Chih-Wei Hsu, Lior Shani, Ethan Liang, Craig Boutilier. arXiv 2024

[Paper]    
Agentic, Applications, Efficiency And Optimization, Language Modeling, RAG, Reinforcement Learning, Training Techniques

We propose a novel approach for training large language models (LLMs) to adhere to objectives defined within a latent embedding space. Our method leverages reinforcement learning (RL), treating a pre-trained LLM as an environment. Our Embedding-Aligned Guided LanguagE (EAGLE) agent is trained to iteratively steer the LLM's generation toward optimal regions of the latent embedding space with respect to a predefined criterion. We demonstrate the effectiveness of the EAGLE agent on the MovieLens 25M dataset, using it to surface content gaps that satisfy latent user demand. We also demonstrate the benefit of an optimal design of a state-dependent action set for improving EAGLE's efficiency. Our work paves the way for controlled and grounded text generation with LLMs, ensuring consistency with domain-specific knowledge and data representations.
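To make the iterative-steering loop concrete, below is a minimal, runnable toy sketch. It is not the authors' implementation: `toy_embed`, `toy_llm_step`, `utility`, and `state_dependent_actions` are all stand-in assumptions, and a greedy one-step lookahead replaces the trained RL policy.

```python
# Toy sketch of an EAGLE-style loop: a frozen "LLM" is treated as an
# environment, and an agent steers its generations toward a high-utility
# region of a latent embedding space. Every component here is a stand-in.

import hashlib
import numpy as np

DIM = 8
# Assumed "optimal region" of the embedding space: a fixed unit vector.
TARGET = np.ones(DIM) / np.sqrt(DIM)

def toy_embed(text: str) -> np.ndarray:
    """Stand-in encoder: hash the text into a deterministic unit vector."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:8], "big")
    v = np.random.default_rng(seed).standard_normal(DIM)
    return v / np.linalg.norm(v)

def toy_llm_step(text: str, action: str) -> str:
    """Stand-in for querying the frozen LLM environment with a steering
    action (e.g., an edit instruction applied to the current text)."""
    return f"{text} [{action}]"

def utility(z: np.ndarray) -> float:
    """Assumed predefined criterion: similarity to the target region."""
    return float(z @ TARGET)

def state_dependent_actions(z: np.ndarray, k: int = 4) -> list[str]:
    """Stand-in for a state-dependent action set; in the paper this set
    would be chosen via optimal design, here it is k generic edits."""
    return [f"edit-{i}" for i in range(k)]

def eagle_episode(prompt: str, horizon: int = 5) -> str:
    """Greedy stand-in for the trained RL agent: at each step, pick the
    action whose resulting generation scores highest under the utility."""
    text = prompt
    for _ in range(horizon):
        z = toy_embed(text)
        best = max(
            state_dependent_actions(z),
            key=lambda a: utility(toy_embed(toy_llm_step(text, a))),
        )
        text = toy_llm_step(text, best)
    return text

print(eagle_episode("A movie about space exploration"))
```

In the actual method, the greedy lookahead above would be replaced by an RL policy trained against the LLM environment, which is what makes a state-dependent action set (and its optimal design) matter for sample efficiency.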

Similar Work