
Memory-augmented Generative Adversarial Transformers

Stephan Raaijmakers, Roos Bakker, Anita Cremers, Roy De Kleijn, Tom Kouwenhoven, Tessa Verhoef. arXiv 2024

[Paper]    
Applications, Attention Mechanism, Model Architecture, Pretraining Methods, Reinforcement Learning, Security, Transformer

Conversational AI systems that rely on Large Language Models, like Transformers, have difficulty interweaving external data (like facts) with the language they generate. Vanilla Transformer architectures are not designed to answer factual questions with high accuracy. This paper investigates a possible route for addressing this problem. We propose to extend the standard Transformer architecture with an additional memory bank holding extra information (such as facts drawn from a knowledge base), and an extra attention layer for addressing this memory. We add this augmented memory to a Generative Adversarial Network-inspired Transformer architecture. This setup allows for implementing arbitrary felicity conditions on the language generated by the Transformer. We first demonstrate how this machinery can be deployed for handling factual questions in goal-oriented dialogues. Second, we show that our approach can also be useful for applications like style adaptation: adapting utterances to certain stylistic (external) constraints, such as social properties of human interlocutors in dialogues.
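To make the described architecture concrete, below is a minimal PyTorch sketch of a Transformer block extended with an extra attention layer that addresses an external memory bank of encoded facts. It is an illustration of the general idea only, not the authors' implementation: the module name `MemoryAugmentedBlock`, the argument `memory_bank`, and all dimensions are assumptions, and the GAN-style component enforcing felicity conditions is omitted.

```python
# Sketch (assumed design, not the paper's code): a Transformer block with an
# additional attention layer over an external memory bank of encoded facts.
import torch
import torch.nn as nn

class MemoryAugmentedBlock(nn.Module):
    def __init__(self, d_model: int = 512, n_heads: int = 8, d_ff: int = 2048):
        super().__init__()
        # Standard self-attention over the generated sequence.
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Extra attention layer addressing the external memory bank.
        self.memory_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor, memory_bank: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) token representations
        # memory_bank: (batch, n_facts, d_model) encoded external facts
        h, _ = self.self_attn(x, x, x)
        x = self.norm1(x + h)
        # Queries come from the sequence; keys/values come from the memory bank,
        # so generation can condition on the external information.
        m, _ = self.memory_attn(x, memory_bank, memory_bank)
        x = self.norm2(x + m)
        return self.norm3(x + self.ff(x))

# Usage with random tensors standing in for token and fact embeddings.
block = MemoryAugmentedBlock()
tokens = torch.randn(2, 16, 512)   # batch of 2, 16 tokens
facts = torch.randn(2, 10, 512)    # batch of 2, 10 memory entries
out = block(tokens, facts)         # -> shape (2, 16, 512)
```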

Similar Work