BatGPT: A Bidirectional Autoregressive Talker From Generative Pre-trained Transformer

Li Zuchao, Zhang Shitou, Zhao Hai, Yang Yifei, Yang Dongjie. arXiv 2023

[Paper]    
Agentic Applications GPT Language Modeling Model Architecture Pretraining Methods Prompting RAG Reinforcement Learning Training Techniques Transformer

BatGPT is a large-scale language model designed and trained jointly by Wuhan University and Shanghai Jiao Tong University. It is capable of generating highly natural and fluent text in response to various types of input, including text prompts, images, and audio. At the modeling level, we employ a bidirectional autoregressive architecture that allows the model to efficiently capture the complex dependencies of natural language, making it highly effective in tasks such as language generation, dialog systems, and question answering. Moreover, the bidirectional autoregressive modeling operates not only from left to right but also from right to left, effectively reducing fixed memory effects and alleviating model hallucinations. On the training side, we propose a novel parameter expansion method that leverages the pre-training of smaller models, and we employ reinforcement learning from both AI and human feedback to improve the model's alignment performance. Together, these approaches significantly improve the effectiveness of BatGPT, and the model can be utilized for a wide range of natural language applications.
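The bidirectional autoregressive objective described above can be illustrated with a minimal sketch. This is not the paper's implementation; it only assumes that next-token prediction is applied to both the original sequence (left to right) and its reversal (right to left), so each direction contributes its own training pairs:

```python
def autoregressive_pairs(tokens):
    """(context-token, next-token) pairs for a single left-to-right pass."""
    return [(tokens[i], tokens[i + 1]) for i in range(len(tokens) - 1)]

def bidirectional_pairs(tokens):
    """Training pairs from both directions: the forward pass over the
    sequence plus a backward pass over its reversal."""
    return autoregressive_pairs(tokens) + autoregressive_pairs(tokens[::-1])

seq = ["the", "bat", "flies", "home"]
pairs = bidirectional_pairs(seq)
# The forward pass predicts "bat" from "the"; the backward pass
# additionally predicts "the" from "bat", covering both directions.
```

A standard left-to-right model would only ever see the forward pairs; doubling the pass over the reversed sequence is what lets the model condition on right-side context as well.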

Similar Work