When Large Language Models Meet Evolutionary Algorithms

Chao Wang, Jiaxuan Zhao, Licheng Jiao, Lingling Li, Fang Liu, Shuyuan Yang. arXiv 2024

[Paper]    
Tags: Agentic Applications, Fine Tuning, Language Modeling, Model Architecture, Pretraining Methods, Reinforcement Learning, Tools, Training Techniques, Transformer

Pre-trained large language models (LLMs) have powerful capabilities for generating creative natural text. Evolutionary algorithms (EAs) can discover diverse solutions to complex real-world problems. Motivated by the shared collective nature and directionality of text generation and evolution, this paper illustrates the parallels between LLMs and EAs through several one-to-one correspondences: token representation and individual representation, position encoding and fitness shaping, position embedding and selection, the Transformer block and reproduction, and model training and parameter adaptation. Building on these parallels, we analyze existing interdisciplinary research, with a specific focus on evolutionary fine-tuning and LLM-enhanced EAs. Drawing from these insights, we present promising future directions for integrating LLMs and EAs and highlight the key challenges along the way. These parallels not only reveal the evolutionary mechanism behind LLMs but also facilitate the development of evolved artificial agents that approach or surpass biological organisms.
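To make the "LLM-enhanced EA" side of the mapping concrete, the sketch below shows a minimal evolutionary loop in which an LLM would play the role of the reproduction operator while a problem-specific score plays the role of fitness. The functions `llm_propose` and `fitness` are hypothetical placeholders introduced for illustration, not components described in the paper.

```python
# Minimal sketch of an LLM-enhanced evolutionary loop, assuming a hypothetical
# `llm_propose` call stands in for a prompted LLM and `fitness` for a
# problem-specific objective.
import random


def llm_propose(parents: list[str]) -> str:
    """Stand-in for an LLM acting as the reproduction operator:
    given parent solutions (as text), return a new candidate."""
    # Placeholder: in practice this would be a prompted LLM call.
    return random.choice(parents)


def fitness(candidate: str) -> float:
    """Stand-in for problem-specific evaluation (fitness shaping)."""
    return float(len(candidate))  # toy objective: longer strings score higher


def evolve(population: list[str], generations: int = 10, k: int = 2) -> list[str]:
    for _ in range(generations):
        # Selection: keep the k fittest individuals.
        population.sort(key=fitness, reverse=True)
        parents = population[:k]
        # Reproduction: the LLM generates offspring from the selected parents,
        # analogous to the Transformer block producing new tokens.
        offspring = [llm_propose(parents) for _ in range(len(population) - k)]
        population = parents + offspring
    return population


if __name__ == "__main__":
    print(evolve(["abc", "abcdef", "a"]))
```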
