Algorithmic Language Models With Neurally Compiled Libraries

Lucas Saldyt, Subbarao Kambhampati. arXiv 2024

[Paper]    
Tags: Efficiency And Optimization, Fine Tuning, Model Architecture, Pretraining Methods, Tools, Training Techniques, Transformer

Important tasks such as reasoning and planning are fundamentally algorithmic, meaning that solving them robustly requires acquiring true reasoning or planning algorithms rather than shortcuts. Large Language Models lack true algorithmic ability primarily because of limitations in neural network optimization algorithms, optimization data, and optimization objectives, but also because of architectural inexpressivity. To solve this, our paper proposes augmenting LLMs with a library of fundamental operations and sophisticated differentiable programs, so that common algorithms do not need to be learned from scratch. We add memory, registers, basic operations, and adaptive recurrence to a transformer architecture built on LLaMA3. Then, we define a method for directly compiling algorithms into a differentiable starting library, which is used natively and propagates gradients for optimization. In this preliminary study, we explore the feasibility of augmenting LLaMA3 with a differentiable computer, for instance by fine-tuning small transformers on simple algorithmic tasks with variable computational depth.
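To make the idea concrete, here is a minimal sketch (not the authors' code) of what such a differentiable computer module might look like in PyTorch: soft-addressed registers, a small fixed library of primitive operations mixed by softmax, and ACT-style halting for adaptive recurrence. All names (`DifferentiableALU`, `n_registers`, `max_steps`) and the choice of primitives are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only: soft registers, a softmax mixture over a tiny
# operation library, and adaptive recurrence via accumulated halting mass.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DifferentiableALU(nn.Module):  # hypothetical module name
    def __init__(self, d_model: int, n_registers: int = 4, max_steps: int = 8):
        super().__init__()
        self.n_registers = n_registers
        self.max_steps = max_steps
        # The controller reads the hidden state plus all registers
        # and emits addressing, operation, and halting signals.
        ctrl_in = d_model + n_registers * d_model
        self.read_addr = nn.Linear(ctrl_in, n_registers)   # which register to read
        self.write_addr = nn.Linear(ctrl_in, n_registers)  # which register to write
        self.op_select = nn.Linear(ctrl_in, 3)             # add / negate / identity
        self.halt = nn.Linear(ctrl_in, 1)                  # halting score (adaptive depth)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, d_model). Registers start as copies of the input state.
        regs = h.unsqueeze(1).repeat(1, self.n_registers, 1)   # (batch, R, d)
        remainder = torch.ones(h.shape[0], 1, device=h.device)
        out = torch.zeros_like(h)
        val = h
        for _ in range(self.max_steps):
            ctrl = torch.cat([h, regs.flatten(1)], dim=-1)
            # Soft read: expectation over registers under the read address.
            r = torch.einsum('br,brd->bd', F.softmax(self.read_addr(ctrl), -1), regs)
            # Soft mixture over a fixed library of primitive operations.
            ops = torch.stack([h + r, -r, r], dim=1)           # (batch, 3, d)
            val = torch.einsum('bo,bod->bd', F.softmax(self.op_select(ctrl), -1), ops)
            # Soft write: blend the new value into the addressed register.
            w = F.softmax(self.write_addr(ctrl), -1).unsqueeze(-1)
            regs = (1 - w) * regs + w * val.unsqueeze(1)
            # ACT-style halting: accumulate output weighted by halting probability.
            p = torch.sigmoid(self.halt(ctrl))
            out = out + remainder * p * val
            remainder = remainder * (1 - p)
        return out + remainder * val  # spend leftover halting mass on the last step
```

Because every step (addressing, operation choice, halting) is a soft mixture, the whole loop is differentiable end to end; such a module could sit alongside a transformer block, with hard programs recovered as the limiting case of one-hot address and operation distributions.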

Similar Work