
GALLa: Graph Aligned Large Language Models for Improved Source Code Understanding

Zhang Ziyin, Yu Hang, Li Shijie, Di Peng, Li Jianguo, Wang Rui. arXiv 2024

[Paper]
Model Architecture Multimodal Models Pretraining Methods Reinforcement Learning Tools Training Techniques Transformer

Programming languages carry rich semantic information, such as data flow, that is naturally represented as graphs and cannot be recovered from the surface form of source code alone. Recent code language models have scaled to billions of parameters, but they model source code purely as text tokens and ignore this structural information. Conversely, models that do encode the structure of code modify the Transformer architecture, limiting their scale and their compatibility with pretrained LLMs. In this work, we take the best of both worlds with GALLa - Graph Aligned Large Language Model. GALLa uses graph neural networks and cross-modal alignment techniques to inject the structural information of code into LLMs as an auxiliary task during finetuning. The framework is both model-agnostic and task-agnostic: it can be applied to any code LLM for any downstream code task, requires the structural graph data only at training time, from a corpus unrelated to the finetuning data, and incurs no cost at inference time over the baseline LLM. Experiments on five code tasks with four baseline LLMs ranging in size from 350M to 8B parameters validate the effectiveness of GALLa, demonstrating consistent improvements over the baselines, even for powerful models such as LLaMA3.
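The pipeline the abstract describes can be sketched roughly as follows. This is a minimal illustrative NumPy sketch, not the paper's implementation: the mean-aggregation GNN, all dimensions, and variable names are assumptions, and the actual alignment objective and adapter design in GALLa may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn_layer(node_feats, adj, weight):
    """One message-passing step: mean-aggregate neighbor features, combine, project."""
    deg = adj.sum(axis=1, keepdims=True) + 1e-9      # avoid division by zero
    msgs = adj @ node_feats / deg                    # mean over incoming edges
    return np.tanh((node_feats + msgs) @ weight)     # combine self + messages

# Toy data-flow graph: 4 nodes with 16-dim features; the LLM uses 64-dim embeddings.
n_nodes, d_graph, d_llm = 4, 16, 64
adj = np.array([[0, 1, 0, 0],
                [0, 0, 1, 1],
                [0, 0, 0, 1],
                [0, 0, 0, 0]], dtype=float)
node_feats = rng.standard_normal((n_nodes, d_graph))

# Two GNN layers encode the code graph into node embeddings.
W1 = rng.standard_normal((d_graph, d_graph)) * 0.1
W2 = rng.standard_normal((d_graph, d_graph)) * 0.1
h = gnn_layer(gnn_layer(node_feats, adj, W1), adj, W2)

# A learned adapter projects node embeddings into the LLM's token-embedding
# space, so graph nodes can be fed to the frozen-architecture LLM as "soft
# tokens" alongside ordinary text embeddings -- no Transformer changes needed.
W_adapter = rng.standard_normal((d_graph, d_llm)) * 0.1
graph_tokens = h @ W_adapter                         # shape (n_nodes, d_llm)

# During finetuning, the LLM input is [graph tokens; text token embeddings],
# and an auxiliary graph-alignment loss is added to the language-modeling
# loss. At inference time the graph branch is dropped entirely.
text_emb = rng.standard_normal((10, d_llm))          # 10 text tokens
llm_input = np.concatenate([graph_tokens, text_emb], axis=0)
print(llm_input.shape)                               # (14, 64)
```

Because the graph encoder and adapter are only used for the auxiliary training objective, the deployed model is the plain finetuned LLM, which is why the method adds no inference-time cost.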

Similar Work