Joint Prompt Optimization of Stacked LLMs Using Variational Inference

Alessandro Sordoni, Xingdi Yuan, Marc-Alexandre Côté, Matheus Pereira, Adam Trischler, Ziang Xiao, Arian Hosseini, Friederike Niedtner, Nicolas Le Roux. arXiv 2023

Tags: Applications, Efficiency and Optimization, GPT, Model Architecture, Prompting

Large language models (LLMs) can be seen as atomic units of computation mapping sequences to a distribution over sequences. They can thus be viewed as stochastic language layers in a language network, where the learnable parameters are the natural-language prompts at each layer. By stacking two such layers and feeding the output of one layer to the next, we obtain a Deep Language Network (DLN). We first show how to effectively perform prompt optimization for a 1-layer language network (DLN-1). We then present an extension to 2-layer DLNs (DLN-2), where two prompts must be learned. The key idea is to treat the output of the first layer as a latent variable that requires inference, and the prompts to be learned as the parameters of the generative distribution. We first test the effectiveness of DLN-1 on multiple reasoning and natural language understanding tasks. We then show that DLN-2 can reach higher performance than a single layer, showing promise that we might reach performance comparable to GPT-4 even when each LLM in the network is smaller and less powerful.
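
The abstract's "latent variable" idea maps onto a standard evidence lower bound; the sketch below is an illustration under assumed notation (pi_1 and pi_2 for the two layer prompts, h for the latent first-layer output, q for the variational posterior) rather than a formula quoted from the paper.

```latex
% A sketch of the variational bound implied by a 2-layer DLN.
% Notation assumed here, not taken from the paper: \pi_1, \pi_2 are the
% two layer prompts, h is the latent output of the first layer, and
% q(h | x, y) is a variational posterior over h.
\begin{align*}
  \log p(y \mid x; \pi_1, \pi_2)
    &= \log \sum_{h} p(y \mid h; \pi_2)\, p(h \mid x; \pi_1) \\
    &\ge \mathbb{E}_{q(h \mid x, y)}\!\left[
         \log p(y \mid h; \pi_2) + \log p(h \mid x; \pi_1)
         - \log q(h \mid x, y) \right].
\end{align*}
```

Both conditional distributions are realized by LLM calls whose behavior is controlled by the natural-language prompts pi_1 and pi_2, so maximizing such a bound jointly over the two prompts corresponds to the joint prompt optimization referred to in the title.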
