Verbalized Probabilistic Graphical Modeling With Large Language Models

Huang Hengguan, Shen Xing, Wang Songtao, Liu Dianbo, Wang Hao. arXiv 2024

[Paper]
Applications Language Modeling Prompting Reinforcement Learning Training Techniques

Faced with complex problems, the human brain demonstrates a remarkable capacity to transcend sensory input and form latent understandings of perceived world patterns. However, this cognitive capacity is not explicitly considered or encoded in current large language models (LLMs). As a result, LLMs often struggle to capture latent structures and model uncertainty in complex compositional reasoning tasks. This work introduces a novel Bayesian prompting approach that facilitates training-free Bayesian inference with LLMs by using a verbalized Probabilistic Graphical Model (PGM). While traditional Bayesian approaches typically depend on extensive data and predetermined mathematical structures for learning latent factors and dependencies, our approach efficiently reasons about latent variables and their probabilistic dependencies by prompting LLMs to adhere to Bayesian principles. We evaluated our model on several compositional reasoning tasks, both closed-ended and open-ended. Our results indicate that the model effectively enhances confidence elicitation and text generation quality, demonstrating its potential to improve AI language understanding systems, especially in modeling uncertainty.
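The abstract does not spell out the paper's prompts or inference procedure, but the core idea of training-free Bayesian inference via a verbalized PGM can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the `query_llm` helper, the prompt wording, the uniform prior over latent variables, and the naive marginalization are hypothetical stand-ins, not the authors' actual method.

```python
# Hypothetical stand-in for any chat-completion call (e.g., an LLM provider SDK);
# this is NOT an API from the paper.
def query_llm(prompt: str) -> str:
    raise NotImplementedError("Plug in your LLM provider here.")

def propose_latent_variables(question: str) -> list[str]:
    """Step 1 (assumed): ask the LLM to verbalize latent factors behind the question."""
    prompt = (
        "List the hidden factors (latent variables) that determine the answer to:\n"
        f"{question}\n"
        "Return one short factor per line."
    )
    return [line.strip() for line in query_llm(prompt).splitlines() if line.strip()]

def elicit_conditional_probability(question: str, latent: str, answer: str) -> float:
    """Step 2 (assumed): elicit a verbalized probability P(answer | latent factor)."""
    prompt = (
        f"Assuming that '{latent}' holds, what is the probability (0 to 1) that "
        f"'{answer}' is the correct answer to: {question}? Reply with a number only."
    )
    return float(query_llm(prompt))

def posterior_over_answers(question: str, candidates: list[str]) -> dict[str, float]:
    """Combine verbalized conditionals by naive Bayesian marginalization:
    P(answer) ~= sum_z P(answer | z) P(z), with a uniform prior P(z) over latents."""
    latents = propose_latent_variables(question)
    prior = 1.0 / max(len(latents), 1)
    scores = {
        ans: sum(elicit_conditional_probability(question, z, ans) * prior
                 for z in latents)
        for ans in candidates
    }
    total = sum(scores.values()) or 1.0
    return {ans: s / total for ans, s in scores.items()}  # normalize to a distribution
```

The two-stage structure, first verbalizing latent variables and then eliciting conditional probabilities before marginalizing, is what makes the inference training-free: the PGM structure and its conditionals are produced by prompting alone, with no parameter updates.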

Similar Work