What's In An Embedding? Would A Rose By Any Embedding Smell As Sweet?

Venkat Venkatasubramanian. arXiv 2024


Large Language Models (LLMs) are often criticized as mere autocomplete systems that lack true “understanding” and the ability to “reason” with their knowledge. We believe this assessment misses a nuanced insight. We suggest that LLMs do develop a kind of empirical, “geometry”-like “understanding”, which appears adequate for a range of applications in NLP, computer vision, coding assistance, and so on. However, because this “geometric” understanding is built from incomplete and noisy data, it makes LLMs unreliable and difficult to generalize, and leaves them lacking in inference and explanation capabilities, reminiscent of the challenges faced by heuristics-based expert systems decades ago. To overcome these limitations, we suggest that LLMs be integrated with an “algebraic” representation of knowledge that includes the symbolic AI elements used in expert systems. This integration aims to create Large Knowledge Models (LKMs) that not only possess “deep” knowledge grounded in first principles, but also have the ability to reason and explain, mimicking human expert capabilities. To harness the full potential of generative AI safely and effectively, a paradigm shift is needed from LLMs to the more comprehensive LKMs.
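The “geometry”-like understanding the abstract refers to can be made concrete with a toy sketch: in an embedding space, semantic relatedness shows up as geometric proximity, typically measured by cosine similarity. The 3-dimensional vectors below are invented stand-ins for real learned embeddings (which would have hundreds or thousands of dimensions), chosen only to illustrate the geometric relation.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Invented toy "embeddings" -- NOT from any real model.
rose   = [0.90, 0.80, 0.10]
flower = [0.85, 0.90, 0.05]
car    = [0.10, 0.05, 0.95]

# Semantically related words point in similar directions,
# so "rose" lies geometrically closer to "flower" than to "car".
print(cosine_similarity(rose, flower))  # close to 1.0
print(cosine_similarity(rose, car))     # close to 0.0
```

This purely geometric notion of relatedness is what makes embeddings useful for retrieval and analogy tasks, but it carries no symbolic structure: nothing in the vectors says *why* a rose is a flower, which is the gap the proposed “algebraic” LKM representation is meant to fill.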

Similar Work