Beyond LLMs: Advancing The Landscape Of Complex Reasoning

Chu-Carroll Jennifer, Beck Andrew, Burnham Greg, Melville David Os, Nachman David, Özcan A. Erdem, Ferrucci David. arXiv 2024

[Paper]    
Efficiency And Optimization, RAG, Reinforcement Learning, Tools

Since the advent of Large Language Models a few years ago, they have often been considered the de facto solution for many AI problems. However, in addition to the many deficiencies of LLMs that prevent their broad industry adoption, such as reliability, cost, and speed, there is a whole class of common real-world problems that Large Language Models perform poorly on, namely, constraint satisfaction and optimization problems. These problems are ubiquitous, and current solutions are highly specialized and expensive to implement. At Elemental Cognition, we developed our EC AI platform, which takes a neuro-symbolic approach to solving constraint satisfaction and optimization problems. The platform employs, at its core, a precise and high-performance logical reasoning engine, and leverages LLMs for knowledge acquisition and user interaction. This platform supports developers in specifying application logic in natural and concise language while generating application user interfaces to interact with users effectively. We evaluated LLMs against systems built on the EC AI platform in three domains and found the EC AI systems to significantly outperform LLMs on constructing valid and optimal solutions, on validating proposed solutions, and on repairing invalid solutions.
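
The abstract gives no implementation details, but the division of labor it describes, an LLM translating natural-language requirements into formal constraints and a symbolic engine performing the actual search, can be illustrated with a small self-contained sketch. The scheduling problem, the hard-coded "LLM-extracted" constraints, and the backtracking solver below are illustrative assumptions only, not the EC AI platform's actual design or API.

```python
"""Illustrative neuro-symbolic split (hypothetical, not the EC AI platform's API):
an LLM would turn natural-language requirements into formal constraints; a
symbolic solver then searches for a valid assignment. Here the 'LLM output'
is hard-coded and the solver is a plain backtracking CSP search."""

from typing import Dict, Optional

Assignment = Dict[str, str]

# Hypothetical constraints an LLM might extract from requirements such as
# "the kickoff and the review cannot share a time slot" and
# "the retro cannot be the first meeting of the day".
variables = ["kickoff", "review", "retro"]
domains = {v: ["9am", "10am", "11am"] for v in variables}


def consistent(assignment: Assignment) -> bool:
    """Check every constraint that can be evaluated on the partial assignment."""
    if "kickoff" in assignment and "review" in assignment:
        if assignment["kickoff"] == assignment["review"]:
            return False
    if assignment.get("retro") == "9am":
        return False
    return True


def backtrack(assignment: Assignment) -> Optional[Assignment]:
    """Depth-first search over variable assignments, pruning inconsistent branches."""
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):
            result = backtrack(assignment)
            if result is not None:
                return result
        del assignment[var]
    return None


if __name__ == "__main__":
    # e.g. {'kickoff': '9am', 'review': '10am', 'retro': '10am'}
    print(backtrack({}))
```

The design point the sketch is meant to convey is the separation of concerns: the symbolic search is exact and deterministic, so validity and optimality checks are reliable, while the LLM is confined to the fuzzier tasks of knowledge acquisition and user interaction.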

Similar Work