[Paper]
Large Language Models (LLMs) currently struggle with tool invocation and chaining, as they often hallucinate or miss essential steps in a sequence. We propose RE-GAINS and EnChAnT, two novel frameworks that enable LLMs to tackle complex user queries by making API calls to external tools based on tool descriptions and argument lists. Tools are chained based on their expected outputs, without the LLM receiving the actual result of each individual call. EnChAnT, an open-source solution, leverages an LLM format enforcer, OpenChat 3.5 (an LLM), and ToolBench’s API Retriever. RE-GAINS utilizes OpenAI models and embeddings with a specialized prompt based on the \(\underline{R}\)easoning vi\(\underline{a}\) \(\underline{P}\)lanning \((RAP)\) framework. Both frameworks are low-cost ($0.01 per query). Our key contribution is enabling LLMs to perform tool invocation and chaining using modifiable, externally described tools.
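To make the chaining idea concrete, the following is a minimal sketch (not the paper's actual interface) of how an LLM-emitted plan can wire one tool call into the next using expected-output placeholders, so the model never sees real tool results while planning. All names here (the `plan` schema, the `$step1.temp_c` placeholder syntax, `execute_plan`) are illustrative assumptions.

```python
# Hypothetical sketch of tool chaining via expected-output placeholders.
# The plan format and placeholder syntax are assumptions for illustration,
# not the interface described in the paper.

from typing import Any, Callable, Dict, List


def get_weather(city: str) -> Dict[str, Any]:
    """Dummy external tool: returns a fake forecast for `city`."""
    return {"city": city, "temp_c": 21}


def book_cab(destination: str, note: str) -> str:
    """Dummy external tool: pretends to book a cab."""
    return f"Cab booked to {destination} ({note})"


TOOLS: Dict[str, Callable[..., Any]] = {
    "get_weather": get_weather,
    "book_cab": book_cab,
}

# A plan of the kind an LLM could emit in a single pass: later calls refer
# to earlier calls only through placeholders such as "$step1.temp_c", so
# the model plans the whole chain without ever receiving actual results.
plan: List[Dict[str, Any]] = [
    {"id": "step1", "tool": "get_weather", "args": {"city": "Delhi"}},
    {"id": "step2", "tool": "book_cab",
     "args": {"destination": "Delhi airport",
              "note": "expected temperature: $step1.temp_c"}},
]


def resolve(value: Any, results: Dict[str, Any]) -> Any:
    """Substitute "$stepN.field" placeholders with actual tool outputs."""
    if isinstance(value, str) and "$" in value:
        for step_id, result in results.items():
            if isinstance(result, dict):
                for key, field in result.items():
                    value = value.replace(f"${step_id}.{key}", str(field))
    return value


def execute_plan(plan: List[Dict[str, Any]]) -> Dict[str, Any]:
    """Run the plan in order, filling placeholders with real results."""
    results: Dict[str, Any] = {}
    for step in plan:
        args = {k: resolve(v, results) for k, v in step["args"].items()}
        results[step["id"]] = TOOLS[step["tool"]](**args)
    return results


print(execute_plan(plan))
```

The executor, not the LLM, resolves the placeholders at run time, which is what allows the chain to be planned up front from tool descriptions alone.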