
LLM-dCache: Improving Tool-Augmented LLMs with GPT-Driven Localized Data Caching

Simranjit Singh, Michael Fore, Andreas Karatzas, Chaehong Lee, Yanan Jian, Longfei Shangguan, Fuxun Yu, Iraklis Anagnostopoulos, Dimitrios Stamoulis. arXiv 2024

[Paper]    
Agentic, GPT, Model Architecture, Prompting, RAG, Reinforcement Learning, Tools

As Large Language Models (LLMs) broaden their capabilities to manage thousands of API calls, they are confronted with complex data operations across vast datasets, which impose significant overhead on the underlying system. In this work, we introduce LLM-dCache to optimize data accesses by treating cache operations as callable API functions exposed to the tool-augmented agent. We grant LLMs the autonomy to manage cache decisions via prompting, integrating seamlessly with existing function-calling mechanisms. Tested on an industry-scale massively parallel platform spanning hundreds of GPT endpoints and terabytes of imagery, our method improves Copilot times by an average of 1.24x across various LLMs and prompting techniques.
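To make the core idea concrete, here is a minimal sketch of cache operations exposed as callable tools in a function-calling agent, so the model itself can decide when to read from or populate the cache. This is not the authors' code: the tool names (`cache_load`, `cache_insert`, `load_tile`), the LRU policy, and the dispatch loop are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): cache reads/writes are
# registered as tools, so caching becomes an LLM prompting decision rather
# than fixed system logic. All names here are hypothetical.
from collections import OrderedDict
from typing import Any, Callable, Dict

class LRUCache:
    """In-memory LRU cache standing in for a localized data cache."""
    def __init__(self, capacity: int = 128) -> None:
        self.capacity = capacity
        self._store: OrderedDict[str, Any] = OrderedDict()

    def get(self, key: str) -> Any | None:
        if key not in self._store:
            return None                     # cache miss
        self._store.move_to_end(key)        # mark as recently used
        return self._store[key]

    def put(self, key: str, value: Any) -> None:
        self._store[key] = value
        self._store.move_to_end(key)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used

CACHE = LRUCache()

def cache_load(key: str) -> Any | None:
    """Tool: return cached data for `key`, or None on a miss."""
    return CACHE.get(key)

def cache_insert(key: str, value: Any) -> str:
    """Tool: store `value` under `key` for future calls."""
    CACHE.put(key, value)
    return "ok"

def load_tile(tile_id: str) -> str:
    """Tool: expensive backing load (e.g., imagery from remote storage)."""
    return f"<pixels for {tile_id}>"  # placeholder for real I/O

# Tool registry in the shape a function-calling agent consumes: the LLM is
# prompted with these names/signatures and emits calls against them.
TOOLS: Dict[str, Callable[..., Any]] = {
    "cache_load": cache_load,
    "cache_insert": cache_insert,
    "load_tile": load_tile,
}

def dispatch(name: str, **kwargs: Any) -> Any:
    """Execute one tool call as emitted by the model."""
    return TOOLS[name](**kwargs)

# A plausible call sequence the LLM might emit for a repeated query:
if dispatch("cache_load", key="tile/42") is None:        # miss
    data = dispatch("load_tile", tile_id="tile/42")      # expensive path
    dispatch("cache_insert", key="tile/42", value=data)  # populate cache
print(dispatch("cache_load", key="tile/42"))             # hit on reuse
```

Because the cache tools share the same interface as any other callable function, the cache policy (when to check, when to insert) can be steered entirely through the prompt, which is the integration point the paper's approach relies on.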
