From Generalist To Specialist: Improving Large Language Models For Medical Physics Using Arcot

Grandinetti Jace, McBeth Rafe. arXiv 2024

[Paper]    
Fine Tuning, Pretraining Methods, Prompting, RAG, Tools, Training Techniques

Large Language Models (LLMs) have achieved remarkable progress, yet their application in specialized fields, such as medical physics, remains challenging due to the need for domain-specific knowledge. This study introduces ARCoT (Adaptable Retrieval-based Chain of Thought), a framework designed to enhance the domain-specific accuracy of LLMs without requiring fine-tuning or extensive retraining. ARCoT integrates a retrieval mechanism to access relevant domain-specific information and employs step-back and chain-of-thought prompting techniques to guide the LLM’s reasoning process, ensuring more accurate and context-aware responses. Benchmarking on a medical physics multiple-choice exam, the model outperformed standard LLMs and the reported average human performance, demonstrating improvements of up to 68% and achieving a high score of 90%. This method reduces hallucinations and increases domain-specific performance. The versatility and model-agnostic nature of ARCoT make it easily adaptable to various domains, showcasing its significant potential for enhancing the accuracy and reliability of LLMs in specialized fields.
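The abstract only outlines the pipeline at a high level; the exact retriever, prompts, and model are described in the paper. As a rough illustration of the general pattern (retrieve relevant passages, pose a step-back question, then answer with chain-of-thought reasoning), here is a minimal Python sketch. The corpus, the keyword-overlap retriever, the prompt wording, and the `call_llm` stub are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch of a retrieval-based chain-of-thought pipeline in the spirit
# of ARCoT. The retriever, prompt wording, and call_llm() are illustrative
# stand-ins, not the authors' implementation.

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Rank passages by naive keyword overlap with the query (placeholder retriever)."""
    query_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda passage: len(query_terms & set(passage.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(question: str, passages: list[str]) -> str:
    """Combine retrieved context with step-back and chain-of-thought instructions."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "You are answering a medical physics exam question.\n"
        f"Reference material:\n{context}\n\n"
        "First, state the general principle behind the question (step-back).\n"
        "Then reason step by step before giving the final answer.\n\n"
        f"Question: {question}\n"
    )


def call_llm(prompt: str) -> str:
    """Placeholder for a call to any instruction-following LLM API."""
    raise NotImplementedError("Plug in the model of your choice here.")


def answer(question: str, corpus: list[str]) -> str:
    """Retrieve context, assemble the prompt, and query the model."""
    passages = retrieve(question, corpus)
    return call_llm(build_prompt(question, passages))
```

Because the framework is model-agnostic, `call_llm` can wrap any instruction-following LLM; only the prompt construction and retrieval steps need to be adapted to a new domain.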

Similar Work