Fine-tuning Large Enterprise Language Models Via Ontological Reasoning

Teodoro Baldazzi, Luigi Bellomarini, Stefano Ceri, Andrea Colombo, Andrea Gentili, Emanuel Sallinger. arXiv 2023

[Paper]
Applications Fine Tuning Model Architecture Pretraining Methods RAG Training Techniques

Large Language Models (LLMs) rely on fine-tuning to adapt to diverse goals by training on task-specific data. Task specificity should go hand in hand with domain orientation, that is, the specialization of an LLM to accurately address the tasks of a given realm of interest. However, models are usually fine-tuned on publicly available data or, at most, on ground data from databases, ignoring business-level definitions and domain experience. On the other hand, Enterprise Knowledge Graphs (EKGs) can capture and augment such domain knowledge via ontological reasoning. With the goal of combining LLM flexibility with the domain orientation of EKGs, we propose a novel neurosymbolic architecture that leverages the power of ontological reasoning to build task- and domain-specific corpora for LLM fine-tuning.
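
To make the pipeline concrete, here is a minimal Python sketch of the idea the abstract describes: materialize new facts from an EKG via ontological rules, then verbalize the reasoned facts into a fine-tuning corpus. The rule (a toy forward-chaining closure for transitivity of corporate control), the verbalization templates, and the JSONL prompt/completion schema are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch: EKG facts + ontological rule -> reasoned facts -> fine-tuning corpus.
# The EKG contents, rule, templates, and output schema are all hypothetical.
import json

# Toy EKG as (subject, predicate, object) triples.
ekg = {
    ("AcmeBank", "controls", "AcmeLeasing"),
    ("AcmeLeasing", "controls", "AcmeRetail"),
}

def control_transitivity(facts):
    """Toy ontological rule: controls(a, b), controls(b, c) -> controls(a, c)."""
    derived = set()
    for (a, p1, b) in facts:
        for (b2, p2, c) in facts:
            if p1 == "controls" and p2 == "controls" and b == b2 and a != c:
                derived.add((a, "controls", c))
    return derived

def materialize(facts, rules):
    """Forward-chain the rules to a fixpoint (the reasoning step)."""
    facts = set(facts)
    while True:
        new = set()
        for rule in rules:
            new |= rule(facts) - facts
        if not new:
            return facts
        facts |= new

# Verbalization templates: one (prompt, completion) pair per predicate.
TEMPLATES = {
    "controls": ("Does {s} control {o}?", "Yes, {s} controls {o}."),
}

def build_corpus(facts):
    """Turn each reasoned fact into a fine-tuning example."""
    for (s, p, o) in sorted(facts):
        prompt, completion = TEMPLATES[p]
        yield {"prompt": prompt.format(s=s, o=o),
               "completion": completion.format(s=s, o=o)}

if __name__ == "__main__":
    reasoned = materialize(ekg, [control_transitivity])
    with open("finetune_corpus.jsonl", "w") as f:
        for example in build_corpus(reasoned):
            f.write(json.dumps(example) + "\n")
```

Note that the corpus includes the derived fact (AcmeBank controls AcmeRetail), which never appears explicitly in the EKG: this is how reasoning injects business-level domain knowledge into the training data. In the paper's setting, the reasoning step would be performed by a full-fledged ontological reasoner over the EKG rather than this toy fixpoint loop, and the verbalization would target the specific downstream task.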

Similar Work