
A Survey On Symbolic Knowledge Distillation Of Large Language Models

Acharya Kamal, Velasquez Alvaro, Song Houbing Herbert. arXiv 2024

[Paper]    
Applications, BERT, Distillation, Efficiency And Optimization, Ethics And Bias, GPT, Interpretability And Explainability, Merging, Model Architecture, Pretraining Methods, Reinforcement Learning, Survey Paper, Transformer

This survey paper delves into the emerging and critical area of symbolic knowledge distillation in Large Language Models (LLMs). As LLMs such as Generative Pre-trained Transformer-3 (GPT-3) and Bidirectional Encoder Representations from Transformers (BERT) continue to grow in scale and complexity, effectively harnessing their extensive knowledge becomes a paramount challenge. This survey concentrates on the process of distilling the intricate, often implicit knowledge contained within these models into a more symbolic, explicit form. This transformation is crucial for enhancing the interpretability, efficiency, and applicability of LLMs. We categorize the existing research by methodology and application, focusing on how symbolic knowledge distillation can be used to improve the transparency and functionality of smaller, more efficient Artificial Intelligence (AI) models. The survey discusses the core challenges, including preserving the depth of the distilled knowledge while rendering it in a comprehensible form, and explores the approaches and techniques developed in this field. We identify gaps in current research and opportunities for future advances. This survey aims to provide a comprehensive overview of symbolic knowledge distillation in LLMs, spotlighting its significance in the progression towards more accessible and efficient AI systems.
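
For readers unfamiliar with the pipeline, the sketch below illustrates, under simplified and assumed interfaces, the generic symbolic knowledge distillation loop the survey covers: a large teacher LLM is prompted to verbalize its implicit knowledge as explicit symbolic statements (for example, commonsense triples), a critic filters the generations, and a smaller student model is trained on the resulting corpus. The functions `teacher_generate`, `critic_filter`, and `train_student` are hypothetical stand-ins for illustration, not the API of any specific system discussed in the paper.

```python
# Minimal sketch of a symbolic knowledge distillation pipeline.
# The teacher call, the critic, and the student trainer are hypothetical
# stand-ins; in practice the teacher is a large model (e.g., GPT-3) and the
# student is a much smaller model fine-tuned on the generated symbolic corpus.

from dataclasses import dataclass
from typing import List


@dataclass
class Triple:
    """Explicit symbolic knowledge: a (head, relation, tail) assertion."""
    head: str
    relation: str
    tail: str


def teacher_generate(event: str) -> List[Triple]:
    """Stand-in for prompting a large teacher LLM to verbalize its implicit
    knowledge as explicit triples (assumed behavior, not a real API call)."""
    return [
        Triple(event, "xIntent", "to get a degree"),
        Triple(event, "xEffect", "gains new skills"),
    ]


def critic_filter(triples: List[Triple]) -> List[Triple]:
    """Stand-in for a learned critic that keeps only high-quality generations;
    a trivial length heuristic is used here purely for illustration."""
    return [t for t in triples if len(t.tail.split()) >= 2]


def train_student(corpus: List[Triple]) -> None:
    """Placeholder for fine-tuning a small student model on the distilled,
    symbolic corpus (e.g., a compact seq2seq or causal LM)."""
    print(f"Training student on {len(corpus)} symbolic triples...")


if __name__ == "__main__":
    seed_events = ["PersonX enrolls in university"]
    corpus: List[Triple] = []
    for event in seed_events:
        corpus.extend(critic_filter(teacher_generate(event)))
    train_student(corpus)
```

In real systems the critic is itself a learned model and the student is trained with standard language-modeling objectives; the heuristics above exist only to keep the sketch self-contained and runnable.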

Similar Work