
LLMs Could Autonomously Learn Without External Supervision

Ji Ke, Chen Junying, Gao Anningzhe, Xie Wenya, Wan Xiang, Wang Benyou. arXiv 2024

Tags: Efficiency and Optimization, Fine-Tuning, Pretraining Methods, RAG, Training Techniques

In the quest for super-human performance, Large Language Models (LLMs) have traditionally been tethered to human-annotated datasets and predefined training objectives, a process that is both labor-intensive and inherently limited. This paper presents a transformative approach: Autonomous Learning for LLMs, a self-sufficient learning paradigm that frees models from the constraints of human supervision. This method endows LLMs with the ability to self-educate through direct interaction with text, akin to a human reading and comprehending literature. Our approach eliminates the reliance on annotated data, fostering an Autonomous Learning environment where the model independently identifies and reinforces its knowledge gaps. Empirical results from our comprehensive experiments, which used a diverse array of learning materials and were evaluated against standard public quizzes, show that Autonomous Learning outperforms both Pre-training and Supervised Fine-Tuning (SFT), as well as retrieval-augmented methods. These findings underscore the potential of Autonomous Learning not only to enhance the efficiency and effectiveness of LLM training but also to pave the way for more advanced, self-reliant AI systems.
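The abstract describes a read-then-self-quiz loop in which the model identifies its own knowledge gaps and reinforces only those. The paper's actual training procedure is not given here; the following is a minimal toy sketch of such a loop, where `ToyLearner`, the two-pass mastery rule, and all other names are illustrative stand-ins, not the authors' method.

```python
# Toy sketch of an autonomous read-quiz-restudy loop.
# Everything below is an illustrative assumption: a real system would use
# an LLM to generate quiz questions from the text and grade its own answers.

class ToyLearner:
    """Stand-in for an LLM: 'masters' a fact after enough study passes."""
    def __init__(self):
        self.exposure = {}  # fact -> number of study passes

    def study(self, fact):
        self.exposure[fact] = self.exposure.get(fact, 0) + 1

    def can_answer(self, fact):
        # Illustrative mastery rule: a fact is known after two passes.
        return self.exposure.get(fact, 0) >= 2


def autonomous_learning(corpus, max_rounds=10):
    """Read the corpus, self-quiz, and re-study only the gaps."""
    learner = ToyLearner()
    for fact in corpus:  # initial read-through (no annotated data)
        learner.study(fact)
    for _ in range(max_rounds):
        # Self-quiz: which facts can the learner not yet answer?
        gaps = [f for f in corpus if not learner.can_answer(f)]
        if not gaps:  # no knowledge gaps remain
            break
        for fact in gaps:  # reinforce only what was missed
            learner.study(fact)
    return learner


corpus = ["fact A", "fact B", "fact C"]
learner = autonomous_learning(corpus)
print(all(learner.can_answer(f) for f in corpus))  # True once gaps are closed
```

The key design point the sketch mirrors is that supervision comes from the model's own quiz failures rather than from external labels: only the facts it cannot answer are studied again.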

Similar Work