Re-task: Revisiting LLM Tasks From Capability, Skill, And Knowledge Perspectives · The Large Language Model Bible

Re-task: Revisiting LLM Tasks From Capability, Skill, And Knowledge Perspectives

Wang Zhihu, Zhao Shiwan, Wang Yu, Huang Heyuan, Shi Jiaxin, Xie Sitao, Wang Zhixing, Zhang Yubo, Li Hongyan, Yan Junchi. arXiv 2024

[Paper]    
Fine Tuning · Pretraining Methods · Prompting · Tools · Training Techniques

As large language models (LLMs) continue to scale, their enhanced performance often proves insufficient for solving domain-specific tasks. Systematically analyzing their failures and effectively enhancing their performance remain significant challenges. This paper introduces the Re-TASK framework, a novel theoretical model that Revisits LLM Tasks from cApability, Skill, and Knowledge perspectives, guided by the principles of Bloom’s Taxonomy and Knowledge Space Theory. The Re-TASK framework provides a systematic methodology to deepen our understanding, evaluation, and enhancement of LLMs for domain-specific tasks. It explores the interplay among an LLM’s capabilities, the knowledge it possesses, and the skills it applies, elucidating how these elements are interconnected and impact task performance. Our application of the Re-TASK framework reveals that many failures in domain-specific tasks can be attributed to insufficient knowledge or inadequate skill adaptation. With this insight, we propose structured strategies for enhancing LLMs through targeted knowledge injection and skill adaptation. Specifically, we identify key capability items associated with tasks and employ a deliberately designed prompting strategy to enhance task performance, thereby reducing the need for extensive fine-tuning. Alternatively, we fine-tune the LLM using capability-specific instructions, further validating the efficacy of our framework. Experimental results confirm the framework’s effectiveness, demonstrating substantial improvements in both the performance and applicability of LLMs.
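The capability-oriented prompting strategy described in the abstract (injecting task-relevant knowledge and a skill demonstration into the prompt instead of fine-tuning) might be sketched as below. This is an illustrative assumption, not the paper's exact implementation; the function name and prompt layout are hypothetical.

```python
def build_capability_prompt(task: str, knowledge_items: list[str], skill_demo: str) -> str:
    """Assemble a prompt that pairs a domain-specific task with its
    capability items: injected knowledge plus a worked skill demonstration.

    NOTE: a minimal sketch of the idea; the actual Re-TASK prompt
    structure may differ.
    """
    parts = ["Relevant knowledge:"]
    # Knowledge injection: list the facts the task depends on.
    parts.extend(f"- {item}" for item in knowledge_items)
    # Skill adaptation: show one worked example of applying that knowledge.
    parts.extend(["", "Worked example (skill demonstration):", skill_demo])
    parts.extend(["", "Task:", task])
    return "\n".join(parts)


prompt = build_capability_prompt(
    task="Classify the contract clause below as indemnification or not.",
    knowledge_items=["An indemnification clause shifts liability for losses to one party."],
    skill_demo="Clause: 'Vendor shall hold Client harmless...' -> indemnification",
)
print(prompt)
```

The point of the sketch is the ordering: knowledge before the demonstration, demonstration before the task, so the model conditions on the capability items it would otherwise lack.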

Similar Work