Matching Pairs: Attributing Fine-tuned Models To Their Pre-trained Large Language Models

Myles Foley, Ambrish Rawat, Taesung Lee, Yufang Hou, Gabriele Picco, Giulio Zizzo. arXiv 2023

Tags: Applications, Responsible AI, Tools

The wide applicability and adaptability of generative large language models (LLMs) have enabled their rapid adoption. While pre-trained models can perform many tasks out of the box, they are often fine-tuned to improve performance on downstream applications. However, this raises concerns over violations of model licenses, model theft, and copyright infringement. Moreover, recent advances show that generative technology is capable of producing harmful content, which exacerbates the problem of accountability within model supply chains. We therefore need a method to investigate how a model was trained or how a piece of text was generated, and what the pre-trained base model was. In this paper, we take a first step toward addressing this open problem by tracing a given fine-tuned LLM back to its corresponding pre-trained base model. We consider different knowledge levels and attribution strategies, and find that our best method correctly traces back 8 of the 10 fine-tuned models.
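The abstract does not spell out the attribution strategies themselves. As a rough illustration of the white-box setting (where the fine-tuned model's weights are available), the sketch below ranks candidate base models by parameter-level cosine similarity to the fine-tuned model, exploiting the fact that fine-tuning usually perturbs base weights only slightly. This is an assumed heuristic, not necessarily the paper's method; the `weight_similarity` and `attribute` helpers and all model names are hypothetical.

```python
# Illustrative sketch only: one plausible white-box attribution strategy,
# not the paper's exact method. Rank candidate pre-trained base models by
# how close their weights are to the fine-tuned model's weights.
import torch
from transformers import AutoModelForCausalLM

def weight_similarity(model_a, model_b):
    """Mean cosine similarity over parameters shared by both models."""
    sims = []
    params_b = dict(model_b.named_parameters())
    with torch.no_grad():
        for name, p_a in model_a.named_parameters():
            p_b = params_b.get(name)
            if p_b is None or p_a.shape != p_b.shape:
                continue  # skip mismatched layers (e.g., a resized vocabulary)
            sims.append(torch.nn.functional.cosine_similarity(
                p_a.flatten(), p_b.flatten(), dim=0).item())
    return sum(sims) / len(sims) if sims else 0.0

def attribute(fine_tuned_name, candidate_base_names):
    """Return candidate base models ranked most-similar-first."""
    ft = AutoModelForCausalLM.from_pretrained(fine_tuned_name)
    scores = []
    for base_name in candidate_base_names:
        base = AutoModelForCausalLM.from_pretrained(base_name)
        scores.append((base_name, weight_similarity(ft, base)))
    return sorted(scores, key=lambda kv: kv[1], reverse=True)
```

In a black-box setting, where only the fine-tuned model's outputs are observable, an analogous ranking could instead compare, say, the perplexity each candidate base model assigns to text generated by the fine-tuned model.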
