
SHIELD: Evaluation And Defense Strategies For Copyright Compliance In LLM Text Generation

Liu Xiaoze, Sun Ting, Xu Tianyang, Wu Feijie, Wang Cunxiang, Wang Xiaoqian, Gao Jing. arXiv 2024

Tags: Applications, Has Code, Language Modeling, Reinforcement Learning, Security, Tools

Large Language Models (LLMs) have transformed machine learning but raised significant legal concerns, as their potential to produce text that infringes on copyrights has resulted in several high-profile lawsuits. The legal landscape is struggling to keep pace with these rapid advancements, with ongoing debates about whether generated text plagiarizes copyrighted materials. Current LLMs may infringe on copyrights or overly restrict non-copyrighted texts, raising three challenges: (i) the need for a comprehensive evaluation benchmark that assesses copyright compliance from multiple aspects; (ii) evaluating robustness against safeguard-bypassing attacks; and (iii) developing effective defenses against the generation of copyrighted text. To tackle these challenges, we introduce a curated dataset for evaluating methods, test attack strategies, and propose a lightweight, real-time defense that prevents the generation of copyrighted text, ensuring the safe and lawful use of LLMs. Our experiments demonstrate that current LLMs frequently output copyrighted text, and that jailbreaking attacks can significantly increase the volume of copyrighted output. Our proposed defense mechanism significantly reduces the amount of copyrighted text generated by LLMs by effectively refusing malicious requests. Code is publicly available at https://github.com/xz-liu/SHIELD.
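The abstract describes the defense only at a high level: a lightweight, real-time mechanism that refuses requests likely to reproduce copyrighted text. The sketch below illustrates one plausible shape such a decoding-time guard could take, screening model output via n-gram overlap against an index of protected texts; it is not the authors' implementation, and the `CopyrightGuard` class, `NGRAM_SIZE`, and `MATCH_THRESHOLD` names and values are illustrative assumptions.

```python
# Hypothetical sketch of a decoding-time copyright guard (not from the
# SHIELD paper). NGRAM_SIZE and MATCH_THRESHOLD are illustrative values.

from typing import Iterable, Set, Tuple

NGRAM_SIZE = 8       # length of token n-grams to compare (assumption)
MATCH_THRESHOLD = 3  # matching n-grams needed to trigger refusal (assumption)


def ngrams(tokens: Iterable[str], n: int) -> Set[Tuple[str, ...]]:
    """Return the set of word-level n-grams of a token sequence."""
    toks = list(tokens)
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}


class CopyrightGuard:
    """Refuses to return output that overlaps a protected-text index."""

    def __init__(self, protected_texts: Iterable[str], n: int = NGRAM_SIZE):
        self.n = n
        self.index: Set[Tuple[str, ...]] = set()
        for text in protected_texts:
            self.index |= ngrams(text.split(), n)

    def is_infringing(self, generated: str) -> bool:
        # Count n-grams of the candidate output that also occur verbatim
        # in the protected corpus.
        overlap = ngrams(generated.split(), self.n) & self.index
        return len(overlap) >= MATCH_THRESHOLD

    def filter(self, generated: str) -> str:
        if self.is_infringing(generated):
            return "I can't reproduce that text because it appears to be copyrighted."
        return generated


# Usage: wrap the raw model output before returning it to the user.
guard = CopyrightGuard(protected_texts=["..."])  # load the protected corpus here
safe_output = guard.filter("raw model output goes here")
```

Because the index is a plain set of n-grams, the overlap check is a constant-time lookup per n-gram, which is consistent with the "lightweight, real-time" property the abstract claims for the proposed defense.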

Similar Work