Unmasking The Imposters: In-domain Detection Of Human Vs. Machine-generated Tweets

Bryan E. Tuck, Rakesh M. Verma. arXiv 2024

[Paper]    
Tags: Fine Tuning, GPT, Model Architecture, Pretraining Methods, Tools, Training Techniques

The rapid development of large language models (LLMs) has significantly improved the generation of fluent and convincing text, raising concerns about their misuse on social media platforms. We present a methodology using Twitter datasets to examine the generative capabilities of four LLMs: Llama 3, Mistral, Qwen2, and GPT-4o. We evaluate the 7B and 8B parameter base instruction-tuned models of the three open-source LLMs and assess the impact of further fine-tuning and “uncensored” versions. Our findings show that “uncensored” models with additional in-domain fine-tuning dramatically reduce the effectiveness of automated detection methods. This study addresses a gap by exploring smaller open-source models and the effects of “uncensoring,” providing insights into how fine-tuning and content moderation influence machine-generated text detection.
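To make the detection task concrete, the sketch below shows a minimal in-domain detector framed as binary classification of tweets (human-written vs. machine-generated). It is not the detectors evaluated in the paper; it assumes a hypothetical labeled tweet corpus and uses a simple TF-IDF plus logistic regression baseline purely for illustration.

```python
# Minimal sketch of an in-domain detection baseline: classify tweets as
# human-written (0) or machine-generated (1). The data below is hypothetical
# and the TF-IDF + logistic regression pipeline is an illustrative stand-in,
# not the detection methods evaluated in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical corpus of (tweet_text, label) pairs; label 1 = machine-generated.
tweets = [
    ("just missed the bus again, classic monday", 0),
    ("Excited to share my thoughts on the latest AI trends! #AI #Tech", 1),
    ("my cat knocked my coffee over at 6am send help", 0),
    ("Discover how innovation is transforming our daily lives! #Innovation", 1),
] * 50  # repeated only so the toy train/test split is non-trivial

texts, labels = zip(*tweets)
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42, stratify=labels
)

# Character n-grams are a common choice for short, noisy social-media text.
detector = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4), min_df=2),
    LogisticRegression(max_iter=1000),
)
detector.fit(X_train, y_train)

preds = detector.predict(X_test)
print(f"F1 (machine-generated class): {f1_score(y_test, preds):.3f}")
```

In this framing, the paper's central finding corresponds to such a detector's scores dropping sharply when the machine-generated class comes from “uncensored,” in-domain fine-tuned models rather than off-the-shelf instruction-tuned ones.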

Similar Work