
Enhancing Robustness of LLM-Synthetic Text Detectors for Academic Writing: A Comprehensive Analysis

Dou Zhicheng, Guo Yuchen, Chang Ching-chun, Nguyen Huy H., Echizen Isao. arXiv, 2024

[Paper]

Tags: Attention Mechanism, GPT, Model Architecture, Pretraining Methods, Prompting, Security, Transformer

The emergence of large language models (LLMs), such as Generative Pre-trained Transformer 4 (GPT-4) used by ChatGPT, has profoundly impacted the academic and broader community. While these models offer numerous advantages in revolutionizing work and study methods, they have also garnered significant attention due to their potential negative consequences, one example being the generation of academic reports or papers with little to no human contribution. Consequently, researchers have focused on developing detectors to address the misuse of LLMs. However, most existing methods prioritize achieving higher accuracy on restricted datasets, neglecting the crucial aspect of generalizability; this limitation hinders their practical application in real-life scenarios, where reliability is paramount. In this paper, we present a comprehensive analysis of the impact of prompts on the text generated by LLMs and highlight the potential lack of robustness in one of the current state-of-the-art GPT detectors. To mitigate the misuse of LLMs in academic writing, we propose a reference-based Siamese detector named Synthetic-Siamese, which takes a pair of texts: one as the inquiry and the other as the reference. Our method addresses the lack of robustness of previous detectors (the OpenAI detector and DetectGPT) and significantly improves baseline performance in realistic academic writing scenarios, by approximately 67% to 95%.
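The abstract only states that Synthetic-Siamese scores an inquiry text against a reference text; the paper's actual backbone, pooling, and classification head are not given here. The sketch below illustrates the general reference-based Siamese pattern under assumed choices (a BERT encoder, mean pooling, and a concatenation head); all names such as `SiameseDetector` are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class SiameseDetector(nn.Module):
    """Scores an inquiry text against a reference text with one shared encoder."""

    def __init__(self, backbone: str = "bert-base-uncased"):
        super().__init__()
        # Both branches share the same encoder weights (the Siamese property).
        self.encoder = AutoModel.from_pretrained(backbone)
        hidden = self.encoder.config.hidden_size
        # Head over [a, b, |a - b|], a common Siamese feature combination.
        self.classifier = nn.Linear(3 * hidden, 2)

    def embed(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Mean-pool token embeddings, ignoring padding positions.
        mask = attention_mask.unsqueeze(-1).float()
        return (out.last_hidden_state * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

    def forward(self, inquiry, reference):
        a = self.embed(inquiry["input_ids"], inquiry["attention_mask"])
        b = self.embed(reference["input_ids"], reference["attention_mask"])
        feats = torch.cat([a, b, (a - b).abs()], dim=-1)
        return self.classifier(feats)  # logits over {human-written, LLM-generated}


tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = SiameseDetector()
enc = lambda text: tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(enc("Paragraph under inspection."),
                   enc("Known LLM-generated reference paragraph."))
print(logits.softmax(dim=-1))  # untrained weights here, so scores are arbitrary
```

The design point this pattern captures is that the decision is conditioned on a reference sample of LLM output rather than on absolute features of the inquiry text alone, which is what the paper credits for robustness to prompt variation over single-input detectors such as the OpenAI detector and DetectGPT.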

Similar Work