Non-linear Inference Time Intervention: Improving LLM Truthfulness

Hoscilowicz Jakub, Wiacek Adam, Chojnacki Jan, Cieslak Adam, Michon Leszek, Urbanevych Vitalii, Janicki Artur. arXiv 2024

[Paper]    
Tags: Attention Mechanism, Ethics And Bias, Fine Tuning, Model Architecture, Pretraining Methods, Tools, Training Techniques

In this work, we explore the LLM’s internal representation space to identify attention heads that contain the most truthful and accurate information. We build on the Inference Time Intervention (ITI) framework, which allows biasing an LLM toward truthful outputs without fine-tuning. Our improvement introduces non-linear multi-token probing and multi-token intervention: Non-Linear ITI (NL-ITI), which significantly enhances performance on evaluation benchmarks. NL-ITI is tested on diverse multiple-choice datasets, including TruthfulQA, on which we report over 16% relative improvement in MC1 (the accuracy with which the model selects the correct answer) with respect to the baseline ITI results. Moreover, we achieve a 10% relative improvement over the recently released Truth Forest (TrFf) method, which also focuses on improving ITI.
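To make the intervention idea concrete, here is a minimal sketch of the linear baseline that NL-ITI builds on: compute a "truthful direction" for an attention head from labeled activations (the mass-mean shift used in the original ITI work), then add a scaled multiple of that direction to the head's activation at inference time. The activations here are synthetic stand-ins, and `intervene`, `alpha`, and the dimensions are illustrative assumptions, not the authors' implementation; NL-ITI additionally replaces the linear probe with a non-linear one and intervenes over multiple tokens.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: activations of one attention head (dim 8) for
# "truthful" vs "untruthful" statements (synthetic stand-ins for real
# probe-training data).
d = 8
truthful = rng.normal(loc=0.5, scale=1.0, size=(100, d))
untruthful = rng.normal(loc=-0.5, scale=1.0, size=(100, d))

# Mass-mean truthful direction: difference of class means, normalized.
direction = truthful.mean(axis=0) - untruthful.mean(axis=0)
direction /= np.linalg.norm(direction)

# Scale by the std of all activations projected onto that direction.
acts = np.vstack([truthful, untruthful])
sigma = (acts @ direction).std()

def intervene(head_activation, alpha=5.0):
    """Shift a head's activation along the truthful direction.

    alpha is an illustrative intervention strength hyperparameter.
    """
    return head_activation + alpha * sigma * direction

# After intervention, the activation projects more strongly onto the
# truthful direction.
x = untruthful[0]
shifted = intervene(x)
print((shifted @ direction) > (x @ direction))
```

In the full method this shift is applied inside the forward pass, only at the heads whose probes achieve the highest validation accuracy, which is how the approach steers the model without any weight updates.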

Similar Work