Resetox: Re-learning Attention Weights For Toxicity Mitigation In Machine Translation

Gilabert Javier García, Escolano Carlos, Costa-jussà Marta R. Arxiv 2023

[Paper]    
Applications Attention Mechanism Model Architecture RAG Training Techniques Transformer

Our proposed method, ReSeTOX (REdo SEarch if TOXic), addresses the issue of Neural Machine Translation (NMT) generating translation outputs that contain toxic words not present in the input. The objective is to mitigate the introduction of toxic language without the need for re-training. When added toxicity is identified during inference, ReSeTOX dynamically adjusts the key-value self-attention weights and re-evaluates the beam search hypotheses. Experimental results demonstrate that ReSeTOX achieves a 57% reduction in added toxicity while preserving 99.5% of translation quality on average across 164 languages.
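For intuition, below is a minimal sketch of the inference-time loop the abstract describes: decode, check the partial hypothesis for added toxicity, and if it is flagged, take a few gradient steps on the attention key/value projection weights before redoing the search step. The model interface (`model.step`, `model.bos_id`, `model.eos_id`), the toxicity score, and the re-learning loss are stand-ins, and the sketch uses greedy decoding for brevity where the paper uses beam search; it is not the authors' implementation.

```python
import torch


def toxicity_score(tokens):
    """Stand-in for a toxicity detector (e.g., a toxic word list); hypothetical."""
    TOXIC_IDS = {13, 42}  # placeholder vocabulary ids
    return float(any(t in TOXIC_IDS for t in tokens))


def resetox_decode(model, src, max_len=64, inner_steps=3, lr=1e-3, tau=0.5):
    """Decode with a ReSeTOX-style 'redo search if toxic' loop at inference time."""
    hyp = [model.bos_id]
    # Only the attention key/value projection weights are adjusted at inference.
    kv_params = [p for n, p in model.named_parameters()
                 if "attn" in n and ("k_proj" in n or "v_proj" in n)]
    opt = torch.optim.SGD(kv_params, lr=lr)

    for _ in range(max_len):
        logits = model.step(src, hyp)        # next-token distribution (hypothetical API)
        next_tok = int(logits.argmax())
        if toxicity_score(hyp + [next_tok]) > tau:
            # Added toxicity detected: re-learn attention weights, then redo the search step.
            for _ in range(inner_steps):
                opt.zero_grad()
                logits = model.step(src, hyp)
                # Illustrative loss only: push probability mass away from the toxic
                # continuation. The paper balances toxicity mitigation against
                # translation quality; that trade-off is omitted here.
                loss = logits.softmax(-1)[next_tok]
                loss.backward()
                opt.step()
            logits = model.step(src, hyp)
            next_tok = int(logits.argmax())
        hyp.append(next_tok)
        if next_tok == model.eos_id:
            break
    return hyp
```

The key point the sketch illustrates is that mitigation happens entirely at decoding time: no re-training pass over the data is needed, only localized weight updates triggered by the toxicity check.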

Similar Work