GrounDial: Human-norm Grounded Safe Dialog Response Generation

Kim Siwon, Dai Shuyang, Kachuee Mohammad, Ray Shayan, Taghavi Tara, Yoon Sungroh. arXiv 2024

[Paper]    
Fine Tuning · In Context Learning · Pretraining Methods · Prompting · Responsible AI · Training Techniques

Current conversational AI systems based on large language models (LLMs) are known to generate unsafe responses, agreeing with offensive user input or including toxic content. Previous research aimed to alleviate this toxicity by fine-tuning LLMs with manually annotated safe dialogue histories. However, the dependency on additional tuning incurs substantial cost. To remove this dependency, we propose GrounDial, where response safety is achieved by grounding responses in commonsense social rules without requiring fine-tuning. GrounDial's hybrid approach of in-context learning and human-norm-guided decoding makes responses quantitatively and qualitatively safer even without additional data or tuning.
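
The abstract only names the two ingredients, so the snippet below is a minimal illustrative sketch rather than the authors' implementation: it assumes plain string tokens and a hypothetical dictionary of next-token scores, and shows (1) in-context grounding by prepending a human norm to the dialogue prompt and (2) a toy norm-guided decoding step that boosts tokens appearing in the norm text. The function names, the `alpha` weight, and the boosting rule are all assumptions, not details from the paper.

```python
from typing import Dict


def ground_prompt(dialogue: str, norm: str) -> str:
    """In-context grounding (sketch): prepend a relevant human norm to the dialogue."""
    return f"Social norm: {norm}\nDialogue: {dialogue}\nResponse:"


def norm_guided_scores(scores: Dict[str, float], norm: str, alpha: float = 1.0) -> Dict[str, float]:
    """Norm-guided decoding (toy version): boost next-token scores for tokens
    that also occur in the norm text, steering generation toward the norm."""
    norm_tokens = set(norm.lower().split())
    return {tok: s + (alpha if tok.lower() in norm_tokens else 0.0)
            for tok, s in scores.items()}


if __name__ == "__main__":
    norm = "It is wrong to insult other people."
    prompt = ground_prompt("User: My coworker is so dumb.", norm)

    # Hypothetical next-token scores a language model might assign.
    scores = {"Totally": 0.9, "insult": 0.7, "wrong": 0.4, "kindly": 0.3}

    print(prompt)
    print(norm_guided_scores(scores, norm, alpha=1.0))
```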

Similar Work