Interpretable NLG For Task-oriented Dialogue Systems With Heterogeneous Rendering Machines

Li Yangming, Yao Kaisheng. arXiv 2020

[Paper]

Applications, Interpretability And Explainability, Tools

End-to-end neural networks have achieved promising performance in natural language generation (NLG). However, they are treated as black boxes and lack interpretability. To address this problem, we propose a novel framework, heterogeneous rendering machines (HRM), that interprets how neural generators render an input dialogue act (DA) into an utterance. HRM consists of a renderer set and a mode switcher. The renderer set contains multiple decoders that vary in both structure and functionality. At every generation step, the mode switcher selects an appropriate decoder from the renderer set to generate an item (a word or a phrase). To verify the effectiveness of our method, we conducted extensive experiments on 5 benchmark datasets. In terms of automatic metrics (e.g., BLEU), our model is competitive with the current state-of-the-art method. Qualitative analysis shows that our model can interpret the rendering process of neural generators well. Human evaluation also confirms the interpretability of our proposed approach.
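The switch-then-render loop described above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: all names (`copy_renderer`, `word_renderer`, `mode_switcher`) are hypothetical, the paper's renderers are neural decoders, and its switcher is a learned distribution; both are simplified here to plain Python callables so the per-step renderer selection (the interpretable trace) is visible.

```python
# Hypothetical sketch of the HRM decoding loop: a renderer set of two
# decoders plus a mode switcher that picks one per generation step.

def copy_renderer(da, state):
    """Phrase-level renderer: copies the next slot value from the DA."""
    value = da["slots"][state["slot_idx"]][1]
    state["slot_idx"] += 1
    return value

def word_renderer(da, state):
    """Word-level renderer: emits the next plain word of the template."""
    return state["template"][state["word_idx"]]

def mode_switcher(da, state):
    """Selects a renderer for the current step; this fixed rule stands
    in for the paper's learned switching distribution."""
    if state["template"][state["word_idx"]] == "<slot>":
        return copy_renderer
    return word_renderer

def generate(da, template):
    """Runs switch-then-render; the trace of chosen renderers is the
    kind of step-level interpretation HRM exposes."""
    state = {"template": template, "word_idx": 0, "slot_idx": 0}
    items, trace = [], []
    while state["word_idx"] < len(template):
        renderer = mode_switcher(da, state)
        items.append(renderer(da, state))
        trace.append(renderer.__name__)
        state["word_idx"] += 1
    return " ".join(items), trace

# Example DA with two slots (illustrative data, not from the paper):
da = {"slots": [("name", "Loch Fyne"), ("food", "seafood")]}
utterance, trace = generate(da, ["<slot>", "serves", "<slot>", "dishes"])
# utterance: "Loch Fyne serves seafood dishes"
# trace:     ['copy_renderer', 'word_renderer', 'copy_renderer', 'word_renderer']
```

The trace pairs each output item with the decoder that produced it, which is what makes the rendering process inspectable rather than a single opaque decoding pass.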

Similar Work