Tree-structured Semantic Encoder With Knowledge Sharing For Domain Adaptation In Natural Language Generation

Tseng Bo-Hsiang, Budzianowski Paweł, Wu Yen-Chen, Gašić Milica. arXiv 2019

[Paper]    
Applications Attention Mechanism Fine Tuning Model Architecture Reinforcement Learning Transformer

Domain adaptation in natural language generation (NLG) remains challenging because of the high complexity of input semantics across domains and the limited data available in a target domain. This is particularly the case for dialogue systems, where we want to seamlessly incorporate new domains into the conversation. It is therefore crucial for generation models to share knowledge across domains for effective adaptation from one domain to another. In this study, we exploit a tree-structured semantic encoder to capture the internal structure of the complex semantic representations required for multi-domain dialogues, in order to facilitate knowledge sharing across domains. In addition, a layer-wise attention mechanism between the tree encoder and the decoder is adopted to further improve the model's capability. The automatic evaluation results show that our model outperforms previous methods in terms of BLEU score and slot error rate, in particular when the adaptation data is limited. In the subjective evaluation, human judges tend to prefer the sentences generated by our model, rating them higher on informativeness and naturalness than those of other systems.
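
To make the two components in the abstract concrete, below is a minimal, hypothetical sketch of (a) a bottom-up tree encoder over a dialogue-act tree (e.g. domain → intent → slot nodes) and (b) a layer-wise attention in which the decoder attends to each tree depth separately. This is an illustration under assumed design choices (a child-sum, GRU-cell combination and one attention projection per depth), not the authors' actual architecture; all class and function names are invented.

```python
# Illustrative sketch only -- not the paper's code. Assumes a child-sum
# tree encoder and one attention projection per tree depth ("layer").
import torch
import torch.nn as nn


class TreeNode:
    def __init__(self, token_id, children=None):
        self.token_id = token_id          # index of a semantic label (domain/intent/slot)
        self.children = children or []


class TreeEncoder(nn.Module):
    """Bottom-up encoder: each node state combines its own embedding
    with the sum of its children's states."""
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.cell = nn.GRUCell(hidden_size, hidden_size)

    def encode(self, node, depth=0, states_by_depth=None):
        """Returns the root state and all node states grouped by depth."""
        if states_by_depth is None:
            states_by_depth = {}
        child_states = [self.encode(c, depth + 1, states_by_depth)[0]
                        for c in node.children]
        child_sum = (torch.stack(child_states).sum(0) if child_states
                     else torch.zeros(1, self.embed.embedding_dim))
        h = self.cell(self.embed(torch.tensor([node.token_id])), child_sum)
        states_by_depth.setdefault(depth, []).append(h)
        return h, states_by_depth


class LayerWiseAttention(nn.Module):
    """One attention head per tree depth; per-depth contexts are concatenated,
    so the decoder sees a separate summary of each level of the semantics."""
    def __init__(self, hidden_size, num_layers):
        super().__init__()
        self.score = nn.ModuleList(
            nn.Linear(hidden_size, hidden_size, bias=False)
            for _ in range(num_layers))

    def forward(self, dec_state, states_by_depth):
        # dec_state: (1, hidden_size) decoder hidden state at one step.
        contexts = []
        for d, proj in enumerate(self.score):
            keys = torch.cat(states_by_depth[d], dim=0)              # (nodes_at_d, H)
            weights = torch.softmax(keys @ proj(dec_state).squeeze(0), dim=0)
            contexts.append(weights @ keys)                          # (H,)
        return torch.cat(contexts)                                   # (num_layers * H,)


# Toy usage on a three-level tree, e.g. inform -> hotel -> {price, area}:
enc = TreeEncoder(vocab_size=100, hidden_size=8)
tree = TreeNode(0, [TreeNode(1, [TreeNode(2), TreeNode(3)])])
root, by_depth = enc.encode(tree)
attn = LayerWiseAttention(hidden_size=8, num_layers=len(by_depth))
context = attn(torch.zeros(1, 8), by_depth)   # shape: (3 * 8,)
```

Because the per-depth contexts are kept separate rather than pooled into one vector, shallow, domain-level structure and deep, slot-level structure each retain a dedicated pathway into the decoder, which is one plausible way such a design could help knowledge transfer across domains that share slots.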
