Steering Output Style And Topic In Neural Response Generation

Di Wang, Nebojsa Jojic, Chris Brockett, Eric Nyberg. arXiv 2017 – 18 citations

[Paper]    
Ethics And Bias, Applications, Agentic, Training Techniques

We propose simple and flexible training and decoding methods for influencing output style and topic in neural encoder-decoder based language generation. This capability is desirable in a variety of applications, including conversational systems, where successful agents need to produce language in a specific style and generate responses steered by a human puppeteer or external knowledge. We decompose the neural generation process into empirically easier sub-problems: a faithfulness model and a decoding method based on selective-sampling. We also describe training and sampling algorithms that bias the generation process with a specific language style restriction, or a topic restriction. Human evaluation results show that our proposed methods are able to restrict style and topic without degrading output quality in conversational tasks.
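The abstract only sketches the approach, but the core decoding idea (draw several candidate responses from the generator, then rerank them with a separate scorer such as the faithfulness model) can be illustrated with the minimal Python sketch below. The functions `sample_candidate`, `score_candidate`, and `selective_sampling_decode` are hypothetical placeholders for illustration, not the authors' implementation.

```python
import random

# Hypothetical stand-ins for the trained components described in the abstract:
# a response generator that samples diverse candidates, and a scorer that
# rates faithfulness to the context and adherence to a target style or topic.
def sample_candidate(context, temperature=1.0):
    # In practice: sample a response token-by-token from an encoder-decoder model.
    return f"candidate response ({random.random():.3f})"

def score_candidate(context, candidate, style_or_topic=None):
    # In practice: a learned model scoring faithfulness and style/topic fit.
    return random.random()

def selective_sampling_decode(context, n_samples=10, style_or_topic=None):
    """Sample several candidate responses, then keep the highest-scoring one."""
    candidates = [sample_candidate(context) for _ in range(n_samples)]
    return max(candidates, key=lambda c: score_candidate(context, c, style_or_topic))

if __name__ == "__main__":
    print(selective_sampling_decode("How was your weekend?", n_samples=5))
```

In this sketch, restricting style or topic amounts to changing the reranking criterion rather than retraining the generator, which reflects the flexibility the abstract emphasizes.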

Similar Work