High Recall Data-to-text Generation With Progressive Edit

Choonghan Kim, Gary Geunbae Lee. arXiv 2022

Tags: Applications, Language Modeling, Model Architecture, Pretraining Methods, Transformer

Data-to-text (D2T) generation is the task of generating text from structured inputs. We observed that when the same target sentence was repeated twice, a Transformer (T5)-based model generated an output made up of asymmetric sentences from the structured inputs; that is, the two sentences differed in length and quality. We call this phenomenon "Asymmetric Generation" and exploit it for D2T generation. Once asymmetric sentences are generated, we combine the first part of the output with a non-repeated target. As this goes through progressive editing (ProEdit), recall increases, so the method covers the structured inputs better than before editing. ProEdit is a simple but effective way to improve performance in D2T generation, and it achieves a new state-of-the-art result on the ToTTo dataset.
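As a rough illustration of the inference-time loop the abstract describes, here is a minimal Python sketch using a Hugging Face T5 checkpoint. The checkpoint name, the first-sentence split, the concatenation scheme, and the `proedit` helper are all assumptions for illustration, not the paper's released implementation.

```python
# Minimal sketch of a ProEdit-style loop (assumptions: the checkpoint
# name is a placeholder; the paper would use a T5 model fine-tuned on
# ToTTo, and the exact way the draft is fed back is not specified here).
from transformers import T5ForConditionalGeneration, T5Tokenizer

MODEL_NAME = "t5-base"  # placeholder; a ToTTo-fine-tuned D2T checkpoint in practice
tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME)
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)

def generate(source: str, max_new_tokens: int = 128) -> str:
    """Single beam-search decoding pass over a linearized table string."""
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    output_ids = model.generate(
        **inputs, num_beams=4, max_new_tokens=max_new_tokens
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

def proedit(linearized_table: str, rounds: int = 2) -> str:
    """Progressively edit a draft: take the first part of the previous
    output and feed it back alongside the structured input, so each pass
    can extend coverage (recall) of the table."""
    draft = generate(linearized_table)
    for _ in range(rounds):
        first_part = draft.split(". ")[0]  # crude first-sentence split
        draft = generate(first_part + " " + linearized_table)
    return draft

# Illustrative call; the linearization format below is only schematic.
print(proedit("page_title: ... | section_title: ... | table: ..."))
```

The loop reflects the abstract's idea that re-decoding with an earlier draft in context pushes the model to cover more of the structured input on each pass.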

Similar Work