Semantics-aware BERT For Natural Language Understanding

Zhang Zhuosheng, Wu Yuwei, Zhao Hai, Li Zuchao, Zhang Shuailiang, Zhou Xi, Zhou Xiang. arXiv 2019

[Paper]    
Applications BERT Fine Tuning GPT Model Architecture Pretraining Methods Training Techniques

The latest work on language representations carefully integrates contextualized features into language model training, which has enabled a series of successes, especially on machine reading comprehension and natural language inference tasks. However, existing language representation models, including ELMo, GPT and BERT, exploit only plain context-sensitive features such as character or word embeddings. They rarely consider incorporating structured semantic information, which can provide rich semantics for language representation. To promote natural language understanding, we propose to incorporate explicit contextual semantics from pre-trained semantic role labeling, and introduce an improved language representation model, Semantics-aware BERT (SemBERT), which is capable of explicitly absorbing contextual semantics over a BERT backbone. SemBERT keeps the convenient usability of its BERT precursor, requiring only light fine-tuning and no substantial task-specific modifications. Compared with BERT, semantics-aware BERT is as simple in concept but more powerful. It obtains new state-of-the-art results or substantially improves on prior results across ten reading comprehension and language inference tasks.
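To make the fusion idea concrete, below is a minimal, hypothetical PyTorch sketch of how explicit semantic role labels could be embedded per token and concatenated with BERT contextual features before a task classifier. The label vocabulary size, embedding width, classifier head, and the simplification of a single SRL label id per token are illustrative assumptions, not the authors' exact configuration (the paper works with multiple predicate-specific label sequences per sentence).

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast


class SemBertSketch(nn.Module):
    """Illustrative sketch: fuse BERT token features with SRL tag embeddings.

    Assumptions (not from the paper): one SRL label id per token, simple
    concatenation fusion, and a [CLS]-based classification head.
    """

    def __init__(self, num_srl_labels=30, srl_dim=10, num_classes=3):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.srl_embed = nn.Embedding(num_srl_labels, srl_dim)
        fused_dim = self.bert.config.hidden_size + srl_dim
        self.classifier = nn.Linear(fused_dim, num_classes)

    def forward(self, input_ids, attention_mask, srl_label_ids):
        # Contextual token features from the BERT backbone.
        token_feats = self.bert(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # Token-aligned embeddings of explicit semantic role labels,
        # which would come from a pre-trained SRL tagger in practice.
        srl_feats = self.srl_embed(srl_label_ids)
        # Concatenate contextual and semantic features, then classify
        # from the [CLS] position for a sentence-level task.
        fused = torch.cat([token_feats, srl_feats], dim=-1)
        return self.classifier(fused[:, 0])


# Illustrative usage with placeholder SRL labels.
tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
batch = tok("the cat sat on the mat", return_tensors="pt")
srl_ids = torch.zeros_like(batch["input_ids"])  # placeholder label ids
logits = SemBertSketch()(batch["input_ids"], batch["attention_mask"], srl_ids)
```

The point of the sketch is only the fusion pattern: the BERT encoder stays intact and is fine-tuned as usual, while the semantic role information enters as an extra, learned per-token feature rather than as a change to the backbone architecture.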

Similar Work