Qiaoning At Semeval-2020 Task 4: Commonsense Validation And Explanation System Based On Ensemble Of Language Model

Liu Pai. arXiv 2020

[Paper]    
BERT Fine Tuning Interpretability And Explainability Model Architecture Reinforcement Learning

In this paper, we present a language model system submitted to the SemEval-2020 Task 4 competition: “Commonsense Validation and Explanation”. We participated in two subtasks: subtask A (Validation) and subtask B (Explanation). We applied transfer learning with pretrained language models (BERT, XLNet, RoBERTa, and ALBERT), fine-tuning each on this task, and then compared their characteristics to help future researchers understand and use these models more appropriately. The ensembled model solves the problem better, reaching 95.9% accuracy on subtask A, only 3% below human accuracy.

Similar Work