A BERT Baseline For The Natural Questions

Chris Alberti, Kenton Lee, Michael Collins. arXiv 2019

[Paper] [Code]
BERT · Has Code · Model Architecture

This technical note describes a new baseline for the Natural Questions. Our model is based on BERT and reduces the gap between the model F1 scores reported in the original dataset paper and the human upper bound by 30% and 50% relative for the long and short answer tasks, respectively. This baseline has been submitted to the official NQ leaderboard at ai.google.com/research/NaturalQuestions. Code, preprocessed data, and a pretrained model are available at https://github.com/google-research/language/tree/master/language/question_answering/bert_joint.
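The paper's joint model scores candidate answer spans with per-token start/end logits and classifies an answer type (null, yes, no, short, long) from the [CLS] representation. Below is a minimal PyTorch sketch of that shape using the Hugging Face `transformers` API; it is an illustration under assumptions, not the authors' TensorFlow implementation (see the linked repository), and the class name, head layout, and checkpoint name here are placeholders.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

class BertJointNQ(nn.Module):
    """Sketch of a BERT-joint-style NQ model: span head + answer-type head."""

    def __init__(self, model_name="bert-base-uncased", num_answer_types=5):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # One start logit and one end logit per token, for short-answer spans.
        self.span_head = nn.Linear(hidden, 2)
        # Answer-type classifier over the pooled [CLS] representation
        # (five types in the paper: null / yes / no / short / long).
        self.type_head = nn.Linear(hidden, num_answer_types)

    def forward(self, input_ids, attention_mask=None, token_type_ids=None):
        out = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask,
                        token_type_ids=token_type_ids)
        # Split the (batch, seq_len, 2) span logits into start and end scores.
        start_logits, end_logits = self.span_head(out.last_hidden_state).split(1, dim=-1)
        type_logits = self.type_head(out.pooler_output)
        return start_logits.squeeze(-1), end_logits.squeeze(-1), type_logits

# Usage: encode a (question, context window) pair and score spans/types.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
enc = tokenizer("who founded google?",
                "Google was founded by Larry Page and Sergey Brin.",
                return_tensors="pt")
model = BertJointNQ()
with torch.no_grad():
    start_logits, end_logits, type_logits = model(**enc)
```

At inference time, the paper-style decoding picks the highest-scoring (start, end) span within a document window and uses the answer-type logits to decide whether to emit a short answer, a yes/no answer, or no answer at all.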

Similar Work