
Quantity Doesn't Buy Quality Syntax With Neural Language Models

Marten van Schijndel, Aaron Mueller, Tal Linzen. arXiv 2019

[Paper]    
BERT, GPT, Model Architecture, Pretraining Methods, RAG, Reinforcement Learning, Training Techniques, Transformer

Recurrent neural networks can learn to predict upcoming words remarkably well on average; in syntactically complex contexts, however, they often assign unexpectedly high probabilities to ungrammatical words. We investigate to what extent these shortcomings can be mitigated by increasing the size of the network and the corpus on which it is trained. We find that gains from increasing network size are minimal beyond a certain point. Likewise, expanding the training corpus yields diminishing returns; we estimate that the training corpus would need to be unrealistically large for the models to match human performance. A comparison to GPT and BERT, Transformer-based models trained on billions of words, reveals that these models perform even more poorly than our LSTMs in some constructions. Our results make the case for more data-efficient architectures.
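
The evaluation the abstract describes compares the probability a language model assigns to a grammatical versus an ungrammatical continuation in a syntactically challenging context. The sketch below illustrates that comparison for a subject-verb agreement case with an attracting distractor noun; the choice of model (`gpt2` via the Hugging Face `transformers` library) and the example sentence are illustrative assumptions, not the paper's exact models or stimuli.

```python
# Sketch: targeted syntactic evaluation via next-word probabilities.
# Assumption: gpt2 and the agreement-attraction sentence below stand in
# for the models and stimuli used in the paper.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context = "The keys to the cabinet"          # plural subject, singular distractor
candidates = {"grammatical": " are", "ungrammatical": " is"}

context_ids = tokenizer.encode(context, return_tensors="pt")
with torch.no_grad():
    logits = model(context_ids).logits[0, -1]  # distribution over the next token
log_probs = torch.log_softmax(logits, dim=-1)

for label, verb in candidates.items():
    token_id = tokenizer.encode(verb)[0]       # first subword of the candidate verb
    print(f"{label:13s}{verb!r}: log p = {log_probs[token_id].item():.3f}")

# A model with robust syntactic generalization should assign a higher
# probability to " are" than to " is" in this context.
```

Aggregating this grammatical-versus-ungrammatical comparison over many such minimal pairs yields the accuracy scores on which the paper's model and corpus-size comparisons are based.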

Similar Work