
Large Linguistic Models: Analyzing Theoretical Linguistic Abilities of LLMs

Beguš Gašper, Dąbkowski Maksymilian, Rhodes Ryan. arXiv 2023

[Paper]    
Tags: GPT, Interpretability and Explainability, Model Architecture, Prompting, Uncategorized

The performance of large language models (LLMs) has recently improved to the point where they perform well on many language tasks. We show here that, for the first time, the models can also generate coherent and valid formal analyses of linguistic data, illustrating the vast potential of large language models for metalinguistic analysis. LLMs are primarily trained on language data in the form of text; analyzing and evaluating their metalinguistic abilities improves our understanding of their general capabilities and sheds new light on theoretical models in linguistics. In this paper, we probe GPT-4’s metalinguistic capabilities, focusing on three subfields of formal linguistics: syntax, phonology, and semantics. We outline a research program for metalinguistic analyses of large language models, propose experimental designs, provide general guidelines, discuss limitations, and offer future directions for this line of research. This line of inquiry also exemplifies behavioral interpretability of deep learning, in which models’ abilities are probed through explicit prompting rather than through inspection of their internal representations.
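As a concrete illustration of this prompting-based, behavioral style of probing, the sketch below sends GPT-4 a metalinguistic syntax prompt (a grammaticality judgment plus a labeled constituency bracketing) through the OpenAI chat completions API. It is a minimal sketch, assuming access to the OpenAI Python SDK (v1.x) and an `OPENAI_API_KEY` in the environment; the prompt wording, example sentence, and temperature setting are illustrative assumptions, not the authors’ exact experimental protocol.

```python
# Minimal sketch of a prompting-based metalinguistic probe.
# Assumes the OpenAI Python SDK (v1.x) and OPENAI_API_KEY in the environment;
# the prompt below is illustrative, not the paper's exact protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A syntax probe: ask the model for an explicit formal analysis
# (a grammaticality judgment plus a bracketed constituency parse),
# accessing its abilities through prompting rather than through
# its internal representations.
prompt = (
    "Is the following sentence grammatical in English? Answer 'yes' or 'no', "
    "then give a labeled bracketing of its constituent structure.\n\n"
    "Sentence: The horse raced past the barn fell."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # low temperature keeps judgments closer to reproducible
)

print(response.choices[0].message.content)
```

Holding the temperature low makes the model’s judgments easier to compare across runs, which matters when evaluating its formal analyses against theoretically expected ones.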

Similar Work