Constituency Parsing Using LLMs

Bai Xuefeng, Wu Jialong, Chen Yulong, Wang Zhongqing, Zhang Yue. arXiv 2023

[Paper]    
Tags: Few Shot, GPT, Model Architecture, Training Techniques

Constituency parsing is a fundamental yet unsolved natural language processing task. In this paper, we explore the potential of recent large language models (LLMs), which have exhibited remarkable performance across a wide range of domains and tasks, to tackle this problem. We employ three linearization strategies to transform output trees into symbol sequences, so that LLMs can solve constituency parsing by generating linearized trees. We conduct experiments with a diverse range of LLMs, including ChatGPT, GPT-4, OPT, LLaMA, and Alpaca, comparing their performance against state-of-the-art constituency parsers. Our experiments cover zero-shot, few-shot, and full-training learning settings, and we evaluate the models on one in-domain and five out-of-domain test datasets. Our findings reveal insights into LLMs’ performance, generalization abilities, and challenges in constituency parsing.
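
To make the linearization idea concrete, below is a minimal Python sketch of one plausible bracket-based strategy: serializing a constituency tree into a Penn-Treebank-style symbol sequence that an LLM could be prompted to generate. The `Tree` class and `linearize` function are illustrative assumptions, not the paper's actual implementation (which compares three distinct strategies).

```python
# A minimal sketch of one plausible linearization strategy: flattening a
# constituency tree into a bracketed symbol sequence. The data structure and
# function names here are assumptions for illustration only.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Tree:
    label: str                              # constituent label or POS tag, e.g. "NP", "DT"
    children: List["Tree"] = field(default_factory=list)
    word: str = ""                          # non-empty only for leaf (terminal) nodes


def linearize(node: Tree) -> str:
    """Serialize a constituency tree into a bracketed string (PTB style)."""
    if node.word:                           # leaf: emit "(TAG word)"
        return f"({node.label} {node.word})"
    inner = " ".join(linearize(child) for child in node.children)
    return f"({node.label} {inner})"


# "The cat sleeps" -> (S (NP (DT The) (NN cat)) (VP (VBZ sleeps)))
tree = Tree("S", [
    Tree("NP", [Tree("DT", word="The"), Tree("NN", word="cat")]),
    Tree("VP", [Tree("VBZ", word="sleeps")]),
])
print(linearize(tree))
```

Under this framing, an LLM is asked to emit the bracketed string directly; parsing quality then reduces to how well the generated sequence can be decoded back into a valid tree.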

Similar Work