[Lions: 1] and [Tigers: 2] and [Bears: 3], Oh My! Literary Coreference Annotation with LLMs

Rebecca M. M. Hicke, David Mimno. arXiv 2024

[Paper]    
Reinforcement Learning Training Techniques

Coreference annotation and resolution is a vital component of computational literary studies. However, it has previously been difficult to build high-quality systems for fiction: coreference requires complicated structured outputs, and literary text involves subtle inferences and highly varied language. New language-model-based seq2seq systems present the opportunity to solve both of these problems by learning to directly generate a copy of an input sentence with markdown-like annotations. We create, evaluate, and release several trained models for coreference, as well as a workflow for training new models.
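To make the annotation format concrete, below is a minimal sketch (using the Hugging Face transformers library) of how such a seq2seq coreference model might be queried at inference time. The checkpoint path is a placeholder, not the name of a released model, and the bracketed output simply mirrors the [entity: id] notation used in the paper's title; this is an illustration of the approach, not the authors' code.

```python
# Minimal sketch: querying a seq2seq model fine-tuned to copy its input
# while inserting inline coreference markers. Model path is hypothetical.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "path/to/coref-seq2seq-checkpoint"  # placeholder, not a released model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

sentence = "Dorothy looked at the lion, and the lion looked back at her."
inputs = tokenizer(sentence, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Expected style of output, in the paper's bracketed notation:
# "[Dorothy: 1] looked at [the lion: 2], and [the lion: 2] looked back at [her: 1]."
```

Because the model reproduces the input text verbatim and only adds mention boundaries and cluster ids, the structured coreference output reduces to ordinary text generation.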

Similar Work