An Application Of Large Language Models To Coding Negotiation Transcripts
Friedman Ray, Cho Jaewoo, Brett Jeanne, Zhan Xuhui, Han Ningyu, Kannan Sriram, Ma Yingxiang, Spencer-Smith Jesse, Jäckel Elisabeth, Zerres Alfred, Hooper Madison, Babbit Katie, Acharya Manish, Adair Wendi, Aslani Soroush, Aykaç Tayfun, Bauman Chris, Bennett Rebecca, Brady Garrett, Briggs Peggy, Dowie Cheryl, Eck Chase, Geiger Igmar, Jacob Frank, Kern Molly, Lee Sujin, Liu Leigh Anne, Liu Wu, Loewenstein Jeffrey, Lytle Anne, Ma Li, Mann Michel, Mislin Alexandra, Mitchell Tyree, Nagler Hannah Martensen née, Nandkeolyar Amit, Olekalns Mara, Paliakova Elena, Parlamis Jennifer, Pierce Jason, Pierce Nancy, Pinkley Robin, Prime Nathalie, Ramirez-Marin Jimena, Rockmann Kevin, Ross William, Semnani-Azad Zhaleh, Schroeder Juliana, Smith Philip, Stimmer Elena, Swaab Roderick, Thompson Leigh, Tinsley Cathy, Tuncel Ece, Weingart Laurie, Wilken Robert, Yao Jingjing, Zhang Zhi-xue. arXiv 2024
[Paper]
Applications
In Context Learning
Prompting
Reinforcement Learning
In recent years, Large Language Models (LLMs) have demonstrated impressive
capabilities in the field of natural language processing (NLP). This paper
explores the application of LLMs to negotiation transcript analysis by the
Vanderbilt AI Negotiation Lab. Starting in September 2022, we applied multiple
LLM-based strategies, from zero-shot learning to fine-tuning models to
in-context learning. The final strategy we developed is explained, along with
how to access and use the model. This study conveys both the opportunities
and roadblocks in implementing LLMs in real-life applications and offers a
model for how LLMs can be applied to coding in other fields.
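The in-context learning strategy mentioned in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the category labels (`Position`, `Question`, `Offer`), the example utterances, and the `build_coding_prompt` helper are hypothetical and not taken from the paper or the lab's actual coding scheme. The sketch only assembles a few-shot prompt; sending it to an LLM API is left out.

```python
# Hypothetical few-shot examples pairing an utterance with a behavioral code.
# These labels are illustrative, not the paper's actual coding scheme.
FEW_SHOT_EXAMPLES = [
    ("I can't go below $50 per unit.", "Position"),
    ("Why is that price so important to you?", "Question"),
    ("If you order 1,000 units, we could discuss a discount.", "Offer"),
]

def build_coding_prompt(utterance: str) -> str:
    """Assemble a few-shot (in-context learning) prompt that asks an LLM
    to assign exactly one code to a single negotiation utterance."""
    lines = [
        "You are coding negotiation transcripts. Assign exactly one code "
        "(Position, Question, or Offer) to the final utterance.",
        "",
    ]
    # Each labeled example is shown in the same format the model must complete.
    for text, code in FEW_SHOT_EXAMPLES:
        lines.append(f'Utterance: "{text}"')
        lines.append(f"Code: {code}")
        lines.append("")
    # The target utterance ends with an open "Code:" slot for the model to fill.
    lines.append(f'Utterance: "{utterance}"')
    lines.append("Code:")
    return "\n".join(lines)

prompt = build_coding_prompt("We might be open to a longer contract.")
```

The resulting string could then be passed to whatever chat or completion endpoint a lab has access to; the key design point of in-context learning is that the coding scheme is conveyed entirely through labeled examples in the prompt, with no model fine-tuning.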
Similar Work