
GPT-4 Generated Narratives Of Life Events Using A Structured Narrative Prompt: A Validation Study

Lynch Christopher J., Jensen Erik, Munro Madison H., Zamponi Virginia, Martinez Joseph, O'Brien Kevin, Feldhaus Brandon, Smith Katherine, Reinhold Ann Marie, Gore Ross. arXiv 2024

[Paper]    
Applications · Fine Tuning · GPT · Model Architecture · Prompting · RAG

Large Language Models (LLMs) play a pivotal role in generating vast arrays of narratives, facilitating a systematic exploration of their effectiveness for communicating life events in narrative form. In this study, we employ a zero-shot structured narrative prompt to generate 24,000 narratives using OpenAI's GPT-4. From this dataset, we manually classify 2,880 narratives and evaluate their validity in conveying birth, death, hiring, and firing events. Remarkably, 87.43% of the narratives sufficiently convey the intention of the structured prompt. To automate the identification of valid and invalid narratives, we train and validate nine Machine Learning models on the classified datasets. Leveraging these models, we extend our analysis to predict the classifications of the remaining 21,120 narratives. All of the ML models excelled at classifying valid narratives as valid, but struggled to simultaneously classify invalid narratives as invalid. Our findings not only advance the study of LLM capabilities, limitations, and validity but also offer practical insights for narrative generation and natural language processing applications.
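The sketch below illustrates the kind of zero-shot generation step the abstract describes: issuing a structured narrative prompt for a single life event to GPT-4 through the OpenAI Python SDK. It is not the authors' actual prompt or pipeline; the prompt fields (event type, person, date) and the example values are illustrative assumptions about what such a structured prompt might contain.

```python
# Minimal sketch, assuming the OpenAI Python SDK (openai >= 1.0) and an
# OPENAI_API_KEY in the environment. The structured prompt below is a
# hypothetical stand-in for the paper's structured narrative prompt.
from openai import OpenAI

client = OpenAI()

def generate_narrative(event: str, person: str, date: str) -> str:
    """Request a short narrative conveying a single life event."""
    structured_prompt = (
        "Write a brief narrative describing the following life event.\n"
        f"Event type: {event}\n"
        f"Person: {person}\n"
        f"Date: {date}\n"
        "The narrative must clearly convey that this event occurred."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": structured_prompt}],
    )
    return response.choices[0].message.content

# Example: one of the four event types studied (birth, death, hiring, firing).
print(generate_narrative("hiring", "Alex Rivera", "2023-05-14"))
```

Each generated narrative would then be labeled as valid or invalid (manually for a subset, by trained ML classifiers for the rest), mirroring the validation workflow summarized above.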

Similar Work