
Chatting Up Attachment: Using LLMs to Predict Adult Bonds

Soares Paulo, McCurdy Sean, Gerber Andrew J., Fonagy Peter. arXiv 2024

[Paper]    
Agentic GPT Model Architecture Reinforcement Learning TACL Training Techniques

Obtaining data in the medical field is challenging, making the adoption of AI technology within the space slow and high-risk. We evaluate whether we can overcome this obstacle with synthetic data generated by large language models (LLMs). In particular, we use GPT-4 and Claude 3 Opus to create agents that simulate adults with varying profiles, childhood memories, and attachment styles. These agents participate in simulated Adult Attachment Interviews (AAI), and we use their responses to train models for predicting their underlying attachment styles. We evaluate our models on a transcript dataset from nine humans who underwent the same interview protocol, analyzed and labeled by mental health professionals. Our findings indicate that training on synthetic data alone achieves performance comparable to training on human data. Additionally, while the raw embeddings of synthetic answers occupy a distinct region of the embedding space compared to those of real human responses, introducing unlabeled human data and applying a simple standardization brings the two representations into closer alignment. This adjustment is supported by qualitative analyses and is reflected in the improved predictive accuracy of the standardized embeddings.
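The abstract's alignment step can be illustrated with a minimal sketch. The paper does not publish its exact procedure here, so the following is an assumption: each domain's embeddings are z-scored per dimension using its own statistics, with the human-side statistics estimated from unlabeled transcripts. Only NumPy and toy random data are used; all names (`standardize`, the toy arrays) are illustrative, not from the paper.

```python
import numpy as np

def standardize(embeddings):
    """Z-score each embedding dimension: zero mean, unit variance."""
    mu = embeddings.mean(axis=0)
    sigma = embeddings.std(axis=0) + 1e-8  # avoid division by zero
    return (embeddings - mu) / sigma

# Toy stand-ins: synthetic-agent embeddings occupy a shifted, tighter
# region than the (unlabeled) human embeddings, mimicking the domain
# gap the paper describes.
rng = np.random.default_rng(0)
synthetic = rng.normal(loc=2.0, scale=0.5, size=(100, 8))
human = rng.normal(loc=-1.0, scale=2.0, size=(100, 8))

# Standardize each domain with its own statistics; the human statistics
# only require unlabeled data, so no expert labels are consumed.
synthetic_std = standardize(synthetic)
human_std = standardize(human)

# After per-domain standardization the domain means nearly coincide,
# so a classifier trained on synthetic embeddings transfers better.
gap_before = np.linalg.norm(synthetic.mean(axis=0) - human.mean(axis=0))
gap_after = np.linalg.norm(synthetic_std.mean(axis=0) - human_std.mean(axis=0))
print(gap_after < gap_before)  # True
```

This is the simplest possible alignment; it only removes per-dimension shift and scale, which matches the paper's description of a "simple standardization" rather than a learned domain-adaptation method.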

Similar Work