A Comparison Of Large Language Model And Human Performance On Random Number Generation Tasks

Harrison, Rachel M. arXiv 2024

[Paper]    
Tags: Applications, Ethics And Bias, GPT, Model Architecture, Prompting

Random Number Generation Tasks (RNGTs) are used in psychology to examine how humans generate sequences devoid of predictable patterns. By adapting an existing human RNGT to an LLM-compatible environment, this preliminary study tests whether ChatGPT-3.5, a large language model (LLM) trained on human-generated text, exhibits human-like cognitive biases when generating random number sequences. Initial findings indicate that ChatGPT-3.5 avoids repetitive and sequential patterns more effectively than humans do, with notably lower repeat frequencies and adjacent number frequencies. Continued research into different models, parameters, and prompting methodologies will deepen our understanding of how LLMs can more closely mimic human random generation behaviors, while also broadening their applications in cognitive and behavioral science research.
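As a rough illustration of the two metrics mentioned above, the sketch below scores a number sequence for repeat frequency and adjacent number frequency. The paper's exact operational definitions are not given here, so the formulas used (a "repeat" is any transition n → n; an "adjacent" transition is n → n±1, each taken as a fraction of all consecutive pairs) are assumptions, as are the function names and the example sequence.

```python
def repeat_frequency(seq):
    """Fraction of consecutive pairs where the same number repeats.

    Assumed definition: a 'repeat' is any transition n -> n.
    """
    pairs = list(zip(seq, seq[1:]))
    return sum(a == b for a, b in pairs) / len(pairs)


def adjacent_frequency(seq):
    """Fraction of consecutive pairs that step by exactly one.

    Assumed definition: an 'adjacent' transition is n -> n+1 or n -> n-1.
    """
    pairs = list(zip(seq, seq[1:]))
    return sum(abs(a - b) == 1 for a, b in pairs) / len(pairs)


# Example: score a hypothetical sequence, e.g. parsed from a model's output.
sequence = [7, 3, 3, 8, 1, 2, 9, 5, 4, 4]
print(f"repeat frequency:   {repeat_frequency(sequence):.2f}")   # 2 of 9 pairs
print(f"adjacent frequency: {adjacent_frequency(sequence):.2f}")  # 2 of 9 pairs
```

Lower values on both metrics indicate fewer of the stereotyped patterns (immediate repeats, counting up or down) that human participants typically over- or under-produce in RNGTs.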
