
Analyzing Large Language Models Chatbots: An Experimental Approach Using A Probability Test

Melise Peruchini, Julio Monteiro Teixeira. arXiv 2024

[Paper]    
Tags: Fine Tuning, GPT, Model Architecture, Prompting

This study consists of qualitative empirical research, conducted through exploratory tests with two Large Language Model (LLM) chatbots: ChatGPT and Gemini. The methodological procedure involved exploratory tests based on prompts built around a probability question. The "Linda Problem", widely recognized in cognitive psychology, served as the basis for the tests, along with a new problem developed specifically for this experiment, the "Mary Problem". The object of analysis is the dataset of outputs produced in each chatbot interaction. The purpose of the analysis is to verify whether the chatbots mainly employ logical reasoning that aligns with probability theory or whether they are more frequently swayed by the stereotypical textual descriptions in the prompts. The findings provide insight into how each chatbot handles logic and textual construction, suggesting that, while the analyzed chatbots perform satisfactorily on a well-known probabilistic problem, they perform significantly worse on new tests that require direct application of probabilistic logic.
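The "Linda Problem" probes the conjunction fallacy: for any events A and B, the probability of their conjunction can never exceed the probability of either event alone, yet stereotypical descriptions often lead respondents (and, per this study, chatbots) to rank the conjunction as more likely. A minimal sketch of the rule, with illustrative probabilities that are assumptions and not data from the paper:

```python
# Conjunction rule: P(A and B) = P(A) * P(B | A) <= P(A).
# The numbers below are illustrative only, not taken from the study.
p_bank_teller = 0.05           # assumed P(Linda is a bank teller)
p_feminist_given_teller = 0.8  # assumed P(feminist | bank teller)

# Probability of the conjunction "bank teller AND feminist".
p_conjunction = p_bank_teller * p_feminist_given_teller

# The conjunction can never be more probable than either conjunct,
# regardless of how representative the stereotype seems.
assert p_conjunction <= p_bank_teller
print(round(p_conjunction, 2))  # 0.04
```

Judging "Linda is a bank teller and a feminist" as more likely than "Linda is a bank teller" violates this inequality, which is the error the prompts are designed to elicit.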
