Evaluating Telugu Proficiency In Large Language Models: A Comparative Analysis Of ChatGPT And Gemini

Kishore Katikela Sreeharsha, Shaik Rahimanuddin. arXiv 2024

[Paper]    
Fine Tuning, GPT, Model Architecture, RAG, Reinforcement Learning

The growing prominence of large language models (LLMs) necessitates the exploration of their capabilities beyond English. This research investigates the Telugu language proficiency of ChatGPT and Gemini, two leading LLMs. Through a set of 20 questions covering greetings, grammar, vocabulary, common phrases, task completion, and situational reasoning, the study examines their strengths and weaknesses in handling Telugu. The analysis aims to identify the LLM that demonstrates a deeper understanding of Telugu grammatical structures, possesses a broader vocabulary, and exhibits superior performance in tasks such as writing and reasoning. By comparing their ability to comprehend and use everyday Telugu expressions, the research sheds light on their suitability for real-world language interaction. Furthermore, the evaluation of adaptability and reasoning capabilities provides insights into how each LLM leverages Telugu to respond to dynamic situations. This comparative analysis contributes to the ongoing discussion on multilingual capabilities in AI and paves the way for future research in developing LLMs that can seamlessly integrate with Telugu-speaking communities.
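
The paper does not publish its prompts or evaluation harness, but the side-by-side protocol the abstract describes (the same categorized Telugu questions posed to both models, with each response rated) can be sketched roughly as below. The category prompts, the stubbed `query_model` and `score_response` functions, and the 1-5 rubric are illustrative assumptions standing in for real API calls and human raters, not the authors' materials.

```python
# Illustrative sketch (not the authors' code) of a side-by-side Telugu evaluation:
# the same categorized prompts go to both models and each response is scored.

CATEGORIES = {
    "greetings":             ["మీరు ఎలా ఉన్నారు?"],                # "How are you?"
    "grammar":               ["ఈ వాక్యాన్ని సరిదిద్దండి: ..."],     # "Correct this sentence: ..."
    "vocabulary":            ["'పుస్తకం' అంటే ఏమిటి?"],             # "What does 'pustakam' mean?"
    "common_phrases":        ["'శుభోదయం' ఎప్పుడు వాడతారు?"],        # "When is 'good morning' used?"
    "task_completion":       ["ఒక చిన్న కథ రాయండి."],               # "Write a short story."
    "situational_reasoning": ["వర్షం పడుతుంటే మీరు ఏమి చేస్తారు?"],  # "What do you do if it rains?"
}

def query_model(model: str, prompt: str) -> str:
    """Stub: replace with a real ChatGPT or Gemini API call."""
    return f"[{model} response to: {prompt}]"

def score_response(response: str) -> int:
    """Stub: replace with human or rubric-based rating on a 1-5 scale."""
    return 3

def run_evaluation(models=("chatgpt", "gemini")):
    # Collect a per-category average score for each model so they can be compared directly.
    results = {m: {} for m in models}
    for category, prompts in CATEGORIES.items():
        for model in models:
            scores = [score_response(query_model(model, p)) for p in prompts]
            results[model][category] = sum(scores) / len(scores)
    return results

if __name__ == "__main__":
    for model, per_category in run_evaluation().items():
        print(model, per_category)
```

In practice the stubs would be replaced by the two providers' APIs and by human raters fluent in Telugu; the harness itself only fixes the structure of the comparison (identical prompts, per-category aggregation).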

Similar Work