
Thalle: Text Hyperlocally Augmented Large Language Extension -- Technical Report

KBTG Labs, Danupat Khamnuansin, Atthakorn Petchsod, Anuruth Lertpiya, Pornchanan Balee, Thanawat Lodkaew, Tawunrat Chalothorn, Thadpong Pongthawornkamol, Monchai Lertsutthiwong. arXiv 2024

[Paper]    
Fine Tuning Pretraining Methods Reinforcement Learning Training Techniques

Recent advancements in Large Language Models (LLMs) have revealed new capabilities and opportunities across the technological landscape. However, the practicality of very large LLMs is challenged by their high compute cost, which does not justify the benefits given their limited capability compared to humans. Smaller, more practical LLMs have shown potential in financial analysis, though they are not yet fully proficient, as evidenced by their near-passing performance on the Chartered Financial Analyst (CFA) exam. In this work, we present the Financial Analyst Extension to our Text Hyperlocally Augmented Large Language Extension (THaLLE), a series of 8B LLMs that consistently achieve the highest performance on mock CFA exams among models of comparable size. We thoroughly document the fine-tuning techniques used to facilitate future research. Additionally, we introduce the use of Flare CFA, a publicly available dataset for evaluating LLMs as financial advisors.
