FinTral: A Family Of GPT-4 Level Multimodal Financial Large Language Models

Gagan Bhatia, El Moatez Billah Nagoudi, Hasan Cavusoglu, Muhammad Abdul-Mageed. arXiv 2024

[Paper] [Code]    
Efficiency And Optimization · Fine Tuning · GPT · Has Code · Model Architecture · Multimodal Models · Pretraining Methods · Reinforcement Learning · Tools · Training Techniques

We introduce FinTral, a suite of state-of-the-art multimodal large language models (LLMs) built upon the Mistral-7b model and tailored for financial analysis. FinTral integrates textual, numerical, tabular, and image data. We enhance FinTral with domain-specific pretraining, instruction fine-tuning, and RLAIF training, exploiting a large collection of textual and visual datasets we curate for this work. We also introduce an extensive benchmark featuring nine tasks and 25 datasets for evaluation, including hallucinations in the financial domain. Our FinTral model trained with direct preference optimization and employing advanced Tools and Retrieval methods, dubbed FinTral-DPO-T&R, demonstrates exceptional zero-shot performance. It outperforms ChatGPT-3.5 in all tasks and surpasses GPT-4 in five out of nine tasks, marking a significant advancement in AI-driven financial technology. We also demonstrate that FinTral has the potential to excel in real-time analysis and decision-making in diverse financial contexts. The GitHub repository for FinTral is available at https://github.com/UBC-NLP/fintral.
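
The abstract's strongest variant, FinTral-DPO-T&R, is trained with direct preference optimization (DPO). As a minimal sketch of the standard DPO objective, not the authors' actual training code, the snippet below shows the loss computed from per-sequence log-probabilities under the trained policy and a frozen reference model; all function and variable names here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO objective (illustrative, not FinTral's exact code).

    Each input is the summed token log-probability of a response,
    shape (batch,). `beta` scales the implicit reward.
    """
    # Implicit reward: log-ratio of policy vs. frozen reference model
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Minimized when the preferred response outscores the rejected one
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

In a preference-tuning loop, each batch supplies a preferred and a rejected response per prompt (here, preferences would come from the RLAIF-style AI feedback the abstract describes); the loss pushes the policy to rank preferred responses above rejected ones relative to the reference model, without training a separate reward model.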

Similar Work