Exploring Boundary Of GPT-4V On Marine Analysis: A Preliminary Case Study

Zheng Ziqiang, Chen Yiwei, Zhang Jipeng, Vu Tuan-anh, Zeng Huimin, Tim Yue Him Wong, Yeung Sai-kit. arXiv 2024

[Paper] [Code]    
Attention Mechanism GPT Has Code Model Architecture Pretraining Methods Prompting Transformer

Large language models (LLMs) have demonstrated a powerful ability to answer diverse queries as general-purpose assistants. Continued work on multi-modal large language models (MLLMs) has further empowered LLMs to perceive visual signals. The launch of GPT-4 (Generative Pre-trained Transformer) generated significant interest in the research community, and GPT-4V(ision) has demonstrated remarkable capability in both academia and industry, becoming a focal point of a new generation of artificial intelligence. Despite GPT-4V's significant success, applying MLLMs to domain-specific analysis (e.g., marine analysis), which requires domain-specific knowledge and expertise, has received less attention. In this study, we carry out a preliminary yet comprehensive case study of utilizing GPT-4V for marine analysis. The report systematically evaluates GPT-4V on marine research tasks and aims to set a standard for future development of MLLMs. The experimental results show that the responses generated by GPT-4V still fall short of the domain-specific requirements of marine professionals. All images and prompts used in this study will be available at https://github.com/hkust-vgd/Marine_GPT-4V_Eval
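For readers who want to try this style of evaluation themselves, the sketch below shows one way to send a marine image together with a domain-specific prompt to GPT-4V via the OpenAI Python SDK. The model name, image file, and prompt text are illustrative assumptions, not the paper's exact setup; the actual images and prompts used in the study are in the linked repository.

```python
# Minimal sketch: querying GPT-4V with a marine image and a
# domain-specific prompt using the OpenAI Python SDK (v1.x).
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def encode_image(path: str) -> str:
    """Base64-encode a local image for the vision API."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


image_b64 = encode_image("reef_fish.jpg")  # hypothetical sample image

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # a GPT-4V-capable model name; adjust as needed
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "As a marine biology expert, identify the species "
                            "in this image and describe its habitat.",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```

A study like this one would repeat such queries over a curated set of marine images and expert-designed prompts, then have domain experts judge whether the responses meet professional requirements.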

Similar Work