POEM: Interactive Prompt Optimization For Enhancing Multimodal Reasoning Of Large Language Models

Jianben He, Xingbo Wang, Shiyi Liu, Guande Wu, Claudio Silva, Huamin Qu. arXiv 2024

[Paper]    
Efficiency And Optimization · Few Shot · Multimodal Models · Prompting · Reinforcement Learning

Large language models (LLMs) have exhibited impressive abilities in multimodal content comprehension and reasoning when properly prompted in zero- or few-shot settings. Despite the proliferation of interactive systems developed to support prompt engineering for LLMs across various tasks, most have focused primarily on textual or visual inputs, neglecting the complex interplay between modalities within multimodal inputs. This oversight hinders the development of effective prompts that guide the model's multimodal reasoning process by fully exploiting the rich context provided by multiple modalities. In this paper, we present POEM, a visual analytics system that facilitates efficient prompt engineering for enhancing the multimodal reasoning performance of LLMs. The system enables users to explore interaction patterns across modalities at varying levels of detail, providing a comprehensive understanding of the multimodal knowledge elicited by various prompts. Through diverse recommendations of demonstration examples and instructional principles, POEM supports users in iteratively crafting and refining prompts to better align model knowledge with human insights. The effectiveness and efficiency of our system are validated through two case studies and interviews with experts.
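The iterative prompt-crafting workflow described above builds on the standard few-shot pattern: an instruction, demonstration examples, and the query are concatenated into a single prompt. A minimal sketch of that pattern for multimodal inputs (the function name, placeholder image tokens, and layout below are assumptions for illustration, not POEM's actual interface):

```python
# Illustrative sketch only: assembling a few-shot multimodal prompt from an
# instruction, demonstration examples, and a query. The [IMAGE: ...]
# placeholder is an assumed convention a serving layer would resolve.

def build_prompt(instruction, demonstrations, query):
    """Combine an instruction, few-shot demos, and a query into one prompt.

    Each demonstration is a (question, image_ref, answer) tuple; the query
    is a (question, image_ref) pair whose answer the model should complete.
    """
    parts = [instruction.strip()]
    for question, image_ref, answer in demonstrations:
        parts.append(f"[IMAGE: {image_ref}]\nQ: {question}\nA: {answer}")
    q_question, q_image_ref = query
    parts.append(f"[IMAGE: {q_image_ref}]\nQ: {q_question}\nA:")
    return "\n\n".join(parts)


demos = [
    ("How many apples are on the table?", "demo1.png", "Three."),
]
prompt = build_prompt(
    "Answer questions about the image. Reason step by step.",
    demos,
    ("What color is the car?", "query.png"),
)
```

Refining a prompt in this scheme amounts to editing the instruction text or swapping demonstration examples, which is the loop an interactive system like POEM supports.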

Similar Work