
Zero-shot Recommendations With Pre-trained Large Language Models For Multimodal Nudging

Harrison Rachel M., Dereventsov Anton, Bibin Anton. arXiv 2023

[Paper]    
Multimodal Models RAG

We present a method for zero-shot recommendation of multimodal non-stationary content that leverages recent advancements in generative AI. We propose rendering inputs of different modalities as textual descriptions and utilizing pre-trained LLMs to obtain their numerical representations by computing semantic embeddings. Once unified representations of all content items are obtained, recommendation can be performed by computing an appropriate similarity metric between them, without any additional learning. We demonstrate our approach on a synthetic multimodal nudging environment in which the inputs consist of tabular, textual, and visual data.
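The pipeline described in the abstract — render each modality as text, embed with a pre-trained model, rank by similarity — can be sketched as follows. This is a minimal illustration, not the authors' implementation: `embed` is a deterministic hashed bag-of-words stand-in for a pre-trained LLM sentence encoder, and `render_as_text` assumes items arrive as key-value dictionaries (e.g. a tabular row, or an image already converted to a caption).

```python
import zlib
import numpy as np

def embed(text, dim=64):
    # Stand-in for a pre-trained LLM embedding model: a deterministic
    # hashed bag-of-words vector, L2-normalized so that a dot product
    # equals cosine similarity. A real system would call an encoder here.
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[zlib.crc32(token.encode()) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def render_as_text(item):
    # Unify modalities: tabular rows, image captions, and raw text all
    # become natural-language descriptions before embedding.
    return " ".join(f"{k}: {v}" for k, v in item.items())

def recommend(user_description, items, top_k=2):
    # Zero-shot ranking: no training, just similarity between the user's
    # textual description and each item's unified representation.
    user_vec = embed(user_description)
    item_vecs = np.stack([embed(render_as_text(it)) for it in items])
    scores = item_vecs @ user_vec  # cosine similarity (unit vectors)
    order = np.argsort(-scores)[:top_k]
    return [(items[i], float(scores[i])) for i in order]

items = [
    {"type": "article", "topic": "yoga and mindfulness"},
    {"type": "video", "topic": "stock market trading"},
]
recs = recommend("a user interested in yoga and mindfulness", items, top_k=1)
```

Because every modality is mapped into the same embedding space, the ranking step is modality-agnostic; swapping the toy `embed` for an actual pre-trained encoder changes nothing else in the flow.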

Similar Work