
Meta-task Prompting Elicits Embeddings From Large Language Models

Lei Yibin, Wu Di, Zhou Tianyi, Shen Tao, Cao Yu, Tao Chongyang, Yates Andrew. arXiv 2024

[Paper]    
Fine Tuning, Pretraining Methods, Prompting, RAG, Training Techniques

We introduce a new unsupervised text embedding method, Meta-Task Prompting with Explicit One-Word Limitation (MetaEOL), for generating high-quality sentence embeddings from Large Language Models (LLMs) without model fine-tuning. Leveraging meta-task prompting, MetaEOL guides LLMs to produce embeddings through a series of carefully designed prompts that address multiple representational aspects. Our comprehensive experiments demonstrate that embeddings averaged across meta-tasks are versatile, yielding competitive performance on Semantic Textual Similarity (STS) benchmarks and excelling in downstream tasks, surpassing contrastive-trained models. Our findings suggest a new scaling law, offering a versatile and resource-efficient approach for embedding generation across diverse scenarios.
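The core recipe described in the abstract can be illustrated with a short sketch: prompt a frozen causal LLM with several meta-task templates that each end in an explicit one-word cue, take the last-token hidden state from each prompt as a task-specific embedding, and average them. This is a minimal sketch assuming a Hugging Face causal LM; the specific templates, model name, and the last-layer/last-token extraction choice below are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of meta-task prompting for sentence embeddings (MetaEOL-style).
# Assumptions: Hugging Face transformers, an arbitrary causal LM, and
# illustrative meta-task templates (not the paper's exact wording).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # placeholder; any causal LM works in principle
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

# Hypothetical meta-task prompts, each ending with an explicit one-word limitation.
META_TASK_PROMPTS = [
    'For text classification, this sentence : "{text}" means in one word:"',
    'For sentiment analysis, this sentence : "{text}" means in one word:"',
    'For paraphrase identification, this sentence : "{text}" means in one word:"',
    'For information retrieval, this sentence : "{text}" means in one word:"',
]

@torch.no_grad()
def meta_task_embedding(text: str) -> torch.Tensor:
    """Average the last-token hidden states obtained from each meta-task prompt."""
    per_task = []
    for template in META_TASK_PROMPTS:
        prompt = template.format(text=text)
        inputs = tokenizer(prompt, return_tensors="pt")
        outputs = model(**inputs)
        # Last layer, last token: the position where the model would emit its one word.
        per_task.append(outputs.hidden_states[-1][0, -1, :])
    return torch.stack(per_task).mean(dim=0)

embedding = meta_task_embedding("A man is playing a guitar on stage.")
print(embedding.shape)  # hidden-size vector, e.g. 4096 dims for a 7B model
```

Because no gradients or fine-tuning are involved, the only cost is one forward pass per meta-task prompt, and the resulting averaged vector can be compared with cosine similarity for STS-style evaluation.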

Similar Work