
Improving Sentence Embeddings With Automatic Generation Of Training Data Using Few-shot Examples

Sato Soma, Tsukagoshi Hayato, Sasano Ryohei, Takeda Koichi. arXiv 2024

[Paper]    
Few Shot · Fine Tuning · Pretraining Methods · Prompting · RAG · Training Techniques

Decoder-based large language models (LLMs) have shown high performance on many natural language processing tasks. This also holds for sentence embedding learning, where a decoder-based model, PromptEOL, has achieved the best performance on semantic textual similarity (STS) tasks. However, PromptEOL requires a manually annotated natural language inference (NLI) dataset for fine-tuning. We aim to improve sentence embeddings without using large manually annotated datasets by automatically generating an NLI dataset with an LLM and using it to fine-tune PromptEOL. To this end, we explore data generation methods suitable for sentence embedding learning. Specifically, we focus on automatic dataset generation through few-shot learning and explore appropriate ways to leverage few-shot examples. Experimental results on STS tasks demonstrate that our approach outperforms existing models in settings without large manually annotated datasets.
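The central idea is to replace manual NLI annotation with data generated by an LLM from a handful of few-shot examples. Below is a minimal sketch of how such a few-shot generation prompt might be assembled; the instruction wording, the premise/entailment/contradiction examples, and the overall setup are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch: building a few-shot prompt that asks an LLM to produce
# NLI-style data (an entailed sentence and a contradictory sentence) for a premise.
# All example sentences and the instruction text are illustrative assumptions.

FEW_SHOT_EXAMPLES = [
    {
        "premise": "A man is playing a guitar on stage.",
        "entailment": "A person is performing music.",
        "contradiction": "The stage is completely empty.",
    },
    {
        "premise": "Two dogs are running through a field.",
        "entailment": "Animals are moving outdoors.",
        "contradiction": "The dogs are sleeping inside the house.",
    },
]


def build_generation_prompt(premise: str) -> str:
    """Assemble a few-shot prompt asking the LLM to write one sentence entailed
    by the premise and one sentence that contradicts it."""
    lines = [
        "Write one sentence entailed by the premise and one sentence that contradicts it.",
        "",
    ]
    for ex in FEW_SHOT_EXAMPLES:
        lines += [
            f"Premise: {ex['premise']}",
            f"Entailment: {ex['entailment']}",
            f"Contradiction: {ex['contradiction']}",
            "",
        ]
    # The new premise is appended last; the LLM completes the entailment and
    # contradiction fields.
    lines += [f"Premise: {premise}", "Entailment:"]
    return "\n".join(lines)


if __name__ == "__main__":
    print(build_generation_prompt("A child is riding a bicycle in the park."))
```

The assembled prompt would be sent to a decoder LLM, and the generated entailment/contradiction sentences could then serve as positive and hard-negative pairs when fine-tuning PromptEOL, in place of a manually annotated NLI dataset.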

Similar Work