MMM: Multilingual Mutual Reinforcement Effect Mix Datasets & Test With Open-domain Information Extraction Large Language Models

Gan Chengguang, Yin Qingyu, He Xinyang, Wei Hanjun, Liang Yunhao, Lim Younghun, Wang Shijian, Huang Hexiang, Zhang Qinghao, Ni Shiwen, Mori Tatsunori. arXiv 2024

[Paper]

Tags: Fine Tuning, RAG, Tools

The Mutual Reinforcement Effect (MRE) represents a promising avenue in information extraction and multitasking research. Nevertheless, its applicability has been constrained because MRE mix datasets were previously available only in Japanese, limiting comprehensive exploration by the global research community. To address this limitation, we introduce a Multilingual MRE mix dataset (MMM) that encompasses 21 sub-datasets in English, Japanese, and Chinese. In this paper, we also propose a method for dataset translation assisted by Large Language Models (LLMs), which significantly reduces the manual annotation time required for dataset construction by leveraging LLMs to translate the original Japanese datasets. Additionally, we have enriched the dataset by incorporating open-domain Named Entity Recognition (NER) and sentence classification tasks. Utilizing this expanded dataset, we developed a unified input-output framework to train an Open-domain Information Extraction Large Language Model (OIELLM). OIELLM effectively processes the novel MMM datasets and shows significantly improved performance.
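The LLM-assisted translation step lends itself to a short illustration. The sketch below is a minimal, hypothetical reconstruction rather than the authors' released pipeline: the prompt wording, the sample schema (a sentence-level label paired with word-level entity spans, mirroring the MRE mix idea of combining sentence classification with NER), and the model name are all assumptions.

```python
# Minimal sketch (NOT the paper's actual pipeline) of LLM-assisted dataset
# translation: a Japanese MRE sample is translated to English while its
# label structure is preserved. Prompt text, sample schema, and model name
# are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Translate the Japanese strings in the JSON sample below into English. "
    "Keep the JSON structure and keys unchanged; translate only the values. "
    "Return valid JSON only.\n{sample}"
)

def translate_sample(sample: dict, model: str = "gpt-4") -> dict:
    """Translate one MRE mix sample (text + labels) via an LLM."""
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": PROMPT.format(sample=json.dumps(sample, ensure_ascii=False)),
        }],
    )
    # The model is instructed to return JSON; parsing may still fail and
    # would need validation in a real pipeline.
    return json.loads(response.choices[0].message.content)

# Hypothetical MRE mix sample: one sentence-level label plus word-level
# NER spans, reflecting the unified input-output framing.
sample = {
    "text": "山田太郎は東京の会社で働いている。",
    "sentence_label": "factual",
    "entities": [
        {"span": "山田太郎", "type": "PERSON"},
        {"span": "東京", "type": "LOCATION"},
    ],
}
print(translate_sample(sample))
```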
