
Overview Of The Promptcblue Shared Task In CHIP2023

Zhu Wei, Wang Xiaoling, Chen Mosha, Tang Buzhou. arXiv 2023

Tags: In-Context Learning, Prompting

This paper presents an overview of the PromptCBLUE shared task (http://cips-chip.org.cn/2023/eval1) held at the CHIP-2023 conference. The shared task reformulates the CBLUE benchmark and provides a good testbed for Chinese open-domain and medical-domain large language models (LLMs) on general medical natural language processing. Two tracks were held: (a) a prompt tuning track, investigating multi-task prompt tuning of LLMs, and (b) a track probing the in-context learning capabilities of open-sourced LLMs. Many teams from both industry and academia participated in the shared tasks, and the top teams achieved strong test results. This paper describes the tasks, datasets, evaluation metrics, and top systems for both tracks, and concludes by summarizing the techniques and results of the approaches explored by the participating teams.
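As a rough illustration of the setup in track (b), the sketch below builds a few-shot prompt by prepending labeled demonstrations to a test query before passing the concatenated text to an open-sourced LLM for completion. The task, examples, and prompt template here are hypothetical and are not drawn from the PromptCBLUE specification.

```python
# Hypothetical few-shot in-context learning prompt construction.
# The medical examples and the "Input:/Output:" template are invented for
# illustration only; PromptCBLUE defines its own task-specific prompt formats.

from typing import List, Tuple


def build_few_shot_prompt(demonstrations: List[Tuple[str, str]], query: str) -> str:
    """Concatenate (input, output) demonstrations, then append the test query."""
    parts = [f"Input: {text}\nOutput: {label}" for text, label in demonstrations]
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)


if __name__ == "__main__":
    demos = [
        ("患者主诉头痛三天。", "症状: 头痛"),          # "Headache for three days." -> "Symptom: headache"
        ("给予阿司匹林口服治疗。", "药物: 阿司匹林"),  # "Oral aspirin administered." -> "Drug: aspirin"
    ]
    prompt = build_few_shot_prompt(demos, "患者出现持续咳嗽。")
    print(prompt)  # This prompt would be sent to an open-sourced LLM for completion.
```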

Similar Work