
Concept-aware Data Construction Improves In-context Learning Of Language Models

Michal Štefánik, Marek Kadlčík, Petr Sojka. arXiv 2024

Tags: In-Context Learning, Model Architecture, Pretraining Methods, Prompting, Tools, Training Techniques, Transformer

Many recent language models (LMs) are capable of in-context learning (ICL), manifested in the LMs' ability to perform a new task solely from natural-language instruction. Previous work curating in-context learners assumes that ICL emerges from vast over-parametrization or the scale of multi-task training. However, recent theoretical work attributes the ICL ability to concept-dependent training data and creates functional in-context learners even in small-scale, synthetic settings. In this work, we practically explore this newly identified axis of ICL quality. We propose Concept-aware Training (CoAT), a framework for constructing training scenarios that make it beneficial for the LM to learn to utilize the analogical reasoning concepts from demonstrations. We find that by using CoAT, pre-trained transformers can learn to better utilize new latent concepts from demonstrations and that this ability makes ICL more robust to the functional deficiencies of previous models. Finally, we show that concept-aware in-context learning is more effective for a majority of new tasks when compared to traditional instruction tuning, resulting in performance comparable to previous in-context learners trained with orders of magnitude more data.
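The abstract describes CoAT only at a high level; as a rough illustration of the idea of concept-dependent data construction, the sketch below groups training examples by an annotated latent concept and builds few-shot prompts whose demonstrations share the target example's concept, so that attending to the demonstrations is actually useful for prediction. This is not the paper's implementation: the function name, the "concept" annotation field, and the prompt template are all assumptions made for the example.

```python
import random
from collections import defaultdict


def build_concept_aware_prompts(examples, k=3, seed=0):
    """Build few-shot prompts whose demonstrations share the target's latent concept.

    `examples` is assumed to be a list of dicts with keys "input", "output",
    and "concept" (a latent-concept annotation, e.g. the reasoning pattern
    needed to solve the example). Keys and structure are hypothetical.
    """
    rng = random.Random(seed)

    # Index examples by their annotated latent concept.
    by_concept = defaultdict(list)
    for ex in examples:
        by_concept[ex["concept"]].append(ex)

    prompts = []
    for ex in examples:
        # Candidate demonstrations: same concept, excluding the target itself.
        pool = [d for d in by_concept[ex["concept"]] if d is not ex]
        if len(pool) < k:
            continue  # skip concepts with too few demonstrations

        demos = rng.sample(pool, k)
        prompt = "\n\n".join(
            f"Input: {d['input']}\nOutput: {d['output']}" for d in demos
        )
        prompt += f"\n\nInput: {ex['input']}\nOutput:"
        prompts.append({"prompt": prompt, "target": ex["output"]})
    return prompts
```

Under this assumed setup, a model trained on such prompts is rewarded for inferring the shared concept from the demonstrations rather than relying only on the instruction or memorized task behavior.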

Similar Work