
Harnessing The Intrinsic Knowledge Of Pretrained Language Models For Challenging Text Classification Settings

Lingyu Gao. arXiv 2024

[Paper]    
Applications, In-Context Learning, Model Architecture, Pretraining Methods, Prompting, RAG, Reinforcement Learning, Security, Training Techniques, Transformer

Text classification is crucial for applications such as sentiment analysis and toxic text filtering, yet it remains challenging because of the complexity and ambiguity of natural language. Recent advances in deep learning, particularly transformer architectures and large-scale pretraining, have achieved impressive success across NLP. Building on these advances, this thesis explores three challenging text classification settings by leveraging the intrinsic knowledge of pretrained language models (PLMs). First, to address the challenge of selecting cloze-question distractors that are misleading to test-takers yet still incorrect, we develop models that use features based on contextualized word representations from PLMs, achieving performance that rivals or surpasses human accuracy. Second, to improve generalization to unseen labels, we construct small finetuning datasets with domain-independent task label descriptions, improving model performance and robustness. Finally, we address the sensitivity of large language models to in-context learning prompts by selecting effective demonstrations, focusing on misclassified examples and resolving the model's ambiguity about test example labels.
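The abstract describes the demonstration-selection idea only at a high level. The sketch below is one plausible reading of it, not the thesis's actual algorithm: it assumes a hypothetical `label_probs` scorer (label distribution for the test input given a candidate demonstration) and a hypothetical `zero_shot_probs` scorer (the model's guess on the demonstration itself), and uses a top-two probability margin as a stand-in for "resolving ambiguity". All names and heuristics here are illustrative assumptions.

```python
# Minimal sketch: prefer demonstrations the zero-shot model misclassifies
# (harder examples) and whose inclusion most clearly disambiguates the
# test example's label. Scorers are hypothetical; in practice they would
# wrap PLM calls (e.g., comparing label verbalizer token probabilities).

from typing import Callable, Dict, List, Tuple


def ambiguity_margin(probs: Dict[str, float]) -> float:
    """Margin between the two most probable labels; a small margin means the
    model is still ambiguous about the test example's label."""
    top_two = sorted(probs.values(), reverse=True)[:2]
    return top_two[0] - (top_two[1] if len(top_two) > 1 else 0.0)


def select_demonstrations(
    candidates: List[Tuple[str, str]],                       # (text, gold label) pairs
    test_input: str,
    label_probs: Callable[[Tuple[str, str], str], Dict[str, float]],
    zero_shot_probs: Callable[[str], Dict[str, float]],
    k: int = 4,
) -> List[Tuple[str, str]]:
    """Rank candidate demonstrations for a single test input."""
    scored = []
    for text, gold in candidates:
        probs = label_probs((text, gold), test_input)        # labels given this demo
        zs = zero_shot_probs(text)                           # model's guess on the demo itself
        misclassified = max(zs, key=zs.get) != gold
        scored.append((misclassified, ambiguity_margin(probs), (text, gold)))
    # Misclassified demos first, then larger margins (clearer disambiguation).
    scored.sort(key=lambda t: (not t[0], -t[1]))
    return [demo for _, _, demo in scored[:k]]


# Toy usage with constant scorers, just to show the interface.
demos = [("great movie!", "positive"), ("awful plot", "negative")]
picked = select_demonstrations(
    demos,
    "not bad at all",
    label_probs=lambda demo, x: {"positive": 0.7, "negative": 0.3},
    zero_shot_probs=lambda x: {"positive": 0.5, "negative": 0.5},
    k=1,
)
```

The misclassification filter and margin heuristic are placeholders for whichever hardness and ambiguity measures the thesis actually defines; the point of the sketch is only the selection interface (score each candidate demonstration against the test input, then rank).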
