Tutorials On Stance Detection Using Pre-trained Language Models: Fine-tuning BERT And Prompting Large Language Models

Yun-Shiuan Chuang. arXiv 2023

[Paper]
BERT, Few Shot, Fine Tuning, GPT, Merging, Model Architecture, Pretraining Methods, Prompting, Tokenization, Training Techniques, Transformer

This paper presents two self-contained tutorials on stance detection in Twitter data, one using BERT fine-tuning and one using prompting of large language models (LLMs). The first tutorial explains the BERT architecture and its tokenization, guiding users through training, tuning, and evaluating both standard and domain-specific BERT models with the Hugging Face Transformers library. The second focuses on constructing prompts and few-shot examples to elicit stance predictions from ChatGPT and the open-source FLAN-T5 without any fine-tuning. Several prompting strategies are implemented and evaluated using confusion matrices and macro F1 scores. The tutorials provide code, visualizations, and insights; notably, few-shot ChatGPT and FLAN-T5 outperform the fine-tuned BERT models. By covering both model fine-tuning and prompting-based techniques in an accessible, hands-on manner, the tutorials give learners applied experience with cutting-edge methods for stance detection.
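Below are two brief sketches of the workflows the tutorials cover. Both are illustrative assumptions rather than the paper's exact code: the toy data, hyperparameters, and output paths are placeholders, and the checkpoints (`bert-base-uncased`, `google/flan-t5-base`) are plausible defaults, not confirmed choices.

A minimal fine-tuning sketch with the Hugging Face Transformers `Trainer`, assuming a three-way stance label set (favor / against / none):

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Toy stance-labeled tweets (favor=0, against=1, none=2); placeholders only.
data = Dataset.from_dict({
    "text": ["We need carbon taxes now.",
             "Global warming is a hoax.",
             "Just posted a photo of my lunch."],
    "label": [0, 1, 2],
})

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # Pad/truncate tweets to a fixed length so they batch cleanly.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

data = data.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)

args = TrainingArguments(
    output_dir="stance-bert",          # hypothetical output path
    num_train_epochs=3,                # illustrative hyperparameters
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

Trainer(model=model, args=args, train_dataset=data).train()
```

A domain-specific variant would only swap a Twitter-pretrained BERT checkpoint into both `from_pretrained` calls.

And a sketch of few-shot prompting with FLAN-T5, scored with a confusion matrix and macro F1 as the abstract describes; the prompt template and exemplars here are invented for illustration:

```python
from sklearn.metrics import confusion_matrix, f1_score
from transformers import pipeline

LABELS = ["favor", "against", "none"]

# Hypothetical few-shot exemplars; the tutorials' real prompts differ.
FEW_SHOT = (
    "Classify the stance of the tweet toward the target "
    "as favor, against, or none.\n\n"
    "Target: climate action\nTweet: We need carbon taxes now.\nStance: favor\n\n"
    "Target: climate action\nTweet: Global warming is a hoax.\nStance: against\n\n"
)

generator = pipeline("text2text-generation", model="google/flan-t5-base")

def predict_stance(target: str, tweet: str) -> str:
    prompt = FEW_SHOT + f"Target: {target}\nTweet: {tweet}\nStance:"
    out = generator(prompt, max_new_tokens=5)[0]["generated_text"].strip().lower()
    # Map the free-form generation back onto the label set; default to "none".
    return next((lab for lab in LABELS if lab in out), "none")

# Toy evaluation pairs (placeholders for real Twitter stance data).
examples = [("climate action", "Renewables are our only future.", "favor"),
            ("climate action", "The climate scare is overblown.", "against")]

gold = [g for _, _, g in examples]
preds = [predict_stance(t, tw) for t, tw, _ in examples]

print(confusion_matrix(gold, preds, labels=LABELS))
print("macro F1:", f1_score(gold, preds, average="macro", labels=LABELS))
```

The same evaluation loop would work against ChatGPT by replacing the local pipeline call with a chat-completion API request.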

Similar Work