
Reasoning Implicit Sentiment With Chain-of-thought Prompting

Hao Fei, Bobo Li, Qian Liu, Lidong Bing, Fei Li, Tat-Seng Chua. arXiv 2023

[Paper] [Code]    
Tags: GPT, Has Code, Model Architecture, Prompting, Tools

While sentiment analysis systems try to determine the sentiment polarities of given targets based on the key opinion expressions in input texts, in implicit sentiment analysis (ISA) the opinion cues come in an implicit and obscure manner. Thus, detecting implicit sentiment requires common-sense and multi-hop reasoning ability to infer the latent intent of the opinion. Inspired by the recent chain-of-thought (CoT) idea, in this work we introduce a Three-hop Reasoning (THOR) CoT framework to mimic the human-like reasoning process for ISA. We design a three-step prompting principle for THOR to induce, step by step, the implicit aspect, the opinion, and finally the sentiment polarity. Our THOR+Flan-T5 (11B) pushes the state-of-the-art (SoTA) by over 6% F1 in the supervised setup. More strikingly, THOR+GPT3 (175B) boosts the SoTA by over 50% F1 in the zero-shot setting. Our code is available at https://github.com/scofield7419/THOR-ISA.
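
The abstract describes a three-hop prompting chain (implicit aspect → latent opinion → sentiment polarity). Below is a minimal sketch of how such a chain could be assembled; the prompt wordings and the `query_llm` helper are illustrative assumptions rather than the authors' exact templates, which are available in the linked repository.

```python
def query_llm(prompt: str) -> str:
    """Placeholder for a call to an instruction-tuned LLM (e.g. Flan-T5 or GPT-3)."""
    raise NotImplementedError("Plug in your preferred LLM client here.")


def thor_three_hop(context: str, target: str) -> str:
    """Sketch of the three-hop reasoning chain for implicit sentiment analysis."""
    # Hop 1: induce the implicit aspect of the target mentioned in the sentence.
    aspect = query_llm(
        f'Given the sentence "{context}", which specific aspect of '
        f'"{target}" is possibly mentioned?'
    )
    # Hop 2: infer the latent opinion toward that aspect.
    opinion = query_llm(
        f'Given the sentence "{context}", the mentioned aspect is '
        f'"{aspect}". What is the underlying opinion toward it?'
    )
    # Hop 3: conclude the sentiment polarity from the accumulated reasoning.
    polarity = query_llm(
        f'Given the sentence "{context}", the aspect "{aspect}", and the '
        f'opinion "{opinion}", what is the sentiment polarity toward '
        f'"{target}" (positive, neutral, or negative)?'
    )
    return polarity
```

Each hop feeds the previous answer back into the next prompt, so the final polarity decision is conditioned on the intermediate aspect and opinion inferences rather than on the raw sentence alone.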

Similar Work