THOR-ISA: Reasoning Implicit Sentiment with Chain-of-Thought Prompting

1Sea-NExT Joint Lab, National University of Singapore,
2Wuhan University,   3Sea AI Lab,   4DAMO Academy, Alibaba Group

Abstract

While sentiment analysis systems try to determine the sentiment polarities of given targets based on the key opinion expressions in input texts, in implicit sentiment analysis (ISA) the opinion cues come in an implicit and obscure manner. Detecting implicit sentiment thus requires common-sense and multi-hop reasoning to infer the latent intent of the opinion. Inspired by the recent chain-of-thought (CoT) idea, in this work we introduce a Three-hop Reasoning (THOR) CoT framework to mimic the human-like reasoning process for ISA. We design a three-step prompting principle for THOR to induce the implicit aspect, the opinion, and finally the sentiment polarity step by step. Our THOR+Flan-T5 (11B) pushes the state-of-the-art (SoTA) by over 6% F1 in the supervised setup. More strikingly, THOR+GPT3 (175B) boosts the SoTA by over 50% F1 in the zero-shot setting.

Presentation

Motivation

Here is an illustration of detecting explicit and implicit sentiment polarities towards targets. An explicit opinion expression allows direct inference of the sentiment, while detecting implicit sentiment requires common-sense and multi-hop reasoning.


Method

We thus propose a Three-hop Reasoning CoT framework, namely THOR.



Step 1. We first ask the LLM which aspect a is mentioned, using the following template:

C1 is the first-hop prompt context. This step can be formulated as A = argmax p(a | X, t), where A is the output text which explicitly mentions the aspect a.
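As a rough sketch of this hop (in Python, with wording that paraphrases the template rather than quoting it exactly), C1 can be assembled from X and t alone:

def build_first_hop_prompt(x: str, t: str) -> str:
    """Build C1, the first-hop prompt context, from sentence X and target t.

    The question wording is an illustrative paraphrase, not necessarily the
    exact template.
    """
    return (f'Given the sentence "{x}", '
            f'which specific aspect of "{t}" is possibly mentioned?')

# Hypothetical usage:
# build_first_hop_prompt("The new boss seems to never leave his office.", "the boss")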

Step 2. Then, based on X, t and a, we ask the LLM to describe in detail the underlying opinion o towards the mentioned aspect a:

C2 is the second-hop prompt context, which concatenates C1 and A. This step can be written as O = argmax p(o | X, t, a), where O is the answer text containing the possible opinion expression o.
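Continuing the sketch, the second-hop context can be built by concatenating C1 with the step-1 answer A and then asking for the opinion; again, the wording is an assumed paraphrase:

def build_second_hop_prompt(c1: str, a_answer: str, t: str) -> str:
    """Build C2 by concatenating the first-hop context C1 with the answer A
    (which names the aspect a), then asking for the underlying opinion o."""
    return (f"{c1} {a_answer} "
            f'What is the underlying opinion towards the mentioned aspect of "{t}", and why?')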

Step 3. With the complete sentiment skeleton (X, t, a and o) as context, we finally ask the LLM to infer the final polarity y towards t:

C3 is the third-hop prompt context. We denote this step as y = argmax p(y | X, t, a, o).
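Putting the three hops together, a minimal end-to-end sketch could look as follows. Here query_llm is a placeholder for whatever text-in/text-out backend is used (e.g. Flan-T5 or GPT-3 behind an API), and the hop questions are illustrative paraphrases of the templates, not their exact wording:

from typing import Callable

def three_hop_reasoning(x: str, t: str, query_llm: Callable[[str], str]) -> str:
    """Run the three-hop chain: aspect a -> opinion o -> polarity y.

    Each hop appends the previous prompt and its answer to the running
    context and then asks the next question.
    """
    questions = [
        # Hop 1: A = argmax p(a | X, t) -- which aspect of t is mentioned?
        f'which specific aspect of "{t}" is possibly mentioned?',
        # Hop 2: O = argmax p(o | X, t, a) -- what is the underlying opinion?
        f'what is the underlying opinion towards the mentioned aspect of "{t}", and why?',
        # Hop 3: y = argmax p(y | X, t, a, o) -- what is the final polarity?
        f'based on such opinion, what is the sentiment polarity towards "{t}"?',
    ]
    context = f'Given the sentence "{x}",'
    answer = ""
    for question in questions:
        prompt = f"{context} {question}"
        answer = query_llm(prompt)
        # The next hop's context concatenates the current prompt and its answer.
        context = f"{prompt} {answer}"
    return answer  # the hop-3 answer, i.e. the inferred polarity y

# Hypothetical usage with a dummy backend (replace with a real LLM call):
if __name__ == "__main__":
    def dummy_llm(prompt: str) -> str:
        return "(model answer)"
    print(three_hop_reasoning("The new boss seems to never leave his office.",
                              "the boss", dummy_llm))

Note that this sketches only the inference-time prompting flow; how the backbone LLM is trained for the supervised setting is separate from the chaining shown here.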

Experiment

Main results. Here are the performances under the supervised and zero-shot settings:



Some analyses.


Demos

Some comparisons between THOR, the vanilla prompting method, and the zero-shot CoT method (prompt + ‘Let's think step by step’). A sketch of how the baseline prompt styles are formed is given after the cases below.


• Case 1

• Vanilla prompt-based result:

• Result by zero-shot CoT method:

• Result by our THOR method:

• Case 2

• Vanilla prompt-based result:

• Result by zero-shot CoT method:

• Result by our THOR method:
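For reference, here is a rough sketch (with illustrative wording) of how the two baseline prompt styles in these comparisons are formed; THOR instead issues the three chained hop prompts described in the Method section:

def vanilla_prompt(x: str, t: str) -> str:
    # Single question: ask for the polarity directly.
    return f'Given the sentence "{x}", what is the sentiment polarity towards "{t}"?'

def zero_shot_cot_prompt(x: str, t: str) -> str:
    # Zero-shot CoT: the same question plus the generic reasoning trigger.
    return vanilla_prompt(x, t) + " Let's think step by step."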

Poster

BibTeX

@inproceedings{FeiAcl23THOR,
  author    = {Hao Fei and Bobo Li and Qian Liu and Lidong Bing and Fei Li and Tat-Seng Chua},
  title     = {Reasoning Implicit Sentiment with Chain-of-Thought Prompting},
  booktitle = {Proceedings of the Annual Meeting of the Association for Computational Linguistics},
  pages     = {1171--1182},
  year      = {2023},
}