Inheriting the Wisdom of Predecessors: A Multiplex Cascade Framework for Unified Aspect-based Sentiment Analysis

Wuhan University

Abstract

So far, aspect-based sentiment analysis (ABSA) has involved seven subtasks in total; however, the interactions among them have not been sufficiently explored. This work presents a novel multiplex cascade framework for unified ABSA that maintains such interactions. First, we model all seven subtasks as a hierarchical dependency in an easy-to-hard order, based on which we then propose a multiplex decoding mechanism, transferring the sentiment layouts and clues from lower tasks to upper ones. The multiplex strategy enables highly efficient subtask interflows and avoids repetitive training; meanwhile, it makes full use of the existing data without requiring any further annotation. Further, based on the characteristics of aspect-opinion term extraction and pairing, we enhance our multiplex framework by integrating POS tag and syntactic dependency information for term boundary and pairing identification. The proposed Syntax-aware Multiplex (SyMux) framework improves ABSA performance on 28 subtasks (7 subtasks × 4 datasets) by large margins.

Presentation

Method

In the ABSA community, there are at least the following seven representative subtasks:



All these ABSA subtasks are related in that they revolve around predicting three sentiment elements: < aspect, opinion, polarity >.
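As a toy illustration (not the paper's code; the sentence and gold labels are made up), each lower-level subtask can be viewed as a projection of the full triple set:

```python
# Hypothetical example sentence with its gold (aspect, opinion, polarity) triples.
sentence = "The pizza was great but the service was slow"
triples = [("pizza", "great", "POS"), ("service", "slow", "NEG")]

# Lower-level ABSA subtasks are projections of the full triple set:
aspects = [a for a, _, _ in triples]           # aspect term extraction
opinions = [o for _, o, _ in triples]          # opinion term extraction
pairs = [(a, o) for a, o, _ in triples]        # aspect-opinion pair extraction
polarities = {a: p for a, _, p in triples}     # aspect-level sentiment classification

print(aspects, opinions, pairs, polarities)
```

This dependency (each harder subtask subsumes the outputs of easier ones) is what the hierarchical, easy-to-hard ordering exploits.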


In this project, we consider unified ABSA: handling all these subtasks with one unified model in one shot. We aim to enhance the ABSA subtasks by making full use of the interactions among them, via a multiplex cascade framework.


We design a hierarchical dependency (HD) to better unify all these ABSA tasks.
Different tasks are decoded with 1D and 2D tagging schemes:
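A minimal sketch of the two decoding views (the label names and shapes here are illustrative assumptions, not the paper's exact schemes): 1D tagging labels each token, e.g., BIO tags for term extraction, while 2D tagging labels each token pair, e.g., whether token i and token j belong to a linked aspect-opinion pair.

```python
tokens = ["The", "pizza", "was", "great"]

# 1D view: a BIO sequence marking the aspect term "pizza" and opinion term "great".
bio = ["O", "B-ASP", "O", "B-OPN"]

# 2D view: a |tokens| x |tokens| grid; cell (i, j) = 1 marks a pairing link.
n = len(tokens)
grid = [[0] * n for _ in range(n)]
grid[1][3] = 1  # "pizza" (index 1) is paired with "great" (index 3)

# Decoding the 2D grid recovers the aspect-opinion pairs.
decoded_pairs = [(tokens[i], tokens[j])
                 for i in range(n) for j in range(n) if grid[i][j]]
print(bio, decoded_pairs)
```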


We then propose a Syntax-aware Multiplex (SyMux) framework for UABSA:


We further devise a syntax-guided aspect-opinion pairing mechanism:
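One simple way to sketch the general idea of syntax-guided pairing (an assumption for illustration, not the paper's exact mechanism) is to prefer, for each aspect, the opinion term with the shortest path in the dependency tree. The heads below are hand-written for a toy sentence rather than produced by a real parser:

```python
from collections import deque

# "The pizza was great but the service was slow"
#   0    1     2    3     4    5    6       7    8
# Hand-written dependency heads (child -> head); token 3 ("great") is the root.
heads = {0: 1, 1: 3, 2: 3, 4: 8, 5: 6, 6: 8, 7: 8, 8: 3}

def tree_distance(u, v):
    """Shortest-path length between tokens u and v in the dependency tree."""
    adj = {}
    for child, head in heads.items():
        adj.setdefault(child, set()).add(head)
        adj.setdefault(head, set()).add(child)
    seen, queue = {u}, deque([(u, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == v:
            return dist
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return float("inf")

aspects, opinions = [1, 6], [3, 8]  # "pizza"/"service" and "great"/"slow"
pairing = {a: min(opinions, key=lambda o: tree_distance(a, o)) for a in aspects}
print(pairing)  # each aspect linked to its syntactically closest opinion
```

Here "pizza" (1) pairs with "great" (3) and "service" (6) with "slow" (8), since each is one dependency edge away, whereas the cross pairs require longer paths.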

Data

We use the SemEval benchmark, including Res14, Lap14, Res15, and Res16. To enable multi-task training for UABSA, we re-assemble the existing ABSA datasets so that most sentences carry annotations covering all seven subtasks.

  • Wang et al. (2017) [1] annotate the unpaired opinion terms (denoted as D17);

  • Fan et al. (2019) [2] pair the aspects with opinion terms (D19);

  • Peng et al. (2020) [3] further provide the labels for triplet extraction (D20).

[1] Coupled Multi-Layer Attentions for Co-Extraction of Aspect and Opinion Terms. AAAI. 2017.
[2] Target-oriented Opinion Words Extraction with Target-fused Neural Sequence Labeling. NAACL. 2019.
[3] Knowing What, How and Why: A Near Complete Solution for Aspect-Based Sentiment Analysis. AAAI. 2020.

Experiment

▶ Main results.


▶ Model Ablation.


▶ Separating or sharing? Sharing by multiplex!


▶ Multiplexing order in hierarchical dependency.


▶ Model robustness against data scarcity.

Paper

BibTeX

@inproceedings{fei2022unifiedABSA,
  author    = {Hao Fei and Fei Li and Chenliang Li and Shengqiong Wu and Jingye Li and Donghong Ji},
  title     = {Inheriting the Wisdom of Predecessors: A Multiplex Cascade Framework for Unified Aspect-based Sentiment Analysis},
  booktitle = {Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, {IJCAI}},
  pages     = {4121--4128},
  year      = {2022},
}