Automatically extracting polarity-bearing topics for cross-domain sentiment classification

Yulan He, Chenghua Lin, Harith Alani

Research output: Chapter in Book/Report/Conference proceeding › Published conference contribution

126 Citations (Scopus)

Abstract

The joint sentiment-topic (JST) model was previously proposed to detect sentiment and topic simultaneously from text. The only supervision required for JST model learning is a set of domain-independent polarity word priors. In this paper, we modify the JST model by incorporating word polarity priors into the topic-word Dirichlet priors. We study the polarity-bearing topics extracted by JST and show that, by augmenting the original feature space with polarity-bearing topics, in-domain supervised classifiers learned from the augmented feature representation achieve state-of-the-art performance of 95% on the movie review data and an average of 90% on the multi-domain sentiment dataset. Furthermore, using feature augmentation and selection according to the information gain criterion for cross-domain sentiment classification, our proposed approach performs either better than or comparably to previous approaches. Moreover, our approach is much simpler and does not require difficult parameter tuning.
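The abstract's central modification, folding domain-independent word polarity priors into the topic-word Dirichlet priors, can be sketched roughly as follows. This is a minimal illustration rather than the authors' implementation: the lexicon format, sentiment-label indices, and the base_beta value are assumptions made for the example.

```python
import numpy as np

# Minimal sketch (not the authors' code) of encoding word polarity priors in the
# topic-word Dirichlet hyperparameters of a JST-style model. Sentiment labels are
# assumed to be 0 = neutral, 1 = positive, 2 = negative; base_beta is illustrative.

def build_polarity_beta(vocab, polarity_lexicon, num_sentiments=3, base_beta=0.01):
    """Return a (num_sentiments x |vocab|) matrix of asymmetric beta priors.

    Words listed in the polarity lexicon keep prior mass only under their
    matching sentiment label, steering inference toward polarity-bearing topics.
    """
    beta = np.full((num_sentiments, len(vocab)), base_beta)
    for idx, word in enumerate(vocab):
        label = polarity_lexicon.get(word)
        if label is not None:
            beta[:, idx] = 0.0             # suppress the word under conflicting labels
            beta[label, idx] = base_beta   # keep prior mass under its polarity label
    return beta

# Tiny usage example with a hypothetical lexicon
vocab = ["excellent", "awful", "movie", "plot"]
lexicon = {"excellent": 1, "awful": 2}     # 1 = positive, 2 = negative
print(build_polarity_beta(vocab, lexicon))
```

The resulting beta matrix would replace the symmetric topic-word prior during Gibbs sampling, which is how a lexicon-derived prior can act as the only supervision signal.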
Original language: English
Title of host publication: The 49th Annual Meeting of the Association for Computational Linguistics
Subtitle of host publication: Human Language Technologies: Proceedings of the Conference
Place of Publication: Stroudsburg, PA
Publisher: Association for Computational Linguistics
Pages: 123-131
Number of pages: 9
Volume: 1
ISBN (Print): 9781932432879
Publication status: Published - Jun 2011
