Sentence subjectivity detection with weakly-supervised learning

Chenghua Lin, Yulan He, Richard Everson

Research output: Chapter in Book/Report/Conference proceeding › Published conference contribution

28 Citations (Scopus)


This paper presents a hierarchical Bayesian model based on latent Dirichlet allocation (LDA), called subjLDA, for sentence-level subjectivity detection, which automatically identifies whether a given sentence expresses opinion or states facts. In contrast to most existing methods, which rely either on labelled corpora for classifier training or on linguistic pattern extraction for subjectivity classification, we view the problem as weakly-supervised generative model learning, where the only input to the model is a small set of domain-independent subjectivity lexical clues. A mechanism is introduced to incorporate the prior information about the subjectivity lexical clues into model learning by modifying the Dirichlet priors of topic-word distributions. The subjLDA model has been evaluated on the Multi-Perspective Question Answering (MPQA) dataset, and promising results have been observed in the preliminary experiments. We have also explored adding neutral words as prior information for model learning. It was found that while incorporating subjectivity clues bearing positive or negative polarity can achieve a significant performance gain, the prior lexical information from neutral words is less effective.
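The abstract's key mechanism, encoding lexical clues by modifying the Dirichlet priors of topic-word distributions, can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, label set, and hyperparameter values are illustrative assumptions. The idea is that a known subjectivity clue is given a near-zero prior under the objective label and a boosted prior under the subjective label, biasing the generative model before inference.

```python
import numpy as np

def build_word_priors(vocab, subjective_clues, base=0.01, boost=0.9):
    """Illustrative sketch (not the paper's code): asymmetric Dirichlet
    priors over topic-word distributions for two subjectivity labels.
    Row 0 = objective label, row 1 = subjective label."""
    beta = np.full((2, len(vocab)), base)  # symmetric default prior
    for i, word in enumerate(vocab):
        if word in subjective_clues:
            # A subjectivity clue is excluded from the objective label's
            # word distribution and favoured under the subjective label.
            beta[0, i] = 0.0
            beta[1, i] = boost
    return beta

vocab = ["excellent", "terrible", "announced", "table"]
clues = {"excellent", "terrible"}
beta = build_word_priors(vocab, clues)
```

During Gibbs sampling or variational inference, these priors are added to the observed word counts, so clue words can effectively only be generated by the subjective label while all other words keep the weak symmetric prior.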
Original language: English
Title of host publication: Proceedings of the 5th International Joint Conference on Natural Language Processing
Subtitle of host publication: Chiang Mai, Thailand, November 8–13, 2011
Number of pages: 9
Publication status: Published - 2011


