Inducing crosslingual distributed representations of words

Alexandre Klementiev, Ivan Titov, Binod Bhattarai

Research output: Chapter in Book/Report/Conference proceeding › Published conference contribution

Abstract

Distributed representations of words have proven extremely useful in numerous natural language processing tasks. Their appeal is that they can help alleviate data sparsity problems common to supervised learning. Methods for inducing these representations require only unlabeled language data, which are plentiful for many natural languages. In this work, we induce distributed representations for a pair of languages jointly. We treat this joint induction as a multitask learning problem, where each task corresponds to a single word and task relatedness is derived from co-occurrence statistics in bilingual parallel data. These representations can be used for a number of crosslingual learning tasks, where a learner is trained on annotations in one language and applied to test data in another. We show that our representations are informative by using them for crosslingual document classification: classifiers trained on these representations substantially outperform strong baselines (e.g., machine translation) when applied to a new language.
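The abstract describes the induction method only at a high level. The sketch below illustrates the core idea under loud assumptions: a toy bilingual vocabulary, made-up alignment counts standing in for the co-occurrence statistics from parallel data, and a simple squared-distance penalty as the task-relatedness term. It is not the authors' implementation; all names, numbers, and the update rule are illustrative.

```python
import numpy as np

# Minimal sketch of the joint induction idea: each word is a "task"
# with its own embedding vector, and task relatedness comes from
# co-occurrence (alignment) counts in bilingual parallel data.
# Frequently aligned word pairs are pulled toward each other.
# Everything below is an illustrative assumption, not the paper's code.

rng = np.random.default_rng(0)

dim = 5                       # embedding dimensionality (assumed)
en_vocab = ["bank", "money"]  # toy English vocabulary
de_vocab = ["bank", "geld"]   # toy German vocabulary

# Randomly initialised embeddings, one row per word ("task").
E_en = rng.normal(scale=0.1, size=(len(en_vocab), dim))
E_de = rng.normal(scale=0.1, size=(len(de_vocab), dim))

# Toy alignment counts from parallel data:
# align_counts[i, j] = how often en_vocab[i] aligns to de_vocab[j].
align_counts = np.array([[8.0, 0.0],
                         [1.0, 9.0]])

# Row-normalise counts into task-relatedness weights.
A = align_counts / align_counts.sum(axis=1, keepdims=True)

def crosslingual_penalty(E_en, E_de, A):
    """Weighted squared distance between related cross-lingual embeddings."""
    total = 0.0
    for i in range(len(E_en)):
        for j in range(len(E_de)):
            diff = E_en[i] - E_de[j]
            total += A[i, j] * (diff @ diff)
    return 0.5 * total

# One gradient step on the relatedness penalty alone.
lr = 0.1
for i in range(len(E_en)):
    for j in range(len(E_de)):
        diff = E_en[i] - E_de[j]
        E_en[i] -= lr * A[i, j] * diff   # pull English word toward aligned German words
        E_de[j] += lr * A[i, j] * diff   # and vice versa

print("penalty after one step:", crosslingual_penalty(E_en, E_de, A))
```

In a full system of the kind the abstract describes, such a crosslingual term would be interleaved with monolingual language-model updates for each language, so the embeddings remain predictive within each language while aligned translation pairs converge in the shared space.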
Original language: English
Title of host publication: Proceedings of COLING 2012
Pages: 1459-1474
Number of pages: 16
Publication status: Published - 2012

Bibliographical note

Proceedings of COLING 2012: Technical Papers, pages 1459–1474, COLING 2012, Mumbai, December 2012.
