A Stable Variational Autoencoder for Text Modelling

Ruizhe Li*, Xiao Li, Chenghua Lin, Matthew Collinson, Rui Mao

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The Variational Autoencoder (VAE) is a powerful method for learning representations of high-dimensional data. However, VAEs can suffer from an issue known as latent variable collapse (or KL loss vanishing), where the posterior collapses to the prior and the model ignores the latent codes in generative tasks. This issue is particularly prevalent when employing VAE-RNN architectures for text modelling (Bowman et al., 2016). In this paper, we present a simple architecture called the holistic regularisation VAE (HR-VAE), which can effectively avoid latent variable collapse. Compared to existing VAE-RNN architectures, we show that our model achieves a much more stable training process and generates text of significantly better quality.
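For context, the sketch below is not the authors' code; all class, function, and parameter names are illustrative. It shows a standard VAE-RNN objective of the kind discussed in Bowman et al. (2016), with the KL term written out explicitly. Latent variable collapse is the regime where this KL term is driven to zero, so the approximate posterior matches the prior and the decoder ignores the latent code z. HR-VAE itself differs in how it regularises the encoder's hidden states; see the paper for the actual formulation.

# Minimal VAE-RNN sketch (illustrative only, not the HR-VAE implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleTextVAE(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128, latent_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.latent_to_hidden = nn.Linear(latent_dim, hidden_dim)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        emb = self.embed(tokens)                       # (batch, seq, embed)
        _, (h, _) = self.encoder(emb)                  # only the final hidden state is used
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        # Reparameterisation trick: z = mu + sigma * eps, eps ~ N(0, I).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        h0 = torch.tanh(self.latent_to_hidden(z)).unsqueeze(0)
        c0 = torch.zeros_like(h0)
        dec_out, _ = self.decoder(emb, (h0, c0))       # targets unshifted here for brevity
        logits = self.out(dec_out)                     # (batch, seq, vocab)
        # KL(q(z|x) || N(0, I)); when this term vanishes, the posterior has
        # collapsed to the prior and the decoder ignores z.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
        recon = F.cross_entropy(logits.transpose(1, 2), tokens,
                                reduction="none").sum(-1)
        return (recon + kl).mean()                     # negative ELBO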
Original language: English
Title of host publication: INLG 2019 proceedings
Publisher: ACL Anthology
Publication status: Accepted/In press - 2 Sep 2019
Event: The 12th International Conference on Natural Language Generation (INLG 2019), National Museum of Emerging Science and Innovation (Miraikan), Tokyo, Japan
Duration: 29 Oct 2019 – 1 Nov 2019

Conference

Conference: The 12th International Conference on Natural Language Generation (INLG 2019)
Country: Japan
City: Tokyo
Period: 29/10/19 – 1/11/19

Cite this

Li, R., Li, X., Lin, C., Collinson, M., & Mao, R. (Accepted/In press). A Stable Variational Autoencoder for Text Modelling. In INLG 2019 proceedings. ACL Anthology.