A Stable Variational Autoencoder for Text Modelling

Ruizhe Li, Xiao Li, Chenghua Lin, Matthew Collinson, Rui Mao

Research output: Contribution to conference (Oral Presentation / Invited Talk)


Abstract

The Variational Autoencoder (VAE) is a powerful method for learning representations of high-dimensional data. However, VAEs can suffer from an issue known as latent variable collapse (or KL loss vanishing), where the posterior collapses to the prior and the model ignores the latent codes in generative tasks. This issue is particularly prevalent when employing VAE-RNN architectures for text modelling (Bowman et al., 2016). In this paper, we present a simple architecture called the holistic regularisation VAE (HR-VAE), which can effectively avoid latent variable collapse. Compared to existing VAE-RNN architectures, we show that our model achieves a much more stable training process and generates text of significantly better quality.
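For context, the collapse described above can be read off the standard VAE training objective (the evidence lower bound). A minimal sketch, using the usual encoder $q_\phi$ and decoder $p_\theta$ notation rather than the paper's own formulation:

\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] - \mathrm{KL}\big(q_\phi(z \mid x) \,\|\, p(z)\big)

When a powerful autoregressive RNN decoder can model $x$ without using the latent code, the optimiser can drive the KL term to zero by matching $q_\phi(z \mid x)$ to the prior $p(z)$, at which point $z$ carries no information about $x$; this is the KL loss vanishing referred to in the abstract.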
Original language: English
Number of pages: 7
Publication status: Published - 13 Nov 2019
Event: The 12th International Conference on Natural Language Generation (INLG 2019) - National Museum of Emerging Science and Innovation (Miraikan), Tokyo, Japan
Duration: 29 Oct 2019 - 1 Nov 2019

Conference

Conference: The 12th International Conference on Natural Language Generation (INLG 2019)
Country/Territory: Japan
City: Tokyo
Period: 29/10/19 - 1/11/19

Bibliographical note

Accepted by INLG 2019

Keywords

  • cs.CL
  • cs.LG
