The Variational Autoencoder (VAE) is a powerful method for learning representations of high-dimensional data. However, VAEs can suffer from an issue known as latent variable collapse (or KL loss vanishing), in which the posterior collapses to the prior and the model ignores the latent codes during generation. This issue is particularly prevalent when employing VAE-RNN architectures for text modelling (Bowman et al., 2016). In this paper, we present a simple architecture called the holistic regularisation VAE (HR-VAE), which effectively avoids latent variable collapse. Compared to existing VAE-RNN architectures, we show that our model achieves a much more stable training process and generates text of significantly better quality.
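For context, latent variable collapse can be stated in terms of the standard VAE evidence lower bound; the formulation below is the textbook objective with generic notation, not notation taken from this paper:

$$\mathcal{L}(\theta, \phi; x) \;=\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] \;-\; \mathrm{KL}\big(q_\phi(z \mid x) \,\|\, p(z)\big)$$

Collapse corresponds to the KL term being driven to zero, i.e. $q_\phi(z \mid x) \approx p(z)$ for every input $x$, so the latent code $z$ carries no information about $x$ and the decoder learns to ignore it.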
| Number of pages | 7 |
| Publication status | Published - 13 Nov 2019 |
| Event | The 12th International Conference on Natural Language Generation (INLG 2019), National Museum of Emerging Science and Innovation (Miraikan), Tokyo, Japan |
| Duration | 29 Oct 2019 → 1 Nov 2019 |