A Stable Variational Autoencoder for Text Modelling

Ruizhe Li*, Xiao Li, Chenghua Lin, Matthew Collinson, Rui Mao

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The Variational Autoencoder (VAE) is a powerful method for learning representations of high-dimensional data. However, VAEs can suffer from an issue known as latent variable collapse (or KL loss vanishing), where the posterior collapses to the prior and the model ignores the latent codes in generative tasks. This issue is particularly prevalent when employing VAE-RNN architectures for text modelling (Bowman et al., 2016). In this paper, we present a simple architecture called holistic regularisation VAE (HR-VAE), which can effectively avoid latent variable collapse. Compared to existing VAE-RNN architectures, we show that our model achieves a much more stable training process and generates text of significantly better quality.
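
The collapse phenomenon described in the abstract is easiest to see in the KL term of the VAE objective. As an illustration only (a generic VAE sketch, not the HR-VAE architecture from the paper; the function name and example values are assumptions), the snippet below computes the closed-form KL divergence between a diagonal Gaussian posterior q(z|x) and a standard normal prior. When training drives this term to zero, q(z|x) matches the prior for every input and the decoder is free to ignore the latent code.

# Illustrative sketch (not from the paper): the closed-form KL term of a
# VAE's ELBO for a diagonal Gaussian posterior q(z|x) = N(mu, diag(exp(log_var)))
# against a standard normal prior p(z) = N(0, I).
import numpy as np

def kl_to_standard_normal(mu: np.ndarray, log_var: np.ndarray) -> float:
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims."""
    return 0.5 * float(np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var))

# A healthy posterior keeps some distance from the prior (hypothetical values):
print(kl_to_standard_normal(np.array([0.8, -1.2]), np.array([-0.5, -0.1])))  # > 0

# Under posterior collapse, q(z|x) ~= N(0, I) for every input, so the KL term
# vanishes and the decoder can ignore the latent code:
print(kl_to_standard_normal(np.zeros(2), np.zeros(2)))  # 0.0
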
Original language: English
Title of host publication: INLG 2019 proceedings
Publisher: ACL Anthology
Publication status: Accepted/In press - 2 Sep 2019
Event: The 12th International Conference on Natural Language Generation (INLG 2019) - National Museum of Emerging Science and Innovation (Miraikan), Tokyo, Japan
Duration: 29 Oct 2019 – 1 Nov 2019

Conference

Conference: The 12th International Conference on Natural Language Generation (INLG 2019)
Country: Japan
City: Tokyo
Period: 29/10/19 – 1/11/19

Cite this

Li, R., Li, X., Lin, C., Collinson, M., & Mao, R. (Accepted/In press). A Stable Variational Autoencoder for Text Modelling. In INLG 2019 proceedings. ACL Anthology.

@inproceedings{0a81c77b4e8f4658bce584a21629718c,
  title = "A Stable Variational Autoencoder for Text Modelling",
  abstract = "The Variational Autoencoder (VAE) is a powerful method for learning representations of high-dimensional data. However, VAEs can suffer from an issue known as latent variable collapse (or KL loss vanishing), where the posterior collapses to the prior and the model ignores the latent codes in generative tasks. This issue is particularly prevalent when employing VAE-RNN architectures for text modelling (Bowman et al., 2016). In this paper, we present a simple architecture called holistic regularisation VAE (HR-VAE), which can effectively avoid latent variable collapse. Compared to existing VAE-RNN architectures, we show that our model achieves a much more stable training process and generates text of significantly better quality.",
  author = "Ruizhe Li and Xiao Li and Chenghua Lin and Matthew Collinson and Rui Mao",
  note = "Acknowledgement: This work is supported by an award from the UK Engineering and Physical Sciences Research Council (Grant number: EP/P011829/1).",
  year = "2019",
  month = sep,
  day = "2",
  language = "English",
  booktitle = "INLG 2019 proceedings",
  publisher = "ACL Anthology",
  url = "https://aclweb.org/anthology/",
}
