DGST: A dual-generator network for text style transfer

Xiao Li, Guanyi Chen, Chenghua Lin*, Ruizhe Li

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Published conference contribution

16 Citations (Scopus)

Abstract

We propose DGST, a novel and simple Dual-Generator network architecture for text Style Transfer. Our model employs only two generators and relies on neither discriminators nor parallel corpora for training. Both quantitative and qualitative experiments on the Yelp and IMDb datasets show that our model achieves competitive performance compared with several strong baselines that use more complicated architecture designs.
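The two-generator idea can be illustrated with a toy cycle-reconstruction sketch. This is purely illustrative: the word-substitution "generators" and the loss below are stand-ins invented for this example, not the paper's neural sequence models or its actual training objective.

```python
# Toy sketch of a dual-generator cycle objective (illustrative only;
# the real DGST generators are neural models trained without discriminators).

def make_generator(mapping):
    """A toy 'generator' that rewrites style-bearing words via a lookup table."""
    def generate(tokens):
        return [mapping.get(t, t) for t in tokens]
    return generate

# One generator per transfer direction: positive->negative and negative->positive.
g_pos2neg = make_generator({"great": "terrible", "love": "hate"})
g_neg2pos = make_generator({"terrible": "great", "hate": "love"})

def cycle_loss(x, g_forward, g_backward):
    """Count token mismatches after transferring and transferring back.

    A sentence mapped to the other style and back should reconstruct
    the original, so this loss is zero when the two generators agree.
    """
    reconstructed = g_backward(g_forward(x))
    return sum(a != b for a, b in zip(x, reconstructed))

x = ["i", "love", "this", "great", "movie"]
print(g_pos2neg(x))                          # style-transferred tokens
print(cycle_loss(x, g_pos2neg, g_neg2pos))   # 0: perfect reconstruction
```

Training the two generators jointly against such a reconstruction signal is what removes the need for a discriminator or parallel data in this setup.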

Original language: English
Title of host publication: EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference
Publisher: Association for Computational Linguistics (ACL)
Pages: 7131-7136
Number of pages: 6
ISBN (Electronic): 9781952148606
Publication status: Published - 2020
Event: 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020 - Virtual, Online
Duration: 16 Nov 2020 - 20 Nov 2020

Conference

Conference: 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020
City: Virtual, Online
Period: 16/11/20 - 20/11/20

Bibliographical note

Funding Information:
We would like to thank all the anonymous reviewers for their insightful comments. This work is supported by the award made by the UK Engineering and Physical Sciences Research Council (Grant number: EP/P011829/1).

Publisher Copyright:
© 2020 Association for Computational Linguistics.
