Deep Bayesian Self-Training

Fabio De Sousa Ribeiro, Francesco Calivá, Mark Swainson, Kjartan Gudmundsson, Georgios Leontidis* (Corresponding Author), Stefanos Kollias


Research output: Contribution to journal › Article › peer-review

15 Citations (Scopus)
15 Downloads (Pure)


Supervised deep learning has been highly successful in recent years, achieving state-of-the-art results in most tasks. However, with the ongoing uptake of such methods in industrial applications, the requirement for large amounts of annotated data is often a challenge. In most real-world problems, manual annotation is practically intractable due to time/labour constraints; thus, the development of automated and adaptive data annotation systems is highly sought after. In this paper, we propose (1) a deep Bayesian self-training methodology for automatic data annotation, which leverages predictive uncertainty estimates obtained via variational inference with modern neural network (NN) architectures, and (2) a practical adaptation procedure for handling high label variability between different dataset distributions through clustering of NN latent variable representations. An experimental study on both public and private datasets is presented, illustrating the superior performance of the proposed approach over standard self-training baselines and highlighting the importance of predictive uncertainty estimates in safety-critical domains.
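To make the core idea concrete, the sketch below shows one common way to implement uncertainty-aware pseudo-labelling: average class probabilities over several stochastic forward passes (e.g. Monte Carlo dropout as an approximation to variational inference), keep only examples whose predictive entropy is low, and weight each pseudo-label by its confidence. The function name, threshold, and weighting scheme are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def pseudo_label_with_uncertainty(mc_probs, entropy_threshold=0.5):
    """Select uncertainty-weighted pseudo-labels from MC predictive samples.

    mc_probs: array of shape (T, N, C) -- T stochastic forward passes
    over N unlabelled examples with C classes.
    Returns (indices, labels, weights): the examples whose predictive
    entropy is below the threshold, their argmax pseudo-labels, and a
    weight that shrinks toward 0 as entropy approaches its maximum.
    (Threshold and weighting are illustrative choices.)
    """
    mean_probs = mc_probs.mean(axis=0)                               # (N, C) predictive mean
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1) # predictive entropy per example
    confident = np.where(entropy < entropy_threshold)[0]             # keep low-uncertainty examples
    labels = mean_probs[confident].argmax(axis=1)                    # pseudo-labels
    weights = 1.0 - entropy[confident] / np.log(mc_probs.shape[2])   # normalise by max entropy log(C)
    return confident, labels, weights

# Toy demo: 4 MC passes over 2 examples, 2 classes.
# Example 0 is confidently class 0; example 1 is maximally uncertain.
mc_probs = np.array([[[0.95, 0.05], [0.5, 0.5]]] * 4)
idx, labels, weights = pseudo_label_with_uncertainty(mc_probs)
```

Only example 0 survives the entropy filter here; the uncertain example is left unlabelled rather than risking a noisy pseudo-label, which is the behaviour that distinguishes this approach from a standard confidence-agnostic self-training loop.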
Original language: English
Pages (from-to): 4275-4291
Number of pages: 17
Journal: Neural Computing and Applications
Early online date: 10 Jul 2019
Publication status: Published - May 2020


  • Machine Learning
  • Deep learning
  • Representation learning
  • Bayesian CNN
  • Variational inference
  • Clustering
  • Self-training
  • Adaptation
  • Uncertainty weighting


