Domain-adapted Driving Scene Understanding with Uncertainty-aware and Diversified GANs

Yining Hua, Jie Sui, Hui Fang, Chuan Hu, Dewei Yi*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Autonomous vehicles are required to operate in
an uncertain environment. Recent advances in computational
intelligence (CI) techniques make it possible to understand
driving scenes in various environments by using a semantic
segmentation neural network, which assigns a class label to each
pixel. Optimising such a network requires massive amounts of pixel-level
labelled data. However, it is challenging to collect sufficient data
and labels in the real world. An alternative solution is to obtain
synthetic dense pixel-level labelled data from a driving simulator.
Although the use of synthetic data is a promising way to alleviate
the labelling problem, models trained on virtual data cannot
generalise well to real-world data due to the domain shift. To
fill this gap, we propose a novel uncertainty-aware generative
ensemble method. In particular, ensembles are obtained from
different optimisation objectives, training iterations, and network
initialisations so that they complement each other to
produce reliable predictions. Moreover, an uncertainty-aware
ensemble scheme is developed to derive a fused prediction by
considering the uncertainty of each ensemble member. Such a design can
make better use of the strengths of ensembles to enhance adapted
segmentation performance. Experimental results demonstrate the
effectiveness of our method on three large-scale datasets.
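The fusion step can be pictured with a minimal sketch, shown below. The inverse-entropy weighting, the hypothetical function name fuse_predictions, and the PyTorch setting are illustrative assumptions only, not the paper's exact scheme: each ensemble member's per-pixel class probabilities are weighted by how confident that member is, and the weighted average gives the fused segmentation.

```python
import torch

def fuse_predictions(prob_maps, eps=1e-8):
    """prob_maps: list of M tensors, each (C, H, W) of per-pixel class probabilities."""
    stacked = torch.stack(prob_maps)                          # (M, C, H, W)
    # Per-pixel predictive entropy of each member, used as its uncertainty estimate.
    entropy = -(stacked * (stacked + eps).log()).sum(dim=1)   # (M, H, W)
    # Confidence weights: lower entropy -> larger weight, normalised across members.
    weights = 1.0 / (entropy + eps)
    weights = weights / weights.sum(dim=0, keepdim=True)      # (M, H, W)
    # Weighted average of the members' probability maps, then per-pixel argmax.
    fused = (weights.unsqueeze(1) * stacked).sum(dim=0)       # (C, H, W)
    return fused.argmax(dim=0)                                # (H, W) class labels

# Example with three hypothetical ensemble members and 19 classes:
members = [torch.softmax(torch.randn(19, 512, 1024), dim=0) for _ in range(3)]
labels = fuse_predictions(members)                            # (512, 1024) class indices
```

In this sketch, members that are uncertain (high entropy) at a given pixel contribute less to the fused label there, which is one simple way to exploit complementary ensemble members as the abstract describes.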
Original language: English
Journal: CAAI Transactions on Intelligence Technology
Publication status: Accepted/In press - 23 May 2023

Keywords

  • measurement
  • uncertainty
  • neural networks
  • object segmentation
  • autonomous vehicles
  • computer vision
  • adaptive intelligent systems
