Task-Aware Active Learning for Endoscopic Image Analysis

Shrawan Kumar Thapa, Pranav Poudel, Binod Bhattarai* (Corresponding Author), Danail Stoyanov

*Corresponding author for this work

Research output: Working paper

Abstract

Semantic segmentation of polyps and depth estimation are two important research problems in endoscopic image analysis. One of the main obstacles to conducting research on these problems is the lack of annotated data. Endoscopic annotation requires the specialist knowledge of expert endoscopists, which makes it difficult to organise, expensive, and time-consuming. To address this problem, we investigate an active learning paradigm that reduces the number of training examples by selecting the most discriminative and diverse unlabelled examples for the task under consideration. Most existing active learning pipelines are task-agnostic in nature and are often suboptimal for the end task. In this paper, we propose a novel task-aware active learning pipeline and apply it to two important tasks in endoscopic image analysis: semantic segmentation and depth estimation. We compare our method with competitive baselines and, from the experimental results, observe a substantial improvement over them. Code is available at https://github.com/thetna/endoactive_learn.
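
For illustration, the sketch below shows a generic uncertainty-plus-diversity acquisition round of the kind active learning pipelines commonly use. It is not the authors' task-aware method: the function names (entropy, acquire), the scoring and selection choices, and the placeholder data are all assumptions made purely for demonstration.

# Illustrative sketch only: a generic uncertainty-plus-diversity acquisition
# round, not the authors' task-aware pipeline. The names and scoring/selection
# choices are assumptions for demonstration.
import numpy as np

def entropy(probs: np.ndarray) -> np.ndarray:
    # Predictive entropy per sample; higher means more uncertain (discriminative).
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def acquire(features: np.ndarray, probs: np.ndarray, budget: int) -> list:
    # Shortlist the most uncertain pool samples, then pick a diverse subset
    # with greedy farthest-point sampling in feature space.
    shortlist = np.argsort(-entropy(probs))[: budget * 5]
    chosen = [int(shortlist[0])]
    while len(chosen) < budget:
        # Distance from each shortlisted sample to its nearest already-chosen sample.
        d = np.min(
            np.linalg.norm(features[shortlist, None] - features[chosen][None], axis=2),
            axis=1,
        )
        d[np.isin(shortlist, chosen)] = -np.inf  # never re-pick a chosen sample
        chosen.append(int(shortlist[np.argmax(d)]))
    return chosen

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(1000, 64))                    # stand-in embeddings
    logits = rng.normal(size=(1000, 2))
    probs = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)
    print(acquire(feats, probs, budget=16))                # pool indices to annotate next

In a task-aware variant, the uncertainty scores and feature embeddings would come from the downstream segmentation or depth model itself rather than a generic classifier, which is the distinction the abstract draws against task-agnostic pipelines.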
Original language: English
Publisher: ArXiv
Number of pages: 12
DOIs
Publication status: Published - 1 Apr 2022

Bibliographical note

This project is funded by the EndoMapper project by Horizon 2020 FET (GA 863146). For the purpose of open access, the author has applied a CC BY public copyright licence to any author accepted manuscript version arising from this submission.

Keywords

  • Active Learning
  • Surgical AI
  • Endoscopic Image Analysis
  • Computer Assisted Interventions
  • Depth Estimation
  • Semantic Segmentation
