Multi-stage Bias Mitigation for Individual Fairness in Algorithmic Decisions

Adinath Ghadage, Dewei Yi, George Coghill, Wei Pang* (Corresponding Author)

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Published conference contribution


Abstract

The use of machine learning algorithms in data-driven decision-making systems has become increasingly widespread. Recent studies have raised concerns that this growing popularity has exacerbated unfairness and discrimination toward individuals. Researchers in this field have proposed a wide variety of fairness-enhanced classifiers and fairness metrics to address these issues, but very few fairness techniques have been translated into real-world data-driven decision-making practice. This work focuses on individual fairness, which requires that similar individuals be treated similarly with respect to a given task. In this paper, we propose a novel model of individual fairness that transforms features into high-level representations that preserve both individual fairness and the accuracy of the learning algorithms. The proposed model identifies equally deserving pairs of individuals, distinguished from other pairs in the records by data-driven similarity measures between individuals in the transformed data. This design identifies bias and mitigates it at the data preprocessing stage of the machine learning pipeline to ensure individual fairness. Our method is evaluated on three real-world datasets to demonstrate its effectiveness: the credit card approval dataset, the adult census dataset, and the recidivism dataset.
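The paper's own implementation is not reproduced here; as a rough illustration of the individual-fairness criterion the abstract describes (similar individuals should receive similar outcomes), the sketch below computes a standard k-nearest-neighbour consistency score over a feature representation. The function name, the Euclidean similarity measure, and the choice of k are illustrative assumptions, not the authors' method.

```python
import numpy as np

def knn_consistency(X, y_pred, k=3):
    """Individual-fairness consistency score in [0, 1].

    For each individual, compare the model's prediction with the mean
    prediction over its k nearest neighbours in feature space; a score
    of 1 means similar individuals are always treated identically.
    Euclidean distance is an illustrative stand-in for the paper's
    data-driven similarity measure on the transformed data.
    """
    X = np.asarray(X, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    n = len(X)
    total_gap = 0.0
    for i in range(n):
        dists = np.linalg.norm(X - X[i], axis=1)
        dists[i] = np.inf  # exclude the individual itself
        neighbours = np.argsort(dists)[:k]
        total_gap += abs(y_pred[i] - y_pred[neighbours].mean())
    return 1.0 - total_gap / n
```

On two well-separated clusters of identical individuals, uniform per-cluster predictions give a consistency of 1.0, while predictions that flip within a cluster push the score down, flagging a potential individual-fairness violation.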

Original language: English
Title of host publication: Artificial Neural Networks in Pattern Recognition
Subtitle of host publication: 10th IAPR TC3 Workshop, ANNPR 2022
Editors: Neamat El Gayar, Edmondo Trentin, Mirco Ravanelli, Hazem Abbas
Place of Publication: Cham, Switzerland
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 40-52
Number of pages: 13
Volume: 13739
ISBN (Electronic): 978-3-031-20650-4
ISBN (Print): 978-3-031-20649-8
DOIs
Publication status: Published - 2023
Event: 10th IAPR TC3 International Workshop on Artificial Neural Networks in Pattern Recognition, ANNPR 2022 - Dubai, United Arab Emirates
Duration: 24 Nov 2022 – 26 Nov 2022

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13739 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 10th IAPR TC3 International Workshop on Artificial Neural Networks in Pattern Recognition, ANNPR 2022
Country/Territory: United Arab Emirates
City: Dubai
Period: 24/11/22 – 26/11/22

Keywords

  • Algorithmic bias
  • Algorithmic fairness
  • Fairness in machine learning
  • Fairness-aware machine learning
  • Individual fairness
