TY - GEN
T1 - Multi-stage Bias Mitigation for Individual Fairness in Algorithmic Decisions
AU - Ghadage, Adinath
AU - Yi, Dewei
AU - Coghill, George
AU - Pang, Wei
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2023
Y1 - 2023
AB - Machine learning algorithms are increasingly used in data-driven decision-making systems. Recent studies have raised concerns that this growing adoption has exacerbated unfairness and discrimination toward individuals. Researchers in this field have proposed a wide variety of fairness-enhanced classifiers and fairness metrics to address these issues, but very few fairness techniques have been translated into the real-world practice of data-driven decision making. This work focuses on individual fairness, which requires that individuals who are similar with respect to the task at hand be treated similarly. In this paper, we propose a novel model of individual fairness that transforms features into high-level representations that preserve both the individual fairness and the accuracy of the learning algorithms. The proposed model identifies equally deserving pairs of individuals, distinguishing them from other pairs in the records by data-driven similarity measures computed between individuals in the transformed data. This design identifies and mitigates bias at the data preprocessing stage of the machine learning pipeline to ensure individual fairness. Our method is evaluated on three real-world datasets to demonstrate its effectiveness: the credit card approval dataset, the adult census dataset, and the recidivism dataset.
KW - Algorithmic bias
KW - Algorithmic fairness
KW - Fairness in machine learning
KW - Fairness-aware machine learning
KW - Individual fairness
UR - http://www.scopus.com/inward/record.url?scp=85142703182&partnerID=8YFLogxK
DO - 10.1007/978-3-031-20650-4_4
M3 - Published conference contribution
AN - SCOPUS:85142703182
SN - 9783031206498
VL - 13739
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 40
EP - 52
BT - Artificial Neural Networks in Pattern Recognition - 10th IAPR TC3 Workshop, ANNPR 2022, Proceedings
A2 - El Gayar, Neamat
A2 - Trentin, Edmondo
A2 - Ravanelli, Mirco
A2 - Abbas, Hazem
PB - Springer Science and Business Media Deutschland GmbH
T2 - 10th IAPR TC3 International Workshop on Artificial Neural Networks in Pattern Recognition, ANNPR 2022
Y2 - 24 November 2022 through 26 November 2022
ER -