Feature and Label Embedding Spaces Matter in Addressing Image Classifier Bias
| Authors | |
|---|---|
| Publication date | 2021 |
| Book title | 32nd British Machine Vision Conference 2021 |
| Book subtitle | BMVC 2021, Online, November 22-25, 2021 |
| Event | 32nd British Machine Vision Conference |
| Article number | 130 |
| Number of pages | 14 |
| Publisher | BMVA Press |
| Organisations | |
| Abstract | This paper strives to address image classifier bias, with a focus on both feature and label embedding spaces. Previous works have shown that spurious correlations with protected attributes, such as age, gender, or skin tone, can cause adverse decisions. To balance potential harms, there is a growing need to identify and mitigate image classifier bias. First, we identify a bias direction in the feature space: we compute class prototypes for each protected-attribute value of every class and reveal a subspace that captures the maximum variance of the bias. Second, we mitigate bias by mapping image inputs to label embedding spaces: each protected-attribute value has its own projection head, where classes are embedded through a latent vector representation rather than a common one-hot encoding. Once trained, we further reduce the bias effect in the feature space by removing its direction. Evaluation on biased image datasets, for multi-class, multi-label, and binary classification, shows the effectiveness of tackling both feature and label embedding spaces in improving the fairness of classifier predictions while preserving classification performance. |
| Document type | Conference contribution |
| Note | With supplementary file |
| Language | English |
| Other links | https://github.com/twuilliam/bias-classifiers https://dblp.org/db/conf/bmvc/bmvc2021.html https://www.bmvc2021-virtualconference.com/programme/accepted-papers/ |
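The abstract's first step, finding a bias direction from attribute-conditioned class prototypes and projecting it out of the features, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the use of SVD over prototype differences, and the binary protected attribute are all assumptions made for the example.

```python
import numpy as np

def bias_direction(features, labels, attrs):
    """Estimate a bias direction from per-class prototype differences
    across values of a binary protected attribute.

    Illustrative sketch only; this approximates the "maximum variance
    of the bias" idea from the abstract, not the paper's exact method.
    """
    diffs = []
    for c in np.unique(labels):
        # class prototype for each protected-attribute value
        protos = [features[(labels == c) & (attrs == a)].mean(axis=0)
                  for a in np.unique(attrs)]
        diffs.append(protos[0] - protos[1])
    diffs = np.stack(diffs)
    # top right-singular vector of the prototype differences captures
    # the direction of maximum bias variance across classes
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[0]  # unit-norm direction

def remove_direction(features, d):
    """Project the bias direction out of the features."""
    d = d / np.linalg.norm(d)
    return features - np.outer(features @ d, d)
```

After `remove_direction`, the features have zero component along the estimated bias direction, which corresponds to the post-training debiasing step the abstract describes.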
