This paper introduces Non-Protected Attribute-based Debiasing (NPAD), a novel algorithm for mitigating bias in deep learning models without relying on protected attribute information. NPAD leverages auxiliary information from non-protected attributes and optimizes the model using two novel loss functions: Debiasing via Attribute Cluster Loss (DACL) and Filter Redundancy Loss (FRL). Experiments on the LFWA and CelebA datasets demonstrate significant bias reduction across gender and age subgroups in facial attribute prediction.
Mitigating bias in deep learning models is possible without sensitive protected attribute information, opening the door to fairer AI in privacy-conscious applications.
The problem of bias persists in the deep learning community, as models continue to exhibit disparate performance across demographic subgroups. Several algorithms have therefore been proposed to improve the fairness of deep models. However, a majority of these algorithms rely on protected attribute information for bias mitigation, which severely limits their applicability in real-world scenarios. To address this concern, we propose a novel algorithm, termed the \textbf{Non-Protected Attribute-based Debiasing (NPAD)} algorithm, that mitigates bias without requiring protected attribute information. The proposed NPAD algorithm utilizes the auxiliary information provided by non-protected attributes to optimize the model for bias mitigation. Further, two loss functions, \textbf{Debiasing via Attribute Cluster Loss (DACL)} and \textbf{Filter Redundancy Loss (FRL)}, are proposed to optimize the model toward fairness goals. Multiple experiments are performed on the LFWA and CelebA datasets for facial attribute prediction, and a significant reduction in bias is observed across gender and age subgroups.
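The abstract names a Filter Redundancy Loss but does not give its formulation here. As an illustrative sketch only (not the authors' definition), a generic filter-redundancy penalty can be written as the mean squared off-diagonal cosine similarity between flattened convolutional filters, so that near-duplicate filters are penalized and mutually orthogonal filters incur zero loss; the function name and exact form below are assumptions:

```python
import numpy as np

def filter_redundancy_penalty(filters: np.ndarray) -> float:
    """Illustrative redundancy penalty over a bank of filters.

    filters: array of shape (num_filters, d), each row a flattened
    convolutional filter. Returns the mean squared cosine similarity
    over all off-diagonal filter pairs (0 = fully decorrelated,
    1 = all filters identical up to scale). This is a sketch of the
    general idea, not the paper's FRL.
    """
    # L2-normalize each filter so dot products become cosine similarities.
    norms = np.linalg.norm(filters, axis=1, keepdims=True)
    f = filters / np.clip(norms, 1e-12, None)
    sim = f @ f.T                      # pairwise cosine similarities
    n = f.shape[0]
    off_diag = sim - np.eye(n)         # zero out self-similarity
    return float(np.sum(off_diag ** 2) / (n * (n - 1)))

# Orthogonal filters incur no penalty; duplicated filters are maximally penalized.
print(filter_redundancy_penalty(np.eye(3)))            # → 0.0
print(filter_redundancy_penalty(np.ones((3, 4))))      # → 1.0
```

In a training loop, such a term would typically be added to the task loss with a weighting coefficient, encouraging the network to spread its representational capacity across diverse filters rather than learning redundant ones.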