FAI: Towards Fairness in Deep Neural Networks with Learning Interpretation


  • Deep neural networks (DNNs) have achieved great success in a wide range of applications such as computer vision and natural language processing. Unfortunately, discrimination against minority subgroups is widespread in DNNs. To facilitate fairness in deep learning, this project tackles the challenging problem of algorithmic discrimination in the design, evaluation, and deployment of DNN systems. A successful outcome will provide theoretical understanding and practical algorithms that enable fairness in complex deep learning models and predictions. The education program, which integrates machine learning, industrial statistics, and social sciences, will train students in data analytics technologies for information systems and attract members of underrepresented groups to careers in STEM. The primary goal of this project is to systematically investigate and facilitate fairness in deep neural networks by leveraging the interpretability of key elements of the machine learning life-cycle, including modeling, data preparation, and feature engineering. Specifically, the proposed frameworks uncover the intrinsic properties of fairness in deep learning from the following aspects. Auxiliary training objectives are designed to regularize the augmented local interpretation and thereby promote the fairness of classical DNN architectures. Data construction and data augmentation approaches are developed to reconstruct a fair dataset for DNN training. By identifying sensitive features in applications, domain knowledge is extracted, and reinforcement learning is further developed to optimize model fairness under realistic constraints. Finally, the proposed research innovations could be embedded in DNN-based real-world systems, such as medical diagnosis and recommender systems, with concrete solutions and evaluation measurements.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
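The abstract does not specify a concrete form for the auxiliary fairness objectives, but the general idea of adding a fairness penalty to a task loss can be sketched as follows. This is a minimal illustration, not the project's method: the function names, the choice of demographic parity as the fairness criterion, and the penalty weight `lam` are all assumptions for the sake of the example.

```python
import numpy as np

def binary_cross_entropy(p, y):
    # Standard task loss: mean binary cross-entropy over predicted
    # probabilities p and binary labels y.
    eps = 1e-12  # guard against log(0)
    return float(-np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)))

def demographic_parity_gap(p, group):
    # |E[p | group=0] - E[p | group=1]|: the difference in mean predicted
    # positive rate between two subgroups (one common fairness criterion;
    # hypothetical choice here, not necessarily the project's).
    return float(abs(p[group == 0].mean() - p[group == 1].mean()))

def fair_loss(p, y, group, lam=1.0):
    # Task loss plus a fairness regularizer, weighted by lam.
    return binary_cross_entropy(p, y) + lam * demographic_parity_gap(p, group)
```

For example, with predictions `p = [0.9, 0.8, 0.1, 0.2]` and labels `y = [1, 1, 0, 0]`, group labels `[0, 1, 0, 1]` yield a zero parity gap (both groups have mean prediction 0.5), while group labels `[0, 0, 1, 1]` yield a gap of 0.7, so `fair_loss` penalizes the second grouping more heavily during training.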

date/time interval

  • 2020 – 2023