III: Medium: Collaborative Research: Towards Effective Interpretation of Deep Learning: Prediction, Representation, Modeling and Utilization

abstract

  • While deep learning has achieved unprecedented prediction capabilities, it is often criticized as a black box because it lacks interpretability, which is very important in real-world applications such as healthcare and cybersecurity. For example, healthcare professionals can appropriately trust and effectively manage prediction results only if they understand why and how a patient is diagnosed with prediabetes. This project investigates the interpretability of deep learning by following the fundamental elements of data mining practice, from representation and modeling to prediction. The results of the project are expected to improve the usability of deep learning in important applications, boosting the overall value of deep learning-based information systems. The education program, which integrates data science, industrial engineering, and visualization, will train students in data analytics technologies for industrial systems and will attract and mentor members of underrepresented groups pursuing careers in STEM.

    The research goal of this project is to systematically explore the interpretability of deep learning across the machine learning life cycle, i.e., representation, modeling, and prediction, as well as the deployment of interpretability in various tasks. Specifically, this project aims to achieve this goal by developing a series of interpretation algorithms and methods along the following lines. It explores post-hoc interpretation methods to shed light on how deep learning models produce a specific prediction and generate a representation. It also investigates designing interpretable models from scratch, which aims to construct self-explanatory models and incorporate interpretability directly into the structure of a deep learning model. The interpretations derived from a deep learning model are then employed to improve model performance. In addition, interpretability is applied to debug model behavior, so as to ensure that the model's decision-making process is consistent with human expert knowledge, and to improve model robustness against adversarial attacks.

    This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
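    As a concrete illustration of the post-hoc interpretation methods described above, the following minimal sketch computes a vanilla-gradient saliency map for a single prediction. The PyTorch classifier, input tensor, and shapes are placeholder assumptions for illustration, not models or data from the project itself.

    ```python
    # A minimal sketch of a post-hoc, gradient-based interpretation
    # (vanilla-gradient saliency), assuming PyTorch and a generic
    # image classifier; all names here are illustrative placeholders.
    import torch
    import torchvision.models as models

    model = models.resnet18(weights=None)  # placeholder classifier
    model.eval()

    x = torch.randn(1, 3, 224, 224, requires_grad=True)  # placeholder input

    logits = model(x)
    target_class = logits.argmax(dim=1).item()

    # Backpropagate the predicted class score to the input pixels;
    # large-magnitude gradients mark pixels the prediction is sensitive to.
    logits[0, target_class].backward()

    saliency = x.grad.abs().max(dim=1)[0]  # per-pixel importance map
    print(saliency.shape)  # torch.Size([1, 224, 224])
    ```

    In practice, such a saliency map lets a domain expert check whether a prediction rests on clinically or operationally meaningful input features rather than spurious artifacts.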
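    The robustness aspect mentioned above is commonly probed with adversarial perturbations. The sketch below shows the fast gradient sign method (FGSM), one standard such probe; the epsilon value and loss function are illustrative choices, not parameters taken from the project.

    ```python
    # A minimal FGSM sketch for probing model robustness, assuming PyTorch;
    # a robust model's prediction should not flip under this small perturbation.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Perturb x in the direction that increases the loss for label y."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # One signed-gradient step of size epsilon per input dimension.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.detach()
    ```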

date/time interval

  • 2019 - 2023