IDM 2017: Workshop on Interpretable Data Mining - Bridging the Gap between Shallow and Deep Models

abstract

  • Intelligent systems built upon complex machine learning and data mining models (e.g., deep neural networks) have shown superior performance on various real-world applications. However, their effectiveness is limited by the difficulty of interpreting their prediction mechanisms, that is, how the results are obtained. In contrast, the results of many simple or shallow models, such as rule-based or tree-based methods, are explainable but not sufficiently accurate. Model interpretability enables such systems to be clearly understood, properly trusted, effectively managed, and widely adopted by end users. Interpretations are necessary in applications such as medical diagnosis, fraud detection, and object recognition, where valid reasons are significantly helpful, if not necessary, before taking action based on predictions. This workshop is about interpreting the prediction mechanisms or results of complex computational models for data mining by taking advantage of simple models that are easier to understand. We wish to exchange ideas on recent approaches to the challenges of model interpretability, identify emerging fields of application for such techniques, and provide opportunities for relevant interdisciplinary research or projects.

name of conference

  • Proceedings of the 2017 ACM on Conference on Information and Knowledge Management

published proceedings

  • CIKM'17: Proceedings of the 2017 ACM Conference on Information and Knowledge Management

author list (cited authors)

  • Hu, X., & Ji, S.

citation count

  • 1

complete list of authors

  • Hu, Xia; Ji, Shuiwang

publication date

  • November 2017