Detecting changes in user behavior to understand interaction provenance during visual data analysis (Conference Paper)

abstract

  • Analysts can make better informed decisions with Explainable AI. Visualizations are often used to understand, diagnose, and refine AI models. Yet, it is unclear what types of interactions are appropriate for a given model and how the visualizations are perceived. Furthering research into sensemaking for visual analytics may be useful in understanding how users interact with visualizations for AI and in developing a naturalistic model of explanation. Conventional approaches consist of human experts applying theoretical sensemaking models to identify changes in information processing, or of utilizing recorded rationale provided by the users. However, these approaches can be inefficient and inaccurate since they rely heavily on subjective human reports. In this research, we aim to understand how data-driven techniques can automatically identify changes in user behavior (inflection points) based on user interaction logs collected from eye tracking and mouse interactions. We report the results of a supervised classification system using Hidden Markov Models to predict such changes during visual data analysis of a cyber security scenario. Preliminary results indicate 70% accuracy in identifying inflection points, suggesting the feasibility of data-driven approaches for furthering our understanding of sensemaking processes and interaction provenance.
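
The abstract's idea of flagging inflection points from an interaction log can be illustrated with a small sketch. This is not the authors' supervised system; it is a hypothetical, unsupervised two-state Gaussian HMM with hand-picked parameters, decoded with the Viterbi algorithm over a synthetic 1-D feature stream (e.g. per-window mouse speed), where decoded state switches are treated as candidate inflection points.

```python
# Hypothetical sketch (not the paper's implementation): Viterbi decoding of a
# two-state Gaussian HMM over a 1-D interaction feature; state switches are
# flagged as candidate inflection points. All parameters are illustrative.
import numpy as np

def viterbi_gaussian(obs, means, var, trans, init):
    """Most likely hidden-state path for Gaussian emissions (log domain)."""
    n, k = len(obs), len(means)
    # Per-step log-likelihood of each observation under each state's Gaussian.
    log_emit = -0.5 * ((obs[:, None] - means) ** 2 / var + np.log(2 * np.pi * var))
    log_trans = np.log(trans)
    delta = np.log(init) + log_emit[0]
    back = np.zeros((n, k), dtype=int)
    for t in range(1, n):
        scores = delta[:, None] + log_trans        # scores[i, j]: state i -> j
        back[t] = np.argmax(scores, axis=0)        # best predecessor per state
        delta = scores[back[t], np.arange(k)] + log_emit[t]
    path = np.zeros(n, dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(n - 2, -1, -1):                 # backtrack the best path
        path[t] = back[t + 1, path[t + 1]]
    return path

# Synthetic feature stream with a simulated behavior shift at t = 100.
rng = np.random.default_rng(0)
obs = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(3.0, 1.0, 100)])

states = viterbi_gaussian(
    obs,
    means=np.array([0.0, 3.0]),
    var=np.array([1.0, 1.0]),
    trans=np.array([[0.98, 0.02], [0.02, 0.98]]),  # "sticky" states
    init=np.array([0.5, 0.5]),
)
# An inflection point is any index where the decoded hidden state switches.
inflection_points = np.flatnonzero(np.diff(states)) + 1
print(inflection_points)
```

The sticky transition matrix penalizes frequent state switches, so a single noisy observation does not trigger a spurious inflection; in a supervised setting like the paper's, the emission and transition parameters would instead be estimated from labeled interaction logs.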

published proceedings

  • CEUR Workshop Proceedings

author list (cited authors)

  • Peña, A., Nirjhar, E. H., Pachuilo, A., Chaspari, T., & Ragan, E. D.

complete list of authors

  • Peña, A||Nirjhar, EH||Pachuilo, A||Chaspari, T||Ragan, ED

publication date

  • January 2019