Beware the Black-Box of Medical Image Generation: an Uncertainty Analysis by the Learned Feature Space. Conference Paper

abstract

  • Deep neural networks (DNNs) are the primary driving force behind the current development of medical image analysis tools and often deliver exciting performance on various tasks. However, such results are usually reported as the overall performance of the DNNs, such as peak signal-to-noise ratio (PSNR) or mean squared error (MSE) for image generation tasks. Being black boxes, DNNs usually produce relatively stable performance on the same task across multiple training trials, while the learned feature spaces can differ significantly. We believe additional insightful analyses, such as uncertainty analysis of the learned feature space, are equally important, if not more so. In this work, we evaluate the learned feature spaces of multiple U-Net architectures for image generation tasks using computational and clustering analysis methods. We demonstrate that the learned feature spaces are easily separable between different training trials of the same architecture under the same hyperparameter settings, indicating that the models use different criteria for the same task. This phenomenon naturally raises the question of which criteria are the correct ones. Our work therefore suggests that assessments beyond overall performance are needed before a DNN model is applied in real-world practice.
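  The abstract's central claim, that independently trained copies of the same architecture occupy separable feature spaces, can be probed with a simple separability test: extract a feature vector per image from each trial and check whether a classifier can tell the trials apart far above chance. The sketch below is not the authors' code; it is a minimal illustration under stated assumptions. The `UNet` class, its `bottleneck` attribute, the checkpoint paths `unet_trial{i}.pt`, and the `val_loader` dataloader are all hypothetical names standing in for a real setup.

    # Minimal sketch (not the authors' method): testing whether the learned
    # feature spaces of independently trained U-Nets are separable by trial.
    # Assumptions (hypothetical): a `UNet` class exposing a `bottleneck`
    # module, checkpoints at unet_trial{i}.pt, and a dataloader `val_loader`.
    import numpy as np
    import torch
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def bottleneck_features(model, loader, device="cpu"):
        """Collect one pooled bottleneck vector per image in `loader`."""
        feats = []

        def hook(_module, _inp, out):
            # Global-average-pool the NCHW feature maps to one vector per image.
            feats.append(out.mean(dim=(2, 3)).detach().cpu())

        handle = model.bottleneck.register_forward_hook(hook)
        model.eval().to(device)
        with torch.no_grad():
            for x, _ in loader:
                model(x.to(device))
        handle.remove()
        return torch.cat(feats).numpy()

    # Extract features from several training trials of the same architecture
    # trained with the same hyperparameters, labeling each vector by trial.
    X, y = [], []
    for trial in range(5):
        model = UNet()  # hypothetical architecture class
        model.load_state_dict(torch.load(f"unet_trial{trial}.pt"))
        f = bottleneck_features(model, val_loader)
        X.append(f)
        y.append(np.full(len(f), trial))
    X, y = np.vstack(X), np.concatenate(y)

    # If a linear classifier identifies the source trial far above chance
    # (0.2 for five trials), the trials occupy separable regions of feature
    # space even though their PSNR/MSE may be nearly identical.
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"trial-identification accuracy: {acc:.2f}")

  A linear probe is a deliberately weak classifier here: if even logistic regression separates the trials, the feature spaces are unambiguously distinct, which is the weaker precondition the paper's clustering analysis also relies on.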

name of conference

  • 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC)

published proceedings

  • Annu Int Conf IEEE Eng Med Biol Soc

author list (cited authors)

  • Qu, Y., Yan, D., Xing, E., Zheng, F., Zhang, J., Liu, L., & Liang, G.

citation count

  • 0

complete list of authors

  • Qu, Yunni; Yan, David; Xing, Eric; Zheng, Fengbo; Zhang, Jie; Liu, Liangliang; Liang, Gongbo

publication date

  • July 2022