Identifying diabetes from conjunctival images using a novel hierarchical multi-task network.
Academic Article
Abstract
Diabetes can cause microvascular impairment, including in the conjunctiva. However, these conjunctival pathological changes are not easily recognized, which limits their potential as independent diagnostic indicators. We therefore designed a deep learning model to explore the relationship between conjunctival features and diabetes and to advance automated identification of diabetes from conjunctival images. Images were collected from patients with type 2 diabetes and from healthy volunteers. A hierarchical multi-task network (HMT-Net) was developed using these conjunctival images, and the model was systematically evaluated and compared with other algorithms. The sensitivity, specificity, and accuracy of the HMT-Net model for identifying diabetes were 78.70%, 69.08%, and 75.15%, respectively. The performance of the HMT-Net model was significantly better than that of ophthalmologists. The model enabled sensitive and rapid discrimination from conjunctival images alone and is potentially useful for identifying diabetes.
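The reported evaluation metrics are the standard binary-classification quantities derived from a confusion matrix, with diabetes as the positive class. The following is a minimal Python sketch of how such metrics are computed; it is not the authors' code, and the function and variable names are illustrative assumptions.

# Minimal sketch (not the authors' implementation): sensitivity, specificity,
# and accuracy from binary ground-truth labels and model predictions,
# where label 1 = diabetes (positive class) and 0 = healthy.
import numpy as np

def binary_metrics(y_true: np.ndarray, y_pred: np.ndarray):
    """Return (sensitivity, specificity, accuracy) for 0/1 label arrays."""
    tp = np.sum((y_true == 1) & (y_pred == 1))  # diabetes correctly detected
    tn = np.sum((y_true == 0) & (y_pred == 0))  # healthy correctly ruled out
    fp = np.sum((y_true == 0) & (y_pred == 1))  # healthy flagged as diabetes
    fn = np.sum((y_true == 1) & (y_pred == 0))  # diabetes missed
    sensitivity = tp / (tp + fn)                # true positive rate
    specificity = tn / (tn + fp)                # true negative rate
    accuracy = (tp + tn) / (tp + tn + fp + fn)  # overall correct fraction
    return sensitivity, specificity, accuracy

if __name__ == "__main__":
    # Hypothetical labels and predictions, for illustration only.
    y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
    y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])
    sens, spec, acc = binary_metrics(y_true, y_pred)
    print(f"sensitivity={sens:.2%}, specificity={spec:.2%}, accuracy={acc:.2%}")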