Deep Convolutional Neural Networks for Multi-Instance Multi-Task Learning
Conference Paper
Abstract
© 2015 IEEE. Multi-instance learning studies problems in which labels are assigned to bags that contain multiple instances. In these settings, the relations between instances and labels are usually ambiguous. In contrast, multi-task learning focuses on the output space, in which an input sample is associated with multiple labels. In the real world, a sample may be associated with multiple labels derived from observing multiple aspects of the problem, so many real-world applications are naturally formulated as multi-instance multi-task (MIMT) problems. A common approach to MIMT is to solve each task independently under the multi-instance learning framework. Meanwhile, convolutional neural networks (CNNs) have demonstrated promising performance on single-instance single-label image classification tasks. However, how a CNN should handle multi-instance multi-label tasks remains an open problem, mainly because of the complex many-to-many relations between the input and output spaces. In this work, we propose a deep learning model, multi-instance multi-task convolutional neural networks (MIMT-CNN), in which a number of images representing a multi-task problem are taken as inputs. A shared sub-CNN is applied to each input image to form instance representations. These sub-CNN outputs are then aggregated and fed into additional convolutional layers and fully connected layers to produce the final multi-label predictions. Through transfer learning from other domains, this model enables the transfer of image-level prior knowledge learned from large single-label single-task data sets. The bag-level representations are hierarchically abstracted from the instance-level representations by multiple layers. Experimental results on mouse brain gene expression pattern annotation data show that the proposed MIMT-CNN model achieves superior performance.
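To make the data flow concrete, here is a minimal NumPy sketch of the forward pass the abstract describes: a shared sub-network embeds each instance in a bag, the instance representations are aggregated into a bag-level representation, and a multi-label head produces one prediction per task. All sizes, the mean-pooling aggregation, and the dense layers are illustrative stand-ins, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): a bag of 4 instance images,
# each flattened to 64 dims and embedded into a 16-dim representation,
# with 5 binary labels (tasks) per bag.
N_INSTANCES, IMG_DIM, REP_DIM, N_LABELS = 4, 64, 16, 5

# Shared sub-network weights: the SAME parameters embed every instance,
# mirroring the shared sub-CNN in MIMT-CNN.
W_shared = rng.normal(size=(IMG_DIM, REP_DIM)) * 0.1
# Bag-level head mapping the aggregated representation to label scores.
W_head = rng.normal(size=(REP_DIM, N_LABELS)) * 0.1

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mimt_forward(bag):
    """bag: (N_INSTANCES, IMG_DIM) -> (N_LABELS,) label probabilities."""
    # Instance-level representations from the shared sub-network.
    instance_reps = relu(bag @ W_shared)      # (N_INSTANCES, REP_DIM)
    # Aggregate instances into one bag-level representation; mean
    # pooling stands in for the paper's additional convolutional layers.
    bag_rep = instance_reps.mean(axis=0)      # (REP_DIM,)
    # Independent sigmoids give one prediction per label (multi-task).
    return sigmoid(bag_rep @ W_head)          # (N_LABELS,)

bag = rng.normal(size=(N_INSTANCES, IMG_DIM))
probs = mimt_forward(bag)
print(probs.shape)  # one probability per label
```

The key design point the sketch preserves is weight sharing: because every instance passes through the same embedding, the model can transfer image-level knowledge learned elsewhere, while the bag-level head handles the many-to-many instance-to-label mapping.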