Comparing Sample-Wise Learnability across Deep Neural Network Models Conference Paper

abstract

  • Estimating the relative importance of each sample in a training set has important practical and theoretical value, for example in importance sampling or curriculum learning. This focus on individual samples invokes the concept of sample-wise learnability: how easy is it to correctly learn each sample (cf. PAC learnability)? In this paper, we approach the sample-wise learnability problem within a deep learning context. We propose a measure of the learnability of a sample with a given deep neural network (DNN) model. The basic idea is to train the given model on the training set and, for each sample, aggregate the hits and misses over all training epochs. Our experiments show that the sample-wise learnability measure collected this way is highly linearly correlated across different DNN models (ResNet-20, VGG-16, and MobileNet), suggesting that such a measure can provide general insights into the data's properties. We expect our method to help develop better curricula for training and to improve our understanding of the data itself.
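  A minimal PyTorch sketch of one way to collect such a per-sample hit/miss record is given below. The specific aggregation used here (the fraction of epochs in which a sample is classified correctly) and the data loader that yields sample indices are illustrative assumptions, not the authors' exact protocol.

```python
import numpy as np
import torch
from scipy.stats import pearsonr


def samplewise_learnability(model, loader, optimizer, criterion,
                            n_samples, epochs, device="cpu"):
    """Train `model` and record, for every training sample, how often it is
    classified correctly across epochs. The returned score (fraction of
    epochs in which the sample is a "hit") is one plausible aggregation of
    hits and misses. `loader` must yield (inputs, targets, sample_indices).
    """
    hits = np.zeros(n_samples)
    model.to(device)
    for _ in range(epochs):
        model.train()
        for x, y, idx in loader:
            x, y = x.to(device), y.to(device)
            logits = model(x)
            loss = criterion(logits, y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            # Record a hit (1) or miss (0) for each sample in this batch.
            correct = (logits.argmax(dim=1) == y).cpu().numpy()
            hits[idx.numpy()] += correct
    return hits / epochs  # per-sample learnability score in [0, 1]


# Comparing two architectures: the paper reports that these per-sample
# scores are highly linearly correlated across models (e.g., ResNet-20 vs.
# VGG-16). Model, optimizer, and loader names below are hypothetical.
# scores_a = samplewise_learnability(resnet20, loader, opt_a, criterion, N, 100)
# scores_b = samplewise_learnability(vgg16, loader, opt_b, criterion, N, 100)
# r, _ = pearsonr(scores_a, scores_b)
```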

published proceedings

  • Thirty-Third AAAI Conference on Artificial Intelligence / Thirty-First Innovative Applications of Artificial Intelligence Conference / Ninth AAAI Symposium on Educational Advances in Artificial Intelligence

author list (cited authors)

  • Lee, S., Kim, J., Jung, H., & Choe, Y.

citation count

  • 1

complete list of authors

  • Lee, Seung-Geon; Kim, Jaedeok; Jung, Hyun-Joo; Choe, Yoonsuck

publication date

  • July 2019