Spatial variational auto-encoding via matrix-variate normal distributions Academic Article

abstract

  • The key idea of variational auto-encoders (VAEs) resembles that of traditional auto-encoder models in which spatial information is supposed to be explicitly encoded in the latent space. However, the latent variables in VAEs are vectors, which can be interpreted as multiple feature maps of size 1×1. Such representations can only convey spatial information implicitly when coupled with powerful decoders. In this work, we propose spatial VAEs that use feature maps of larger size as latent variables to explicitly capture spatial information. This is achieved by allowing the latent variables to be sampled from matrix-variate normal (MVN) distributions whose parameters are computed from the encoder network. To increase dependencies among locations on latent feature maps and reduce the number of parameters, we further propose spatial VAEs via low-rank MVN distributions. Experimental results show that the proposed spatial VAEs outperform original VAEs in capturing rich structural and spatial information.
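  The sketch below illustrates the core idea described in the abstract: sampling a k×k latent feature map from a matrix-variate normal distribution whose parameters come from encoder outputs, using the reparameterization trick. It is a minimal illustration, not the authors' exact architecture; the module name, head layers, feature dimensions, and the choice of diagonal row/column covariance factors are assumptions made here for brevity rather than the paper's specific (low-rank) parameterization.

```python
# Illustrative sketch only: sample a k x k latent feature map from a
# matrix-variate normal MN(M, U, V) via the reparameterization trick,
# Z = M + A @ E @ B, where E ~ N(0, I) and A, B are factors of the row and
# column covariances (U = A A^T, V = B B^T). For simplicity A and B are
# assumed diagonal, with their diagonals predicted by encoder heads; this
# is an assumption, not the paper's exact parameterization.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MVNSpatialLatent(nn.Module):
    """Maps encoder features to a sampled k x k spatial latent feature map."""

    def __init__(self, feat_dim: int, k: int = 8):
        super().__init__()
        self.k = k
        self.mean_head = nn.Linear(feat_dim, k * k)  # mean matrix M
        self.row_head = nn.Linear(feat_dim, k)       # diagonal of row factor A
        self.col_head = nn.Linear(feat_dim, k)       # diagonal of column factor B

    def forward(self, h: torch.Tensor):
        # h: (batch, feat_dim) features produced by the encoder network
        b = h.size(0)
        M = self.mean_head(h).view(b, self.k, self.k)
        a = F.softplus(self.row_head(h))  # positive row scales, shape (batch, k)
        c = F.softplus(self.col_head(h))  # positive column scales, shape (batch, k)
        eps = torch.randn(b, self.k, self.k, device=h.device)
        # Reparameterized sample: Z = M + diag(a) @ eps @ diag(c)
        Z = M + a.unsqueeze(-1) * eps * c.unsqueeze(-2)
        return Z, (M, a, c)


# Example usage with dummy encoder features; a decoder would then upsample Z.
if __name__ == "__main__":
    h = torch.randn(4, 256)
    latent = MVNSpatialLatent(feat_dim=256, k=8)
    Z, params = latent(h)
    print(Z.shape)  # torch.Size([4, 8, 8])
```

  In a full VAE training loop, the returned parameters (M, a, c) would also enter a KL-divergence term against a standard-normal prior on the latent feature map; that term is omitted here to keep the sketch short.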

published proceedings

  • SIAM International Conference on Data Mining, SDM 2019

author list (cited authors)

  • Wang, Z., Yuan, H., & Ji, S.

complete list of authors

  • Wang, Z.; Yuan, H.; Ji, S.

publication date

  • January 2019