Face merged generative adversarial network with tripartite adversaries
2019

With the development of deep learning, the face recognition accuracy of learning-based approaches has even exceeded that of humans in some circumstances. However, identifying faces under large pose variations is still a challenging problem. To synthesize the frontal view from large-pose faces, this paper proposes a face merged generative adversarial network (FM-GAN) equipped with two generators and one discriminator. A conventional generative adversarial network (GAN) has only one generator and one discriminator competing with each other until the network converges. Introducing an additional generator greatly strengthens the adversarial competition and thereby improves the performance of the designed model. In the proposed framework, the first generator learns the upper and lower parts of a face to capture essential facial features. The high-dimensional information of this merged face, together with the multi-scale encoded profile, is fed into the second generator, which synthesizes the final frontal face through competition with the discriminator. To reduce model complexity, FM-GAN encodes the original image only once through a pre-trained network, and the extracted features are shared by the two decoders. In addition, the first generator produces the upper and lower parts of a face simultaneously from the same encoder. Therefore, only the parameters of the two decoders and the discriminator need to be updated during training. Experimental results show that the frontal faces synthesized by our model preserve facial identity and photorealism better than those produced by some existing GANs.
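The tripartite layout described above (one shared, frozen encoder; two trainable decoders acting as the two generators; one discriminator) can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the paper's actual architecture: all layer sizes and the 6-channel upper/lower-half output are assumptions, and the merging of generator 1's output and the multi-scale skip connections into generator 2 are omitted for brevity.

```python
# Hypothetical sketch of the FM-GAN parameter layout: the encoder runs
# once and is frozen; only the two decoders and the discriminator train.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Stands in for the pre-trained feature extractor (frozen)."""
    def __init__(self, in_ch=3, feat=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Trainable decoder: upsamples shared features back to image size."""
    def __init__(self, feat=16, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(feat, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat, out_ch, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Single discriminator producing a real/fake score per image."""
    def __init__(self, in_ch=3, feat=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat, 1),
        )
    def forward(self, x):
        return self.net(x)

encoder = Encoder()
for p in encoder.parameters():      # encoder stays fixed, as in the abstract
    p.requires_grad_(False)

g1_decoder = Decoder(out_ch=6)      # generator 1: upper + lower halves (3+3 ch)
g2_decoder = Decoder(out_ch=3)      # generator 2: final frontal face
disc = Discriminator()

profile = torch.randn(2, 3, 32, 32)  # toy batch of profile faces
feats = encoder(profile)             # encoded ONCE, shared by both decoders
halves = g1_decoder(feats)           # upper/lower face parts from one decoder
frontal = g2_decoder(feats)          # frontal view (merged-face features and
                                     # multi-scale skips omitted in this sketch)
score = disc(frontal)                # discriminator judges realism

# only the two decoders and the discriminator would be updated in training
trainable = [p for m in (g1_decoder, g2_decoder, disc) for p in m.parameters()]
```

The key design point the sketch illustrates is the shared frozen encoder: because both decoders consume the same feature map, the gradient updates touch only the two decoders and the discriminator, which is how the model keeps its parameter count down despite having three adversaries.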