Cognitive Computing Laboratory

Deep Mapping from Thermal-to-Visible for Night-Time Face Recognition

In this project, we investigate a deep coupled learning framework for matching non-visible face photos against a gallery of visible faces. The coupled framework contains two sub-networks, one dedicated to the visible spectrum and the other to the non-visible spectrum, as shown in Figure 1. Each sub-network is a generative adversarial network (GAN). Inspired by densely connected networks, which maximize the information flow among features at different levels, we utilize a densely connected encoder-decoder structure as the generator in each GAN sub-network. The coupled GAN framework is optimized using multiple loss functions. The proposed coupled deep neural network architecture forces the features from the two sub-networks to be as close as possible for the same classes in a common latent subspace, while simultaneously preserving information from the input space, so that the reconstructed photos are realistic and the discriminative information is preserved.
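The sketch below illustrates the coupled training objective described above: two generator sub-networks embed visible and non-visible inputs into a shared latent subspace, a contrastive loss pulls same-identity features together, and a reconstruction term preserves input-space information. This is a minimal PyTorch sketch under assumed shapes and loss weights; the plain convolutional encoder-decoder stands in for the densely connected generator, and names such as EncoderDecoder and contrastive_loss are illustrative, not the authors' implementation.

```python
# Minimal sketch of the coupled objective (assumed PyTorch setup).
import torch
import torch.nn as nn
import torch.nn.functional as F


class EncoderDecoder(nn.Module):
    """Stand-in generator; the actual model uses a densely connected encoder-decoder."""
    def __init__(self, in_ch, latent_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(128 * 4 * 4, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 4 * 4), nn.Unflatten(1, (128, 4, 4)),
            nn.Upsample(scale_factor=8),          # coarse upsampling for the sketch
            nn.Conv2d(128, 3, 3, 1, 1), nn.Tanh(),
        )

    def forward(self, x):
        z = self.encoder(x)            # feature in the common latent subspace
        return z, self.decoder(z)      # (latent feature, reconstructed visible image)


def contrastive_loss(z_vis, z_nvis, same_class, margin=2.0):
    """Pull same-identity feature pairs together, push different identities apart."""
    d = F.pairwise_distance(z_vis, z_nvis)
    pos = same_class * d.pow(2)
    neg = (1 - same_class) * F.relu(margin - d).pow(2)
    return (pos + neg).mean()


# One coupled training step (adversarial discriminator updates omitted for brevity):
# z_v, rec_v = vis_gan(visible_img)       # Vis-GAN sub-network
# z_n, rec_n = nvis_gan(thermal_img)      # NVis-GAN sub-network
# loss = contrastive_loss(z_v, z_n, same_class_labels) \
#      + F.l1_loss(rec_n, visible_img)    # preserve information from the input space
```

In this simplified view, the contrastive term enforces the shared latent subspace across spectra, while the reconstruction (and, in the full model, adversarial) terms keep the generated visible faces realistic.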

Fig. 1. The proposed network, with two GAN-based sub-networks (Vis-GAN and NVis-GAN) coupled by a contrastive loss. Here, the input to NVis-GAN is the polarimetric IR data (S0, S1, S2). For other IR modalities (NIR, MWIR, and LWIR) the framework remains the same and only the input to NVis-GAN is changed accordingly.
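As the caption notes, only the NVis-GAN input changes across IR modalities. The snippet below sketches one way such an input tensor might be prepared; the function name, tensor layout, and channel counts are assumptions for illustration, not part of the published method.

```python
# Illustrative input preparation for the NVis-GAN branch (assumed tensor layout).
import torch

def nvis_input(modality, images):
    """Stack polarimetric Stokes images as channels; other IR modalities
    feed a single-channel intensity image, so only the input tensor changes."""
    if modality == "polarimetric":
        s0, s1, s2 = images                        # three Stokes components, each (N, H, W)
        return torch.stack([s0, s1, s2], dim=1)    # (N, 3, H, W)
    # NIR / MWIR / LWIR: a single intensity channel
    return images.unsqueeze(1)                     # (N, 1, H, W)
```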