New AI system can recognise faces in the dark
Washington: Scientists have developed an artificial intelligence that can recognise a person's face even in the dark, a development that could lead to enhanced real-time biometrics and post-mission forensic analysis for covert nighttime operations.
The motivation for this technology, developed by researchers from the US Army Research Laboratory (ARL), is to enhance both automatic and human face-matching capabilities.
"This technology enables matching between thermal face images and existing biometric face databases/watch lists that only contain visible face imagery," said Benjamin S Riggan, a research scientist at ARL.
"The technology provides a way for humans to visually compare visible and thermal facial imagery through thermal-to-visible face synthesis," said Riggan.
Under nighttime and low-light conditions, there is insufficient light for a conventional camera to capture facial imagery for recognition without active illumination such as a flash or spotlight, which would give away the position of such surveillance cameras.
However, thermal cameras that capture the heat signature naturally emanating from living skin tissue are ideal for such conditions.
"When using thermal cameras to capture facial imagery, the main challenge is that the captured thermal image must be matched against a watch list or gallery that only contains conventional visible imagery from known persons of interest," Riggan said.
"Therefore, the problem becomes what is referred to as cross-spectrum, or heterogeneous, face recognition. In this case, facial probe imagery acquired in one modality is matched against a gallery database acquired using a different imaging modality," she said.
This approach leverages advanced domain adaptation techniques based on deep neural networks and is composed of two key parts: a non-linear regression model that maps a given thermal image to a corresponding visible latent representation, and an optimisation problem that projects that latent representation back into the image space.
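To make the two-part structure concrete, the following is a minimal PyTorch sketch of how such a pipeline could be organised; the class and function names (ThermalToLatentNet, VisibleFeatureNet, synthesise_visible), network shapes and hyperparameters are illustrative assumptions, not ARL's implementation.

```python
import torch
import torch.nn as nn

class ThermalToLatentNet(nn.Module):
    """Part one: non-linear regression from a thermal face image to
    visible-domain latent features."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, thermal):
        return self.encoder(thermal)

class VisibleFeatureNet(nn.Module):
    """Stand-in for the visible-domain feature extractor the latents are
    mapped into (in practice a pretrained face-recognition network)."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, visible):
        return self.encoder(visible)

def synthesise_visible(thermal, regressor, feat_net, steps=200, lr=0.05):
    """Part two: optimise a visible image so that its features match the
    latent representation regressed from the thermal input."""
    target = regressor(thermal).detach()
    visible = torch.rand(1, 3, 128, 128, requires_grad=True)   # initial guess
    optimiser = torch.optim.Adam([visible], lr=lr)
    for _ in range(steps):
        optimiser.zero_grad()
        loss = nn.functional.mse_loss(feat_net(visible), target)
        loss.backward()
        optimiser.step()
    return visible.detach().clamp(0, 1)

# Usage with placeholder data standing in for a real thermal capture.
regressor, feat_net = ThermalToLatentNet(), VisibleFeatureNet()
thermal_frame = torch.rand(1, 1, 128, 128)
synthesised = synthesise_visible(thermal_frame, regressor, feat_net)
```

The key design point the sketch illustrates is that only the candidate visible image is updated during the optimisation; the regression network and the visible feature extractor stay fixed.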
Researchers showed that combining global information, such as features from across the entire face, with local information, such as features from discriminative fiducial regions (for example, the eyes, nose and mouth), enhanced the discriminability of the synthesised imagery.
They showed how the thermal-to-visible mapped representations from both global and local regions in the thermal face signature could be used in conjunction to synthesise a refined visible face image.
The optimisation problem for synthesising an image attempts to jointly preserve the shape of the entire face and the appearance of the local fiducial details.
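As a hedged illustration of what such a joint objective might look like, the sketch below combines a global feature-matching term with local appearance terms over assumed eye, nose and mouth crops; the region coordinates and weights are hypothetical, not the published formulation.

```python
import torch
import torch.nn.functional as F

# Hypothetical fiducial crops as (row0, row1, col0, col1) in a 128x128 face image.
FIDUCIAL_REGIONS = {
    "left_eye":  (30, 60, 20, 60),
    "right_eye": (30, 60, 68, 108),
    "nose":      (55, 90, 44, 84),
    "mouth":     (90, 118, 34, 94),
}

def joint_synthesis_loss(synth_feats, target_global, synth_img, target_locals,
                         local_weight=1.0):
    """Global term keeps the overall face shape; local terms keep the
    appearance of the discriminative fiducial regions."""
    loss = F.mse_loss(synth_feats, target_global)             # global shape term
    for name, (r0, r1, c0, c1) in FIDUCIAL_REGIONS.items():   # local detail terms
        patch = synth_img[:, :, r0:r1, c0:c1]
        loss = loss + local_weight * F.mse_loss(patch, target_locals[name])
    return loss

# Usage with placeholder tensors in place of real mapped representations.
synth_img = torch.rand(1, 3, 128, 128, requires_grad=True)
target_locals = {name: torch.rand(1, 3, r1 - r0, c1 - c0)
                 for name, (r0, r1, c0, c1) in FIDUCIAL_REGIONS.items()}
loss = joint_synthesis_loss(torch.rand(1, 256), torch.rand(1, 256),
                            synth_img, target_locals)
loss.backward()
```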
Using the synthesised thermal-to-visible imagery and existing visible gallery imagery, they performed face verification experiments using a common open source deep neural network architecture for face recognition.
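The article does not name the network used; the sketch below assumes facenet-pytorch's InceptionResnetV1 purely to illustrate how such a verification experiment between synthesised and gallery faces could be scored with an off-the-shelf visible-light face recogniser. The similarity threshold is an assumed value.

```python
import torch
from facenet_pytorch import InceptionResnetV1

# Off-the-shelf visible-light face embedding network (one possible open-source choice).
embedder = InceptionResnetV1(pretrained='vggface2').eval()

def embed(face_batch):
    """face_batch: float tensor (N, 3, 160, 160), roughly normalised to [-1, 1]."""
    with torch.no_grad():
        return embedder(face_batch)

def verify(synthesised_face, gallery_face, threshold=0.5):
    """Cosine-similarity verification between a thermal-to-visible synthesised
    face and a visible gallery face."""
    a, b = embed(synthesised_face), embed(gallery_face)
    score = torch.nn.functional.cosine_similarity(a, b).item()
    return score, score > threshold

# Example with placeholder tensors standing in for preprocessed face crops.
score, match = verify(torch.randn(1, 3, 160, 160), torch.randn(1, 3, 160, 160))
print(f"similarity={score:.3f} match={match}")
```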
The open-source architecture they used is explicitly designed for visible-based face recognition. The most surprising result was that their approach achieved better verification performance than a generative adversarial network (GAN)-based approach, which had previously demonstrated photo-realistic synthesis.
Riggan attributed this to the fact that the game-theoretic objective of GANs immediately seeks to generate imagery that is sufficiently similar in dynamic range and photo-like appearance to the training imagery, while sometimes neglecting to preserve identifying characteristics.
The approach developed by ARL preserves identity information to enhance discriminability, for example yielding increased recognition accuracy for both automatic face recognition algorithms and human adjudication.
Researchers showcased a near real-time demonstration of this technology. The proof of concept used a FLIR Boson 320 thermal camera and a laptop running the algorithm in near real time.
This demonstration showed the audience that a captured thermal image of a person can be used to produce a synthesised visible image in situ.
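A rough sketch of the shape such a demo loop could take is shown below, assuming the FLIR Boson 320 enumerates as a standard UVC camera readable through OpenCV; the camera index and preprocessing are assumptions, and the synthesis step is a clearly labelled stand-in rather than the trained ARL model.

```python
import cv2
import numpy as np
import torch

def thermal_to_tensor(frame):
    """Grayscale thermal frame -> (1, 1, 128, 128) float tensor in [0, 1]."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) if frame.ndim == 3 else frame
    resized = cv2.resize(gray, (128, 128)).astype(np.float32) / 255.0
    return torch.from_numpy(resized)[None, None]

def synthesise_visible_stub(thermal):
    """Stand-in for the thermal-to-visible synthesis sketched earlier; it simply
    replicates the thermal channel so the loop runs end to end."""
    return thermal.repeat(1, 3, 1, 1)

cap = cv2.VideoCapture(0)                      # camera index is an assumption
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    thermal = thermal_to_tensor(frame)
    synth = synthesise_visible_stub(thermal)   # real system: run the trained model here
    vis = (synth[0].permute(1, 2, 0).numpy() * 255).astype(np.uint8)
    cv2.imshow("thermal input", frame)
    cv2.imshow("synthesised visible (stub)", vis)
    if cv2.waitKey(1) & 0xFF == ord('q'):      # press q to exit
        break
cap.release()
cv2.destroyAllWindows()
```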