Face recognition from images is a task with many real-world applications.
Recent advances in Big Data and Deep Learning have pushed the state-of-the-art performance in face recognition and, currently, the recognition of thousands of people in (relatively) controlled settings is considered an almost solved problem.
However, there are still challenges that need to be faced, one of the biggest being the recognition of people from non-frontal facial images (over a range of angles, up to full profile).
Other challenges include the recognition of people in images captured many years after those used for model creation and/or the recognition of people with facial impairments.
In order to tackle such issues, we plan to exploit the high learning capacity of deep neural networks (either fully connected or convolutional) to create models that can complement and/or restore missing parts of facial images.
This project will train an image-to-image mapper (using deep architectures) that takes a problematic facial image as input and maps it to an image on which face recognition can be performed with high accuracy.
Two approaches will be followed: the first aims to learn a mapping model that produces high-quality facial images that are visually plausible to a human observer, while the second will learn a mapping model that directly maximizes the performance of the downstream recognizer.
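The difference between the two training objectives can be sketched with toy stand-ins: a pixel-level reconstruction loss (approach one, visual plausibility) versus a loss on the recognizer's identity embeddings (approach two). The functions `mapper` and `embed` below are hypothetical placeholders for a deep image-to-image network and a frozen face-recognition feature extractor, not part of the proposal itself.

```python
import numpy as np

def mapper(degraded, w):
    """Toy linear image-to-image mapper (a stand-in for a deep CNN)."""
    return degraded * w  # element-wise gain as a placeholder "restoration"

def embed(img):
    """Toy identity embedding (a stand-in for a recognizer's feature extractor)."""
    return np.array([img.mean(), img.std()])

def visual_loss(restored, target):
    """Approach 1: pixel-level reconstruction, favouring visually plausible output."""
    return float(np.mean((restored - target) ** 2))

def recognition_loss(restored, target):
    """Approach 2: match identity embeddings, favouring recognizer performance."""
    return float(np.sum((embed(restored) - embed(target)) ** 2))

rng = np.random.default_rng(0)
target = rng.random((8, 8))   # clean reference face (toy data)
degraded = target * 0.5       # simulated degradation

# A mapper with w = 2.0 exactly undoes the toy degradation, so both losses vanish.
restored = mapper(degraded, 2.0)
print(visual_loss(restored, target))       # 0.0
print(recognition_loss(restored, target))  # 0.0
```

In practice the second objective would be trained end-to-end through the frozen recognizer, so the mapper may produce images that score well without being visually faithful; this is exactly the trade-off the two approaches are designed to compare.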