
Facial Expression Recognition in the Wild (FER in the Wild)
Start date: 1 June 2012, End date: 31 May 2014 (PROJECT COMPLETED)

With the ever-growing reach of technology through media such as the internet and mobile computing, a human-computer interface able to recognise facial expressions (and, from them, the user's affective state) can be a very useful tool for future technology. However, existing facial expression recognition systems can typically handle only deliberately displayed and exaggerated expressions recorded in controlled laboratory environments. The main aim of the proposed “FER in the Wild” project is to build a fully automatic, real-time Action Unit (AU) detection system that can handle spontaneous facial expressions under uncontrolled, natural settings, and that is easy to use, cost-effective, and easily integrated with any existing setup.

To achieve this goal, the project proposes a fully automatic, real-time face tracking system based on a new class of face models, the Online Robust Deformable Model (ORDM), capable of handling faces under real-world conditions. Building on methods for online and incremental subspace learning, the ORDM will not rely solely on offline training but will continuously update its models to achieve a robust fit under varying pose and illumination conditions. Moreover, a recently proposed Principal Regression Analysis (PRA) framework, used to place a strong prior over the space of plausible solutions, will be explored and made considerably more robust, so as to support faster and better convergence of the ORDM fitting methods.

The project will also extend the state of the art by formulating Discriminative Dynamic State Regression Machines (DDSRM) with an underlying structure (i.e. with hidden units or states), referred to as Hidden-DDSRM (HDDSRM). This layer of hidden variables will enable the HDDSRM to model the complex underlying structure of spontaneous AUs. As a result, a novel AU detection method capable of continuous and accurate prediction of AU intensities for spontaneous facial expressions will be developed.
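The ORDM itself is not described in detail here, but the idea of online and incremental subspace learning, continuously refining an appearance model from newly tracked frames instead of relying only on offline training, can be illustrated with a generic incremental PCA. The sketch below is an assumption-laden illustration only: the patch size, number of components, and synthetic "frames" are made up and do not come from the project.

```python
# Illustrative sketch only: shows online/incremental subspace learning in the
# spirit described above, using a generic incremental PCA over face patches.
# Patch size, component count, and the simulated data are assumptions.
import numpy as np
from sklearn.decomposition import IncrementalPCA

N_PIXELS = 64 * 64       # assumed size of a vectorised face patch
N_COMPONENTS = 20        # assumed subspace dimensionality

# Appearance subspace that keeps being refined as new tracked frames arrive,
# rather than being fixed once by offline training.
appearance_model = IncrementalPCA(n_components=N_COMPONENTS)

def update_model(face_batch: np.ndarray) -> None:
    """Fold a batch of newly tracked face patches (one per row) into the subspace."""
    appearance_model.partial_fit(face_batch)

def reconstruction_error(face: np.ndarray) -> float:
    """Fit-quality proxy: distance between a patch and its subspace reconstruction."""
    coeffs = appearance_model.transform(face.reshape(1, -1))
    recon = appearance_model.inverse_transform(coeffs)
    return float(np.linalg.norm(face - recon.ravel()))

# Simulated tracking loop: each batch drifts in "illumination" (mean shift),
# and the subspace adapts to the drift instead of degrading.
rng = np.random.default_rng(0)
for t in range(5):
    batch = rng.normal(loc=0.1 * t, scale=1.0, size=(32, N_PIXELS))
    update_model(batch)
    print(t, round(reconstruction_error(batch[0]), 2))
```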
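Similarly, the HDDSRM formulation is specific to the project and not reproduced here; the sketch below only illustrates the general pattern of regressing continuous AU intensities through a layer of hidden temporal states, using a simple fixed recurrent (reservoir-style) hidden layer with a ridge read-out. All dimensions and data are synthetic assumptions, not the project's model.

```python
# Illustrative sketch only: continuous AU-intensity regression via hidden
# temporal states (fixed random recurrence + ridge read-out). This is NOT the
# HDDSRM of the project; sizes and data are invented for demonstration.
import numpy as np

rng = np.random.default_rng(1)
T, n_feat, n_hidden = 500, 10, 50   # frames, facial features per frame, hidden units

# Synthetic feature sequence and a smoothly varying "AU intensity" target.
features = rng.normal(size=(T, n_feat))
au_intensity = np.convolve(features[:, 0], np.ones(15) / 15, mode="same")

# Fixed random input and recurrent weights: the hidden state summarises the
# recent temporal context of the per-frame features.
W_in = rng.normal(scale=0.5, size=(n_hidden, n_feat))
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))

hidden = np.zeros((T, n_hidden))
for t in range(1, T):
    hidden[t] = np.tanh(W_in @ features[t] + W_rec @ hidden[t - 1])

# Ridge regression from hidden states to the continuous AU intensity.
lam = 1.0
A = hidden.T @ hidden + lam * np.eye(n_hidden)
w = np.linalg.solve(A, hidden.T @ au_intensity)

pred = hidden @ w
print("train RMSE:", round(float(np.sqrt(np.mean((pred - au_intensity) ** 2))), 3))
```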

Coordinator


1 partner participant