14:00
18:00

Facial expression is a major non-verbal means of expressing intentions in human communication. It is one of the most powerful, natural and universal signals for human beings to convey their emotional states and intentions. Thus, analysing and understanding human facial expression is crucial for many applications in multiple domains, including health care and medical fields, virtual and augmented reality, education and entertainment. In this thesis, we give an overview of measuring facial expressions by utilising facial action units (AUs), with an application to automatic PSPI pain intensity estimation. As any human facial expression can be decomposed into a set of facial action units and their intensities, automatically measuring AU intensity is a key step towards better understanding and assessing human facial expressions.
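The PSPI score mentioned above is, by the standard Prkachin and Solomon definition, a fixed combination of six AU intensities; the following minimal sketch illustrates that rule (the AU intensity scales and the resulting 0-16 range follow the usual FACS/PSPI conventions and are not details taken from this abstract):

```python
def pspi_score(au4, au6, au7, au9, au10, au43):
    """Prkachin and Solomon Pain Intensity (PSPI) from AU intensities.

    AU4, AU6, AU7, AU9 and AU10 are coded on a 0-5 intensity scale;
    AU43 (eyes closed) is binary, so the PSPI score ranges from 0 to 16.
    """
    return au4 + max(au6, au7) + max(au9, au10) + au43


# Example: brow lowering (AU4=2), cheek raising (AU6=3), eyes open.
print(pspi_score(au4=2, au6=3, au7=1, au9=0, au10=1, au43=0))  # -> 6
```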

Recently, deep learning techniques have emerged as powerful methods for learning feature representations directly from data and have achieved major improvements in various face-related computer vision tasks.
The main advantage of deep learning approaches is their ability to learn from experience and generalise well to unseen data. However, to do so, these deep models need to be trained on massive amounts of data, which is difficult to obtain in the domain of facial expressions, especially for facial AU and PSPI pain intensity estimation. This is because labeling requires a costly and time-consuming effort by trained human annotators. For instance, it may take more than an hour for an expert annotator to code the intensity of AUs in one second of a face video. Moreover, the distribution of AU intensity labels is generally imbalanced, so the performance of deep methods on these databases is further degraded by the insufficient data. Hence, in this thesis, we propose several approaches capable of learning better feature representations from facial images with a limited amount of data. We demonstrate the effectiveness of our approaches on the widely used UNBC-McMaster and DISFA databases, showing promising results. Finally, we discuss the obstacles to automatic facial expression assessment and present future research challenges.
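The abstract does not detail the thesis's own techniques for coping with imbalanced AU intensity labels; purely as an illustration of one common remedy, here is a sketch of an inverse-frequency-weighted regression loss (the function name, frequencies and weighting scheme are hypothetical and are not the thesis's method):

```python
import torch

def weighted_mse_loss(pred, target, level_weights):
    """Weighted MSE for ordinal AU/PSPI intensity regression.

    level_weights[k] is the weight given to ground-truth intensity level k
    (e.g. its inverse frequency in the training set), so rare high
    intensities contribute more to the loss than the dominant zero level.
    """
    w = level_weights[target.long()]          # per-sample weight
    return (w * (pred - target) ** 2).mean()

# Toy usage: intensity levels 0-5 with hypothetical label frequencies.
freq = torch.tensor([0.70, 0.12, 0.08, 0.05, 0.03, 0.02])
weights = (1.0 / freq) / (1.0 / freq).sum()

pred = torch.tensor([0.4, 2.6, 4.1])
target = torch.tensor([0.0, 3.0, 4.0])
print(weighted_mse_loss(pred, target, weights))
```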

English
Amphi du LaBRI