Literature Review: Facial Emotion Recognition in Videos

Literature Review
Facial emotion recognition while watching videos is a common phenomenon that manifests itself in various forms. People tend to display different emotions while watching videos. These emotions depend on several factors, but chiefly on the nature of the video being watched and on the disposition of the person watching it (Ashraf et al., 2007, p.56).
Facial emotion recognition has become an important topic in fields such as artificial intelligence and computer vision, and it can be conducted using a number of well-known sensors. Facial emotions are a very important factor in modern human communication, as they facilitate the understanding of other people's intentions. Emotional states such as joy, anger, and sadness can easily be inferred in others simply by looking at their facial expressions (Cohn, 2009, p.73). Of the many nonverbal cues that can be used to detect emotions in people, the facial expression is among the most effective.
While watching a video, certain contents might in one way or another make a person uncomfortable. In such cases, people tend to display various emotions that can be seen on their faces. Depending on the nature of the content being watched, a person might become stressed or angry, and those feelings are displayed on their face (Shotton et al., 2011, p.32).
While watching a video, a person's emotions tend to change with the nature of the video. Facial recognition therefore plays an essential role in tracking these changes in emotion, based on the facial expressions portrayed on the face of the person watching, as sketched below.
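To make this concrete, here is a minimal sketch in Python of tracking a viewer's emotions over time. It assumes OpenCV (cv2) is installed, and `classify_emotion` is a hypothetical stand-in for any per-frame facial expression classifier (for instance, the pipeline sketched later in this review); neither name comes from the sources cited here.

```python
# A minimal sketch of tracking emotion over time from a recording of a
# viewer's face. `classify_emotion` is a hypothetical per-frame classifier.
import cv2

def track_emotions(video_path, classify_emotion, sample_every=30):
    """Return (timestamp_seconds, emotion_label) pairs for a viewer recording."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS is unknown
    timeline, frame_index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_index % sample_every == 0:       # sample roughly once a second
            timeline.append((frame_index / fps, classify_emotion(frame)))
        frame_index += 1
    capture.release()
    return timeline
```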
The mission of Affectiva is to give all computers emotional intelligence. This means that computers would be in a position to determine the emotions of their users through a thorough analysis of facial expressions and facial cues while the users are watching a video (Whitehill et al., 2009, p.77).
Recent research has shown that at least 92% of viewers are reduced to tears while watching certain videos. In other instances, people develop other feelings, such as empathy, anxiety, and grief, depending on various factors.
Facial emotion recognition has had numerous applications over the last decade, especially in perceptual and cognitive science and in the fields of affective computing and computer science. Studies indicate that facial emotion recognition while watching videos happens in three major steps. First and foremost, there is face and facial component detection (Valstar & Pantic, 2010, p.69). This is followed by feature extraction and, finally, expression classification. A face image is first detected, along with facial components such as the eyes, the nose, or other salient regions of the face. Once this has been completed, various spatial and/or temporal features are extracted from the facial components. The last step is typically automated and involves a facial emotion classifier; examples include the support vector machine (SVM), AdaBoost, and the random forest. From the extracted features, these classifiers produce estimates of the emotions elicited by the video being watched (Rosenthal, 2005, p.89).
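A minimal sketch of these three steps follows, assuming OpenCV and scikit-learn as supporting libraries. The Haar cascade detector, the HOG features, and the SVM shown here are illustrative stand-ins rather than the specific methods used in any of the cited studies; a real system would train the classifier on a labelled expression dataset.

```python
# Sketch of the three-step pipeline: (1) face and component detection,
# (2) feature extraction, (3) expression classification with an SVM.
import cv2
import numpy as np
from sklearn.svm import SVC

# Step 1: face detection using OpenCV's bundled frontal-face Haar cascade.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_features(gray_face):
    """Step 2: a simple spatial feature -- HOG computed over a resized crop."""
    face = cv2.resize(gray_face, (64, 64))
    hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)
    return hog.compute(face).ravel()

# Step 3: an SVM classifier, as named in the text. X_train and y_train are
# assumed to come from a labelled facial expression dataset (not shown):
#   clf = SVC(kernel="rbf").fit(X_train, y_train)

def classify_frame(frame, clf):
    """Detect faces in one video frame and predict an emotion label for each."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [clf.predict([extract_features(gray[y:y + h, x:x + w])])[0]
            for (x, y, w, h) in faces]
```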
There are various facial recognition systems, pieces of technology capable of identifying the types of emotions on a person's face while they are watching videos. Affectiva, for instance, is an emotion-measurement technology company that has made numerous achievements as far as recognizing human emotions is concerned. Its software makes use of facial cues as well as other physiological responses. With this special software, various emotions can be tracked from the different facial cues an individual makes while watching videos; the cues and emotions that can be tracked include surprise, amusement, confusion, smiles, frowns, and smirks, among others. Notably, this technology also enables a person's heart rate to be measured directly from a webcam, without the person wearing a sensor. This is achieved by tracking subtle color changes in the person's face, which pulse each time the heart beats.
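A hedged sketch of this webcam heart-rate idea, generally known as remote photoplethysmography (rPPG), follows. Affectiva's own implementation is not public, so this only illustrates the general principle under stated assumptions: the per-frame mean green value of a detected face region has already been collected, for at least a few seconds, at a known frame rate.

```python
# Estimate heart rate from subtle color changes in a face region (rPPG).
# `green_means` is the assumed per-frame mean green value of the face ROI.
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_bpm(green_means, fps):
    signal = np.asarray(green_means, dtype=float)
    signal -= signal.mean()                       # remove the DC component
    # Keep only plausible pulse frequencies, roughly 45-180 beats per minute.
    nyquist = fps / 2.0
    b, a = butter(3, [0.75 / nyquist, 3.0 / nyquist], btype="band")
    filtered = filtfilt(b, a, signal)             # zero-phase bandpass
    # The dominant frequency of the filtered trace approximates the pulse.
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    return freqs[np.argmax(spectrum)] * 60.0      # convert Hz to BPM
```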
While watching a video, a person tends to have different emotions towards the information being portrayed. Such emotions can be determined by technologies and software such as those developed by Affectiva.
Facial expression recognition has been used in video game testing. During this phase in the making of a video game, various focus groups of users play for a set amount of time while their facial emotions are closely monitored. Based on facial expression recognition, the game developers are in a position to gain essential insight and draw conclusions from the emotions experienced while the players were playing the game. This information is then
used by the game developers to redesign and further improve the game, aiding them in making the final product.
Facial expression recognition has also been used in marketing research, where it is very valuable. Companies and corporations conduct market research to determine what their customers need, which brands they prefer, and other details about the market, of which customers are a large component. Market research typically assumes that the preferences customers state are correct and will hold for the near future, but this is not always the case (Dalal & Triggs, 2005, p.70). In such situations, facial expression recognition comes in handy: it is applied to determine the expressions of customers as they interact with a product, thereby providing accurate feedback on the product and on what the customers feel about it.
Studies indicate that facial expression recognition models and software are important because of their ability to automate what would otherwise require the unique coding skills of trained human observers (Kaliouby & Robinson, 2005).
References
A. Ashraf, S. Lucey, J. F. Cohn, T. Chen, Z. Ambadar, K. Prkachin, P. Solomon, and B. Theobald. The painful face: pain expression recognition using active appearance models. In Proceedings of the 9th International Conference on Multimodal Interfaces, pages 9–14. ACM, 2007.
D. Messinger, M. Mahoor, S. Chow, and J. F. Cohn. Automated measurement of facial expression in infant–mother interaction: a pilot study. Infancy, 14(3):285–305, 2009.
J. F. Cohn, T. Kruez, I. Matthews, Y. Yang, M. Nguyen, M. Padilla, F. Zhou, and F. De la Torre. Detecting depression from facial actions and vocal prosody. In 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops (ACII 2009), pages 1–7. IEEE, 2009.
J. Shotton, A. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, R. Moore, A. Kipman, and A. Blake. Real-time human pose recognition in parts from single depth images. In CVPR, 2011.
J. Whitehill, G. Littlewort, I. Fasel, M. Bartlett, and J. Movellan. Toward practical smile detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(11):2106–2111, 2009.
M. Valstar and M. Pantic. Induced disgust, happiness and surprise: an addition to the MMI facial expression database. In Proceedings of the 3rd International Workshop on EMOTION: Corpora for Research in Emotion and Affect, page 65, 2010.
N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), volume 1, pages 886–893. IEEE, 2005.
R. Kaliouby and P. Robinson. Real-time inference of complex mental states from facial expressions and head gestures. In Real-Time Vision for Human-Computer Interaction, pages 181–200, 2005.
R. Rosenthal. Conducting judgment studies: some methodological issues. In The Handbook of Methods in Nonverbal Behavior Research, pages 199–234, 2005.