Tactical Information Systems
Biometric Identification Software

Tactical Information Systems Blog

Identity & Technology

Disney Tests Facial Biometrics to Understand Audience Reactions

Most large companies have dedicated research laboratories because even a small technological advantage over the competition can translate into a massive commercial advantage. Since Disney is in the movie business, they want to know whether their movies are having the desired effect on the audience. You don't want people laughing at scenes that are supposed to be scary, or bored by scenes that should be poignant.

Researchers attempt to infer a person's emotional state from "landmark" facial features. For example, a person who is smiling has the corners of their mouth higher than the middle.
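The mouth-corner heuristic above can be sketched in a few lines of Python. This is purely illustrative, not Disney's or TIS's actual code: the landmark names and the `margin` threshold are hypothetical, and real systems would get these points from a detector such as dlib's 68-point landmark model.

```python
# Illustrative sketch of landmark-based smile classification.
# Landmark names and the margin value are hypothetical.

def is_smiling(landmarks, margin=2.0):
    """Return True if both mouth corners sit higher than the mouth center.

    `landmarks` maps point names to (x, y) pixel coordinates. Image
    coordinates have the y-axis pointing down, so "higher" on the face
    means a *smaller* y value.
    """
    left = landmarks["mouth_corner_left"]
    right = landmarks["mouth_corner_right"]
    center = landmarks["mouth_center"]
    return (center[1] - left[1] > margin) and (center[1] - right[1] > margin)

# Toy example: corners 5 px above the mouth center read as a smile.
neutral = {"mouth_corner_left": (30, 60), "mouth_corner_right": (70, 60),
           "mouth_center": (50, 60)}
smile = {"mouth_corner_left": (30, 55), "mouth_corner_right": (70, 55),
         "mouth_center": (50, 60)}
print(is_smiling(neutral))  # False
print(is_smiling(smile))    # True
```

In practice a classifier would combine many such landmark relationships (eyebrow height, eye openness, mouth width) rather than a single rule.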

Classification of facial expressions

The researchers used a darkened 400-seat theater with infrared (IR) lighting to allow the computer to see the faces in the dark. Interestingly, the face images used in the research were only about 50x50 pixels, which is very small by biometric standards. Using facial landmarks, they were able to measure facial expressions such as smiling, laughing, and fear. After observing an audience member for 10 minutes, their algorithm could predict that member's responses for the rest of the movie with a relatively low error rate (~30%).

At TIS we don't work with emotional recognition of faces. However, the facial matching algorithms we use do measure all of the key facial landmarks: eyebrows, nose, mouth, chin, etc. We typically use these just to measure face composition for proper enrollment.
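As a rough illustration of what a landmark-based composition check for enrollment might look like, here is a minimal sketch. The landmark names, thresholds, and rules are assumptions made up for this example, not TIS's actual enrollment criteria.

```python
# Hypothetical enrollment composition check using eye landmarks.
# All names and thresholds are illustrative assumptions.

def enrollment_ok(landmarks, image_width, min_eye_dist=60):
    """Rough composition check: face large enough, level, and centered."""
    lx, ly = landmarks["eye_left"]
    rx, ry = landmarks["eye_right"]
    eye_dist = ((rx - lx) ** 2 + (ry - ly) ** 2) ** 0.5
    # Reject faces too small for reliable matching.
    if eye_dist < min_eye_dist:
        return False
    # Reject strongly tilted faces: the eye line should be near horizontal.
    if abs(ry - ly) > 0.15 * eye_dist:
        return False
    # The face should be roughly centered horizontally in the frame.
    face_mid = (lx + rx) / 2
    return abs(face_mid - image_width / 2) < 0.25 * image_width

# A well-centered, level face passes; an off-center one fails.
print(enrollment_ok({"eye_left": (260, 300), "eye_right": (380, 305)}, 640))  # True
print(enrollment_ok({"eye_left": (40, 300), "eye_right": (160, 300)}, 640))   # False
```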

I find the Disney research interesting, but I have always been skeptical of the ability of computer vision to really measure emotional state except in extreme situations such as shock, fear, or outbursts of laughter. If you have ever been extremely angry at a spouse or co-worker while they remained completely unaware, you know what I mean. Humans often show no change in facial expression for certain emotions, and if there isn't anything to see, computer vision isn't going to work either. A more biology-based approach measuring something like galvanic skin response, heart rate, or muscle tension would probably be more accurate (but of course more invasive as well).