We often smile out of politeness,
sometimes when we're amused, or even when we're frustrated.
Ever wondered what it is about those smiles
that makes them so different?
We humans are usually pretty good at perceiving
these smiles correctly.
However, we still don't have a good idea
of the low-level features of a smile
that make these expressions so different.
So in our ongoing work, we try to zoom
in to different kinds of smiles and deconstruct them
into low-level facial features.
And then we wondered whether it's
possible to train a computer to recognize some of these smiles.
The major bottleneck of this kind of research
is that we need to have a lot of samples of spontaneous smiles.
So for our work, we brought people into the lab.
We gave them a long tedious form to fill out.
The form was intentionally designed to be buggy.
So regardless of what they typed, as soon
as they hit the Submit button, it would clear the form
and take them back to the beginning.
And we were surprised to see that a lot of people
were extremely frustrated, yet they
were smiling to cope with that environment.
In that snapshot, you'll see two things.
Number one, this participant shows action unit 12 (AU 12), also
known as the lip corner puller, and also action unit 6
(AU 6), the cheek raiser.
Based on research, when these two muscles are activated,
you're more likely to be in a happy state.
However, if you follow the video through,
you will see that this person was actually frustrated.
So that tells you that, instead of looking
at a snapshot, looking at the pattern of how
the signal progresses through time
may tell you more about the expression.
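The idea of reading the temporal pattern rather than a single snapshot can be sketched in code. Everything below is illustrative: the synthetic AU 12 intensity signals, the features (time to peak, number of bursts), and the threshold rule are assumptions for the sketch, not the features or classifier used in the actual study.

```python
# Sketch: classify a smile from how AU 12 (lip corner puller)
# intensity evolves over time, rather than from one snapshot.
# Signals, features, and thresholds here are illustrative only.

def temporal_features(signal):
    """Summarize how an AU 12 intensity signal evolves over time."""
    peak = max(signal)
    rise_frames = signal.index(peak) + 1  # frames until peak intensity
    # Count local maxima: repeated on/off bursts show up as many peaks.
    n_peaks = sum(
        1 for i in range(1, len(signal) - 1)
        if signal[i] > signal[i - 1] and signal[i] > signal[i + 1]
    )
    return peak, rise_frames, n_peaks

def classify(signal):
    """Toy rule: a gradual, single-peaked rise suggests delight;
    several quick bursts suggest a frustrated smile."""
    peak, rise_frames, n_peaks = temporal_features(signal)
    return "frustrated" if n_peaks >= 2 and rise_frames < 5 else "delighted"

# Synthetic examples: a gradual smile vs. rapid on/off bursts.
delighted = [0.0, 0.1, 0.3, 0.5, 0.7, 0.8, 0.8, 0.7, 0.5, 0.3]
frustrated = [0.0, 0.6, 0.1, 0.7, 0.1, 0.6, 0.0, 0.5, 0.1, 0.0]
print(classify(delighted))   # delighted
print(classify(frustrated))  # frustrated
```

Note that both signals reach a similar peak intensity, so a snapshot-based rule would label both as smiles; only the temporal shape separates them.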
So we had two different kinds of smiles-- delighted
smiles and frustrated smiles.
For delighted smiles, our algorithms
performed as well as humans.
However, for frustrated smiles, humans
performed below chance, whereas the algorithm
achieved more than 90% accuracy.
One possible explanation is that we humans usually
zoom out and try to interpret an expression as a whole,
whereas a computer algorithm can exploit
the nitty-gritty details of the signal, which
is much richer than just zooming out
and looking at the high-level picture.
One application of our research that we are excited about
is helping people with autism interpret expressions better.
Because often in school and in therapy,
they're told that if they see a lip corner pull,
the person is more likely to be happy.
However, in our work we demonstrate
that people can smile
in very different contextual scenarios,
and the meaning can be totally different.
So if we can deconstruct a smile into low-level features,
perhaps we can teach those features explicitly,
and people with autism may get better at interpreting smiles.