San Diego, CA – A hyper-realistic robot with the face of Einstein has taught itself to smile and make facial expressions.
Researchers at the University of California San Diego used machine learning to teach their robot to make realistic facial expressions.
“As far as we know, no other research group has used machine learning to teach a robot to make realistic facial expressions,” said Tingfan Wu, a computer science PhD student from the University of California San Diego Jacobs School of Engineering.
The Einstein robot head has about 30 facial muscles, each moved by a tiny servo motor connected to the muscle by a string. Today, these need to be set up manually so that the servos pull in the right combinations to make specific facial expressions. To begin to automate this process, the UCSD researchers looked to both developmental psychology and machine learning.
Developmental psychologists speculate that infants learn to control their bodies through systematic exploratory movements, including babbling to learn to speak. Initially, these movements appear to be executed in a random manner as infants learn to control their bodies and reach for objects.
“We applied this same idea to the problem of a robot learning to make realistic facial expressions,” said Javier Movellan, the senior author on the paper.
To begin the learning process, the researchers directed the Einstein robot head (Hanson Robotics’ Einstein Head) to twist and turn its face in all directions, a process called “body babbling.” During this period the robot could see itself in a mirror and analyze its own expression using facial expression detection software. This provided the data necessary for machine learning algorithms to learn a mapping between facial expressions and the movements of the muscle motors.
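The idea of learning such a mapping from babbling data can be sketched with a toy model. Everything below is a hypothetical simplification, not the paper's actual method: the "face" is a fixed linear map standing in for the robot plus its expression-detection software, the feature count is invented, and the mapping is fit by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SERVOS = 30     # the Einstein head has about 30 servo-driven "muscles"
N_FEATURES = 8    # hypothetical number of detected expression features

# Stand-in for the robot face + expression-detection software:
# an unknown linear map from servo commands to expression features.
TRUE_MAP = rng.normal(size=(N_FEATURES, N_SERVOS))

def detect_expression(servo_commands):
    """Pretend vision pipeline: servo commands -> expression features."""
    return TRUE_MAP @ servo_commands

# "Body babbling": issue random servo commands and record, via the
# mirror + detector, which expression each command produced.
babble_commands = rng.uniform(-1, 1, size=(500, N_SERVOS))
observed_features = babble_commands @ TRUE_MAP.T

# Learn the inverse mapping (expression features -> servo commands)
# from the babbling data by least squares.
W, *_ = np.linalg.lstsq(observed_features, babble_commands, rcond=None)

# Use the learned mapping to command an expression the robot never
# produced during babbling.
target = rng.normal(size=N_FEATURES)
commands = target @ W
achieved = detect_expression(commands)
```

In this linear toy, `achieved` matches `target` to numerical precision; the real system is nonlinear and noisy, which is why the researchers needed machine learning rather than a closed-form fit.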
Once the robot learned the relationship between facial expressions and the muscle movements required to make them, it could produce facial expressions it had never encountered during training, such as eyebrow narrowing.
“During the experiment, one of the servos burned out due to misconfiguration. We therefore ran the experiment without that servo. We discovered that the model learned to automatically compensate for the missing servo by activating a combination of nearby servos,” the authors wrote in the paper presented at the 2009 IEEE International Conference on Development and Learning.
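The compensation the authors observed relies on the face having more actuators than expression features, so several servo combinations can produce the same expression. A minimal sketch of that redundancy, again using an invented linear "face" rather than the paper's model: solve for the servo commands that reach a target expression while holding the broken servo at zero, and the remaining servos pick up the slack.

```python
import numpy as np

rng = np.random.default_rng(1)

N_SERVOS, N_FEATURES = 30, 8
# Hypothetical linear face: servo commands -> expression features.
A = rng.normal(size=(N_FEATURES, N_SERVOS))

def commands_for(target, broken=()):
    """Least-squares servo commands producing `target`,
    with any broken servos held at zero."""
    active = [i for i in range(N_SERVOS) if i not in broken]
    c = np.zeros(N_SERVOS)
    sol, *_ = np.linalg.lstsq(A[:, active], target, rcond=None)
    c[active] = sol
    return c

target = rng.normal(size=N_FEATURES)
full = commands_for(target)
degraded = commands_for(target, broken={12})  # servo 12 "burned out"
```

With 30 servos driving only a handful of expression features, the system is redundant: `degraded` keeps servo 12 off yet still reaches the target expression, loosely mirroring how the learned model compensated with nearby servos.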
“Currently, we are working on a more accurate facial expression generation model as well as a systematic way to explore the model space efficiently,” said Wu.