A robot face that looks like a real talking person
Washington: Researchers at the Institute for Cognitive Systems (ICS) at TU München, in collaboration with a group in Japan, have developed a new robot face that can talk and respond like a human.
Mask-bot can already reproduce simple dialog. When researcher Dr. Takaaki Kuratate says “rainbow”, for example, it flutters its eyelids and responds with an astoundingly elaborate sentence on the subject: “When the sunlight strikes raindrops in the air, they act like a prism and form a rainbow.”
And when it talks, Mask-bot also moves its head a little and raises its eyebrows to create a knowledgeable impression.
“Mask-bot will influence the way in which we humans communicate with robots in the future,” said Prof. Gordon Cheng, head of the ICS team.
Mask-bot can display realistic three-dimensional heads on a transparent plastic mask and can change the face on demand.
It is also bright enough to function in daylight, thanks to a small but particularly bright projector and a coating of luminous paint sprayed onto the inside of the plastic mask.
The team claimed this part of the new system could soon be deployed in videoconferences.
“Usually, participants are shown on screen. With Mask-bot, however, you can create a realistic replica of a person that actually sits and speaks with you at the conference table. You can use a generic mask for male and female, or you can provide a custom-made mask for each person,” explained Takaaki Kuratate.
To replicate facial expressions, Takaaki Kuratate developed a talking-head animation engine: a system in which a computer filters an extensive series of face-motion data, collected from people with a motion-capture system, and selects the facial expressions that best match a specific sound (a phoneme) as it is spoken.
The computer extracts a set of facial coordinates from each of these expressions, which it can then assign to any new face, thus bringing it to life.
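The article does not publish Kuratate's engine, so the following is only an illustrative sketch of the idea described above, with entirely hypothetical landmark names and coordinates: each phoneme is mapped to a best-matching captured expression, and that expression's landmark displacements (relative to the capture subject's neutral pose) are transferred onto a new face's own neutral pose.

```python
# Hypothetical illustration of phoneme-to-expression retargeting.
# All landmark names and coordinate values are invented for the sketch.

# Neutral pose of the motion-capture subject (landmark -> 2-D position).
NEUTRAL = {"mouth_corner_l": (40.0, 80.0), "mouth_corner_r": (60.0, 80.0)}

# Best-matching captured expression per phoneme, as selected from the
# motion-capture corpus.
PHONEME_EXPRESSIONS = {
    "AA": {"mouth_corner_l": (38.0, 86.0), "mouth_corner_r": (62.0, 86.0)},  # open jaw
    "M":  {"mouth_corner_l": (41.0, 79.0), "mouth_corner_r": (59.0, 79.0)},  # closed lips
}

def retarget(phoneme, new_face_neutral):
    """Apply a phoneme's expression to a new face by transferring each
    landmark's displacement from the capture subject's neutral pose onto
    the new face's neutral pose."""
    expr = PHONEME_EXPRESSIONS[phoneme]
    out = {}
    for name, (nx, ny) in new_face_neutral.items():
        ex, ey = expr[name]        # captured expression position
        bx, by = NEUTRAL[name]     # capture subject's neutral position
        out[name] = (nx + (ex - bx), ny + (ey - by))
    return out

# Usage: retarget the "AA" expression onto a differently proportioned face.
new_face = {"mouth_corner_l": (100.0, 200.0), "mouth_corner_r": (140.0, 200.0)}
print(retarget("AA", new_face))
# {'mouth_corner_l': (98.0, 206.0), 'mouth_corner_r': (142.0, 206.0)}
```

Transferring displacements rather than absolute positions is what lets one set of captured coordinates animate any new face, as the article describes.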
Emotion synthesis software delivers the visible emotional nuances that indicate, for example, when someone is happy, sad or angry.
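One simple way such emotional nuance can be layered onto a speech pose (the actual software is not described in the article, so this is a hypothetical sketch) is to interpolate each landmark a fraction of the way toward an emotion's target pose:

```python
# Hypothetical emotion-blending sketch: linearly interpolate landmark
# positions between a speech pose and an emotion pose. All names and
# values are invented for illustration.

def blend(base, emotion, weight):
    """weight = 0.0 keeps the base pose, weight = 1.0 is the full
    emotion pose; values in between give subtler nuances."""
    out = {}
    for name, (bx, by) in base.items():
        ex, ey = emotion[name]
        out[name] = (bx + weight * (ex - bx), by + weight * (ey - by))
    return out

speech_pose = {"mouth_corner_l": (100.0, 200.0)}
happy_pose  = {"mouth_corner_l": (98.0, 196.0)}   # corner raised and pulled in

# A half-strength smile layered on top of the speech pose:
print(blend(speech_pose, happy_pose, 0.5))
# {'mouth_corner_l': (99.0, 198.0)}
```

Varying the weight over time would let the same mechanism fade an emotion in and out while the face keeps talking.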