Washington: Indian-origin researchers have developed a new "smart" robot that can predict human actions with surprising accuracy.
The robot can refill your coffee cup and hold the door open for you, in addition to performing several other tasks.
The robot, developed at the Personal Robotics Lab at Cornell University, has learned to foresee human actions and adjust accordingly.
The robot was programmed to refill a person's cup when it was nearly empty. To do this, the robot must plan its movements in advance and then follow the plan. But if a human sitting at the table happens to raise the cup and drink from it, the robot might pour a drink into a cup that isn't there.
But when the robot sees the human reaching for the cup, it can anticipate the human action and avoid making a mistake. In another test, the robot observed a human carrying an object toward a refrigerator and helpfully opened the refrigerator door.
Hema S Koppula, Cornell graduate student in computer science, and Ashutosh Saxena, assistant professor of computer science, will describe their work at the International Conference on Machine Learning in June in Atlanta.
From a database of 120 3-D videos of people performing common household activities, the robot has been trained to identify human activities by tracking the movements of the body (reduced to a symbolic skeleton for easy calculation), breaking them down into sub-activities like reaching, carrying, pouring or drinking, and associating those activities with objects.
Because each person performs tasks a little differently, the robot must build a model general enough to match new events.
"We extract the general principles of how people behave. Drinking coffee is a big activity, but there are several parts to it," said Saxena.
The robot builds a "vocabulary" of such small parts that it can put together in various ways to recognise a variety of big activities, he explained.
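The idea of composing a "vocabulary" of small sub-activities into larger activities can be sketched roughly as below. This is an illustrative toy, not the Cornell system's actual model; all labels and the prefix-matching logic are invented for the example.

```python
# Toy "vocabulary" of sub-activity labels (illustrative names only).
SUB_ACTIVITIES = {"reaching", "moving", "pouring", "drinking", "placing"}

# Larger activities expressed as ordered sequences of sub-activities,
# mirroring the "several parts" of a big activity described in the article.
ACTIVITIES = {
    "drinking coffee": ["reaching", "moving", "drinking", "placing"],
    "pouring a drink": ["reaching", "moving", "pouring", "placing"],
}

def match_activity(observed):
    """Return the big activities whose sub-activity sequence starts with
    what has been observed so far (a crude prefix match)."""
    return [name for name, seq in ACTIVITIES.items()
            if seq[:len(observed)] == observed]

# After seeing only "reaching" then "moving", both activities remain
# possible; a further "pouring" narrows it to one.
print(match_activity(["reaching", "moving"]))
print(match_activity(["reaching", "moving", "pouring"]))
```

The same handful of sub-activity labels recombines into many different activities, which is the point of the vocabulary: recognising the small parts once lets the system recognise a variety of big activities.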
Observing a new scene with its Microsoft Kinect 3-D camera, the robot identifies the activities it sees, considers what uses are possible with the objects in the scene and how those uses fit with the activities.
It then generates a set of possible continuations into the future, such as eating, drinking, cleaning or putting away, and finally chooses the most probable. As the action continues, it constantly updates and refines its predictions.
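The generate-continuations-then-refine loop described above resembles a simple probabilistic update. The sketch below is an assumption-laden illustration, not the authors' algorithm: the continuation labels, prior probabilities and likelihoods are all invented for the example.

```python
# Prior belief over possible continuations (invented numbers).
continuations = {
    "drinking": 0.4,
    "pouring": 0.3,
    "cleaning": 0.2,
    "putting away": 0.1,
}

# Invented likelihoods: how well an observed cue fits each continuation,
# i.e. P(cue | continuation).
likelihood = {
    ("lifts cup", "drinking"): 0.8,
    ("lifts cup", "pouring"): 0.3,
    ("lifts cup", "cleaning"): 0.1,
    ("lifts cup", "putting away"): 0.1,
}

def update(belief, cue):
    """One Bayesian-style refinement step: reweight each continuation by
    how well it explains the newly observed cue, then renormalise."""
    posterior = {c: p * likelihood.get((cue, c), 0.05)
                 for c, p in belief.items()}
    total = sum(posterior.values())
    return {c: p / total for c, p in posterior.items()}

# Seeing the human lift the cup shifts the belief toward "drinking".
belief = update(continuations, "lifts cup")
best = max(belief, key=belief.get)
print(best)  # "drinking"
```

Each new observation repeats the update, which is one way a system can "constantly update and refine" its prediction as an action unfolds.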
In tests, the robot made correct predictions 82 per cent of the time when looking one second into the future, 71 per cent for three seconds and 57 per cent for 10 seconds. The robot was also more accurate at identifying current actions when it was running the anticipation algorithm.