Washington: Robots that can easily find their way around new environments and manipulate objects - for example, handling a particular set of dishes and putting them in a particular rack - may be coming soon, thanks to a team led by an Indian-origin scientist.
"Robots still have a long way to go to learn like humans. We would be really happy if we could build a robot that would even act like a six-month-old baby," said Professor Ashutosh Saxena who is leading the team at Cornell University.
In their research, the scientists found that placing objects is harder than picking them up because there are many options: a cup is placed upright on a table but upside down in a dishwasher, so the robot must be trained to make those decisions.
"We just show the robot some examples and it learns to generalise the placing strategies and applies them to objects that were not seen before. It learns about stability and other criteria for good placing for plates and cups, and when it sees a new object -- a bowl -- it applies them," Saxena said.
In early tests they placed a plate, mug, martini glass, bowl, candy cane, disc, spoon and tuning fork on a flat surface, on a hook, in a stemware holder, in a pen holder and on several different dish racks.
Surveying its environment with a 3-D camera, the robot randomly tests small volumes of space as suitable locations for placement. For some objects it will test for "caging" -- the presence of vertical supports that would hold an object upright. It also gives priority to "preferred" locations: A plate goes flat on a table, but upright in a dishwasher.
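The search described above - randomly testing small volumes of space, checking for collisions and "caging" supports - might be sketched in simplified form as below. The function names, radii and scoring heuristics here are illustrative assumptions, not the actual Cornell system, which works on full 3-D camera data.

```python
# Hypothetical sketch: sample candidate placement locations in a point
# cloud and score them. A good spot is unoccupied and surrounded by
# vertical supports ("caging"), as the slots of a dish rack would be.

def count_points_near(cloud, centre, radius=0.05):
    """Count scene points within `radius` metres of a candidate centre."""
    cx, cy, cz = centre
    return sum(1 for (x, y, z) in cloud
               if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= radius ** 2)

def caging_score(cloud, centre, radius=0.06):
    """Reward vertical supports around (not inside) the candidate volume:
    points at a similar height, offset horizontally by roughly one radius."""
    cx, cy, cz = centre
    return sum(1 for (x, y, z) in cloud
               if abs(z - cz) < radius
               and radius < ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 < 2 * radius)

def best_placement(cloud, candidates):
    """Pick the collision-free candidate with the strongest caging."""
    scored = []
    for c in candidates:
        if count_points_near(cloud, c) > 0:  # volume already occupied
            continue
        scored.append((caging_score(cloud, c), c))
    return max(scored)[1] if scored else None
```

A "preferred location" prior (plate flat on a table, upright in a dishwasher) could then be added as an extra term in the score.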
After training, the robot placed most objects correctly 98 per cent of the time when it had seen the objects and environments previously, and 95 per cent of the time when working with new objects in a new environment. Performance could be improved, the researchers suggested, with longer training.
The scientists have also developed a system that enables a robot to scan a room and identify its objects.
Pictures from the robot's 3-D camera are stitched together to form a 3-D image of an entire room, which is then divided into segments based on discontinuities and distances between objects. The goal is to label each segment.
The team trained a robot by giving it 24 office scenes and 28 home scenes in which they had labelled most objects.
The computer examines such features as colour, texture and what is nearby and decides what characteristics all objects with the same label have in common. In a new environment, it compares each segment of its scan with the objects in its memory and chooses the ones with the best fit.
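The matching step described above amounts to comparing each scanned segment's features against labelled training examples and choosing the best fit. A minimal sketch, assuming segments have already been reduced to small numeric feature vectors (the feature values below are invented for illustration):

```python
# Hypothetical sketch of the labelling step: each segment becomes a
# feature vector (e.g. colour, texture statistics) and is matched
# against labelled training segments by nearest fit.

def distance(f1, f2):
    """Squared Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(f1, f2))

def label_segment(features, training):
    """Return the label of the best-fitting training example.
    `training` is a list of (label, feature_vector) pairs."""
    return min(training, key=lambda lf: distance(features, lf[1]))[0]
```

The real system learns what characteristics all objects sharing a label have in common, rather than memorising raw examples, but the "choose the best fit" decision has the same shape.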
"The novelty of this work is to learn the contextual relations in 3-D. For identifying a keyboard it may be easier to locate the monitors first, because the keyboards are found below the monitors," Saxena said.
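The keyboard-below-monitor relation Saxena describes can be expressed as a contextual bonus on the classifier's score. This is a hypothetical sketch; the coordinates, threshold and bonus value are assumptions for illustration:

```python
# Hypothetical sketch of a 3-D contextual relation: a segment is more
# likely a keyboard if it sits below an already-identified monitor.

def below(a, b, max_offset=0.5):
    """True if segment centre `a` is under centre `b`: lower in height (z)
    and roughly aligned horizontally (x, y)."""
    ax, ay, az = a
    bx, by, bz = b
    return az < bz and abs(ax - bx) < max_offset and abs(ay - by) < max_offset

def keyboard_score(base_score, centre, identified):
    """Boost the base classifier score when a monitor sits above.
    `identified` is a list of (label, centre) pairs found so far."""
    bonus = 0.25 if any(lbl == "monitor" and below(centre, c)
                        for lbl, c in identified) else 0.0
    return base_score + bonus
```

Locating the easier object first (the monitor) thus narrows the search for the harder one (the keyboard), which is the contextual idea behind the quote above.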
In tests, the robot correctly identified objects about 83 per cent of the time in home scenes and 88 per cent in office scenes. In a final test, it successfully located a keyboard in an unfamiliar room.