Berlin: Researchers have developed a robotic bartender and studied how it can understand human communication and serve drinks appropriately by learning to ignore some data and focus on social signals.
A team at Bielefeld University in Germany invited participants into the lab and asked them to step into the shoes of its robotic bartender, James.
The participants saw and heard what the robot perceived and selected actions from its repertoire.
"We asked ourselves how a human bartender solves the problem and whether a robotic bartender can use similar strategies," said lead researcher Jan de Ruiter, from Bielefeld University.
"We teach James how to recognise if a customer wishes to place an order," said de Ruiter.
The participants sat in front of a computer screen and had an overview of the robot's data: visibility of the customer, position at the bar, position of the face, angle of the body and angle of the face to the robot.
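The per-customer record shown on the screen can be sketched as a simple data structure. This is purely illustrative; the field names and types are assumptions, not the actual format used by James:

```python
from dataclasses import dataclass

@dataclass
class CustomerSnapshot:
    """Hypothetical sketch of the sensor data the participants saw."""
    visible: bool                    # visibility of the customer
    at_bar: bool                     # position at the bar
    face_position: tuple[float, float]  # position of the face in the camera frame
    body_angle: float                # angle of the body to the robot, in degrees
    face_angle: float                # angle of the face to the robot, in degrees

# One snapshot per customer per time step, e.g.:
snapshot = CustomerSnapshot(
    visible=True, at_bar=True,
    face_position=(0.4, 0.6), body_angle=12.0, face_angle=5.0,
)
```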
This data was recorded during a trial session with the bartending robot James at its own mock bar in Munich.
For the trial, customers were asked to order a drink from James and to rate their experience afterwards. In the lab, the participants observed on the screen what the robot had recognised at the time.
For example, they were shown if customers said something ("I would like a glass of water, please") and how confident the robotic speech recognition had been.
The participants had to decide in each step what they would do as a (robotic) bartender. They selected an action from the robot's repertoire.
"This is similar to selecting an action from a character's special abilities in a computer game. For example, they could ask which drink the customer would like ("What would you like to drink?"), turn the robot's head towards the customer, serve a drink - or just do nothing," de Ruiter said.
"Customers wish to place an order if they stand near the bar and look at the bartender. It is irrelevant if they speak," said Sebastian Loth, co-author of the study.
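The rule Loth describes can be written as a simple predicate. The threshold below is a hypothetical value for illustration; the study reports only that proximity and gaze matter while speech does not:

```python
def wants_to_order(at_bar: bool, face_angle_deg: float, is_speaking: bool) -> bool:
    """Order intent per the study's finding: standing near the bar plus
    looking at the bartender. Speech is deliberately ignored."""
    GAZE_THRESHOLD = 20.0  # assumed tolerance, in degrees
    del is_speaking        # irrelevant for detecting order intent
    return at_bar and abs(face_angle_deg) <= GAZE_THRESHOLD
```

Under this rule, a silent customer standing at the bar and facing the robot counts as wanting to order, while a speaking customer who is away from the bar does not.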
"Our new study focussed on the bartenders' actions. For example, the participants did not speak to their customers immediately; instead, they turned the robot towards the customer and looked at them," Loth said.
"This eye contact is a visual handshake. It opens a channel such that both parties can speak," he said.
Once it is established that the customer wishes to place an order, the body language becomes less important.
"At this point, the participants focussed on what the customer said. For example, if the camera lost the customer and the robot believed the customer was 'not visible', the participants ignored this visual information," Loth said.
"They continued speaking, served the drink or asked for a repetition of the order. That means that a robotic bartender should sometimes ignore data," Loth said.
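The state-dependent weighting the study observed, relying on visual cues before engagement and on speech afterwards, can be sketched as a minimal decision function. The state and action names are assumptions for illustration, not James's actual repertoire:

```python
from typing import Optional

def choose_action(state: str, visible: bool, utterance: Optional[str]) -> str:
    """Pick an action based on interaction state, not raw sensor data alone."""
    if state == "IDLE":
        # Visual handshake first: turn towards and look at the customer
        # before speaking.
        return "look_at_customer" if visible else "do_nothing"
    if state == "ORDERING":
        # Ignore a lost camera track ('not visible'); trust the speech
        # channel instead, asking for a repetition if nothing was heard.
        if utterance is None:
            return "ask_to_repeat"
        return "serve_drink"
    return "do_nothing"
```

Note that in the `ORDERING` state the `visible` flag is never consulted: this is the "sometimes ignore data" behaviour the participants demonstrated.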