With more of us expecting to survive into old age, it is probably inevitable that robots will one day be enlisted to shoulder the burden of care and companionship for the elderly.
The potential for robots in this area hasn't been lost on the European Commission, which put almost €5m ($5.6m) into funding the three-year Accompany project that produced the technology used by the robot pictured above.
The robot itself is a commercial Care-O-Bot 3 made by the Fraunhofer research institute in Germany. But the technology it uses and the intelligent environment in which it operates have been developed by Accompany project researchers from nine institutions across five countries, led by Dr Farshid Amirabdollahian, associate professor in adaptive systems at the University of Hertfordshire's School of Computer Science in the UK.
Among goals set for the Accompany project was the idea that the robot should be able to learn and show empathy in dealings with people. Devising ethical guidelines for service robots for the elderly was also a significant focus of the research.
Liveried in an abstract representation of a butler's morning suit, the robot seen here is using its right arm to serve a drink in a sensor-equipped beaker, which allows the machine to monitor how much fluid its human companion has consumed.
Its shorter left arm holds what serves as a tray in one orientation but which can flip over to reveal a detachable customised Samsung tablet that acts as the principal communications interface between the automaton and the human. Integrated proximity sensors tell the robot when the tray is empty.
"Part of what we've done is software, of course. But a big part of it is hardware and integrating the different technologies together, which is a very difficult task because they're not really meant to work together. When you integrate them and then put them to work in a certain scenario, you find out all sorts of surprises," Amirabdollahian said.
As part of the project, four robot houses were created: in France, Germany and the Netherlands, and in a residential area near the University of Hertfordshire campus in England.
The robotic environments share common features, such as overhead 360-degree cameras providing fish-eye views of the rooms below to track and record the movements and relative positions of robots and humans.
The houses also employ sensors on doors and cabinets to show what has been opened, together with bot plugs, which can relay data on how much electricity is being consumed by individual devices. So if a fridge door is left open, triggering a rise in power consumption, that information is sent to the central computer and potentially to the robot as well.
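The fridge-door scenario can be sketched as a simple sensor-fusion rule, combining a door-contact reading with a bot plug's power measurement. This is purely an illustration: the function name, units and thresholds are assumptions, not details from the project.

```python
# Illustrative sketch of the robot house's alerting logic: a door-contact
# sensor and a "bot plug" power reading are fused to flag a fridge door
# left open. Thresholds and names are invented for the example.

def fridge_alert(door_open_seconds: float, watts: float,
                 baseline_watts: float = 90.0,
                 max_open_seconds: float = 60.0) -> bool:
    """Alert if the door has been open too long AND the plug reports a
    sustained rise in power draw (the compressor working harder)."""
    door_too_long = door_open_seconds > max_open_seconds
    power_spike = watts > 1.2 * baseline_watts
    return door_too_long and power_spike

# Door open two minutes, power well above baseline -> alert the robot.
print(fridge_alert(120, 130))
```

Requiring both signals, rather than either alone, is one way such a system could avoid false alarms from a brief door opening or a routine compressor cycle.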
As seen here, these autonomous robots are taught certain social behaviours, such as where to position themselves. They also learn about possible activities that the individual they are working with might want to undertake, depending on the time of day and the presence of other people.
"Technology is normally very cold. We try to change this by teaching this technology, the robot itself, some social mannerisms, like not coming too close to you or not approaching from angles that you don't prefer," Amirabdollahian said.
"From the studies we've done, we know that each individual will have their personal distance. They will have an intimate space that they don't want to be intruded into by anyone. An elderly person could be walking - whether the robot is going ahead or following is something that will need to be taught to the machine."
The robot is also equipped with object- and facial-recognition software. These systems enable it to recognise that its companion has, for example, just been given flowers. It can also send the identity of the person giving the bouquet back to the central tracking system, provided he or she is a regular visitor to the room.
A skeletal-tracking algorithm enables the robot to detect that the human has received flowers.
The robot's twin cameras, which can roll over backwards to provide a rear view, are for its own use and for the human, who can view on the tablet what the robot is seeing. So, for example, the robot could be alerted to the entry bell ringing and go to the transparent front door. Its human companion could then see who is visiting through the robot's cameras.
Here are two screenshots from the tablet that the human uses to communicate with the robot. They show the robot's view of the kitchen and the objects it has recognised and labelled using its stereo cameras and data from the ceiling tracker. The white cross in a red circle shows an item that the human has selected for the robot to, for example, pick up.
The tablet has a flexible frame. If the elderly person squeezes it hard, that communicates urgency to the robot. The mouth-like mask overlaying the images conveys emotion and in this case narrows with the urgency of the task. The robot can also use the mask to convey information back to the individual.
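The squeeze-to-urgency interface could be modelled as a simple mapping from squeeze force to an urgency label, with the mask narrowing as urgency rises. The Accompany interface itself is not described in that detail here, so the scale, thresholds and function names below are assumptions for illustration only.

```python
# Sketch of the flexible-frame tablet's squeeze-to-urgency mapping.
# A normalised squeeze reading (0.0-1.0) and the thresholds below are
# illustrative assumptions, not the project's actual values.

def urgency_level(squeeze_force: float) -> str:
    """A harder squeeze signals a more urgent request to the robot."""
    if squeeze_force >= 0.8:
        return "urgent"
    if squeeze_force >= 0.4:
        return "normal"
    return "low"

def mask_width(squeeze_force: float) -> float:
    """The mouth-like mask narrows with urgency (1.0 = fully relaxed)."""
    return max(0.2, 1.0 - squeeze_force)

print(urgency_level(0.9), mask_width(0.9))  # hard squeeze, narrow mask
```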
"If the mask is a happy mask, the robot is happy. If the mask is sad, the robot is sad. A sad mask would imply, for example, 'Hey, you haven't taken your medicine and it's time you did or you haven't had a glass of water for over an hour," Amirabdollahian said.
"It's not necessarily an approval or disapproval. But [it changes] as what you're doing is growing more serious - for example, you haven't had your medicine and you really should have. In one scenario the robot brings a glass of water and puts it in front of the person and the person doesn't drink it.
"Then the robot will come back again and point by looking at the glass, and you can see on your tablet that the robot is looking at the glass of water. At that point the face turns a little sadder."
The aim of the mouth-like mask, developed at the University of Siena in Italy, is to establish that a number of robot-human interactions can take place effectively and with emotional content but without any language whatsoever.
The robot, working with the robot house computer, has a number of algorithms at its disposal to work out what the individual is doing at any given moment and to check on his or her wellbeing.
As well as the skeletal-tracking software, sensors in furniture such as chairs, sofas and beds also provide vital information about an elderly person's posture and activity levels.
"We use the overhead camera and the robot's cameras to detect the pose of the individual, because we're interested in knowing when someone has had a fall. It's one of things we can get technology to tell us: 'Look, someone has had a fall and maybe we should do something about it," Amirabdollahian said.
"The first thing we need to know is what posture you have. So when, for example, I'm going to serve myself a cup of tea, I will have a certain set of postures that will that will support serving a cup of tea. The upper part of my body will have a certain pose.
"So one of the things that the upper camera will see and the robot camera will contribute to is how we can be very accurate and how we can still get to a certain level of detection with an acceptable level of accuracy."
The project also looked at the pitch of voice commands to convey urgency to the robot.
"The robot can also use its onboard speakers to tell you things. It can tell you things in audio - for example, there's someone at the door. One of the things we tried not to replicate was the act of talking with natural language to the robot. There are many projects doing that," he said.
Now that many of the Accompany technologies have passed their proof-of-concept stage, the researchers are looking at ways to bring them to market.
"If these types of machines are going to be made into products, we need to make them much more modular so that then I can combine different modules to fit into certain requirements," Amirabdollahian said.
"For example, not everybody needs my robot to have an arm because not everybody has a problem with physical manipulation. Sometimes people just have a problem in mobility. So maybe the robot can become just a mobility aid."