When airline check-in kiosk manufacturer BCS teamed up with New Zealand interactive avatar developer Limbic IO last week, it might have opened a new frontier of customer service automation.
And the face of that change could well be Xyza’s.
Her name, like the rest of her, is a work in progress. Her creator, Dr Mark Sagar, says it has a sci-fi feel but actually means “from the sea”. As the technology and the model for Xyza both come from New Zealand it seems appropriate.
Sagar says creating such an avatar is an involved process. First, a highly detailed, fully expressive model is made from a scan of a real person.
A year ago, for instance, Sagar revealed an interactive avatar modelled on his real-life infant daughter Francesca.
“We fit our facial animation system to that data and we drive the face with our brain system customised for the application,” Sagar says.
Sagar says the model for Xyza was chosen for her overall appropriateness, expressiveness and her “classical face structure”.
The prototype check-in application has a clearly defined workflow, he says, so the check-in avatar does not need much intelligence; she is more of a guide.
“Her goal is to clarify,” he says. “For this application, the intelligence comes more in the details as her behaviour is generated ‘live’ and the way in which she interacts is adaptable, and we can use the learning systems to achieve the most efficient flow, for example, getting attention when necessary, detecting confusion, guiding [travellers].”
As the prototype develops, Sagar says, Limbic developers will be experimenting with more advanced applications of the learning systems they have been developing, and with how these could be integrated with other available information, such as flight status and weather reports.
Success, he says, means a faster, clearer and more pleasant customer experience.
All of that raises questions about the design of check-in kiosks. Sagar says current kiosks may be adaptable as they stand, but Limbic will be experimenting with that too.
The first prototype avatar with basic functionality should be ready in about six months.
“If successful this would then be integrated for rollout in actual systems over the next year, and more sophisticated behaviours and contingency management can be developed,” he says.
“Our work has several aspects to it. On a research side, it’s about understanding how we tick, through modelling.
“On an application side, it’s about providing a powerful information interface. A significant portion of the human brain is specialised to process faces, so it’s a natural mode of communication.
“In a nutshell, since we communicate the majority of information nonverbally, human computer interfaces can utilize this too, both in the detection and the response.”
A message delivered with a smile has a different meaning to the same message delivered with a frown, Sagar says, so pairing words with expression enriches the meaning of the message.
“For example, we can detect in a conversation if someone doesn’t understand, so we may stop, and clarify, or change the message.”
Contrast that, he says, with automated phone systems that just keep talking.