
A call to bring more human-centered design to artificial intelligence

Many AI systems are developed under the assumption that 'whatever we want from AI, we're going to press the button, and as if by magic, it's going to deliver the result to us.'
Written by Joe McKendrick, Contributing Writer

Artificial intelligence shouldn't have to be activated through a "big red button" that delivers opaque results everyone hopes are the final word on a given question. Rather, it should remain under some degree of control by humans in the loop, who can get a sense of what the results are telling them.

Photo: Joe McKendrick

That's the word from Ge Wang, associate professor at Stanford, who makes the case for a human-centered design approach to AI applications and systems. In a recent webcast hosted by Stanford HAI (the Institute for Human-Centered Artificial Intelligence), Wang urges AI developers and designers to step back and consider the important role of humans in the loop. "We're so far away from answers in AI, we don't yet know most of the questions," he points out.

Many of today's AI systems are designed around the big red button, he says. "Whatever we want from AI, we're going to press the button, and as if by magic, it's going to deliver the result to us," says Wang. The perception is that "AI has this magical quality, in the sense that it exhibits this, for lack of a better word, intelligence. And it's able to do complex tasks. AI is the most powerful pattern-recognizer that we've ever built." 

The question becomes, then, "what do we really want from AI?" Wang continues. "Do we want oracles all the time, that just give us the right answers without showing their work necessarily? Or, do we want tools? Things that we can use to learn to get better at? And tools that are, by definition, interactive to the human?"

The ways to bring humans more into the AI loop don't necessarily have to be complicated. One highly effective approach would be to introduce "sliders" into the user interfaces of AI applications, Wang suggests. AI systems design "could begin with a simple question: 'Could I put a slider on this?'" Wang says. "What would that mean? How would that make the system more useful or interesting?"

A slider would enable the user to adjust the intensity of the AI applied to a question -- from a light touch to a full-on AI output. Wang cites the example of one of his students' projects, an AI system that translates legal documents into plain English. "What's interesting is the addition of a slider that controls the legalese in the output," he relates. The slider can keep the document close to its original legalese, or take it all the way to completely translated plain language. "The moment that slider is added, this goes from a rigid big-button system to one that you can control, and one that different people can adapt to their own kinds of needs. What if our tools allowed us to actually do that with our AI?"
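
To make the idea concrete, here is a minimal sketch of what a slider-driven design might look like in code. The 0-to-1 slider scale, the build_prompt function, and the prompt wording are all hypothetical illustrations of the pattern Wang describes, not details of the student project itself:

# Hypothetical sketch: mapping a UI slider to the behavior of a
# plain-language translation system. The slider scale, thresholds,
# and prompt wording are illustrative assumptions only.

def build_prompt(document: str, slider: float) -> str:
    """Turn a slider position (0.0 = keep the legalese,
    1.0 = fully plain English) into a model instruction."""
    if not 0.0 <= slider <= 1.0:
        raise ValueError("slider must be between 0.0 and 1.0")

    # Translate the continuous slider into a graded instruction,
    # so the user controls *how much* rewriting happens.
    if slider < 0.25:
        style = "Preserve the original legal wording; only fix obvious ambiguity."
    elif slider < 0.5:
        style = "Lightly simplify, keeping most legal terminology intact."
    elif slider < 0.75:
        style = "Simplify substantially, defining any legal terms you keep."
    else:
        style = "Rewrite entirely in plain English for a general reader."

    return f"{style}\n\nDocument:\n{document}"

# Usage: the interface re-runs generation whenever the user moves the
# slider, so the output becomes something to steer rather than a
# one-shot "big red button" answer.
prompt = build_prompt("The party of the first part shall indemnify...", slider=0.6)

The design point is that the slider does not change the underlying model; it changes the request made of it, letting different users dial the same system to their own needs.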

The advantage "of this kind of human-in-the-loop design is that you often stand to gain significant transparency in how the system actually works," Wang states. "Also, depending on how you design it, it's a way to incorporate human judgment in effective ways. It also relieves pressure. We see this every time we try to build a system with humans in the loop: it shifts the pressure away from building the perfect algorithm behind the big red button. This thing doesn't have to be perfect out of the box forever, because you know what? Humans are able to refine the output."

In turn, an adaptation such as a slider that gives humans greater control over AI output "almost always enables more powerful systems, not less," Wang summarizes.

AI -- as with other advanced technologies -- won't succeed if it's simply dropped on top of people and organizations. People need to be able to leverage AI-based tools to create and operationalize ideas and approaches that will help make a better world for their customers and their employees.
