
Human-Centered AI, book review: A roadmap for people-first artificial intelligence

A focus on developing AI that helps people will dissolve much of the fear of lost jobs and machine control, argues Ben Shneiderman.
Written by Wendy M Grossman, Contributor

Human-Centered AI • By Ben Shneiderman • Oxford University Press • 400 pages • ISBN: 978-0-19-284529-0 • £20 / $25

About 20 years ago, I sat next to University of Maryland professor Ben Shneiderman at a conference dinner. We spent the time discussing twin software paradigms: in one, what we now call 'smart' software tried to guess our intentions, making its response annoyingly unpredictable; in the other, software with no adaptability did what it was told according to instructions that had to be precisely right. The hot future of the day was software agents that would negotiate on our behalf to get better prices on airline tickets and find mutually agreeable slots in which to schedule meetings. As if. 

As Shneiderman writes in his new book, Human-Centered AI, he's somewhat modified his stance in the intervening years, giving greater weight to ending the tedium of performing the same tasks over and over. However, he remains sceptical that AI will surpass or successfully imitate human intelligence -- scepticism that extends to new, highly contested applications such as emotion detection.

What is important to Shneiderman, then and now, is designing computer systems so they put the user at the centre. Incorporating human factors into consumer software became a widespread industry concern in the 1990s, when user interfaces shifted from requiring arcane, precisely typed commands to directly manipulating the graphical icons everyone uses today. 

Shneiderman argues that AI should be no exception, and that a focus on developing AI that helps people will dissolve much of the fear of lost jobs and machine control.  


As an example of the distinction he's drawing between more conventional approaches to AI and the human-centred approach he favours, Shneiderman begins by comparing Roombas and digital cameras. Users have very little control over the Roomba, which is designed with a minimalist user interface -- a couple of buttons -- and does the job of vacuuming carpets on its own without user input. Digital cameras, on the other hand, enable amateurs to be far better photographers while giving them many choices; their design allows users to explore.

While people love Roombas, the same 'rationalist' approach, when embodied in data-driven systems, becomes limiting and frustrating, whereas the 'empiricist' approach empowers humans.

In the bulk of the book, which grew out of 40 public lectures, Shneiderman works methodically through practical guides to three main sets of ideas. First, he lays out a framework to help developers, programmers, software engineers, and business managers think about AI design. Second, he discusses the value of the key AI research goals -- emulating human behaviour and developing useful applications. Finally, he discusses how to adapt existing practices of reliable software engineering, safety culture, and trustworthy independent oversight in order to implement ethical practices surrounding AI.

I'm not sure people are still as worried about AI and robots taking their jobs as they are that crucial decisions about their lives will be made by these machines -- what benefits they qualify for, whether their job or mortgage applications are ever seen by prospective employers and lenders, or what pay their work for a platform merits.

Shneiderman discusses aspects of this, too, calling attention to efforts to incorporate human rights into the ethics of AI system design. Few books on AI discuss how important it is to good design to apply the right sort of pressure to the corporate owners of AI systems and push them towards social fairness. This one does.

RECENT AND RELATED CONTENT 

DeepMind's 'Gato' is mediocre, so why did they build it?

The EU AI Act: What you need to know

Google I/O: To build better AI, Google invites others to join its AI Test Kitchen

Qualcomm plunges into the robotics market with new platform

IBM CEO: Artificial intelligence is nearing a key tipping point

Read more book reviews 
