"I know to err is human," Agatha Christie once wrote, "but human error is nothing to what a computer can do if it tries." At this particular point in the hype cycle surrounding artificial intelligence, one of the most useful things technically knowledgeable people can do is disseminate a realistic assessment of what machines can and cannot do. In Artificial Unintelligence: How Computers Misunderstand the World, New York University assistant professor Meredith Broussard sets out to do just that.
Broussard has the perfect background for such an undertaking: she has been a software developer at AT&T Bell Labs and the MIT Media Lab, and has worked as a journalist for numerous top-quality publications. She particularly takes issue with the modern belief that anything a human can do a computer can do better, more fairly, and more objectively. Computers, she writes, are great at a lot of things humans are bad at -- performing complex arithmetic with very large numbers, for example. But that doesn't make them good for all the things people want them to do.
Even something like replacing books in schools isn't that simple. Books are inexpensive, reusable, durable, and have relatively few maintenance costs. Why replace them with iPads and computer networks that require maintenance contracts, frequent replacements, and teacher support and training? Similarly, computers are good at ranking things, but many things -- such as the likelihood a particular prison inmate will commit more crimes -- shouldn't be ranked. "Move fast and break things" only works as a mantra if you remember to fix the things you've broken.
Broussard joins Christian Wolmar in expressing doubt that we will be awash in self-driving cars any time soon. Her own sample ride in one nearly ended with a crash into a cement wall. "It happens," shrugged one of the engineers in charge. Granted, that was in 2007 and the cars have improved -- but the fundamental risks haven't changed. In 2017, after failing to find a company in Pittsburgh, PA that would book her a ride in a self-driving car, she posed as a customer and took a test drive in a Tesla Model X. Even then, the salesman said that "You know, regulations" barred him from turning on Autopilot. Her conclusion: it doesn't really work.
Broussard takes the trouble to back up her complaints. After several chapters in which she explains how computers, software, AI and machine learning actually work (including a worked example of creating an algorithm to analyse a public dataset), she moves on to consider the times they don't work, reviewing her experiences in journalism, education, and programming, and throwing in an encounter with self-driving cars.
On a start-up bus she explores the world of hackathons -- which, she says, mostly produce vaporware that never turns into products. Making anything of lasting value is hard, and takes more than a sleepless week.
Broussard concludes by recounting her efforts to build Bailiwick, a tool to help US reporters find stories about campaign financing more efficiently. In this case, the computer speeds up one aspect of the journalist's job, but doesn't replace the journalist. This, she argues, is our best hope: a human with a machine can outperform both machines and humans alone.