The trajectory of books about new technologies follows a familiar pattern: first hype, then backlash, and finally a more considered view of what the technology might actually be good for.
The first hype books about algorithms appeared around 2012. Last year, sanity began to prevail with Cathy O'Neil's Weapons of Math Destruction. And now we have Hannah Fry's Hello World: Being Human in the Age of Algorithms, which seeks to find a sensible path somewhere between tossing algorithms at everything and running them over with a self-driving car.
Fry, a professor in the mathematics of cities at University College London, accepts O'Neil's and others' arguments that algorithms make mistakes and that black-box scoring systems can be unfair. But, she says, that's not a reason to stop using them. Instead, it's a reason to improve them and to learn to work with them rather than against them.
For example, take COMPAS, the risk-assessment system used in some US jurisdictions to score arrested suspects and help determine who is deemed high-risk and who gets bail. It's now widely understood that COMPAS returns biased results because decades of human bias are embedded in the data it was trained on. Fry asks a different question: what would an unbiased version of the algorithm even look like?
To answer this sort of question, Fry begins by explaining what algorithms are, moving on quickly to how we interact with them. We tend to believe they are more capable than they really are, yet are quick to dismiss them when they make mistakes.
Garry Kasparov provides an example of the first problem: he has cited, as a factor in his loss to IBM's Deep Blue machine, his erroneous assessment that the algorithm was 'smarter' than it actually was. One reason was a clever tactical decision by Deep Blue's programmers: the machine occasionally delayed its response to make it look like it was 'thinking'. Kasparov interpreted the delay as a sign of struggle, and was then thrown off-balance when the computer played a surprisingly strong move. "The genius of the algorithm triumphed," Fry writes in a rare overstatement. In reality, the computer's programmers made clever use of human psychology.
Part of that psychology is 'algorithm aversion' -- our tendency to want to throw out machines entirely when they make mistakes, such as when GPS instructions lead someone to nearly drive off a cliff. We are, Fry writes, less tolerant of the machine's mistakes than of our own. Well, why not? A moderately mature human at least has some sense of the kind and magnitude of the mistakes they typically make. A machine whose shortcomings can't be predicted is more dangerous to trust.
Ultimately, what Fry, like many other scientists, is advocating is partnership. Human and machine working together can do better than either on their own. Fry pursues this discussion through chapters on power, data, justice, medicine, cars, crime, and art. In each of these, she explains how the relevant algorithms are constructed -- not in technical detail, but in sufficient outline to aid understanding.
For many of us, Fry's choice of title acts as a badge of street cred: printing 'Hello, world' on-screen is the first thing we all learned to program. And sure enough, Fry was introduced to programming -- on a ZX Spectrum -- when she was seven.
RECENT AND RELATED CONTENT
Inside the black box: Understanding AI decision-making
Artificial intelligence algorithms are increasingly influential in peoples' lives, but their inner workings are often opaque. We examine why, and explore what's being done about it.
The AI, machine learning, and data science conundrum: Who will manage the algorithms?
Artificial intelligence and machine learning are being adopted at a rapid clip, and the management headaches are just about to begin.
What is artificial general intelligence?
Everything you need to know about the path to creating an AI as smart as a human.
How GDPR will change the way we build machine learning algorithms (TechRepublic)
A new report from O'Reilly reveals that in order to keep pace with developing privacy needs, machine learning needs to evolve.
Is AI less biased than human recruiters? 56% of job applicants think so (TechRepublic)
Of applicants who have experienced discrimination, nearly half believe AI will give them a better chance of getting hired, according to a Montage report.
Instagram explains why it made you so mad with the algorithmic feed (CNET)
It only wants you to see what's most relevant to you, the photo-sharing app tells reporters.