Fry, a professor in the mathematics of cities at University College London, accepts the argument made by O'Neil and others that algorithms make mistakes and that black-box scoring systems can be unfair. But, she says, that's not a reason to stop using them; it's a reason to improve them, and to learn to work with them rather than against them.
Take, for example, the COMPAS system used in some areas of the US to assess which arrested suspects are high-risk and which should get bail. It's now widely understood that COMPAS returns biased results because decades of human bias are embedded in the data it was fed during training. Fry asks: what would an unbiased version of the algorithm even look like?
To answer this sort of question, Fry begins by explaining what algorithms are, then moves quickly on to how we interact with them. We tend to believe they are more capable than they really are, yet we are quick to dismiss them when they make mistakes.
Garry Kasparov provides an example of the first problem: he has cited, as a factor in his loss to IBM's Deep Blue machine, his erroneous assessment that the algorithm was 'smarter' than it actually was. One reason was a clever tactical decision by Deep Blue's programmers: occasionally delaying the machine's response to make it look as if it was 'thinking'. Kasparov interpreted the delay as struggling, and was then thrown off balance when the computer played a surprisingly strong move. "The genius of the algorithm triumphed," Fry writes in a rare overstatement. In reality, the computer's programmers made clever use of human psychology.
Part of that psychology is 'algorithm aversion' -- our tendency to want to throw out machines entirely when they make mistakes, such as when GPS instructions nearly lead a driver off a cliff. We are, Fry writes, less tolerant of a machine's mistakes than of our own. Well, why not? A moderately mature human at least has some sense of the kind and magnitude of mistakes they typically make. A machine whose shortcomings can't be predicted is more dangerous to trust.
Let's work together
Ultimately, Fry, like many other scientists, is advocating partnership: human and machine working together can do better than either on its own. Fry pursues this discussion through chapters on power, data, justice, medicine, cars, crime, and art. In each, she explains how the relevant algorithms are constructed -- not in technical detail, but in sufficient outline to aid understanding.
For many of us, Fry's choice of title acts as a badge of street cred. Printing 'Hello, world' on screen is the first program we all learned to write. And sure enough: Fry was introduced to programming -- on a ZX Spectrum -- when she was seven.
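The tradition is the same in nearly every language; a minimal version in Python (chosen here purely for illustration -- on Fry's ZX Spectrum it would have been a line of BASIC) looks like this:

```python
# The canonical first program: print a greeting to the screen.
print('Hello, world')
```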