
Why futurists are (almost) always wrong

Written by Robin Harris, Contributor

A few weeks ago I looked at the Silicon Valley truism that "scale changes everything." But why does it? And how does that affect our ability to predict the future?

Professionals are familiar with economies of scale and learning curves. These concepts were codified in the 1940s. But the scale changes we're talking about are exponential, not linear. Not a doubling, but 10x, 100x, and more.

Humans are poor at estimating the effects of exponential growth. Scale changes on the order of 10x and beyond are exponential, and therefore fall into the human cognitive abyss. We may know that 10x growth is happening, but our brains simply can't imagine its impact.
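To make that concrete, here's a back-of-the-envelope sketch in Python; the starting number and the number of jumps are arbitrary, chosen only for illustration:

    # Hypothetical illustration: repeated 10x jumps vs. linear intuition.
    start = 1_000  # say, servers in a data center

    for jumps in range(1, 5):
        print(f"after {jumps} 10x jump(s): {start * 10**jumps:,}")

    # Linear intuition ("four jumps, call it 40x") suggests roughly 40,000.
    # The exponential reality after four 10x jumps is 10,000,000.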

Exponential scale

This concept of exponential scale impact has its roots in software engineering for Ultra-Large-Scale (ULS) systems. In 2006, Linda Northrop, a researcher at the Software Engineering Institute at Carnegie Mellon University, delivered a talk entitled "Scale Changes Everything."

Her focus was on ULS systems for the US military. But her conclusions resonated throughout the tech industry, where Google and Amazon were grappling with building warehouse-scale computers -- 10 to 1,000 times larger than enterprise data centers -- to handle exploding computing demands.

In Northrop's view, ULS systems undermined traditional software engineering assumptions. Her list of ULS system requirements includes:

  • Decentralized control
  • Inherently conflicting, unknowable, and diverse requirements
  • Continuous evolution and deployment
  • Heterogeneous, inconsistent, and changing elements
  • Erosion of the people/system boundary
  • Failures are normal and expected
  • New paradigms for acquisition and policy

Sounds like how the cloud has evolved, doesn't it?

While "scale changes everything" is now a Silicon Valley truism, it is still not widely understood. Let's look at a stark example of how scale changed our world in ways we are still grappling with: machine learning (ML), a major subset of artificial intelligence.

The rise of machine learning

Essentially, ML uses a software program to perform a task, such as facial recognition. But to "learn," the program needs to be trained, and the more training, the better the program performs. For facial recognition, training requires millions of photos labeled "face" as well as many photos labeled "no face."
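Here's a minimal sketch of that "label, then train" idea in Python. The synthetic feature vectors stand in for face and no-face photos, and the simple classifier is just a placeholder for the deep networks real systems use; none of this is a production recipe:

    # Minimal sketch: label data, then train a classifier on it.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for photos: each "photo" is a 64-value feature vector.
    faces = rng.normal(loc=1.0, size=(500, 64))       # labeled "face"
    non_faces = rng.normal(loc=-1.0, size=(500, 64))  # labeled "no face"

    X = np.vstack([faces, non_faces])
    y = np.array([1] * 500 + [0] * 500)  # the labels are what the program learns from

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print("training accuracy:", clf.score(X, y))

More, and more varied, labeled examples are what drive accuracy up -- which is why the training data matters as much as the algorithm.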

Thus, even if ML software had existed in the 1980s, and even if the enormous compute power it needs had been available, we still couldn't have built today's facial recognition systems, because the images required for training didn't exist. But with the rise of digital photography and the internet, billions of labeled photos are now available, at virtually zero cost, to train ML facial recognition.

Several things had to happen to create facial recognition systems. We needed billions of digitized and labeled photos, which required cheap digital cameras. Then there needed to be centralized, low-cost, and accessible storage: The cloud.

Once all those pieces were in place, the innovations in ML circa 2012 could rapidly be turned into valuable products. The lesson here is that not only does scale change everything within a single domain, but scale changes in adjacent domains -- digital photos, digital storage -- change even more.

The Storage Bits take

Combining exponential change with multivariate phenomena -- two things humans are bad at estimating and understanding -- is a challenging analytical problem. That's why futurists are so often wrong: There are too many variables and unknown feedback and feedforward loops.
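As a toy illustration, here's how a modest error in a single exponential assumption compounds over a decade; the growth rates below are hypothetical, not a real forecast. Real forecasts stack many such assumptions, plus the interactions between them:

    # Hypothetical: a futurist assumes 50% annual growth; reality delivers 60%.
    true_rate, guessed_rate = 0.60, 0.50
    years = 10

    actual = (1 + true_rate) ** years       # about 110x
    forecast = (1 + guessed_rate) ** years  # about 58x

    print(f"actual:   {actual:.0f}x over {years} years")
    print(f"forecast: {forecast:.0f}x over {years} years")
    print(f"the forecast is off by a factor of {actual / forecast:.1f}")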

Yet, as we look to predict the future, we don't have to give up. Just because we can't predict everything doesn't mean we can't predict anything. The complexity of scale renders us humble, not powerless.

Courteous comments welcome, of course.
