Artificial Intelligence (AI) promises to make the human race smarter. Raymond Kurzweil has made predicting the Singularity -- when artificial intelligence exceeds human intelligence -- a cottage industry. Is AI going to make us all smarter, or are we already as smart as we can handle?
Repetitive stupidity syndrome
Some of our issues are cognitive, such as our inherent inability to estimate exponential functions. Many US political leaders, including the governor of Arizona, couldn't understand how quickly COVID-19 could spread -- pesky exponentials! -- so infections and deaths are spiking here, while other states and countries have successfully contained the disease.
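To see how badly linear intuition fails, here is a toy calculation (the numbers are illustrative assumptions, not epidemiological data): start with 100 cases growing 10% a day, a crude stand-in for unchecked early-epidemic spread.

```python
# Toy model: constant 10% daily growth from 100 cases.
# Real epidemics are messier, but the shape of the surprise is the same.
cases = 100.0
for day in range(30):
    cases *= 1.10

print(round(cases))  # ~1745 after 30 days
# Linear intuition -- "about 10 more cases a day" -- predicts ~400.
```

Thirty days of modest-sounding daily growth multiplies the count more than 17-fold, which is exactly the gap between the gut estimate and the math.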
Human cognitive gaps, well documented in The Logic of Failure, explain why we keep making the same mistakes over and over.
There's also the split between our fast, leap-to-conclusions mind and our slow, analytical mind. The fast mind is usually correct on everyday issues, but complex problems baffle it.
This is why pundits love to offer simple, short, and irrelevant punchlines that inflame emotions rather than encourage thought.
Collecting evidence, analyzing data, teasing out causality, making hypotheses, and testing their validity is hard work. That's why we have highly trained people spending their lives doing just that. And they make mistakes too!
Cultural transmission vs evolutionary selection
In the paper "Why Aren't We Smarter Already: Evolutionary Trade-Offs and Cognitive Enhancements," two scientists consider the potential of drugs for cognitive enhancement. They examine how evolution has shaped our cognitive functions, focusing on two major classes of problems.
The first type is "inverted U-shaped performance functions." These are common in problems where the goal is to maximize an outcome that carries a cost, so performance rises with effort up to a point and then falls. For instance, suppose you'd like to get married. The longer you search, the better your odds of finding a good match -- but searching itself costs time and forgone opportunities. When do you stop searching and settle on one person?
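The marriage question is a classic optimal-stopping problem. Under highly idealized assumptions (candidates arrive in random order, you can only rank them against those already seen, and no going back), the well-known "look, then leap" rule is to observe the first n/e candidates, then take the first one better than everything seen so far. A minimal simulation sketch:

```python
import math
import random

def simulate_search(n=100, trials=20000, seed=42):
    """Estimate how often the look-then-leap rule picks the single best
    of n candidates: skip the first ~n/e, then take the first candidate
    who beats everyone seen so far (or the last one, if none does)."""
    rng = random.Random(seed)
    cutoff = round(n / math.e)          # observation-only phase
    wins = 0
    for _ in range(trials):
        candidates = list(range(n))     # n - 1 is the best candidate
        rng.shuffle(candidates)
        best_seen = max(candidates[:cutoff])
        chosen = candidates[-1]         # fallback: settle for the last
        for c in candidates[cutoff:]:
            if c > best_seen:
                chosen = c
                break
        if chosen == n - 1:
            wins += 1
    return wins / trials

print(simulate_search())  # hovers near 1/e, about 0.37
```

The theoretical success rate approaches 1/e for large n -- and the point of the trade-off is that searching longer than the cutoff stops helping, which is the inverted-U shape in miniature.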
The other cognitive problem reflects the limits of the human IT system: Our brains. Cross-functional dependencies can cripple a cognitive enhancement, whether it is natural, artificial, or drug-induced.
One example: a man with a seemingly unlimited memory who found it difficult to remember faces. Instead of generalizing faces, he remembered each one as an ever-changing sea of patterns. Maybe infinite storage isn't a good idea, at least for people.
Or consider savants, who display extraordinary talents in some domains at the cost of huge deficits in other domains. Marvin Minsky's Society of Mind proposed this sort of cooperative model of human intelligence back in the 1980s. It looks like he wasn't too wide of the mark.
Perhaps intelligence isn't the unalloyed advantage smart people would like to believe it is. Likewise, maybe AI isn't going to be as beneficial as we'd like to believe.
Intelligence isn't an end in itself, but a tool that allows us to understand and shape our environment. If people willfully abandon facts, evidence, and logic, it doesn't matter how "intelligent" they are. They'll get the wrong answer and damage themselves, and, sometimes, the rest of us.
AI should have two goals. The first: extend the fast-thinking part of our intelligence, so we can apply more analysis to the areas where we are most likely to make mistakes. That is an ideal use case for embedded mobile device intelligence.
The second goal should build on where we've made the most progress: domain-specific expertise, such as interpreting medical imaging. This will put a lot of expensively trained professionals out of work, but improve the quality of life for the rest of us.
The idea of the "Singularity" -- where machine intelligence finally exceeds human capability and wonderful things start happening -- will fail because most people, and cultures, can only stand so much "intelligence." But let's take what we can get instead of expecting the impossible.
Comments welcome. What's your favorite example of unintelligence?