Chad Steelberg, Chairman and CEO of Veritone, is unlocking the power of AI for enterprise customers. Veritone's platform uses AI-based cognitive computing so that unstructured audio and video data can be processed, transformed, correlated, and analyzed to generate actionable intelligence.
I asked him to write a guest post about a subject that's poorly understood and rarely mentioned in coverage of Artificial Intelligence: The difference between Artificial Narrow Intelligence and Artificial General Intelligence, and the work underway to bridge the gap. Views contained herein are Chad's.
The advent of artificial general intelligence (AGI), a milestone signified by machines becoming the intellectual equals of their human counterparts, is nearer than we think. This tipping point will transform most aspects of daily life, and the technical path to achieving it is becoming clearer and, for some scientists, more narrowly defined.
Modern machine learning algorithms are currently capable of mastering only narrowly defined cognitive challenges, like playing chess. So long as the training corpus is large enough and the problem space is sufficiently narrow, most machine learning algorithms will quickly learn to outperform humans, at a fraction of the cost and at superhuman speeds. However, the "sufficiently narrow" qualifier has proven to be a significant roadblock for data scientists and has held the evolution of machine learning at the artificial narrow intelligence (ANI) level.
Two competing yet complementary solutions are actively being developed to bridge the gap from ANI to AGI.
One path is focused on modifying the machine learning algorithms to eliminate the "narrowness" constraint, coupled with massively increasing the size of the training data sets. The alternative path involves teaching machine learning algorithms to collaborate with one another and to replicate themselves, with each new copy trained to master a unique yet narrow skill. So while scientific research diverges for the moment, the advent of AGI will most likely depend on breakthroughs on both fronts and the integration of the two.
In the meantime, hundreds of companies, big and small, now offer thousands of ANI cognitive engines, each of which performs a single AI task, such as translation, transcription or object recognition. These engines are already used in a multitude of products, from personal digital assistants that understand spoken words to facial-recognition systems that can authenticate smartphone users.
While the AGI and ANI paths may seem incompatible, they are actually leading to the same destination.
By combining the thousands of cognitive engines and orchestrating their capabilities to apply the best technology to the task at hand, ANI can approximate the capabilities of AGI.
There's strength in numbers when it comes to ANI: each new engine arriving on the market brings the world a bit closer to AGI. And the number of available engines is growing explosively, from just five in 2012 to about 5,500 today. I predict this will grow to several million in the next three to five years, as machines learn to replicate and train themselves.
Each of these engines refines an existing capability or adds a new one. Employing multiple engines of the same class improves the precision of cognitive processing, delivering a higher quality translation of a speech, for example. When engines of different classes are used in combination, such as transcription and object recognition, AI can correlate the various types of processing to perceive things in a more sophisticated, multidimensional manner. This mirrors the way that humans use all their senses in combination to observe the world.
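To make the same-class idea concrete, here is a minimal sketch of how several transcription engines could be ensembled by majority vote at each word position. The function name and the assumption that the engines' outputs are already word-aligned are illustrative, not a description of any particular vendor's system; real ensembling must first align transcripts of differing lengths.

```python
from collections import Counter

def ensemble_transcribe(transcripts):
    """Combine word-aligned transcripts from several engines by taking
    a majority vote at each word position. Assumes all transcripts
    have the same number of words (a simplification for illustration)."""
    combined = []
    for words in zip(*(t.split() for t in transcripts)):
        # Pick the word most engines agreed on at this position.
        combined.append(Counter(words).most_common(1)[0][0])
    return " ".join(combined)

# Three hypothetical engines, each wrong about a different word:
outputs = [
    "the suspect entered the bank at noon",
    "the suspect entered a bank at noon",
    "the suspect entered the bank at news",
]
print(ensemble_transcribe(outputs))  # the suspect entered the bank at noon
```

Even this toy vote recovers a transcript more accurate than any single engine's output, which is the intuition behind using multiple engines of the same class.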
Using just one engine, someone can solve a single problem--such as transcribing speech--with a reasonable degree of accuracy. With a half-dozen engines, that person can solve the same problem with high accuracy. Give that same person 1 million engines, and they can solve any problem on earth.
A profound and dramatic transformation is at work
While ANI is already affecting people's daily lives, the technology's evolution into AGI will have a far more profound effect across multiple areas, helping to enhance and build public trust in society's institutions.
For example, take the public-safety sector.
AI cognitive engines now on the market can conduct real-time monitoring of the video feeds of CCTV surveillance cameras. These machine-vision engines are capable of detecting activities such as criminal behavior, public drunkenness or signs of potential terrorism. Such AI-enhanced surveillance systems already have been deployed in Boston and Osaka, Japan.
In the near future, the addition of AGI will dramatically enhance AI surveillance capabilities, using a number of engines in concert. For example, face-recognition engines could match individuals against a reference library of known individuals, such as missing people. When orchestrated with face-sentiment engines, these systems could perform sophisticated analyses, determining the identities and emotional states of individuals and groups and potentially uncovering signs of displaced refugees or victims of human-trafficking operations.
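The orchestration described above amounts to chaining narrow engines, with one engine's output narrowing the input of the next. The sketch below illustrates that control flow only; the engine functions are stubs with hypothetical names and interfaces, standing in for real ANI services.

```python
# Illustrative stubs: real engines would run on video frames, not dicts.

def face_recognition(frame, reference_library):
    # Stub: pretend the frame already carries detected face IDs,
    # and keep only those found in the watchlist/reference library.
    return [face for face in frame["faces"] if face in reference_library]

def face_sentiment(frame, person):
    # Stub: pretend sentiment scores come pre-attached to the frame.
    return frame["sentiment"].get(person, "neutral")

def surveillance_pipeline(frame, reference_library):
    """Orchestrate two narrow engines: match faces against a reference
    library first, then run sentiment analysis only on the matches."""
    report = {}
    for person in face_recognition(frame, reference_library):
        report[person] = face_sentiment(frame, person)
    return report

frame = {"faces": ["id_17", "id_42"], "sentiment": {"id_42": "distressed"}}
print(surveillance_pipeline(frame, reference_library={"id_42"}))
# {'id_42': 'distressed'}
```

The point of the design is that neither engine needs to know the other exists; the orchestration layer decides which engines run, in what order, and on which subset of the data.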
When it comes time for public disclosure, this symphony of cognitive engines could harmonize to identify and redact images of juveniles from the video.
AGI also will enable the creation of public-safety-oriented robots that combine multiple cognitive engine technologies to successfully navigate the real world. Such an advanced robot would be able to avoid the fate of an unfortunate security droid that recently navigated itself into a fountain while making the rounds at a Washington, D.C., office building.
These robots also would be immune to fight-or-flight instincts that can compromise human behavior. Instead, AGI automatons would be able to remain effective under stressful circumstances, enhancing citizen safety in times of crisis. As a result, citizens walking city streets are likely to see police robots patrolling public areas in the future, courtesy of AGI.
AGI at work in the daily life of a modern society
In the legal profession, for example, AGI will perform electronic discovery and analysis, using techniques including voice recognition, sentiment analysis and object recognition to sort through massive amounts of data and identify phrases that are relevant to cases. This will allow lawyers to cut the amount of time spent reviewing documents or audio and video recordings--and will allow them to find evidence that otherwise might go undetected. As a result, the legal system will reduce wrongful convictions and free up more time to prosecute real criminals.
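The filtering step in such e-discovery work can be sketched in a few lines. This is a toy stand-in under obvious simplifications (plain substring matching over pre-transcribed text, with hypothetical document IDs); production systems would use far richer relevance models.

```python
def find_relevant_passages(transcripts, keywords):
    """Flag transcript segments that mention any case-relevant keyword.
    transcripts: dict mapping document IDs to transcribed text."""
    hits = []
    for doc_id, text in transcripts.items():
        for segment in text.split(". "):
            if any(k.lower() in segment.lower() for k in keywords):
                hits.append((doc_id, segment))
    return hits

transcripts = {
    "call_001": "We signed the contract on Friday. Lunch was good",
    "call_002": "Nothing notable happened",
}
print(find_relevant_passages(transcripts, ["contract"]))
# [('call_001', 'We signed the contract on Friday')]
```

The time savings the article describes come from this kind of triage: reviewers read only the flagged segments rather than every recording in full.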
In politics, AGI technology will combat the plague of fake news on social media by recognizing, flagging and reporting false reports. For example, such systems could search speeches and news by keywords, faces, and objects in parallel to find incorrect information. This will ensure honesty and transparency, restoring public trust in the political process.
In advertising, AI is already used for ad buys and for verifying that advertising fulfillment has occurred. In the future, AGI technology will work to tailor and target ads to particular consumers. Imagine an electronic billboard that changes the product it's advertising based on the interests of nearby consumers.
As a result of this trend, AGI will allow consumers to view more ads that are relevant to their interests, enhancing brand sentiment.
While many believe the arrival of AGI is still decades away from reality, the fact is that existing ANI technologies will soon approximate the capabilities of artificial general intelligence. The result of this will be major enhancements to the everyday lives of people throughout the world.
Chad Steelberg is an entrepreneur and Chairman and CEO of Veritone, developer of the world's first artificial intelligence operating system.