AI jobs are on the upswing, as are the capabilities of AI systems, and training speeds have jumped dramatically. It's now possible to train an image-classification algorithm in about a minute and a half -- something that took hours just a couple of years ago.
These are among the key metrics tracked in the latest release of the AI Index, an annual data update from Stanford University's Institute for Human-Centered Artificial Intelligence, published in partnership with McKinsey Global Institute. The index tracks AI growth across a range of measures, from papers published to patents granted to employment numbers.
Here are some key measures extracted from the 290-page index:
AI conference attendance: Attendance at AI conferences continues to increase significantly. In 2019, the largest, NeurIPS, expects 13,500 attendees, up 41% over 2018 and over 800% relative to 2012. Even conferences such as AAAI and CVPR are seeing annual attendance growth of around 30%.
AI jobs: Another key metric is the number of AI-related jobs opening up, which is also on the upswing, the index shows. Based on Indeed postings, the share of AI jobs among total US job postings grew five-fold between 2010 and October 2019, from 0.26% to 1.32%. While this is still a small fraction of total jobs, these figures count only technology positions working directly in AI development; a likely much larger and growing share of jobs is being enhanced or reshaped by AI.
Among AI technology positions, the leading category is job postings mentioning "machine learning" (58% of AI jobs), followed by artificial intelligence (24%), deep learning (9%), and natural language processing (8%). Deep learning is the fastest-growing category, with postings growing 12-fold between 2015 and 2018; over the same period, artificial intelligence postings grew five-fold, machine learning four-fold, and natural language processing two-fold.
Compute capacity: Moore's Law has gone into hyperdrive, the AI Index shows, with substantial progress in ramping up the computing capacity used to run AI. Prior to 2012, AI results closely tracked Moore's Law, with compute doubling every two years. Post-2012, compute has been doubling every 3.4 months -- a mind-boggling net increase of 300,000x. By contrast, the typical two-year doubling period that previously characterized Moore's Law would yield only a 7x increase over the same span, the index's authors point out.
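The 300,000x and 3.4-month figures above imply a time span of roughly five years; a quick arithmetic sketch (the elapsed span is derived, not stated in the report) shows why classic Moore's Law scaling would deliver only a single-digit multiple over that same period:

```python
import math

# Figures reported by the index; the derived span is an estimate.
growth = 300_000       # post-2012 increase in AI training compute
doubling_fast = 3.4    # months per doubling, post-2012
doubling_moore = 24    # months per doubling under classic Moore's Law

doublings = math.log2(growth)        # ~18.2 doublings
months = doublings * doubling_fast   # ~62 months, i.e. roughly five years

# Over that same span, a two-year doubling period yields only ~6-7x.
moore_factor = 2 ** (months / doubling_moore)
print(f"span: ~{months:.0f} months; Moore's-Law factor: ~{moore_factor:.1f}x")
```

The ~6x result is consistent with the "only a 7x increase" the index's authors cite for the pre-2012 trend.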
Training time: The amount of time it takes to train AI algorithms has dropped dramatically. Two years ago, training a large image-classification system on cloud infrastructure took three hours; by July 2019, that time had shrunk to 88 seconds -- a roughly 120-fold speedup.
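Dividing the two reported training times gives the speedup implied by those figures:

```python
# Speedup implied by the index's reported training times.
baseline_s = 3 * 60 * 60   # three hours (circa 2017), in seconds
latest_s = 88              # July 2019, in seconds

speedup = baseline_s / latest_s
print(f"~{speedup:.0f}x faster")   # roughly 120x
```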
Commercial machine translation: One indicator of where AI hits the ground running is machine translation -- for example, English to Chinese. The number of commercially available systems with pre-trained models and public APIs has grown rapidly, the index notes, from eight in 2017 to over 24 in 2019. Increasingly, machine-translation systems provide a full range of customization options: pre-trained generic models, automatic domain adaptation that builds better engines from a customer's own data, and custom terminology support.
Computer vision: Another benchmark is the accuracy of image recognition. The index tracked reporting through ImageNet, a public dataset of more than 14 million images created to address the scarcity of training data in the field of computer vision. In the latest reporting, image-recognition accuracy has reached about 85%, up from about 62% in 2013.
Natural language processing: AI systems keep getting smarter, to the point that they are surpassing non-expert human performance on natural language processing tasks. As a result, stronger standards for benchmarking AI implementations are emerging. GLUE, the General Language Understanding Evaluation benchmark, was released in May 2018 to measure AI performance on text-processing tasks. Submitted systems crossed the threshold of non-expert human performance in June 2019, the index notes. In fact, the performance of AI systems has been so dramatic that researchers released a higher-level benchmark, SuperGLUE, "so they could test performance after some systems surpassed human performance on GLUE."