In our opener for the 2020s, we laid the groundwork for evaluating the array of technologies under the umbrella term "artificial intelligence." Now we'll use that framework to review some key developments in this area, starting with hardware.
The key thing to keep in mind here is that the proliferation of machine learning workloads has boosted the use of GPUs, previously used mostly for gaming, and has given rise to a whole new range of chip makers. Nvidia, which has come to dominate the AI chip market, had a very productive year.
First, it unveiled its new Ampere architecture in May; Nvidia claims Ampere delivers an improvement of about 20 times over Volta, its previous architecture. Then, in September, Nvidia announced its acquisition of Arm, the chip designer. As we noted then, the acquisition strengthens Nvidia's ecosystem, brings economies of scale to the cloud, and expands its reach to the edge.
The software side of things was equally eventful, if not more so. As noted in the State of AI report for 2020, MLOps was a major theme. MLOps, short for machine learning operations, is the equivalent of DevOps for ML models: taking them from development to production, and managing their lifecycle in terms of improvements, fixes, redeployments, and so on.
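To make the lifecycle concrete, here is a minimal sketch of the loop MLOps tooling automates. All names (train_model, evaluate, deploy, monitor, the registry) are illustrative stand-ins, not any specific framework's API; real platforms add experiment tracking, CI/CD, and drift detection on top of this skeleton.

```python
# Hypothetical sketch of an MLOps lifecycle: develop -> validate ->
# deploy (versioned) -> monitor -> retrain as needed.

def train_model(data):
    # Placeholder "model": remember the mean of the training data.
    return {"version": 1, "mean": sum(data) / len(data)}

def evaluate(model, holdout):
    # Toy metric: gap between the model's mean and the holdout mean.
    return abs(model["mean"] - sum(holdout) / len(holdout))

def deploy(model, registry):
    # Versioned redeployment: the registry keeps every model version,
    # so a bad release can be rolled back.
    registry[model["version"]] = model

def monitor(error, threshold=0.5):
    # In production, degraded metrics trigger retraining/redeployment.
    return error > threshold

registry = {}
model = train_model([1.0, 2.0, 3.0])   # development
error = evaluate(model, [1.5, 2.5])    # validation
deploy(model, registry)                # production
needs_retrain = monitor(error)         # lifecycle management
```

The point is the shape of the loop, not the placeholder math: each stage is a hand-off that MLOps practices make repeatable and auditable.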
Another key theme was the use of machine learning in biology and healthcare. AlphaFold, the DeepMind system that succeeded in solving one of the most difficult computational challenges in the world, predicting how protein molecules fold, is a prime example. More examples of AI having an impact in biology and healthcare are either here already or on the way.
But what we think should top the list is not a technical achievement. It is what's come to be known as AI ethics, i.e. the study of the societal effects and potential harms of AI systems. In a highly debated development, Google recently "resignated" Timnit Gebru, a widely respected leader in AI ethics research and former co-lead of Google's ethical AI team; Google maintains she resigned, while Gebru says she was fired.
Yoshua Bengio, Yann LeCun, and Geoffrey Hinton are considered the forefathers of deep learning. Some people subscribe to Hinton's view that eventually all issues will be solved, and deep learning will be able to do everything. Others, like Gary Marcus, believe that AI, in the way it is currently conflated with deep learning, will never amount to much more than sophisticated pattern recognition.
With 2020 having been what it was, this work may not have gotten the acclaim it would normally have, but it was not a shot in the dark either. Marcus elaborated on this work, as well as its background and implications, in an in-depth conversation we hosted here on ZDNet. Nor is Marcus alone in this line of thought: similar ideas also go by the name of Neurosymbolic AI.
Bengio, for his part, published work in 2020 on topics such as exploiting syntactic structure for better language modeling, factorizing declarative and procedural knowledge in dynamical systems, and learning logic rules for reasoning on knowledge graphs. This seems like a tangible recognition of a shift toward embedding knowledge and reasoning in deep learning.
But there is another use of graphs that blossomed in 2020: graph machine learning. Graph neural networks operate directly on graph structures, whereas other types of neural networks operate on vectors. What this means in practice is that they can leverage additional, relational information: not just the features of each entity, but also how entities are connected.
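A minimal sketch can show how relational information enters the computation. The example below implements one simplified message-passing layer on a toy four-node graph; the graph, features, and function names are illustrative assumptions, not drawn from any specific library.

```python
import numpy as np

# Adjacency matrix of an undirected 4-node toy graph (1 = edge).
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Node feature vectors: 4 nodes, 2 features each.
X = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
    [1.0, 1.0],
    [0.5, 0.5],
])

def gnn_layer(A, X, W):
    """One simplified message-passing step: each node averages its
    neighbors' features (plus its own), then applies a linear map W
    and a ReLU nonlinearity."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)  # neighborhood sizes
    H = (A_hat @ X) / deg                   # average over neighborhood
    return np.maximum(H @ W, 0.0)           # transform + ReLU

rng = np.random.default_rng(0)
W = rng.standard_normal((2, 2))             # stand-in for learned weights
H = gnn_layer(A, X, W)                      # shape (4, 2)
```

Unlike a plain feed-forward layer, which would map each row of X independently, the adjacency matrix mixes each node's features with those of its neighbors, which is the extra information the paragraph above refers to.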
Last year was undoubtedly marked by COVID-19. While the pandemic may have accelerated digital transformation, remote work, and applications of artificial intelligence in biology, healthcare, and research, not all of its side effects were positive.
Like other drivers of technology adoption, COVID-19 has been a mixed bag for technological progress. The speed at which related technologies have been adopted, however, means that society at large is lagging in terms of an informed debate and a full comprehension of the implications. Let's hope that 2021 can bring more inclusion and transparency to the table.