The good news for planet Earth is that AI has some brilliant tools that may help slow or reverse global warming. The bad news is that not much will happen unless AI somehow finds the right goals, what's known as the "objective function."
A workshop on AI in climate change in mid-December gathered hundreds of scholars in Vancouver during the NeurIPS AI conference, including some of the Illuminati of machine learning. The event was sponsored by Google's DeepMind, Microsoft, and ElementAI, the AI software and services firm co-founded by Yoshua Bengio, a star in the field of deep learning. Organizers were from Climate Change AI, a group of volunteer researchers from institutions around the world.
The participants discussed numerous ways to implement neural networks for climate science, including making real-time weather predictions, making buildings more energy-efficient, and designing better materials for solar panels.
Here's the catch: All of the projects specify some task to be optimized that is not directly tied to reducing greenhouse gas emissions. But reduction of greenhouse gas emissions is the stated goal of all global warming mitigation, and without it, it's not clear meaningful change can happen.
A report last year by the Intergovernmental Panel on Climate Change of the United Nations stated that "without increased and urgent mitigation ambition in the coming years, leading to a sharp decline in greenhouse gas emissions by 2030, global warming will surpass 1.5°C in the following decades, leading to irreversible loss of the most fragile ecosystems, and crisis after crisis for the most vulnerable people and societies."
The keynote speaker at the event, Jeff Dean of Google, put the problem bluntly. He offered a chart, based on data from the IPCC report, showing how the planet has to take steps to reduce annual carbon dioxide emissions by as much as 10% a year, amounting to hundreds of "gigatonnes" worth of reduction in CO2, whereas the world is currently increasing CO2 by a couple percent per annum. This has to happen in the next decade to avoid those irreversible effects the IPCC speaks of. "We are effectively running out of time to take action," said Dean.
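The gap Dean described compounds quickly over a decade. A back-of-the-envelope sketch makes the point; the starting emissions level and the exact rates here are illustrative assumptions, not figures from his chart:

```python
# Illustrative compounding: a 10% annual cut vs. a 2% annual rise in
# CO2 emissions over the decade the IPCC highlights. The starting
# level (~40 Gt/yr) and exact rates are assumptions for illustration.

annual_emissions = 40.0  # gigatonnes of CO2 per year, roughly today's level

cut_path = [annual_emissions * 0.90**year for year in range(11)]
rise_path = [annual_emissions * 1.02**year for year in range(11)]

print(round(cut_path[-1], 1))   # ~13.9 Gt/yr after ten years of 10% cuts
print(round(rise_path[-1], 1))  # ~48.8 Gt/yr after ten years of 2% growth
```

The two trajectories end up roughly 35 gigatonnes apart per year, which is why Dean framed the coming decade as decisive.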
Many of the fifty-two papers accepted for the workshop are breathtaking in the ingenuity with which they apply machine learning to climate issues, but they are far from the task of actually reducing greenhouse gas emissions.
For example, a paper authored by scientists at GE Global Research, the Georgia Institute of Technology, and others, and given a "best paper" recommendation, employs something called an "invertible residual network," a technique that was developed at Google's DeepMind in recent years. The I-ResNet program can ingest pictures of clouds at 1-kilometer resolution and, going pixel by pixel, categorize what type of cloud it is -- "Altostratus," "Nimbostratus," "Deep Convection," etc. The mix of cloud types affects climate models, so climate can't be modeled with great accuracy without knowing which types are present and to what extent.
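The "invertible" part of an invertible residual network is worth unpacking: each layer computes y = x + g(x), and if g is constrained to be a contraction, the input x can be exactly recovered from the output y. The following is a minimal sketch of that idea in NumPy, not the paper's actual cloud-classification model; all names and sizes here are illustrative:

```python
import numpy as np

# Minimal sketch of one invertible residual layer: y = x + g(x), where
# g is constrained to be a contraction (Lipschitz constant < 1). This
# is the core idea behind i-ResNets; the real cloud-classification
# network is far larger. All names and sizes here are illustrative.

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
W *= 0.5 / np.linalg.norm(W, 2)  # rescale so the spectral norm is 0.5

def g(x):
    return np.tanh(W @ x)  # tanh is 1-Lipschitz, so g is a contraction

def forward(x):
    return x + g(x)

def inverse(y, steps=50):
    # Because g is a contraction, x = y - g(x) has a unique fixed
    # point, recoverable by simple fixed-point iteration.
    x = y.copy()
    for _ in range(steps):
        x = y - g(x)
    return x

x = rng.standard_normal(4)
y = forward(x)
x_rec = inverse(y)
print(np.allclose(x, x_rec, atol=1e-8))  # True: the layer is invertible
```

That exact invertibility is what makes the architecture attractive for scientific work, where being able to map model outputs back to inputs aids interpretation.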
Such work has the potential to improve forecasting, but on its own it is obviously far from proposing action that will lead to a reduction in greenhouse gases. A lot of the work has that quality: it lays the groundwork for years of research, but it's not always clear how an optimization will lead to emissions reductions.
In fact, the organizers, led by David Rolnick, a postdoctoral research fellow at the University of Pennsylvania, published a 100-page report this past summer that is chock-full of fascinating projects, such as improving energy grid forecasting, or better forecasting of road traffic, or how to design better agriculture. In every one of those cases, a single optimization may not lead to any emissions reduction. For example, improving the "shared mobility" culture, such as Uber and Lyft, by making it more efficient, can potentially lead to more miles driven overall, as Lynn Kaack, a scholar with ETH Zürich, points out in a piece on machine learning in transportation. This is known as the "Jevons Paradox," which Kaack describes as a "situation where increased efficiency nonetheless results in higher overall demand." In other words, optimizing something for the purposes of productivity or for the sake of increased profits can actually worsen the greenhouse gas situation.
ZDNet reached out to panelists and to Rolnick and the other organizers by email. Representatives for Yoshua Bengio, and Andrew Ng of LandingAI, said that they could not respond in time for this article. The other organizers did not reply to multiple emails.
However, there is interesting perspective to be gained from the panel discussion that was held that day, involving Bengio and Ng and Dean, along with Carla Gomes of Cornell University and Lester Mackey of Microsoft. One of the organizers, Priya Donti, a doctoral student at Carnegie Mellon in computer science and public policy, asked the panelists an insightful question: How can AI as a discipline incentivize work on climate change given that the focus for the discipline is often on the number of papers published versus the tons of carbon reduced?
Bengio replied, "change your objective function," which elicited a lot of laughter. "The sort of projects we're talking about in this workshop can potentially be much more impactful than one more incremental improvement in GANs, or something."
It was a wry observation about the field, but the panelists acknowledged a deeper problem, that merely making good neural networks won't lead to emissions reduction on its own. Rolnick asked the panelists what should be done about AI that improves fossil fuel discovery, thereby potentially leading to a Jevons Paradox of increased CO2. Bengio replied, again, to much laughter, "public shaming." Gomes replied that AI has "unfortunately developed for a single objective [...] we should really develop systems that can understand the impacts across different dimensions."
Even that may not be enough. AI may need some external forces to direct and shape its optimizations. That may mean aligning the cost benefits of "smarter everything" -- IoT, ride sharing, etc. -- with the goal of emissions reduction. And that may require increased regulation, if private enterprise can't commit itself in earnest.
It's easy to be both thrilled with the work on display in December and also discouraged by the lack of imminent progress in emissions reduction. However, one of the invited speakers, Felix Creutzig, who is an author of the IPCC report, had a more upbeat view of the big picture.
Creutzig was asked by an audience member if the field is just fooling itself, "wasting time digressing from the important issues that are going wrong in policy?"
"I would be not too pessimistic about it," he replied. "We have technologies that are already available" such as electric vehicles, "and there is a lot of pressure to change, so I wouldn't be too pessimistic about anything not happening."
Update: Following the publication of the article, organizer Priya Donti was in touch via email. Donti writes that "there is still much work to be done, both within and outside of machine learning" and that the kinds of work shown at the workshop need to be "applied in parallel to (or to accelerate) action of other kinds, such as policy." Donti also refers to multiple "practical challenges" for the field. "These include forging meaningful connections between ML practitioners and those from other relevant fields, unifying and standardizing data from disparate sources, integrating proposed solutions with legacy systems, and changing incentives within the ML field to encourage impactful work on climate change."