Google and Amazon Web Services (AWS) have highlighted their respective work on machine-learning (ML) models that may help nations deal with environmental crises happening with increasing regularity across the world.
The companies flagged up their efforts to tackle climate change effects such as floods and wildfires as the UN Climate Change Conference UK 2021 (COP26) wraps up this week.
Google has published a non-peer-reviewed paper on its flood-forecasting system, built on machine-learning models that it claims provide "accurate real-time flood warnings to agencies and the public, with a focus on riverine floods in large, gauged rivers". The paper was written by researchers at Google Research and the Hebrew University of Jerusalem in Israel.
Google's flood-forecasting initiative, launched in 2018, sends alerts to smartphones of people in flood-affected areas. It's part of Google's Crisis Response program, which works with front-line and emergency workers to develop technology.
Since 2018, the program has expanded to cover much of India and Bangladesh, encompassing an area populated by some 220 million people. As of the 2021 monsoon season, this has further expanded to cover an area where 360 million people live.
"Thanks to better flood prediction technology, we sent out over 115 million alerts -- that's about triple the amount we previously sent out," says Yossi Matias, Google's VP engineering and crisis response lead, in a blogpost.
Google's alerts don't just indicate how many centimetres a river will rise. Thanks to its new machine-learning models, which use Long Short-Term Memory (LSTM) deep neural networks, it can now provide "inundation maps" that show the extent and depth of flooding as a layer on Google Maps.
The researchers contend that "LSTM models performed better than conceptual models that were calibrated to long data records in every basin".
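Google's production models aren't public, but the LSTM architecture the researchers describe is standard: a cell state carried across time steps and updated through learned gates. The NumPy sketch below (random, untrained weights, so the prediction is purely illustrative) shows how a sequence of daily basin inputs, such as rainfall and an upstream gauge reading, could be rolled through an LSTM into a single river-stage estimate; the feature choice and read-out layer here are assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step: four gates computed from input x and previous hidden h."""
    z = W @ x + U @ h + b              # stacked pre-activations, shape (4*H,)
    H = h.size
    i = sigmoid(z[0:H])                # input gate
    f = sigmoid(z[H:2 * H])            # forget gate
    o = sigmoid(z[2 * H:3 * H])        # output gate
    g = np.tanh(z[3 * H:4 * H])        # candidate cell update
    c = f * c + i * g                  # new cell state (long-term memory)
    h = o * np.tanh(c)                 # new hidden state (short-term output)
    return h, c

n_features, hidden = 2, 8              # e.g. [rainfall, upstream gauge level] per day
W = rng.normal(0, 0.1, (4 * hidden, n_features))
U = rng.normal(0, 0.1, (4 * hidden, hidden))
b = np.zeros(4 * hidden)
w_out = rng.normal(0, 0.1, hidden)     # linear read-out to a single stage value

seq = rng.random((30, n_features))     # 30 days of toy inputs
h = np.zeros(hidden)
c = np.zeros(hidden)
for x in seq:
    h, c = lstm_step(x, h, c, W, U, b)

predicted_stage = float(w_out @ h)     # untrained weights: the number is meaningless
print(predicted_stage)
```

The forget gate is what lets the model weigh weeks-old rainfall against yesterday's, which is why LSTMs suit slow-building riverine floods better than models with a fixed input window.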
"While previous studies provided encouraging results, it is rare to find actual operational systems with ML models as their core components that are capable of computing timely and accurate flood warnings," Google's researchers said.
AWS, meanwhile, has been working with AusNet, an energy company based in Melbourne, Australia, to help mitigate bushfires in the region.
AusNet has 54,000 kilometres of power lines that distribute energy to about 1.5 million homes and businesses in Victoria. It's estimated that 62% of the network is in high bushfire risk areas.
AusNet has been using cars equipped with LiDAR sensors, much like Google's Street View vehicles, together with Amazon SageMaker machine learning to map out the vegetation across the state that needs trimming to stem bushfire threats. Its previous system relied on a GIS (Geographic Information System) and used custom tools to label LiDAR points.
AusNet worked with AWS to automate the classification of LiDAR points by using AWS's managed deep-learning models, GPU instances and S3 storage.
AusNet and AWS built a semantic segmentation model that accurately classified 3D point cloud data for conductors, buildings, poles, vegetation, and other categories, AWS notes in a blogpost.
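The trained model itself isn't public, but the task it performs, assigning a class to every LiDAR return, can be illustrated with a deliberately naive height-based baseline. The class names below match AWS's post; the thresholds and the synthetic point cloud are invented for illustration, and a real semantic segmentation model learns these decision boundaries from labelled examples rather than hard-coding them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy LiDAR point cloud: columns are x, y, z in metres. In AusNet's pipeline
# each point would also carry attributes such as intensity and return number.
points = rng.uniform([0, 0, 0], [100, 100, 25], size=(1000, 3))

def classify_by_height(z):
    """Naive stand-in for the learned model: bucket each point by elevation.
    Thresholds are illustrative assumptions, not AusNet's actual rules."""
    if z < 0.5:
        return "ground"
    if z < 6.0:
        return "vegetation"
    if z < 12.0:
        return "building"
    return "conductor"                 # tall, elevated structures such as power lines

labels = np.array([classify_by_height(z) for z in points[:, 2]])
for cls in ("ground", "vegetation", "building", "conductor"):
    print(cls, int((labels == cls).sum()))
```

Once every point is labelled, finding vegetation that threatens the network reduces to a geometric query: which "vegetation" points fall within a clearance distance of "conductor" points.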
"The team was able to train a model at a rate of 10.8 minutes per epoch on 17.2 GiB of uncompressed data across 1,571 files totaling approximately 616 million points. For inference, the team was able to process 33.6 GiB of uncompressed data across 15 files totaling 1.2 billion points in 22.1 hours. This translates to inferencing an average of 15,760 points per second including amortized startup time," AWS states.
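The quoted inference rate can be sanity-checked from the other figures in the same sentence. Both totals are rounded in the quote, so the back-of-envelope result only roughly matches the 15,760 points per second AWS reports, which was presumably computed from unrounded counts.

```python
points = 1.2e9                 # total points inferred, per the AWS post
hours = 22.1                   # wall-clock inference time, per the AWS post
rate = points / (hours * 3600) # convert hours to seconds
print(round(rate))             # ≈ 15,083 points per second
```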
"Being able to quickly and accurately label our aerial survey data is a critical part of minimizing the risk of bushfires," says Daniel Pendlebury, a product manager at AusNet.
"Working with the Amazon Machine Learning Solutions Lab, we were able to create a model that achieved 80.53% mean accuracy in data labeling. We expect to be able to reduce our manual labeling efforts by up to 80% with the new solution."