How Google plans to use its latest AI model to help people in crisis

Models like MUM and BERT, Google says, can more effectively deliver information on crisis assistance to people searching for help.
Written by Stephanie Condon, Senior Writer

Google on Wednesday outlined ways it's using AI models to deliver better Search results to people in crisis. That means more effectively giving people trustworthy and useful information when they need it -- regarding topics like suicide, sexual assault or substance abuse -- while avoiding potentially shocking or harmful content. 

In the coming weeks, Google said it plans to deploy MUM (Multitask Unified Model), its latest AI model, to improve Search results for users in crisis. First, Google says MUM can better understand the intent behind people's questions. 

Additionally, in the coming months MUM will improve Google Search's spam protections, potentially reducing the share of unhelpful or dangerous results. 

Google will be able to roll out these improvements globally, given that MUM can transfer knowledge across the 75 languages it's trained on. 

"When we train one MUM model to perform a task -- like classifying the nature of a query -- it learns to do it in all the languages it knows," Google fellow and VP of Search Pandu Nayak wrote in a blog post.  

When Google introduced MUM last year at its Google I/O conference, it gave a more commercially oriented example of how the AI model could improve Search results. The model is multimodal, meaning it can understand information in different formats, such as web pages and pictures. With MUM powering Search, a user could, for example, take a photo of a pair of boots and ask whether they could be used to hike Mount Fuji. The system would analyze the boots in the photo and answer the question. 

Meanwhile, Google said its BERT language model has already significantly reduced potentially unwanted results, cutting "unexpected shocking results" by 30% in the last year. 

"It's been especially effective in reducing explicit content for searches related to ethnicity, sexual orientation and gender, which can disproportionately impact women and especially women of color," Nayak wrote.
