
Google's new AI tool could help decode the mysterious algorithms that decide everything

The search giant launches "Explainable AI" to make algorithms more transparent and customers less confused.
Written by Daphne Leprince-Ringuet, Contributor

While most people encounter algorithms every day, few can claim to really understand how AI works. A new tool unveiled by Google, however, aims to help ordinary users grasp the complexities of machine learning.

Dubbed "Explainable AI", the feature promises to do exactly what its name describes: to explain to users how and why a machine-learning model reaches its conclusions. 

To do so, the explanation tool quantifies how much each feature in the dataset contributed to the algorithm's outcome, assigning each data factor a score that reflects how strongly it influenced the machine-learning model's prediction.


Users can consult that score to understand why a given algorithm reached a particular decision. For example, in the case of a model that decides whether or not to approve someone for a loan, Explainable AI would show account balance and credit score as the most decisive factors.
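To make the idea concrete, here is a minimal, hand-rolled sketch of per-prediction influence scores on a toy loan model. It is not Google's method: it simply measures how much the approval probability moves when a feature is ablated, and the dataset and feature names are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["account_balance", "credit_score", "age", "num_open_loans"]

# Synthetic loan data in which balance and credit score drive approval.
X = rng.normal(size=(1000, 4))
y = (0.8 * X[:, 0] + 0.6 * X[:, 1] + 0.1 * rng.normal(size=1000) > 0).astype(int)
model = LogisticRegression().fit(X, y)

applicant = X[:1].copy()  # one loan application
base_prob = model.predict_proba(applicant)[0, 1]

# Score each feature by how much the approval probability drops when that
# feature is replaced by its dataset average (a crude ablation, not
# Google's actual attribution method).
for i, name in enumerate(features):
    ablated = applicant.copy()
    ablated[0, i] = X[:, i].mean()
    print(f"{name}: {base_prob - model.predict_proba(ablated)[0, 1]:+.3f}")
```

On this synthetic data, the balance and credit-score columns should dominate the scores, mirroring the loan example above.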

Introducing the new feature at Google's Next event in London, the CEO of Google Cloud, Thomas Kurian, said: "If you're using AI for credit scoring, you want to be able to understand why the model rejected a particular [application] and accepted another one."

"Explainable AI allows you, as a customer, who is using AI in an enterprise business process, to understand why the AI infrastructure generated a particular outcome," he said.

The explanation tool is now available for machine-learning models hosted on Google's AutoML Tables and Cloud AI Platform Prediction.

Google had previously taken steps to make algorithms more transparent. Last year, it launched the What-If Tool for developers to visualize and probe datasets when working on the company's AI platform.

By quantifying the contribution of each data factor, Explainable AI unlocks further insights and makes them accessible to more users.

"You can pair AI Explanations with our What-If tool to get a complete picture of your model's behavior," said Tracy Frey, director of strategy at Google Cloud.  

In some fields, like healthcare, improving the transparency of AI would be particularly useful. 

In the case of an algorithm programmed to diagnose certain illnesses, for example, it would let physicians see which symptoms the model relied on for its decision, and verify that those symptoms are not false positives or signs of different ailments.
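For image-based models like these, a widely used attribution technique is integrated gradients (Sundararajan et al., 2017); the article doesn't specify which method Google's service uses, so the sketch below is only an illustration. It assumes a Keras classifier `model` and an input image batch `x` with a same-shaped `baseline` (for instance, an all-black image).

```python
import tensorflow as tf

def integrated_gradients(model, x, baseline, steps=50):
    """Approximate per-pixel attributions for one of the model's outputs.

    Averages gradients along the straight-line path from `baseline` to the
    input `x`, then scales by (x - baseline). `model`, `x` and `baseline`
    are assumptions: any Keras classifier and (1, H, W, C) tensors work.
    """
    alphas = tf.linspace(0.0, 1.0, steps + 1)
    # Interpolated inputs along the path, shape (steps + 1, H, W, C).
    interpolated = baseline + alphas[:, None, None, None] * (x - baseline)
    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        scores = model(interpolated)[:, 0]  # score of one output class
    grads = tape.gradient(scores, interpolated)
    # Trapezoidal average of the gradients along the path.
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)
    return (x - baseline)[0] * avg_grads
```

The returned tensor has one attribution value per pixel, which is what would let a physician see which regions of a scan drove a diagnosis.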

The company also announced that it is introducing what it calls "model cards" – short documents that provide at-a-glance information about particular algorithms.


The documents are essentially ID cards for machine-learning models, including practical details about a model's performance and limitations.

According to the company, this will "help developers make better decisions about what models to use for what purpose and how to deploy them responsibly."
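As an illustration, a model card can be thought of as a small structured document along the lines of the sketch below. The schema is hypothetical, not Google's actual format; the limitations listed are drawn from the face detection card described later in this article.

```python
# A hypothetical sketch of the information a model card carries; this
# schema is invented for illustration and is not Google's actual format.
face_detection_card = {
    "model": "face_detector",          # hypothetical identifier
    "intended_use": "Detect human faces in still images",
    "performance": {
        "precision": ...,              # filled in from evaluation results
        "recall": ...,
    },
    "limitations": [
        "Accuracy degrades for very small or rotated faces",
        "Performance drops under poor lighting",
    ],
}
```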

Google has already published two example model cards, providing details about a face detection algorithm and an object detection algorithm.

Image: Google's face detection model card (source: Google Model Cards)

Users can read about the model's outputs, performance, and limitations. For example, the face detection model card explains that the algorithm might be limited by the face's size, orientation or poor lighting.

The new tools and features announced today are part of Google's attempts to prove that it is sticking to its AI principles, which call for more transparency in developing the technology.

Earlier this year, the company dissolved its one-week-old AI ethics board, which was created to monitor its use of artificial intelligence. 
