
Google says it will address AI, machine learning model bias with technology called TCAV

TCAV is short for Testing with Concept Activation Vectors, and it's a technology that can identify high-level concept signals in a model and therefore help defend against bias.
Written by Larry Dignan, Contributor

Google CEO Sundar Pichai said the company is working to make its artificial intelligence and machine learning models more transparent as a way to defend against bias.


Pichai outlined a bevy of artificial intelligence enhancements and moves to put more machine learning models on devices, but the bigger takeaway for developers and data scientists may be something called TCAV. TCAV is short for Testing with Concept Activation Vectors. In a nutshell, TCAV is an interpretability method to understand what signals your neural network models use for prediction.

In theory, TCAV's ability to surface the signals a model relies on could expose bias: it would highlight whether, say, gender was driving predictions, and surface other issues involving race, income and location. Using TCAV, computer scientists can see how heavily a model weighs high-level concepts.
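The core idea behind a Concept Activation Vector is a direction in a layer's activation space that separates examples of a concept (say, "striped" images) from random examples. Google's published method trains a linear classifier and takes its normal; the sketch below is a minimal, illustrative numpy stand-in that uses the simpler difference-of-means direction instead. All names here (`compute_cav`, the 8-dimensional toy activations) are assumptions for illustration, not Google's implementation.

```python
import numpy as np

def compute_cav(concept_acts, random_acts):
    """Return an illustrative Concept Activation Vector (CAV).

    The CAV points from the cloud of "random" activations toward the
    cloud of "concept" activations. Here that direction is taken as
    the difference of the two means, normalized to unit length; the
    original method fits a linear classifier and uses its normal.
    """
    direction = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)

# Toy data: 50 "striped" examples vs. 50 random examples, represented
# as activations in a hypothetical 8-dimensional network layer.
rng = np.random.default_rng(0)
striped = rng.normal(loc=1.0, size=(50, 8))      # concept examples
random_imgs = rng.normal(loc=0.0, size=(50, 8))  # random baseline
cav = compute_cav(striped, random_imgs)
print(cav.shape)  # (8,)
```

Because the CAV lives in the activation space of an already-trained layer, computing one never requires retraining the model, which matches the article's point below.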


Bias is a critical concept in AI and some academics have called for more self-governance and regulation. In addition, industry players such as IBM have pushed for more transparency and a layer of software to monitor algorithms to see how they work together to produce bias. Meanwhile, enterprises are striving for explainable AI. For Google, transparency matters because of its technologies such as Duplex and the next-gen Google Assistant. These tools are increasingly able to carry out tasks for you. Transparency of the models can mean more trust and usage of Google technology. 

Bottom line: Transparency and defending against bias will be critical for enterprises, as well as for the cloud providers that will deliver most of our models as services.

TCAV, which doesn't require models to be retrained, is an effort to dissect models and illustrate why they make a given decision. For instance, a model that identifies a zebra may do so via high-level concepts such as stripes. Here's Google's illustration:

(Image: google-ai-zebra-model.png)
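The zebra example can be made concrete with the TCAV score itself: the fraction of inputs for which nudging the layer's activations in the concept direction would raise the class logit. The sketch below is an illustrative numpy version under assumed toy numbers; `tcav_score` and the 2-dimensional gradients are hypothetical, not Google's code.

```python
import numpy as np

def tcav_score(gradients, cav):
    """Fraction of inputs whose class logit rises along the CAV.

    `gradients` holds d(logit)/d(activations) for each input; a
    positive dot product with the CAV means moving the activations
    toward the concept increases the class score for that input.
    """
    return float(np.mean(gradients @ cav > 0))

# Toy example: "zebra" logit gradients for 3 inputs in a
# 2-dimensional activation space, with a "stripes" CAV along axis 0.
cav = np.array([1.0, 0.0])
grads = np.array([[0.5, 0.2],
                  [0.3, -0.1],
                  [-0.2, 0.4]])
score = tcav_score(grads, cav)
print(score)  # 2 of 3 inputs are sensitive to the concept
```

A score near 1.0 would suggest the model leans heavily on the concept for that class; applied to a concept like gender or race, the same measurement is what would surface the kinds of bias discussed above.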

"Building a more helpful Google for everyone means addressing bias. You need to know how a model works and how there may be bias. We will improve transparency," said Pichai.

He added that Google's AI team is working on TCAV, a technology that surfaces the high-level concepts a model relies on. TCAV's goal is to illustrate the variables that underpin a model.

"There's a lot more to do, but we are committed to building AI in a way that works for everyone," Pichai said.

By shrinking models down so they can reside on a device, Google is working to lower latency, and it is using techniques such as federated learning to consume less data and enhance user privacy.

