
Google CEO Sundar Pichai: This is why AI must be regulated

Google CEO weighs in on AI regulation debate in Europe.
Written by Liam Tung, Contributing Writer

Google CEO Sundar Pichai has explained why the world's governments need to impose regulations on the use of artificial intelligence (AI) that go beyond the principles companies publish themselves.

Pichai outlined his thoughts on AI regulation in the Financial Times today, reflecting on Google's own AI principles, which it published in mid-2018 following an outcry from employees over its work on the Pentagon's Project Maven. The project applied Google-developed object recognition AI to drone surveillance technology. 

Google vowed in its AI principles not to create AI that would harm people, but Pichai noted that "principles that remain on paper are meaningless" without action, pointing to the tools Google has developed and open-sourced to test AI for "fairness".


But Pichai also admits that every major innovation brings potential negative side effects.

"There are real concerns about the potential negative consequences of AI, from deepfakes to nefarious uses of facial recognition. While there is already some work being done to address these concerns, there will inevitably be more challenges ahead that no one company or industry can solve alone," he writes. 

Pichai argues that governments can adapt existing legislation, such as the EU's General Data Protection Regulation (GDPR), to the oversight of AI rather than writing new laws from scratch.

"Good regulatory frameworks will consider safety, explainability, fairness and accountability to ensure we develop the right tools in the right ways. Sensible regulation must also take a proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities."

Microsoft's recent calls for government regulation have focused on the use of facial-recognition technology in public spaces, arguing that if left unchecked it will increase the risk of biased decisions and outcomes for groups of people already discriminated against. 

The timing of Pichai's post is unlikely to be a coincidence. Euractiv reporters last week published a leaked European Commission proposal floating a three- to five-year ban on the use of facial-recognition technology by public and private-sector organizations in public spaces, until regulators can develop solid methods for assessing the technology's risks and suitable risk-management approaches.

"This would safeguard the rights of individuals, in particular against any possible abuse of the technology. It would be necessary to foresee some exceptions, notably for activities in the context of research and development and for security purposes (subject to a decision issued by a relevant court)," the Commission wrote. 

"By its nature, such a ban would be a far-reaching measure that might hamper the development and uptake of this technology. The Commission is therefore of the view that it would be preferable to focus at this stage on full implementation of the provisions of the General Data Protection Regulation."


The paper also highlights GDPR and other European data laws that could be used as the foundation of laws to regulate the use of AI. Europe wants to promote the adoption of AI but also ensure it is used in a way that "respects European values and principles", while accepting that the biggest players in AI are from North America and Asia, where investments in AI dwarf Europe's.

The proposal highlights five regulatory options that the European Commission is considering, including voluntary labeling; requirements on public authorities using facial recognition; mandatory risk-based requirements for high-risk applications such as healthcare and transport as well as predictive policing; adapting existing product safety and liability legislation; and governance issues.     

Today, the European arm of the Computer & Communications Industry Association (CCIA) – which represents Amazon, Google, Facebook, Mozilla, Intel and Uber – sent a letter to the Commission urging "targeted regulatory intervention rather than a one-size-fits-all" approach. 

CCIA said regulatory action should be "risk-based and focus on the most sensitive types of AI applications and sectors, e.g., public health".


Google CEO Sundar Pichai: "There are real concerns about the potential negative consequences of AI."  

Image: CBS/YouTube