Google CEO Sundar Pichai has expressed support for Europe's proposed temporary ban on facial recognition, but Microsoft's top lawyer, Brad Smith, has cautioned against using a 'meat cleaver' for what should be a surgical operation.
The two tech execs on Monday responded to the European Commission's proposal to ban the use of facial recognition in public spaces for three to five years or until sufficient risk-assessment and risk-management frameworks can be developed.
Pichai on Monday wrote that there were "real concerns about the potential negative consequences of AI, from deepfakes to nefarious uses of facial recognition" and argued for "sensible regulation" that struck the right balance between the opportunities of AI and its potential harms.
Speaking at a conference in Brussels on Monday, Pichai said it was important for governments to tackle regulatory questions over facial recognition and, more broadly, AI "sooner rather than later", adding that a ban could be "immediate but maybe there's a waiting period before we really think about how it's being used".
Pichai argued that the EU could adapt existing legislation, such as the General Data Protection Regulation (GDPR), to manage the risks of AI and facial-recognition technology. He also said regulation should be used to back up AI principles such as those Google outlined last year, in which it committed not to release AI that could harm people.
"Accountability is an important part of our AI principles. We want our systems to be accountable and explainable and we test it for safety," Pichai told the thinktank Bruegel, which organized the conference.
"I think inevitably doing that we assume it will involve human agency and humans to review it, and we specifically mention we want these systems to be accountable to society at large. And I think regulation should play a role in that as well."
The European Commission acknowledges in its proposal that a temporary ban on facial recognition would "be a far-reaching measure that might hamper the development and uptake of this technology", and it would therefore prefer to use existing regulatory instruments available under the GDPR.
Microsoft vice president and chief legal counsel Brad Smith has previously called for regulation of facial recognition. However, yesterday he cautioned against the European Commission's proposed temporary ban.
Smith said facial recognition was useful for NGOs to find missing children, Reuters reported.
"I'm really reluctant to say let's stop people from using technology in a way that will reunite families when it can help them do it," he said.
"The second thing I would say is you don't ban it if you actually believe there is a reasonable alternative that will enable us to, say, address this problem with a scalpel instead of a meat cleaver."
Smith has previously argued that facial-recognition laws should require tech companies to provide transparent documentation that explains the capabilities and limitations of their facial-recognition tech.
He aired his opinions on the technology in December 2018 in the wake of employee protests against Microsoft's work developing facial-recognition technology for US Immigration and Customs Enforcement (ICE).
While Smith opposes the EC's proposed temporary ban on facial recognition, his other views on regulating the technology aren't far from the Commission's.
The European Commission has proposed voluntary labeling, requirements for public authorities that use the technology, and mandatory risk-based requirements for its use in healthcare, transport, and predictive policing.
Smith has called for legislation that mandates impact assessments for the technology's use, notification to the public when facial recognition is in operation, and consent from people entering premises where it is deployed.
He has also called for laws restricting the use of facial recognition to monitor people of interest in public spaces, and for such monitoring to be permitted only with a court order.
The White House earlier this month called on Europe to "avoid heavy-handed, innovation-killing models" and to consider an approach similar to the US's, which discourages federal agencies from taking regulatory actions that hamper AI innovation and growth.