Microsoft calls for government regulation of facial tech
On the heels of criticism of its work with U.S. Immigration and Customs Enforcement (ICE), Microsoft is advocating for the government to take a role in regulating facial-recognition technology.
Microsoft officials have said that the company's work with ICE doesn't include any facial-recognition work, despite a company blog post touting ICE as a Microsoft customer that mentioned the potential for ICE to use facial recognition.
ICE has come under fire for its role in separating immigrant children from their families. Microsoft officials haven't responded to calls by some employees and others outside the company to cease all work with ICE, which, frankly, isn't too surprising. Even though Microsoft has been stepping up its efforts to position itself as a champion of ethical uses of AI, government contracts are a key part of the company's business.
In a July 13 blog post, Microsoft President and Chief Legal Officer Brad Smith noted that political issues like immigration enforcement aren't going away, nor will the role of facial-recognition technology -- something Microsoft has been actively developing -- in those issues.
Smith's proposed solution: Congress should appoint a bipartisan committee to study facial recognition and make recommendations around potential regulation of the technology. In his blog post, he lists a number of issues that potentially could be addressed, including whether companies should obtain prior consent before collecting individuals' images for facial recognition; the right of individuals to know what photos already have been collected and stored; whether facial-recognition systems should be subject to minimum performance levels on accuracy; and more.
Meanwhile, Microsoft will continue working to improve facial-recognition accuracy and to develop principles governing its own facial-recognition work. Microsoft has turned down some unspecified customer requests to use its facial-recognition technology when officials decided the human rights risks were too great, and it will continue to do so, Smith added.