Microsoft calls for government regulation of facial recognition because of 'potential for abuse'


Microsoft president Brad Smith has called on government to regulate facial recognition technology, citing concerns that it is open to abuse. While he acknowledges that technology companies have a role to play, he argues that it is down to elected representatives to put rules in place.

Using a terrible analogy ("All tools can be used for good or ill. Even a broom can be used to sweep the floor or hit someone over the head."), Smith points out that while facial recognition technology is undeniably useful, there is also potential for it to be "misused and abused by private companies and public authorities alike". He wants government to do something about it.


Smith is not just saying that something needs to be done; Microsoft has a number of suggestions and recommendations for moving forward. Clearly well-aware that governments and agencies around the world are rolling out facial recognition systems for a range of purposes, he paints a bleak picture of what this could mean:

Imagine a government tracking everywhere you walked over the past month without your permission or knowledge. Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. Imagine the stores of a shopping mall using facial recognition to share information with each other about each shelf that you browse and product you buy, without asking you first. This has long been the stuff of science fiction and popular movies -- like "Minority Report", "Enemy of the State" and even "1984" -- but now it's on the verge of becoming possible.

He points to some of the issues that have been discovered with facial recognition as reasons to be concerned -- such as the fact that such systems are more accurate at recognizing white men than women, and better at recognizing people with lighter skin in general. He goes on to say that even if elements of "bias" can be ironed out, recognition systems will always have some rate of error.

Microsoft is particularly interested in this issue at the moment because it recently found itself at the center of the immigration debate -- concerns were raised about whether the company had a contract involving facial recognition with ICE (Smith is quick to point out that "we've since confirmed that the contract in question isn't being used for facial recognition at all").

Despite concerns about how governments might make use of facial recognition technology, Smith looks to government to keep things in check:

The only effective way to manage the use of technology by a government is for the government proactively to manage this use itself. And if there are concerns about how a technology will be deployed more broadly across society, the only way to regulate this broad use is for the government to do so. This in fact is what we believe is needed today -- a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.

While we appreciate that some people today are calling for tech companies to make these decisions -- and we recognize a clear need for our own exercise of responsibility, as discussed further below -- we believe this is an inadequate substitute for decision making by the public and its representatives in a democratic republic. We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology. As a general principle, it seems more sensible to ask an elected government to regulate companies than to ask unelected companies to regulate such a government.

He goes on to suggest a number of questions that need to be considered when drawing up regulation and implementing controls:

  • Should law enforcement use of facial recognition be subject to human oversight and controls, including restrictions on the use of unaided facial recognition technology as evidence of an individual's guilt or innocence of a crime?
  • Similarly, should we ensure there is civilian oversight and accountability for the use of facial recognition as part of governmental national security technology practices?
  • What types of legal measures can prevent use of facial recognition for racial profiling and other violations of rights while still permitting the beneficial uses of the technology?
  • Should use of facial recognition by public authorities or others be subject to minimum performance levels on accuracy?
  • Should the law require that retailers post visible notice of their use of facial recognition technology in public spaces?
  • Should the law require that companies obtain prior consent before collecting individuals’ images for facial recognition? If so, in what situations and places should this apply? And what is the appropriate way to ask for and obtain such consent?
  • Should we ensure that individuals have the right to know what photos have been collected and stored that have been identified with their names and faces?
  • Should we create processes that afford legal rights to individuals who believe they have been misidentified by a facial recognition system?

This is clearly not a problem that's going to be solved overnight because it involves a highly complex set of interrelated issues. But with Microsoft having said its piece, we can expect other technology companies to do the same in an attempt to be seen to be on the right side of the debate.

Image credit: ra2studio / Shutterstock

