Microsoft to retire some facial recognition technology as it takes a more responsible approach to AI
Microsoft has publicly shared its Responsible AI Standard, which includes its guidelines for building AI systems. The company says it is publishing the standard "to share what we have learned, invite feedback from others, and contribute to the discussion about building better norms and practices around AI".
In addition to publishing the Responsible AI Standard, Microsoft has announced that it is shutting down some of the capabilities of its Azure Face facial recognition service. Features being retired include those that can be used to "infer emotional states and identity attributes such as gender, age, smile, facial hair, hair, and makeup".
Microsoft says that taking a responsible approach to artificial intelligence means keeping people and their goals at the center of design decisions. On top of this, it is also important to respect "enduring values like fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability".
The decision to block access to some of the facial recognition features has been taken with this in mind. New customers will no longer have access to these detection features, while existing customers will lose access after June 30, 2023.
In a blog post announcing the changes, Microsoft says:
Taking emotional states as an example, we have decided we will not provide open-ended API access to technology that can scan people’s faces and purport to infer their emotional states based on their facial expressions or movements. Experts inside and outside the company have highlighted the lack of scientific consensus on the definition of "emotions", the challenges in how inferences generalize across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability. We also decided that we need to carefully analyze all AI systems that purport to infer people's emotional states, whether the systems use facial analysis or any other AI technology. The Fit for Purpose Goal and Requirements in the Responsible AI Standard now help us to make system-specific validity assessments upfront, and our Sensitive Uses process helps us provide nuanced guidance for high-impact use cases, grounded in science.
These real-world challenges informed the development of Microsoft's Responsible AI Standard and demonstrate its impact on the way we design, develop, and deploy AI systems.
The company concludes: "Better, more equitable futures will require new guardrails for AI. Microsoft's Responsible AI Standard is one contribution toward this goal, and we are engaging in the hard and necessary implementation work across the company. We're committed to being open, honest, and transparent in our efforts to make meaningful progress".