Microsoft is eliminating emotion detection capabilities from its facial recognition technology

When Microsoft announced this week that it would retire some emotion-related capabilities from its facial recognition technology, the executive who leads its responsible AI work warned that the science of emotions is far from settled.


Natasha Crampton, Microsoft’s chief responsible AI officer, wrote in a blog post: “Experts inside and outside the company have pointed to the lack of scientific consensus on the definition of ‘emotions,’ the challenges in how inferences generalize across use cases, regions, and demographics, and heightened privacy concerns surrounding this type of capability.”

Microsoft’s move, part of a larger announcement about its “Responsible AI Standard” framework, became the highest-profile case of a company abandoning emotion recognition AI, a relatively modest technology that has nonetheless been the subject of significant academic criticism.

In order to automatically assess a person’s emotional state, emotion detection technology often looks at a variety of characteristics, including facial expressions, tone of voice, and word choice.

Many tech companies have created software for business, education, and customer service that claims to be able to read, recognize, or quantify emotions.

One such service claims to provide real-time analysis of callers’ emotions so that call center staff can adjust their behavior accordingly. Another monitors students’ emotions during classroom video calls so teachers can gauge their performance, interest, and participation.

The technology has been met with skepticism for many reasons, including its questionable effectiveness. Sandra Wachter, associate professor and senior researcher at Oxford University, said emotion AI “at best lacks any scientific basis and at worst is complete pseudoscience.” She called its deployment in the private sector “very worrying.”

Like Crampton, she pointed out that emotion AI’s shaky scientific footing is far from its only flaw.

“Even if we found evidence that AI is able to accurately predict emotions, that wouldn’t justify its deployment,” she said, adding that our innermost thoughts and feelings are protected by human rights such as the right to privacy.

It’s unclear exactly how many major tech companies are using emotion-reading technologies. In May, more than twenty-five human rights organizations issued a letter asking Zoom CEO Eric Yuan not to deploy emotion AI technology.

The letter was sent after a report by the technology news site Protocol suggested that Zoom might adopt the technology, given its recent research in the area. Zoom did not respond to a request for comment.

In addition to challenging its scientific basis, human rights organizations have argued that emotion AI is misleading and discriminatory.

Lauren Rhue, an assistant professor of information systems at the University of Maryland’s Robert H. Smith School of Business, found that across two facial recognition programs (including Microsoft’s), emotion AI consistently interpreted Black subjects as having more negative emotions than white subjects. One program read Black individuals as angrier than white ones, while Microsoft’s read Black subjects as more contemptuous.

Microsoft’s policy changes primarily target Azure, its cloud platform for selling software and other services to businesses and organizations. When the emotion-identifying AI was announced on Azure in 2016, Microsoft said it could detect “happiness, sadness, fear, anger, and more.”
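For context, the sketch below shows roughly how a call to the emotion attribute of the Azure Face API looked before the capability was retired. The resource name and key are placeholders, and the request shape is an assumption based on the publicly documented Face detect endpoint, not Microsoft’s internal implementation; rather than a single label, the service returned a confidence score for each emotion category it supported.

```python
import requests

# Placeholder resource name and key; the emotion attribute is no longer
# available to new customers following Microsoft's policy change.
ENDPOINT = "https://<your-face-resource>.cognitiveservices.azure.com"
KEY = "<your-subscription-key>"

def detect_emotions(image_url: str) -> list[dict]:
    """Ask the Face detect endpoint for the (now-retired) emotion attribute."""
    response = requests.post(
        f"{ENDPOINT}/face/v1.0/detect",
        params={"returnFaceAttributes": "emotion"},
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/json",
        },
        json={"url": image_url},
        timeout=10,
    )
    response.raise_for_status()
    # Each detected face carries per-category confidence scores,
    # e.g. {"anger": 0.01, "contempt": 0.0, "happiness": 0.93, ...}
    return [face["faceAttributes"]["emotion"] for face in response.json()]
```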

Microsoft has also committed to reviewing emotion-sensing AI across all of its systems to weigh the risks and benefits of the technology in different areas. It plans to continue using emotion-sensing AI in Seeing AI, an app that helps visually impaired people by verbally describing their surroundings.

Andrew McStay, professor of digital life and director of the Emotional AI Lab at Bangor University, said in a written statement that he would have preferred Microsoft to cease all emotion AI development. Because the technology is widely regarded as ineffective, he sees no case for continuing to use it in products.

“I’m quite curious whether Microsoft is going to eliminate all types of emotion and psychophysiological sensing from all of its operations,” he wrote, calling such a move an easy win.

Another change in the new standards is a pledge to improve equity in the company’s speech-to-text technology; according to one study, its error rate for Black users is roughly twice that for white users. Microsoft has also restricted access to its Custom Neural Voice feature, which allows near-exact imitation of a person’s voice, out of concern that it could be used as a tool for deception.

Crampton noted that these changes were essential in part because of the lack of government oversight of AI systems.

“Artificial intelligence is becoming more and more important in our lives, but our laws are lagging behind,” she noted. “They have not yet caught up with the particular dangers of AI or the demands of society. While there are signs that government action on AI is growing, we recognize that it is also our responsibility to act. We believe it is necessary to ensure that AI systems are designed responsibly.”
