Microsoft plans to eliminate face analysis tools in a push for 'responsible AI'.

For years, activists and academics have raised concerns that facial analysis software claiming to identify a person’s age, gender and emotional state can be biased, unreliable or invasive, and should not be sold.

Acknowledging some of those criticisms, Microsoft said Tuesday that it plans to remove those capabilities from its artificial intelligence service for detecting, analyzing and recognizing faces. They will stop being available to new users this week, and will be phased out for existing users within the year.

The changes are part of a push by Microsoft for tighter controls on its artificial intelligence products. After a two-year review, a team at Microsoft developed the “Responsible AI Standard,” a 27-page document that sets out requirements for AI systems to ensure they do not have a harmful impact on society.

The requirements include ensuring that systems provide “valid solutions for the problems they are designed to solve” and “a similar quality of service for identified demographic groups, including marginalized groups.”

Before they are released, technologies that would be used to make important decisions about a person’s access to employment, education, health care, financial services or life opportunities are subject to review by a team led by Natasha Crampton, Microsoft’s chief responsible AI officer.

There were heightened concerns about Microsoft’s emotion recognition tool, which labeled someone’s expression as anger, contempt, disgust, fear, happiness, neutral, sadness or surprise.

“There’s a huge amount of cultural and geographic and individual variation in the way we express ourselves,” Ms. Crampton said. That led to reliability concerns, along with the bigger question of whether “facial expression is a reliable indicator of your internal emotional state,” she said.

The age and gender analysis tools being eliminated, along with other tools for detecting facial attributes such as hair and smiles, could be useful for interpreting visual images for blind or low-vision people, for example. But the company decided it was problematic to make the profiling tools generally available to the public, Ms. Crampton said.

In particular, she added, the system’s so-called gender classifier was binary, “and that’s not consistent with our values.”

Microsoft will also put new controls on its facial recognition feature, which can be used to verify identity or search for a particular person. Uber, for example, uses the software in its app to verify that a driver’s face matches the ID on file for that driver’s account. Software developers who want to use Microsoft’s facial recognition tool will need to apply for access and explain how they plan to deploy it.

Users will also have to apply and explain how they will use other potentially abusable AI systems, such as Custom Neural Voice. The service can imitate a human voice based on a sample of someone’s speech, so that authors, for example, can create synthetic versions of their voices to read their audiobooks in languages they do not speak.

Because of the possible misuse of the tool, such as creating the impression that people have said things they have not, speakers must go through a series of steps to confirm that the use of their voice is authorized, and the recordings include watermarks detectable by Microsoft.

“We’re taking concrete steps to live up to our AI principles,” said Ms. Crampton, who has worked as a lawyer at Microsoft for 11 years and joined its ethical AI group in 2018. “It’s going to be a huge journey.”

Microsoft, like other technology companies, has had stumbles with its artificial intelligence products. In 2016, it released a chatbot on Twitter called Tay that was designed to learn “conversational understanding” from the users it interacted with. The bot quickly began spouting racist and offensive tweets, and Microsoft had to take it down.

In 2020, researchers found that speech-to-text tools developed by Microsoft, Apple, Google, IBM and Amazon worked less well for Black people. Microsoft’s system was the best of the bunch, but it misidentified 15 percent of words for white people, compared with 27 percent for Black people.

The company had gathered diverse speech data to train its AI system but had not appreciated just how diverse language could be. So it hired a sociolinguistics expert from the University of Washington to explain the language varieties Microsoft needed to know about. That went beyond demographics and regional variety into how people speak in formal and informal settings.

“Thinking about race as a determining factor of how someone speaks is actually a bit misleading,” Ms. Crampton said. “What we’ve learned in consultation with the expert is that actually a huge range of factors affect linguistic variety.”

Ms. Crampton said the journey to fix that speech-to-text disparity had helped inform the guidance set out in the company’s new standards.

“This is a critical norm-setting period for AI,” she said, pointing to Europe’s proposed regulations on the use of artificial intelligence. “We hope to be able to use our standard to try to contribute to the bright, necessary discussion that needs to be had about the standards technology companies should be held to.”

A vibrant debate about the potential harms of AI has been underway in the technology community for years, fueled by mistakes and errors with real consequences for people’s lives, such as algorithms that determine whether or not people receive welfare benefits. Dutch tax authorities mistakenly took child care benefits away from needy families when a flawed algorithm penalized people with dual nationality.

Automated software for recognizing and analyzing faces has been particularly controversial. Last year, Facebook shut down its decade-old system for identifying people in photos. The company’s vice president of artificial intelligence cited the “many concerns about the place of facial recognition technology in society.”

Several Black men have been wrongfully arrested after flawed facial recognition matches. And in 2020, as the Black Lives Matter protests erupted after the police killing of George Floyd in Minneapolis, Amazon and Microsoft issued moratoriums on the use of their facial recognition products by police in the United States, saying clearer laws on their use were needed.

Since then, Washington and Massachusetts have passed legislation that requires, among other things, judicial oversight of police use of facial recognition devices.

Ms. Crampton said Microsoft had considered whether to make its software available to police in states with laws on the books, but had decided not to do so for now. She said that could change as the legal landscape evolved.

Arvind Narayanan, a Princeton computer science professor and prominent AI expert, said companies might be stepping back from technologies that analyze faces because they are “more visceral, as opposed to various other kinds of AI that might be dubious but that we don’t necessarily feel in our bones.”

Companies may also realize that, at least for the moment, some of these systems are not that commercially valuable, he said. Microsoft could not say how many users it had for the facial analysis features it is getting rid of. Mr. Narayanan predicted that companies would be less likely to abandon other invasive technologies, such as targeted advertising, which profiles people to choose the best ads to show them, because they were “cash cows.”
