At its event on Wednesday (11), Google announced a series of new features for the systems that integrate its services, such as the next version of Android. When it comes to artificial intelligence, however, the lingering impression is that the American giant has put its foot on the brakes: the announcements in this field focused on testing and on remediating known issues.
For example, Google has given outside developers access to its skin tone scale, a list of 10 skin tones developed by Professor Ellis Monk of Harvard University. The idea is that these parameters will help train AI models so that the technologies consider (and respect) human diversity, reducing the risk of discriminatory algorithms – an issue that has plagued this market for years.
Google also announced a new app called AI Test Kitchen, which will allow people to test the company's latest artificial intelligence language models and to find and report bugs before the models become available to the public.
Zoubin Ghahramani, vice president of research in the company's AI division, believes that the adoption of artificial intelligence will be slow and gradual, as there are still many issues to be addressed, and Google wants to proceed more carefully.
Correcting past (and present) mistakes
The new stance may be a response to criticism from the academic community, which has targeted Google before. During the development of new language models, the company's own employees have also voiced dissatisfaction.
Some employees said they were fired for pointing out issues such as gender and ethnic bias in the company's models.
One example is the former leader of the company's AI ethics research team, computer scientist Timnit Gebru. She was fired after accusing the company of racism and censorship; at the time, Google said there was "a lot of speculation and misunderstanding" about the dismissal.
The AI Test Kitchen app will be available for Android, but access will be by invitation only. The app will test LaMDA 2, a conversational AI model specializing in natural language that Google is developing.
It works simply: you talk to it in your own words, and it responds, trying to grasp the nuances of human language – nuances people take for granted but that can be difficult for a machine to interpret.
According to Google, the app will be a testing ground for products built with LaMDA 2, such as Search: by gathering feedback from the community, the company hopes to improve what is being delivered – and, of course, to resolve any discriminatory issues that may arise.
Invitations to download AI Test Kitchen will be limited – perhaps because, when large corporations launch artificial intelligence systems without proper vetting, the results can be catastrophic, as we have already seen.
In 2015, Google's photo service labeled a Black couple as "gorillas." And you may recall that Microsoft had a problem with Tay, its AI-powered chatbot, when Twitter users "taught" it to reproduce racist and abusive speech.
Or Ask Delphi, an artificial intelligence designed to answer moral questions, which at one point suggested genocide could be acceptable.
Google's new app essentially invites the tester community to criticize its product, while keeping the feedback under controlled conditions. This suggests the company expects some things to go wrong and to need tweaking.
Future use of AI
The AI Test Kitchen app offers three experiments:
- "Imagine It"
- "Talk About It"
- "List It"
Each one tests a different capability of the language model the company is developing:
Imagine It: you name a real or imaginary place, and LaMDA tries to describe it. The system should be able to describe almost anything in detail.
- For example, when you type "Imagine I'm under the sea," the AI responds: "You're in the Mariana Trench, the deepest part of the ocean. The waves crash against the walls of your submarine. You are surrounded by complete darkness."
Talk About It: the AI tries to discuss any topic; the question is whether the system can stay on subject without drifting. In Google's example:
- The AI asks, "Have you ever wondered why dogs like to play?"
- The user simply answers “Why?”
- The system understands the context and explains that it has something to do with dogs’ sense of smell.
- If the user asks "why do they have a better sense of smell?", the system understands (or should understand) that "they" refers to dogs, without the person having to repeat keywords.
List It: you state a goal, and the app tries to break it down into relevant subtasks.
- When you say "I want to plant vegetables," the answer may be "What do you want to plant?" followed by a list of tasks and items to gather, such as "watering and other maintenance."