During the I/O 2022 conference, held in May, Google promised to release access to LaMDA 2, its controversial conversational artificial intelligence model. That day has come: users interested in exploring the technology can now sign up for a beta program to gain access to it.
If you’re out of the loop, here’s a quick explanation: LaMDA is an artificial intelligence engine designed to make conversation between a person and a machine feel realistic. The goal is for this interaction to be as natural as communication between two humans.
It is in Google’s interest for LaMDA to be used by people outside the company, so the technology can be trained. On the other hand, it is prudent to grant this access only to selected groups, to prevent the responses generated by the artificial intelligence from getting out of control.
To put it another way, public testing of conversational systems is risky because the artificial intelligence can give incoherent, unethical, and even prejudiced answers. Hence the importance of carrying out public testing in phases.
The AI Test Kitchen
Having a controlled testing environment is probably one of the reasons Google announced AI Test Kitchen earlier this year. It is an application that lets users try out the latest version of the technology, LaMDA 2.
Those interested in interacting with Google’s artificial intelligence can now join a waiting list on the AI Test Kitchen website. In this initial phase, however, access will be granted only to groups of users based in the United States, starting with those on Android. iPhone users should get access to the feature in the coming weeks.
For now, three types of tests are available in AI Test Kitchen:
- Imagine It: name a place and LaMDA will try to describe it;
- List It: offer a goal or topic and LaMDA will create a list of related subtasks;
- Talk About It: offer a subject to test whether LaMDA stays on topic (only conversations about dogs are possible for now).
Why is LaMDA controversial?
The idea of talking to a chatbot based on artificial intelligence is tempting to many people who are interested in the subject. However, there are those who look at this scenario with suspicion and concern.
An emblematic example is that of Blake Lemoine. As a Google engineer, he had access to LaMDA; his main role was to help tune the artificial intelligence to prevent it from generating hostile responses.
However, Lemoine grew concerned about the responses he received. To him, the system had become sentient, that is, capable of expressing emotions, opinions and subjective experiences. It was as if LaMDA had taken on a life of its own.
The engineer’s statements had great public repercussions and prompted reflection on the limits of artificial intelligence. Unsurprisingly, after the controversy, Blake Lemoine is no longer a Google employee.
With information: The Verge, TechCrunch.
Is Google’s AI Really Alive?
This is the question that guides the discussion in Tecnocast 249. In it, we talk to Augusto Baffa about the case of Blake Lemoine, the Google engineer who claimed that the company maintains a conscious artificial intelligence. The episode is also a good opportunity to understand some philosophical and ethical aspects of AI development.