How LaMDA works, the Google ‘artificial brain’ that an engineer claims has a consciousness of its own

BBC

Alicia Hernández @por_puesto – BBC News Mundo

posted on 06/17/2022 17:21


LaMDA is an artificial brain housed in the cloud. It is fed millions of texts and it trains itself – (credit: Getty Images)

A thinking and conscious machine. That’s how Google engineer Blake Lemoine defined LaMDA — Google’s artificial intelligence system.

Lemoine was removed from his duties by the company.

“Our team — which includes ethics and technology experts — has reviewed Blake’s concerns in line with our AI Principles and informed him that the evidence does not support his claims,” said Brian Gabriel, a Google spokesperson, in a statement.

But how does this machine work?


If we remember old science fiction movies, we can imagine LaMDA as a robot that takes human form, opens its eyes, gains consciousness and speaks. Or like HAL 9000, the supercomputer from the movie 2001: A Space Odyssey, which, in a parody on The Simpsons (voiced by Pierce Brosnan in the English original), falls in love with Marge and wants to kill Homer.

But the reality is a little more complex. LaMDA is an artificial brain housed in the cloud. It is fed millions of texts, and it trains itself. And yet, in some ways, it acts like a parrot.

Sounds complicated? Let’s take it one step at a time.

LaMDA is a huge neural network that trains itself (image: Getty Images)

Superbrain

LaMDA (Language Model for Dialogue Applications) was designed by Google. Its basis is a transformer, a tangle of deep artificial neural networks, an architecture Google introduced in 2017.
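To give a flavor of what that “tangle” computes, here is a deliberately tiny sketch in Python of scaled dot-product attention, the core operation of every transformer. The sizes and random vectors are invented for illustration; this is not LaMDA’s code.

```python
import numpy as np

# A minimal sketch of scaled dot-product attention, the core operation of
# a transformer ("Attention Is All You Need", Google, 2017). Toy sizes
# (4 words, 8 dimensions) and random vectors are invented for illustration.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # queries: what each word is "looking for"
K = rng.normal(size=(4, 8))  # keys: what each word offers as context
V = rng.normal(size=(4, 8))  # values: the information each word carries

scores = Q @ K.T / np.sqrt(8)                  # affinity between every pair of words
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # softmax: attention weights per word
context = weights @ V                          # context-aware representation of each word
print(weights.round(2))                        # row i: how much word i attends to the others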

“This neural network trains itself with large amounts of text. But the learning has a goal, which is presented in the form of a game. The system is given a complete sentence with one word missing, and it has to guess that word,” explains Julio Gonzalo Arroyo, professor at UNED (National University of Distance Education) in Spain and principal researcher in its natural language processing and information retrieval department.

The system plays this game with itself. It fills in words by trial and error and, when it misses, it behaves like a child with an activity book: it looks up the correct answer at the back and keeps correcting and refining its parameters.

It also “identifies the meaning of each word and observes the other words around it,” according to Gonzalo Arroyo. In this way, it becomes an expert at predicting patterns and words. The process is similar to predictive text on cell phones, but raised to the nth power, with a much larger memory.
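As a rough analogy for this guessing game, here is a minimal sketch using a count-based model built with the Python standard library. The three-sentence “corpus” is invented; LaMDA’s real model is a neural network trained on vastly more text.

```python
from collections import Counter, defaultdict

# A toy stand-in for the fill-in-the-blank game: a count-based trigram model.
corpus = ("i started playing guitar . "
          "playing guitar is fun . "
          "she started playing piano .").split()

model = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    model[(a, b)][c] += 1          # count which word follows each pair of words

def guess_missing(a, b):
    """Given the two preceding words, guess the blanked-out word."""
    candidates = model.get((a, b))
    return candidates.most_common(1)[0][0] if candidates else None

print(guess_missing("i", "started"))   # -> "playing": the statistically likeliest filler
```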

High-quality, specific and interesting responses

But LaMDA also produces fluid and spontaneous responses and, according to Google, can recreate the dynamism and recognize the nuances of human conversation. In short: responses that don’t look like they were written by a robot.

LaMDA has an extraordinary ability to intuit which words are most appropriate in each context (image: Getty Images)

This fluidity is one of Google’s goals, according to its technology blog. The company says it pursues that goal by ensuring that responses are high-quality, specific and interesting.

To have quality, a response needs to make sense. If I tell LaMDA, for example, “I started playing guitar,” it should respond with something related to what I said, not something meaningless.

For the second objective to be met (specificity), LaMDA should not respond with a generic “very good”, but with something more specific, such as: “Which guitar brand do you prefer, Gibson or Fender?”

And for the system to provide answers that demonstrate interest and insight, it must reach a higher level. For example: “The Fender Stratocaster is a good guitar, but Brian May’s Red Special is unique.”

The key to responding at this level of detail is self-training. “After reading billions of words, [the system] has an extraordinary ability to intuit which words are most appropriate in each context.”
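To make the three criteria concrete, here is a hypothetical sketch that ranks the candidate replies from the guitar example. The numeric scores and the equal-weight sum are invented; in LaMDA, these qualities are estimated by trained models, not hand-coded.

```python
# Hypothetical 0-1 scores for each candidate reply on the three criteria
# described above. The numbers are invented for illustration only.
candidates = {
    "Very good.":
        {"quality": 0.9, "specificity": 0.1, "interest": 0.1},
    "Which guitar brand do you prefer, Gibson or Fender?":
        {"quality": 0.9, "specificity": 0.8, "interest": 0.5},
    "The Fender Stratocaster is a good guitar, but Brian May's Red Special is unique.":
        {"quality": 0.9, "specificity": 0.9, "interest": 0.9},
}

def overall(scores):
    return sum(scores.values())   # naive trade-off between the three criteria

best = max(candidates, key=lambda reply: overall(candidates[reply]))
print(best)   # the detailed Brian May reply wins
```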

‘It makes no sense to anthropomorphize current conversation models’, according to Google (image: Getty Images)

For artificial intelligence experts, transformers such as LaMDA marked a turning point because “they allow very efficient processing [of information or texts] and have produced a veritable revolution in the field of natural language processing”.

Safety and bias

Another goal of LaMDA’s training is not to create “violent or inhumane content, not to promote slander or hate speech against groups of people, and not to contain profanity,” according to Google’s artificial intelligence (AI) blog.

Google also wants answers to be fact-based and grounded in known external sources.

“With LaMDA, we’re taking a careful and thoughtful approach to better address valid concerns about fairness and truthfulness,” said Brian Gabriel, a Google spokesperson.

He says the system has already undergone 11 different reviews against the AI Principles, “in addition to rigorous research and testing based on fundamental measures of quality, safety, and the system’s ability to produce fact-based statements.”

But how can a system like LaMDA avoid bias and hateful messages? “The secret is to select which data [which text sources] are fed into the system,” says Gonzalo Arroyo.

But this is not easy. “The way we communicate reflects our biases, and the machines learn them. It is difficult to eliminate them from the training data without losing representativeness,” he explains.

In other words, biases can still emerge.
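As a rough illustration of that selection step, here is a minimal sketch of filtering training text before it is fed to a model. The blocklist and the sample documents are invented; production pipelines rely on trained safety classifiers, not keyword lists.

```python
# A naive illustration of "selecting which text sources are fed into the
# system". The blocklist tokens and sample documents are invented.
BLOCKLIST = {"insult", "slur"}   # placeholder tokens for unwanted language

documents = [
    "the queen attended a science conference today",
    "an insult aimed at a minority group",            # would be filtered out
    "guitar lessons for beginners",
]

def is_acceptable(doc: str) -> bool:
    return not (set(doc.lower().split()) & BLOCKLIST)

training_corpus = [doc for doc in documents if is_acceptable(doc)]
print(training_corpus)
```

This also hints at Gonzalo Arroyo’s caveat: every document the filter drops removes a slice of how people actually communicate, which is where the loss of representativeness comes from.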

‘We human beings are relatively easily deceived’, says Spanish professor Julio Gonzalo Arroyo (image: Getty Images)

“If you feed it news about Queen Letizia [of Spain] that all comments on the clothes she wears, it is possible that when someone asks the system about her, it will repeat this sexist pattern and talk about clothes rather than anything else,” notes the professor.

Singing parrot

In 1966, a system called ELIZA was created that applied very simple patterns to simulate a psychotherapist’s dialogue.

“The system encouraged the patient to keep talking, whatever the topic of conversation, and triggered patterns such as: if the word ‘family’ is mentioned, ask how their relationship with their mother is,” says Gonzalo.
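For a sense of how simple those patterns were, here is a minimal ELIZA-style sketch in Python. The two rules and the fallback are invented for illustration, in the spirit of the 1966 original rather than Joseph Weizenbaum’s actual script.

```python
import re

# A minimal ELIZA-style rule set: each rule is a trigger pattern plus a
# canned response. These rules are invented for illustration.
RULES = [
    (re.compile(r"\bfamily\b", re.I),
     lambda m: "How is your relationship with your mother?"),
    (re.compile(r"\bi feel (\w+)", re.I),
     lambda m: f"Why do you feel {m.group(1)}?"),
]

def eliza_reply(utterance: str) -> str:
    for pattern, respond in RULES:
        match = pattern.search(utterance)
        if match:
            return respond(match)
    return "Please, tell me more."   # default prompt to keep the patient talking

print(eliza_reply("I argued with my family"))   # -> mother question
print(eliza_reply("I feel anxious"))            # -> "Why do you feel anxious?"
print(eliza_reply("Nice weather today"))        # -> fallback
```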

Some people even thought that ELIZA was really a therapist — they even claimed that she had helped them. “We human beings are relatively easily deceived,” says Gonzalo Arroyo.

For him, Lemoine’s claim that LaMDA has gained self-awareness “is an exaggeration.” According to the professor, statements like that of Lemoine do not help to maintain a healthy debate on artificial intelligence.

“Listening to this kind of nonsense is not beneficial. We run the risk of it becoming an obsession, with people thinking we are living in The Matrix and the machines are ready to finish us off. That scenario is remote and far-fetched. I don’t think it helps to have a thoughtful conversation about the benefits of artificial intelligence,” says Gonzalo Arroyo.

However fluid, high-quality and specific the conversation may be, “it’s just a giant formula that adjusts its parameters to better predict the next word. It has no idea what it’s talking about.”

Google’s answer is similar. “These systems imitate the types of exchanges found in millions of sentences and can talk about any fantastical topic. If you ask what it’s like to be an ice cream dinosaur, they can generate text about melting, roaring and so on,” explains Google’s Gabriel.

American researchers Emily Bender and Timnit Gebru compared these language-generation systems to “stochastic parrots”: they repeat words at random, without understanding them.

That is why, as Spanish researchers Ariel Guersenvaig and Ramón Sangüesa put it, transformers like LaMDA understand what they write about as much as a parrot understands what it sings.


This text was originally published at https://www.bbc.com/portuguese/geral-61845144
