When Microsoft added a chatbot to its Bing search engine last month, people noticed that it offered all kinds of false information about the Gap clothing company, Mexican nightlife and the singer Billie Eilish. Then, when journalists and other early testers held lengthy conversations with Microsoft’s artificial intelligence (AI) bot, it slid into disturbingly creepy behavior.
In the days since the Bing bot’s behavior became a worldwide sensation, people have had a hard time understanding the weirdness of this new creation. More often than not, scientists have said that humans deserve much of the blame. But there is still a bit of a mystery as to what the new chatbot can do and why it would do it. Its complexity makes it hard to dissect and even harder to predict, and researchers are looking at it through a philosophical lens, as well as the hard code of computer science.
Like any other learner, an AI system can learn bad information from bad sources. And that strange behavior? It can be a chatbot’s distorted reflection of the words and intentions of the people who use it, said Terry Sejnowski, a neuroscientist, psychologist and computer scientist who helped lay the intellectual and technical foundations of modern artificial intelligence.
“This happens when you go deeper and deeper into these systems,” said Sejnowski, a professor at the Salk Institute for Biological Studies and the University of California, San Diego, who published a research paper on this phenomenon last month in the journal Neural Computation. “Whatever you’re looking for—whatever you want—they’ll provide it for you.”
Google also unveiled a new chatbot, Bard, in February, but scientists and journalists quickly realized that it was writing nonsense about the James Webb Space Telescope. OpenAI, a San Francisco startup, kickstarted the chatbot boom in November when it introduced ChatGPT, which also tells falsehoods.
The new chatbots are powered by a technology that scientists call a large language model, or LLM. These systems learn by analyzing vast amounts of text taken from the internet, including false, biased and toxic material. While analyzing that sea of good and bad information, an LLM learns to do one thing in particular: guess the next word in a sequence of words.
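To make that idea concrete, here is a deliberately tiny sketch in Python. The miniature corpus and the word-counting scheme are invented for illustration; real LLMs use neural networks trained on vastly more text, but the training goal is the same one shown here, guessing the next word.

```python
# Toy illustration of next-word prediction: count which word tends to follow
# which in a tiny corpus, then guess the most frequent successor.
from collections import Counter, defaultdict

corpus = (
    "the telescope took a picture of the planet "
    "the telescope took a picture of the galaxy "
    "the probe took a sample of the planet"
).split()

# Count, for every word, the words that appear immediately after it.
successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def guess_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    candidates = successors.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(guess_next("telescope"))  # -> "took"
print(guess_next("the"))        # -> "telescope" (ties broken by first appearance)
```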
It works like a giant version of the autocomplete technology that suggests the next word as you type on your phone. But when you chat with a chatbot, its responses are not based only on everything it has learned from the internet. They are also based on everything you have told it and everything it has told you back. The longer the conversation gets, the more influence a user has on what the bot says.
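The sketch below shows why the conversation itself matters: a chat interface typically feeds the entire running exchange back into the model on every turn. The `fake_model` function here is a purely hypothetical stand-in for a real language model, but the loop illustrates how each reply is conditioned on everything said so far, so the user’s words increasingly shape what comes back.

```python
# Sketch of a chat loop: every new reply is generated from the *whole*
# conversation so far, not just the latest message.

def fake_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM: echoes the mood of the prompt."""
    return "I am getting angry too!" if "angry" in prompt.lower() else "How can I help?"

def chat(turns: list[str]) -> str:
    prompt = ""
    for turn in turns:
        prompt += f"User: {turn}\nBot:"
        reply = fake_model(prompt)   # prediction conditioned on the full history
        prompt += f" {reply}\n"
    return prompt

print(chat(["Hello there.", "Why are you so useless? This makes me angry."]))
```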
If you want it to get angry, it gets angry, Sejnowski said. If you coax it into being creepy, it becomes creepy.
The reactions to the Microsoft chatbot’s strange behavior overshadowed an important point: it has no personality. It offers instant results spat out by a complex computer algorithm.
Microsoft seemed to curtail the weirder behavior when it put a limit on the length of conversations with the Bing chatbot. Microsoft’s partner OpenAI and Google are also exploring ways to control the behavior of their bots. Because chatbots learn from so much material and put it together in such complex ways, researchers are not sure exactly how they produce their results. Microsoft and OpenAI have decided that the only way to find out what the bots will do in the real world is to let them loose and rein them in when they stray.
These systems are a reflection of humanity. But that’s not the only reason chatbots generate problematic language, said Melanie Mitchell, an AI researcher at the Santa Fe Institute in New Mexico. When they generate text, they don’t repeat word for word what’s on the internet. They produce new text by combining billions of patterns.
Even if these systems were trained solely on text that was truthful, they could still produce falsehoods. Even if they learned only from wholesome text, they might still generate something creepy.
“There’s nothing stopping them from doing it,” Mitchell said. “They’re just trying to produce something that sounds like human language.”
By: CADE METZ