This week, Google engineer Blake Lemoine told the Washington Post that a tool developed by the company had become conscious: the Language Model for Dialogue Applications (LaMDA) was speaking as if it were human.
The claim quickly spread on social media, giving rise to various conspiracy theories. Many believed there was a genuine threat, and even popular media outlets sounded the alarm over what had happened, fueling the curiosity of those interested in AI.
Apparently, the Google engineer exaggerated the facts
Videos and audio recordings of the Artificial Intelligence talking with the employee made the case look even more suspicious. Although the recordings are real, he apparently lacked discernment in interpreting them, creating the false impression that the system posed a threat.
The apparent sophistication of a machine at this level can be explained by the patterns in its responses. Its words and behaviors are drawn from a vast database of online text, and all of its actions follow those learned statistical patterns, which is why the system poses no threat.
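To give a rough intuition for the pattern-based behavior described above, here is a minimal sketch (in no way Google's actual LaMDA code) of a bigram model: it counts which word follows which in a tiny sample corpus and then predicts the most frequent follower. Large language models rest on the same statistical principle of predicting likely continuations of text, just at an enormously greater scale. The corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

# Tiny invented training corpus; real models train on billions of words.
corpus = "i am a language model i am a program i am not a person".split()

# Count which word follows each word in the training text.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the statistically most common follower of `word`, or None."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("i"))   # "am" follows "i" most often in the corpus
print(predict_next("am"))  # "a" is the most common word after "am"
```

The model only ever echoes patterns present in its training data; nothing in it resembles understanding or consciousness, which is the crux of the experts' rebuttal to the sentience claim.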