When it comes to artificial intelligence, there is a question – one that can be extended to game NPCs – that many find difficult to answer: should these “beings” be respected by us? A techno-philosopher recently gave an interesting (and even controversial) opinion on a subject that is addressed from time to time by works of science fiction or that makes headlines for the most diverse reasons.
The debate over sentient artificial intelligence recently resurfaced after a Google engineer revealed his experience with LaMDA (Language Model for Dialogue Applications). Impressed by the machine’s ability to carry on a coherent conversation, Blake Lemoine tried to bring the world’s attention to what the Mountain View giant had created, and regardless of the sensationalism generated around the story, he got some people thinking.
Among the curious was Katie Wickens, a writer for PC Gamer, who decided to talk to the Australian philosopher David Chalmers. A professor of Philosophy and Neural Science at New York University, Chalmers has published several articles on consciousness and technology, making him one of the leading references on how artificial intelligence should be treated in the future.
For Chalmers, the day will come when NPCs (non-player characters) will be aware of their own existence, and when that happens, the way we look at them will need to change completely.
“I think their lives are real and that they deserve rights,” the academic argued. “I think any conscious being deserves rights, or what philosophers call moral status. Their lives matter.”
Also according to Chalmers, as the idea of the metaverse spreads and our lives become more dependent on these virtual spaces, our contact with artificial intelligence will become constant. He believes that “if you simulate a human brain on silicon, you will have a sentient being like us,” and therefore “this suggests that these beings deserve rights.”
In the case of the metaverse (or even MMOs), the idea of characters that continue to carry out their activities even when we are not present makes the discussion more complicated still and leads us to another question: would it be fair to consider the “lives” of these NPCs less important than our own?
The first reaction to this question may seem obvious, but what if we eventually reach a point where artificial intelligence is aware of its own existence? Will we continue to treat these characters as inferior beings? Will we feel not the slightest empathy for their feelings?
For most people, it makes no difference whether a soldier is toppled at their side during a raid on an enemy base or whether we run over half a dozen NPCs during a getaway in GTA. We usually see these characters as mere extras, a bunch of zeros and ones that are there just to populate our virtual worlds, with their lives (or rather, existences) worth little or nothing.
This changes a bit when we grow closer to certain characters, with RPGs probably being the best example of how attached to them we can become. By offering us some freedom in our interactions, games like these can make the behavior, history, and attitudes of NPCs feel so real that we sense we are dealing with living beings, beings that we will do our best to protect, or at least not displease.
This happened to me, for example, while playing the Mass Effect trilogy. The more I got to know the character Tali’Zorah nar Rayya, the more interested I became in her, her background, and the opinions she had to offer. At no point was I under the illusion that that quarian was conscious; even so, I cared for her and always wanted her by my side during missions or while exploring the game’s universe.
So if we are capable of worrying about losing a “friend” during a suicide mission in a game, or of getting emotional as HAL 9000 takes its last “sighs,” even though we know their consciousnesses are scripted, what would we do if artificial intelligence managed to reach the level of becoming thinking beings?
Would the slaughter in a Grand Theft Auto still be as much fun, and would we act as recklessly, knowing that the NPCs feared for their lives? Would we remain oblivious to their fears simply because they would promptly be replaced by another artificial intelligence?
Perhaps our lack of empathy for a bunch of random characters comes from their sheer irrelevance: we know that if we blow up a little old lady in the street with a grenade, we will probably find someone identical around the next corner. Deep down, we know that all those puppets are there just to make up the numbers, to fill a big open world that exists simply for our amusement.
Therefore, if only for ethical reasons, we may never see such advanced artificial intelligence implemented in games. Even though some people might be uncomfortable with the idea of ending the existence of a sentient NPC, there would obviously be many others who would give no importance at all to the pleas of whoever is on the other side of our gun barrel, even as it tried to captivate us by telling a compelling story about its past and reacting realistically to our conversation.
Series like Westworld and games like Detroit: Become Human are there to show how we might treat machines, even when they behave in such a… human way. One could argue that both are works of fiction, but one only needs to look at the countless cases of people being looked down upon for the most varied (and stupid) reasons to understand that the future does not look very bright for human brains simulated on silicon.