Deepfake that hit Bonner uses artificial intelligence to forge videos


William Bonner and Renata Vasconcellos had their images used in manipulated videos - Reproduction

Published 08/30/2022 16:02 | Updated 08/31/2022 16:03

Rio – William Bonner, anchor of TV Globo’s ‘Jornal Nacional’, had his voice faked to spread disinformation. The technique used in the manipulation is called “deepfake”: fake imagery, usually video, produced with artificial intelligence.

In another fake video, presenter Renata Vasconcellos appears at the ‘JN’ desk announcing the result of a fabricated electoral poll. The video circulated on August 17 on social networks such as WhatsApp, Twitter and YouTube. That edit, however, is not considered a deepfake, because it was not produced with artificial intelligence. In this case, the order of Renata’s sentences was changed, giving the news a different context. The technique is called a shallowfake, and the result is material with a high potential to pass as real to voters.

“Renata’s is not a deepfake, but it’s not real either. It’s a video with simple editing, taken out of its original context to create a different effect. It’s not a deepfake because there is no artificial intelligence involved”, said deepfake expert Bruno Sartori.

Her co-anchor, William Bonner, was the target of the deepfake at the end of July, before the election period. The montage simulates a ‘JN’ broadcast: in the fake segment, Bonner calls former president Lula (PT) and his running mate, Geraldo Alckmin (PSB), thieves. The publication inserted a simulation of the anchor’s voice into an old excerpt of the newscast.

“Bonner’s, yes, was made with artificial intelligence and a database. The system used was text-to-speech, a process that transforms text into speech. You type, and it reproduces the voice”, says Sartori.
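The fake relied on a voice model trained on recordings of Bonner; as an illustration of the general text-to-speech idea only, here is a minimal sketch in Python using the off-the-shelf pyttsx3 library (an assumption chosen for this example, not the system used in the fake). It speaks typed text with a stock system voice; a voice-cloning system would additionally train on the target’s audio.

```python
# Minimal generic text-to-speech sketch with pyttsx3 (off-the-shelf library,
# NOT the tool used in the fake). You type text, it reproduces it as speech
# in a stock system voice; no cloning of anyone's voice is involved.
import pyttsx3

engine = pyttsx3.init()          # picks the platform's speech backend
engine.setProperty("rate", 170)  # speaking speed in words per minute
engine.say("You type, and it reproduces the speech.")
engine.runAndWait()              # blocks until the queued text is spoken
```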

Digital sociology researcher Francisco Kerche explains that deepfakes use artificial intelligence to simulate people’s speech and facial movements, usually those of public figures. The UFRJ researcher says the program is fed a database about the person to be imitated, which makes it possible to create fake videos.

“You have a large set of images showing the person from as many angles as possible: talking, bending over, angry, calm, with different facial expressions. It’s as if you gave the computer all these images and it identified the possibilities of facial movement”, he says.

Larissa DeLucca, executive director of the marketing agency Negócios Accelerados, corroborates this: “There is a system that is fed photos, videos and audio, and this system can reproduce how the person moves and speaks”. It is then possible to project the simulated person’s image onto a video.
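These descriptions match the shared-encoder, two-decoder autoencoder that early face-swap tools popularized. The sketch below is a conceptual illustration in PyTorch (an assumption; the article does not name a specific system), with random tensors standing in for the aligned face crops a real system would need: one encoder learns pose and expression common to both people, each decoder learns to reconstruct one identity, and the swap consists of decoding person A’s code with person B’s decoder.

```python
# Conceptual sketch of the classic face-swap autoencoder: shared encoder,
# one decoder per identity. Random tensors stand in for real face crops.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person
params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

faces_a = torch.rand(8, 3, 64, 64)  # placeholder: aligned crops of person A
faces_b = torch.rand(8, 3, 64, 64)  # placeholder: aligned crops of person B

for step in range(100):
    opt.zero_grad()
    # Each decoder learns to reconstruct *its own* person from the shared code.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

# The swap: encode person A's face, decode with person B's decoder, yielding
# B's appearance driven by A's pose and expression.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a[:1]))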

The technique first emerged in pornography. The phenomenon began in 2017, when the faces of international stars such as Scarlett Johansson and Emma Watson were inserted, without their consent, onto the bodies of porn actresses.

Beyond pornographic and political uses, the technique can also place people in incriminating situations or assist scammers. “I can record a video and use a program to replace the image of my face with someone else’s, showing that person committing a crime”, Larissa exemplifies.

Obama was the first politician to have his image used in a deepfake - AFP/MONEY SHARMA

The first political use of a deepfake involved the image of former US President Barack Obama. In 2018, filmmaker Jordan Peele posted a video in which Obama appeared to call Donald Trump “an imbecile” during a speech. The former American president never said this publicly; in the simulation, Jordan Peele’s speech was applied to the former president’s image and modulated into Obama’s voice.

Political risks of deepfakes

Kerche points out that this type of content usually circulates beyond the reach of the press and fact-checking agencies. “When a video like this reaches a major platform, it is exposed to scrutiny. The risk is when it circulates in an underworld, such as Telegram and WhatsApp”, he warns.

Political marketing expert Larissa DeLucca points to another harm in the spread of deepfakes: their political use aims to manipulate the masses in favor of a certain agenda. A political group thus gains electoral advantage not through its proposals, but through hatred of its opponents.

“This type of content is associated with fake news. The motivation is to create divisions and hate speech, to induce emotion and feelings of revolt and struggle in a specific group”, she explains. A specialist in technology and security, Larissa points out that the human brain makes decisions based on emotion, not reason. Deepfakes can therefore influence voters even if they later discover the video was a montage.

“The problem with this type of technology is that it casts doubt on what is most basic. We are used to believing our own eyes, so people hold on to the emotion the video provoked”, she points out.

How to protect yourself

Deepfake technology, Francisco Kerche points out, has developed and become more accessible as people put ever more data, such as videos, photos and audio, on social networks.

“The amount of images people have on the internet and artificial intelligence itself have advanced a lot. An example is Face Swap, an application that swaps users’ faces. That is artificial intelligence. It’s a simple deepfake”, he exemplifies.

Although deepfakes can be ultra-realistic, the researcher offers tips that can help unmask one: observe the areas of the face that are usually most expressive in real life and that can look artificial in a deepfake.

Note whether:

- Cheeks and forehead look too smooth or too wrinkled
- The lips are out of sync with the voice
- The expression of the eyes is disconnected from the rest of the face
- The reflection in the glasses matches the environment in the video
- There are small delays around the edges of the face
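None of these checks is conclusive, but some can be roughly automated. As a toy illustration (an assumption, not a method cited by the experts in this article), the sketch below uses OpenCV’s bundled Haar cascade to find the face in a video still and compares the sharpness of the face region against the rest of the frame; a face that is much smoother than its surroundings can be one weak sign of manipulation.

```python
# Toy heuristic: is the face region much smoother than the rest of the frame?
# Uses OpenCV's bundled Haar cascade face detector; not a deepfake detector.
import cv2

def sharpness(gray_region):
    # Variance of the Laplacian: a standard, crude blur/smoothness measure.
    return cv2.Laplacian(gray_region, cv2.CV_64F).var()

def face_smoothness_ratio(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face found; nothing to compare
    x, y, w, h = faces[0]
    # Ratio well below 1 means the face is much smoother than the whole frame.
    return sharpness(gray[y:y + h, x:x + w]) / max(sharpness(gray), 1e-9)

frame = cv2.imread("frame.png")  # hypothetical still exported from the video
if frame is None:
    raise SystemExit("frame.png not found")
ratio = face_smoothness_ratio(frame)
if ratio is not None and ratio < 0.5:  # threshold chosen arbitrarily here
    print(f"Face unusually smooth (ratio {ratio:.2f}): inspect more closely")
```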

Since deepfakes can be sophisticated, the experts’ main tip is to check whether the content has been reported by other credible sources. “It is necessary to validate the information in other outlets. If the politician or public figure really made that statement, it will be on credible news portals and in newspapers”, he says. Those most harmed by deepfakes, Larissa adds, are people with less technical knowledge.
