08/12/2022 – 19:24
Since the field of artificial intelligence was founded in 1956, following the work of Professor John McCarthy at Dartmouth College, in Hanover, New Hampshire, USA, it has been put to many uses, always with the aim of modeling human behaviors, patterns and thought.
Today, the technology is also being used to replicate people's faces and voices.
Fernando Rodrigues de Oliveira, known as Fernando 3D, digital art director and CMO of FaceFactory.AI, explains the deepfake technique. "It's an artificial intelligence technique that uses neural networks to generate images and videos that look real, usually of well-known faces, but are actually fake. Created in mid-December 2017, it has been used to produce fake videos and images that look genuine," he said.
"There are several examples of deepfakes, but some of the most famous are the videos showing actress Gal Gadot, from the movie 'Wonder Woman', in pornographic films on the internet. Recently, the singer Anitta also had her image manipulated with deepfake to make it appear that she was taking part in a pornographic film. Both were victims of forgeries intended to demoralize these women and jeopardize their careers," he said.
Fernando 3D and his business partner Bruno Sartori are pioneers in the use of the deepfake technique in Brazil. "Unfortunately, a technology that could, for example, bring great names of the art world back to life on the big screen has been used for such nefarious purposes. That is why we are always warning about and condemning the misuse of the technology," Fernando points out.
The expert also highlights how deepfakes have been used to create fake news, especially during elections. "Political actors may hire equally malicious people to produce fake videos and audio in order to destroy opponents, or to fabricate false evidence to incriminate them. This new kind of misrepresentation crime may become more common in upcoming elections."
Beyond falsely associating people's images with fabricated content, the technique can also be used to steal identities and invade people's privacy. "Facial recognition, used as a security measure on smartphones and other gadgets, is effective in most cases, but the deepfake technique is still evolving, and with the advancement of technology it only tends to improve, because this evolution happens daily. As a result, some security systems could be deceived in the future, compromising access to sensitive information," he warned.
"Although today deepfake and deepvoice require not only great computational power but also advanced technical knowledge, ranging from algorithm programming and research to video editing and post-production, there is the possibility of this knowledge spreading and becoming widely accessible. In the not-too-distant future, it could become as simple as editing a still image in Photoshop."