Deepfakes are videos or photos in which a person’s face has been superimposed so realistically that it is almost impossible to tell whether the footage is genuine. Although they are not yet a major problem on social networks (they circulate more often in WhatsApp groups and on dedicated websites, for example), Facebook wants to protect itself against the threat.
The platform is working with researchers at Michigan State University, in the United States, to create a reverse-engineering method aimed at identifying deepfakes and tracing the author of a given montage. Through detailed analysis powered by artificial intelligence, the developers hope to recover the specifications and data used in a fake’s creation and reach a solid conclusion about who produced what.
If the research delivers the expected results (it is still under development), not only Facebook but other platforms as well may be able to track down the criminals active in this segment. Deepfakes have been used to spread disinformation, attack the reputations of public figures, and even fabricate incriminating evidence.
There are numerous reports on the web of pornography built on deepfake techniques purely to generate clicks. Sites specializing in this kind of content can make thousands of dollars by deceiving users, placing a famous person’s face on the bodies of porn actors and actresses.
Previous studies in this area can already determine which existing AI model generated a given deepfake, but this work intends to go further and also recover the so-called hyperparameters. These settings are effectively unique and leave “fingerprints” on the image which, cross-referenced with other information, can reveal who the author was.
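The “fingerprint” idea can be illustrated with a short sketch. The code below is a toy model, not Facebook’s or Michigan State’s actual method: it treats an image’s high-frequency noise residual (what is left after subtracting a blurred copy) as a fingerprint, then attributes the image to whichever known model’s reference fingerprint it most resembles. The function names (`noise_residual`, `attribute`) and the simple box-blur residual are assumptions made for illustration; real attribution systems rely on learned filters and much richer statistics.

```python
import numpy as np

def noise_residual(image: np.ndarray) -> np.ndarray:
    """Toy fingerprint: the high-pass residual left after subtracting a
    3x3 box-blurred copy of the image. Real systems use learned filters,
    but the principle of isolating generator-specific noise is similar."""
    padded = np.pad(image, 1, mode="edge")
    h, w = image.shape
    blurred = sum(
        padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)
    ) / 9.0
    return image - blurred

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened fingerprint arrays."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def attribute(image: np.ndarray, known_fingerprints: dict) -> str:
    """Hypothetical attribution step: return the name of the known model
    whose reference fingerprint best matches this image's residual."""
    residual = noise_residual(image)
    return max(known_fingerprints,
               key=lambda name: cosine_similarity(residual,
                                                  known_fingerprints[name]))
```

In this sketch, attribution is just a nearest-neighbor match over residuals; going from “which known model” to unknown models and their hyperparameters, as the research proposes, is precisely the hard part the article describes.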
The research challenge is to build a system that can also analyze unknown models; after all, a careful criminal will not use a well-known, off-the-shelf generator and risk leaving obvious traces in his or her work. For now, the only alternative is to seize the criminal’s computer.
Since new generation techniques emerge every day, it is hard to build something fixed that covers everything already out there. The idea, therefore, is to create a system that constantly updates itself and gathers this evidence, almost like a police investigator putting a puzzle together.
For now, nothing available on the market does this job satisfactorily. In the last competition Facebook promoted, the winning algorithm detected manipulated videos only 18% of the time.
What remains is to follow the research and see how it evolves to curb what is already considered the successor of fake news. Do you believe deepfakes can be identified in the future? Leave your opinion in the comments.
Read also: Artificial intelligence senses social media sarcasm and irony