We will question the newest and ironic legitimacy trial of deepfakes: do deepfakes, which are produced with the help of artificial intelligence (AI) in digital audio, text, and video formats, and which have become the common name for all lies, fraud, forgery, manipulation, and disinformation in the online world, reveal hidden truths?
Facebook’s CTO, Mike Schroepfer, explains that Facebook will keep its own detection technology secret to prevent reverse engineering. However, Facebook does not show the same sensitivity toward the most successful detection algorithms from the Deepfake Detection Challenge (DFDC). While stressing that Facebook’s in-house algorithms will remain confidential, Schroepfer admitted that publishing the DFDC algorithms as open-source code will offer the very reverse-engineering opportunities Facebook seeks to avoid for itself.
Those who develop deepfake detection models are expected to outpace those who develop deepfake tools, gaining an edge in the algorithm war before it is too late. For this, don’t they deserve some time and positive discrimination? Isn’t it necessary to tidy up the open-source platforms that supply unconditional, unregulated ammunition, accelerating the developers of deepfake tools and enabling unknown actors to produce deepfakes with them?
It will likely take time to develop a general detection model that provides 100% security against deepfakes. Until then, could personalized biometric video and audio security be a deterrent solution for high-profile targets, those whose deepfakes would cause the greatest harm?
Imagery is a crucial component of intelligence. If the flow of images from the field is interrupted, a national security operation becomes blind, deaf, and dumb, and its chance of success is reduced. What happens if the intelligence images are not real but synthetically produced, altered, or manipulated? Wouldn’t the war in the field be lost?
Deepfakes, which we describe in every other sentence with the words “threat” and “danger,” are also synthetic miracles capable of manipulating people’s perceptions of the media. They have offered fresh hope to the marketing world, which, because of the pandemic, has been losing its grip on the perception management of societies.
This question may seem to be a cause for concern for companies and, in a wider sense, corporations (since protocols are more crucial to corporate quality standards). However, when it comes to quality of life, individuals may soon have to ask the same question. You must have a protocol—even if not on paper—on how to act properly against foreseeable risks so as to preserve your quality of life.
Just as in the Wild West, there is no one to protect us from the perils of deepfakes. It is not known when, how, or from where the assault will come. “You can preserve your perception of reality only with your intelligence and mind. You must protect yourself with whatever technological weapons you can muster,” globalization warns us.
“Life under quarantine” has accelerated the digital transformation. The video-conferencing app Zoom has largely replaced face-to-face meetings. Now there is one more thing the world has to deal with: zoombombing! This may be only a foreshock. In the previous article, we raised the question, “Are you really meeting with the person you think you are meeting with on Zoom?” So, what happens if they are not the person you think you are talking to? What if the real tremors on Zoom are yet to come?
Manage the risk, so you don’t have to manage the crisis. This motto does not belong to the author of this article. You are likely…