In the previous article, we explained how fake synthetic media produced with the help of artificial intelligence (AI), known as deepfakes, poses a major cyber threat to individuals and institutions, as well as to national security in the fight against hostile actors and terrorism. Just as science is developing measures and solutions against COVID-19 to protect human life and health, the online world is trying to fight deepfakes. Institutions at particularly high risk from deepfakes, such as the Armed Forces, are strengthening their defenses with personalized solutions until an effective cybersecurity model is developed.
Do biometric video and audio recordings protect against deepfakes?
Cybersecurity efforts to develop measures and solutions against deepfakes are progressing through two channels. On one hand, R&D initiatives, led and supported by the online world’s giants (e.g., Facebook and Google), are developing detection models that can distinguish fake synthetic images and sounds from real ones. On the other hand, security-focused institutions are trying to build a target-oriented, personalized cyber defense shield out of biometric video and audio to eliminate corporate risks from deepfakes. At this point, a question arises: developing a general detection model that provides 100% security against deepfakes will evidently take time. Until then, can personalized biometric video and audio security deter attacks on targets whose deepfakes would cause great harm?
As video becomes an authentication tool, the risk grows
In a previous article, titled “Deepfake has also an eye on our money,” we mentioned that Western banks, which have started to use “Customer Video Authentication” systems, assumed the risk of deepfakes. International financial auditing agencies have extended banks’ “Know Your Customer” (KYC) obligation in the fight against fraud, money laundering (AML), and terrorist financing. The European Union’s Fourth Anti-Money Laundering Directive, dated 20 May 2015, paved the way for European banks to identify their customers through video calls. Banks thus began offering banking transactions via video calls over the web or a mobile application, without the customer coming to a branch, on the condition that customers scan their legal identification documents. But this time, the main problem has become how to distinguish whether the person in the video is real or a real-time deepfake.
This concern, whether the person on screen is real or not, is precisely the threat posed by deepfakes. The U.S. Army is preparing a person-oriented solution to this problem to resolve the suspicion cast by deepfakes. Within the scope of the 2020 defense budget signed by President Trump, the U.S. Congress, especially concerned about China and Russia, loosened its purse strings. An annual budget of $5 million was provided to the Intelligence Advanced Research Projects Activity (IARPA), affiliated with the Office of the Director of National Intelligence, as prize money in a competition to develop deepfake detection models. Researchers at the U.S. Defense Advanced Research Projects Agency (DARPA) are also working on technologies that aim to automatically detect the integrity of images and videos, i.e., whether they have been altered, within the scope of the MediFor project. The MediFor program aims to create an end-to-end media forensics platform that can detect manipulations and detail how they were done. Apparently the Pentagon either did not find all these initiatives sufficient or did not believe they would yield results in the short term: it has simultaneously put person-oriented biometric deepfake measures in place to at least secure its own organization.
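To make automated integrity checking concrete, here is a minimal sketch of one classic, simple media-forensics technique, error level analysis (ELA). It is only an illustration of the general approach, not MediFor’s actual method, and the file names are hypothetical: a JPEG is re-saved at a known quality, and edited or pasted-in regions tend to show a different compression-error level than the rest of the image.

```python
# Error Level Analysis (ELA): a simple, classic integrity check for JPEGs.
# Illustrative only; real media-forensics platforms combine many detectors.
from PIL import Image, ImageChops
import io

def error_level_analysis(path, quality=90):
    original = Image.open(path).convert("RGB")
    # Re-save the image at a fixed JPEG quality, in memory.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer).convert("RGB")
    # Per-pixel difference: regions whose error level deviates from the
    # rest of the image are candidates for manipulation.
    return ImageChops.difference(original, resaved)

if __name__ == "__main__":
    ela = error_level_analysis("suspect_photo.jpg")  # hypothetical file
    ela.save("suspect_photo_ela.png")  # bright regions warrant scrutiny
```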
U.S. Army compares deepfake targets against a biometric media database
At the Aberdeen Proving Ground in Maryland, home of the Army’s Combat Capabilities Development Command, researchers at the Science and Technology Directorate are working to meet urgent operational demands regarding biometric security. The Directorate’s Intelligence Systems and Processing Department is working on two biometric systems to increase the security of Armed Forces and intelligence members in the field and to protect them from deepfake traps. These two systems, Video Identity Collection Abuse Prevention (VICE) and the Audio Biometric Identity Abuse Prevention System (VIBES), interact with authorized biometric identification platforms to vet media (e.g., photo, video, and audio files). The biometric systems provide identity assurance by matching IDs against media stored in the database, making it safer to decide whether a person is authorized for a particular task or to access a system, or whether they pose a threat.
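The sketch below illustrates the 1:1 identity-assurance step this describes: a probe image is reduced to a face embedding and compared against the embedding enrolled in the biometric database for the claimed identity. The embedding model, helper names, and threshold are assumptions chosen for illustration; the actual internals of VICE and VIBES are not public.

```python
# Minimal 1:1 biometric verification: compare a probe face embedding
# against the enrolled template for the claimed identity.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_identity(probe_embedding: np.ndarray,
                    enrolled_embedding: np.ndarray,
                    threshold: float = 0.7) -> bool:
    """Return True if the probe is accepted as the enrolled identity."""
    return cosine_similarity(probe_embedding, enrolled_embedding) >= threshold

# Usage with a hypothetical embedding extractor (e.g., a FaceNet-style
# model mapping a face crop to a fixed-length vector):
# probe = embed_face(load_image("field_camera.jpg"))   # hypothetical
# enrolled = database.lookup("service_member_1234")    # hypothetical
# authorized = verify_identity(probe, enrolled)
```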
10-year biometric media database
Both biometric systems, VICE and VIBES, are stated to contain ten years of data consisting of photographs, videos, audio recordings, and other media, and the algorithms developed are noted to have produced good results in matching against this media database. Thanks to the two systems, it is possible to distinguish what is real from what is deepfake by matching audio and video data. For example, in a video of an ISIS leader, it was determined that the voice did not match that person’s real voice identity. Undoubtedly, the video and audio biometric identification records of everyone involved in Armed Forces and intelligence duties are included in this strategic media database. Thus, many areas of national security try to avoid deepfake traps by matching biometric audio and video identities, from verifying that a commander’s instructions are genuine to securing staff in the field and correctly and effectively identifying search-and-rescue targets.
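A toy sketch of the voice-matching idea: each recording is reduced to a “voiceprint” and compared with the voiceprint stored for the claimed speaker. Production systems use trained speaker embeddings (e.g., x-vectors); averaged MFCCs are used here, with hypothetical file names, only to keep the illustration self-contained.

```python
# Toy speaker verification: averaged MFCCs as a crude voiceprint.
import numpy as np
import librosa

def toy_voiceprint(wav_path: str) -> np.ndarray:
    audio, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)  # fixed-length summary of the voice

def voices_match(path_a: str, path_b: str, threshold: float = 0.9) -> bool:
    a, b = toy_voiceprint(path_a), toy_voiceprint(path_b)
    score = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return score >= threshold

# e.g., voices_match("suspect_video_audio.wav", "enrolled_voice.wav")
# A low score, as in the ISIS video example above, flags a possible fake.
```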
The national security role of biometric video authentication in the United States is not limited to the Department of Defense. Tygart Technology, whose main product, MXSERVER, is a server-based video and photo forensic analysis system, provides state and federal government customers in the United States with a biometric recognition infrastructure. The company contributes to security through image matching by processing video and photo collections into searchable sources.
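In the spirit of what that service does (and not as a description of MXSERVER’s actual implementation), a collection becomes “searchable” once every detected face is stored as an embedding and queries are answered by nearest-neighbor search, as in this sketch:

```python
# Minimal 1:N face search over an embedding index (brute force).
import numpy as np

class FaceIndex:
    def __init__(self):
        self.labels: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, label: str, embedding: np.ndarray) -> None:
        # Store unit-normalized vectors so dot product = cosine similarity.
        self.labels.append(label)
        self.vectors.append(embedding / np.linalg.norm(embedding))

    def search(self, query: np.ndarray, top_k: int = 5):
        """Return the top_k closest stored faces by cosine similarity."""
        matrix = np.stack(self.vectors)
        scores = matrix @ (query / np.linalg.norm(query))
        best = np.argsort(scores)[::-1][:top_k]
        return [(self.labels[i], float(scores[i])) for i in best]

# index.add("case_041/frame_0293", embed_face(...))  # hypothetical helper
# index.search(embed_face(query_photo))              # 1:N lookup
```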
Deepfake’s enigma in the biometric security market
The global biometric market is expected to grow by around $30 billion between 2018 and 2023, at an annual rate of over 30%. Together with the market for cloud-based subscription security services, the Pentagon’s demand for advanced authentication technologies is expected to accelerate growth in the biometric security industry. The Australian Digital Transformation Agency (DTA) has also announced that it will integrate biometric authentication into its myGov citizen services.
Many experts in the field of biometric security emphasize that the most important threat expected from 2020 onward is “synthetic identity.” Synthetic identity fraud, considered one of the most difficult types of fraud to detect, is expected to increase, and financial organizations will turn to companies that develop biometric authentication technology to prevent it. Naturally, these organizations will be looking for a powerful, secure biometric screening system that cannot be easily deceived by deepfakes, and biometric security companies will have to invest in more sophisticated authentication technologies to guard against synthetic identities. For example, Ben Cunningham, Marketing Director at Pindrop, which works on anti-spoofing technologies for voice biometric authentication and calls, shared the company’s efforts against the deepfake threat at the SpeechTEK Conference in November 2019. Cunningham stated that when analyzing voice calls coming into call centers for biometric authentication, they now concentrate on sound data that the human ear cannot hear.
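A hedged sketch of that last idea: inspect the parts of the spectrum listeners ignore, where speech-synthesis pipelines can leave telltale artifacts. This is a generic illustration, not Pindrop’s proprietary method; the frequency split and the comparison baseline are assumptions.

```python
# Ratio of spectral energy above an (assumed) 7 kHz split point.
import numpy as np
import librosa

def high_band_energy_ratio(wav_path: str, split_hz: float = 7000.0) -> float:
    audio, sr = librosa.load(wav_path, sr=None)  # keep native sample rate
    spectrum = np.abs(np.fft.rfft(audio)) ** 2
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sr)
    return float(spectrum[freqs >= split_hz].sum() / spectrum.sum())

# Compared against genuine calls from the same channel, an unusually clean
# or unusually noisy band above ~7 kHz can flag a recording for review.
```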
The claim that 95% of deepfakes overcome biometric security
The biometric security industry may well boast of biometric processes built on precise mathematical models that meet high security criteria, and it will put forward various claims that its technologies can tell whether the input is a real person or a computer-generated video. This is a multi-billion-dollar market in which customers ranging from the armed forces of the countries that invest in the most advanced technology to international financial institutions make purchases with budgets carrying many zeros. On the other hand, significant research reveals that biometric security is incapable of detecting deepfakes.
In Switzerland, the Idiap Research Institute, a non-profit organization, researches speech processing, computer vision, biometric authentication, multimodal interaction, and machine learning. According to research by the Institute’s Pavel Korshunov and Sébastien Marcel, 95% of deepfakes easily overcome biometric facial recognition systems. According to Korshunov and Marcel, current facial recognition systems are vulnerable to high-quality fake videos created using Generative Adversarial Networks (GANs). Consequently, matching GAN-generated synthetic faces against a database is not enough; a model that detects them automatically is needed. The researchers used open-source software to create deepfake videos whose faces were swapped with a GAN-based algorithm, and showed that state-of-the-art facial recognition systems based on the VGG and FaceNet neural networks are vulnerable to deepfake videos, with false acceptance rates of 95%.
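The sketch below shows what an evaluation behind a “95% false acceptance rate” looks like: deepfake probe embeddings are fed to a verification system, and one counts how often they are wrongly accepted as the targeted identity. The embeddings would come from a model such as FaceNet; the threshold here is an assumption.

```python
# False acceptance rate (FAR) of a verifier against deepfake probes.
import numpy as np

def false_acceptance_rate(deepfake_probes: list[np.ndarray],
                          target_template: np.ndarray,
                          threshold: float = 0.7) -> float:
    t = target_template / np.linalg.norm(target_template)
    accepted = sum(
        1 for p in deepfake_probes
        if float(np.dot(p / np.linalg.norm(p), t)) >= threshold
    )
    return accepted / len(deepfake_probes)

# A FAR near 0.95, as Korshunov and Marcel report for VGG- and
# FaceNet-based systems, means 95% of deepfake probes pass verification.
```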
Ultimately, regardless of method, model, or technology, fighting deepfakes is a war of artificial intelligence (AI) algorithms between attackers and defenders. In this war, neither side can claim absolute superiority for now; whenever one side momentarily gains the upper hand, the other pays a heavy price.