The Deepfake Safari Is On, But How Many of the Hunting Weapons Are Loaded?

Humankind is full of contradictions. One cannot help but be astonished at the global reaction to deepfakes, which threaten to set off worldwide chaos through a flight from reality. It is almost as if a mastermind is pitting good against evil and encouraging both sides to play their hands.


Universities and R&D groups are developing artificial intelligence (AI) algorithms that facilitate the creation of deepfake synthetic media in various forms. At the same time, they are publishing them for open use on the internet without any filters to guard against malicious intent, leaving the data, methods, and code open to exploitation by new R&D initiatives pursuing exactly such goals. Meanwhile, efforts to develop cyber-security solutions focused on deepfake detection are conducted inwardly, behind closed doors, and, most of the time, as individual ventures.


The US House of Representatives and US national security agencies (DARPA, IARPA) in particular, along with US and European universities, international cyber-security organizations, and information technology and social media giants, are leading the way with a number of initiatives to protect international society against deepfake synthetic media attacks. These ventures all share the common goal of promoting and supporting the development of cyber-security solutions that distinguish the original from the fake by identifying deepfake synthetic media. The most recent is the Deepfake Detection Challenge (DFDC), launched at the Neural Information Processing Systems (NeurIPS) conference at the Vancouver Convention Center in Canada on December 8-14.

Can locks deter a burglar?


The competition, held in collaboration with the Partnership on AI and its 90 members, including Facebook, Microsoft, Amazon Web Services (AWS), Apple, Google, Intel, IBM, and Sony, is like a deepfake safari. In October, the sponsors published a special data set of more than 100,000 videos for cyber-security developers and announced that they would test the participating deepfake detection tools against this giant deepfake pool until the end of March. In this way, cyber-security development efforts that had been conducted behind closed doors will come to light, and their deepfake detection performance will be tested and verified.
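As a concrete starting point, here is a minimal sketch of how a participant might take stock of that training data. It assumes the metadata.json layout of the Kaggle release, in which each entry maps a video filename to a REAL or FAKE label; the folder path is illustrative.

```python
# Minimal sketch: count real vs. fake videos in one DFDC training folder,
# assuming the Kaggle release's metadata.json layout (filename -> {"label": ...}).
# The path below is illustrative, not a fixed location.
import json
from collections import Counter

with open("dfdc_train_part_0/metadata.json") as f:
    metadata = json.load(f)

counts = Counter(entry["label"] for entry in metadata.values())
print(f"{len(metadata)} videos: {counts['FAKE']} fake, {counts['REAL']} real")
```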


Now, here is the real question: over the past two years, have any cyber-security weapons been developed that are strong enough to catch deepfake media? Or will the deepfake detection tools at the DFDC perform so poorly that people lose hope that manipulated synthetic media can ever be distinguished from the original, just as a burglar always finds a way to break into a house?

Deepware Scanner defies deepfake synthetic media by itself.


Zemana’s Deepware team is very ambitious about its participation in the DFDC. So it is worth putting our own Deepware Scanner forward as a benchmark for deepfake detection tools against the other players on the stage. First, we should emphasize that there is no other deepfake synthetic media detection tool like Deepware.ai that anyone can use online, free of charge. No other cyber-security solution, including seemingly commercial ones with registered trademarks and patents, can claim to defy deepfake synthetic media, whereas Deepware Scanner did just that even before the launch of the DFDC.


Deepware Scanner, the product of more than a year of AI R&D by Zemana, is programmed to catch manipulated videos immediately, thanks to its multi-layered mechanism. It can be integrated into any platform, with versions developed for social media networks, media outlets, and government institutions. Once integrated, it works in the background to detect fake video and speech, which are becoming increasingly difficult to tell apart from the original, before they create chaos. It facilitates the online scanning of any file that is viewed or listened to on a given platform. When Deepware Scanner detects manipulated or fake synthetic media, it warns viewers or listeners, preventing them from being deceived by fabricated content. Consequently, it offers a chance to expose AI fraud in digital content and prevent damage before it is done.
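To make that integration model concrete, here is a minimal sketch of a platform calling a scanning service in the background and flagging suspect uploads. The endpoint, response field, and threshold are hypothetical illustrations, not Zemana’s actual API.

```python
# Hypothetical background check: a platform uploads each newly posted video
# to a deepfake-scanning service and flags it for viewers if it looks fake.
# The endpoint URL, JSON field, and threshold are illustrative assumptions,
# not Zemana's documented interface.
import requests

SCAN_ENDPOINT = "https://scanner.example.com/api/v1/scan"  # hypothetical URL
FAKE_THRESHOLD = 0.5  # assumed cutoff: above this, warn the viewer

def scan_video(path: str) -> float:
    """Upload a video and return the service's estimated fake probability."""
    with open(path, "rb") as f:
        resp = requests.post(SCAN_ENDPOINT, files={"video": f}, timeout=120)
    resp.raise_for_status()
    return resp.json()["fake_probability"]  # assumed response field

def moderate(path: str) -> None:
    """Scan one upload and print a viewer-facing verdict."""
    score = scan_video(path)
    if score >= FAKE_THRESHOLD:
        print(f"{path}: likely manipulated (p={score:.2f}) - warn viewers")
    else:
        print(f"{path}: no manipulation detected (p={score:.2f})")

if __name__ == "__main__":
    moderate("uploaded_clip.mp4")
```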

Deepware #1 among 133 teams at the DFDC. 


By the time this article was being written, the number of participating cyber-security teams had already reached 133. Zemana’s Deepware team is in first place with a score of 0.69297 after a 10-hour run on the first day of the competition, which is set to continue for three months until the end of March 2020. The real-time leaderboard of DFDC participants is hosted on kaggle.com, one of the world’s leading online data science platforms.
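For context on the scoring: Kaggle challenges of this kind are graded by log loss, where lower is better and a model that answers 0.5 for every video scores ln(2) ≈ 0.693. The snippet below is a minimal illustration of that metric on made-up data, not the official DFDC evaluation code.

```python
# Minimal illustration of the log-loss metric used to grade binary
# classification challenges like the DFDC (lower is better).
# The labels and predictions here are made-up examples.
import math

def log_loss(labels, probs, eps=1e-15):
    """Mean negative log-likelihood of the true labels (1 = fake, 0 = real)."""
    total = 0.0
    for y, p in zip(labels, probs):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(labels)

labels = [1, 0, 1, 1, 0]            # hypothetical ground truth
probs  = [0.5, 0.5, 0.5, 0.5, 0.5]  # always answering 0.5 ...
print(round(log_loss(labels, probs), 5))  # ... yields ln(2) ~= 0.69315
```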

Some deepfake detection tools cannot even make the list. 
Whatever term you enter into a search engine, you do not find many cyber-security alternatives for detecting deepfake synthetic media. The R&D initiatives that do exist can be categorized as follows:

Commercial solutions with a trademark and/or patent: Pre-application registration is required to use the promised new cyber-security technology, or a request must be made to use the deepfake detection tools for a fee.

Academic R&D studies by universities: These are mostly conducted in collaboration across several universities under the leadership of scientists regarded as opinion leaders. Rather than delivering ready-to-use deepfake detection tools, they merely provide academic studies describing unbranded new technologies or methods that claim to offer a solution.

Mixed R&D initiatives (public, private sector, universities): These consist of academic studies supported and financed by public institutions in the international security and information technology field or by technology companies. None of these initiatives has managed to stand out after completing the R&D process.

In the aftermath of the US presidential elections, the world could well face deepfake synthetic media attacks that pose a major threat to international stability, social peace, and individual rights. We know that the source of this threat is AI technology. Through deep learning, which mimics the human brain, human intelligence transfers an ever higher intelligence capacity to machines. By modeling the workings of the human brain, AI produces derivative synthetic outputs in any format, outputs that can analyze vast amounts of data and are becoming increasingly difficult to discern from the original. As a result, it has become possible for human beings to be drawn into a fake reality like pawns. With the help of AI, human biometric codes can be synthesized from head to toe and transferred digitally.

If the world’s cyber-security developers fail to come up with an AI-based antidote that gets the better of deepfake synthetic media while it mocks human intelligence, the task of protecting the truth may very well fall to the lab rats, as we will explain in the next article.