On paper, there is no problem in sight. Everything was spelled out in the competition rules, and all competitors accepted those conditions from the beginning, including our deepware.ai team, which entered on behalf of Zemana and outdistanced nearly 2,000 participants. After the competition, however, it is impossible to ignore a large and disturbing contradiction that makes one ask, "Could that really happen?" This was the first and largest competition intended to promote and support the development of detection models that protect the online world from the deepfake threat. Yet it should not be difficult to understand that its outcome also hands malicious deepfake developers material they can feed into their reverse-engineering efforts.
Deepfake videos whose algorithms had not been previously studied were used in the DFDC final
In the previous article, we covered the results of the first global deepfake detection model development competition, hosted by Facebook and Microsoft, which lasted more than six months. In the final stage of the Deepfake Detection Challenge (DFDC), in which 2,114 teams from every continent participated with more than 35,000 deepfake detection algorithms, the winning model detected deepfakes with a 65.18% accuracy rate. In other words, the best of the AI-based cybersecurity algorithms developed for the challenge could catch roughly two-thirds of deepfakes of all kinds.
However, the algorithm that finished the first, public stage of the competition in first place averaged 82.56%. The gap exists because, in the first stage, competitors were allowed to access the data set of more than 100,000 videos and to train their models on it. In the final stage, the competing detection models were tested "black box" style on a different set of 10,000 videos that had never been shared with competitors. Moreover, these videos had been modified to make detection even harder, using different deepfake rendering models, image enhancement techniques, and additional augmentations and distractors such as blur, frame-rate changes, and overlays. For this, the organizers enlisted experts from the DFDC's academic partners: Cornell Tech, MIT, the Technical University of Munich, UC Berkeley, the University at Albany (SUNY), the University of Maryland, the University of Naples Federico II, and the University of Oxford. This is how they measured each competing algorithm's ability to generalize from known samples to unknown ones. It became very clear that the detection algorithms were far less successful at catching deepfake samples they had not been trained on than at catching those from the data sets they had been trained with.
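The kinds of distractors described above are easy to picture in code. The functions below are an illustrative sketch only, written with NumPy, and are not the actual DFDC augmentation pipeline: a simple box blur, frame dropping to lower the frame rate, and a semi-transparent overlay blended onto a frame.

```python
import numpy as np

def box_blur(frame: np.ndarray, k: int = 3) -> np.ndarray:
    """Blur a (H, W) grayscale frame with a k x k box filter (edge padding)."""
    h, w = frame.shape
    pad = k // 2
    padded = np.pad(frame.astype(float), pad, mode="edge")
    out = np.zeros((h, w), dtype=float)
    # Average k*k shifted copies of the frame.
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def drop_frames(video: np.ndarray, keep_every: int = 2) -> np.ndarray:
    """Lower the frame rate by keeping every `keep_every`-th frame of a (T, H, W) clip."""
    return video[::keep_every]

def overlay(frame: np.ndarray, patch: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Alpha-blend a same-sized distractor patch over the frame."""
    return (1 - alpha) * frame + alpha * patch
```

Each transformation preserves the visual content a human perceives while shifting the pixel statistics a detector may have overfitted to, which is exactly why such augmentations punish models that fail to generalize.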
Doesn't the R&D behind deepfake detection models amount to reverse engineering?
As we have seen, an important part of the R&D behind AI-based deepfake detection models is the effort by cybersecurity providers to unravel the algorithms of deepfake tools. One motivation for this may be innovation initiatives aimed at beneficial uses of deepfake technologies in areas such as education, health, sports, and communication. On the other hand, one of the main objectives of this R&D is to dissect the structure of existing deepfake algorithms and develop detection methods matched to their working principles. In this respect, R&D on deepfake detection algorithms also functions as a reverse-engineering activity directed at deepfake technologies.
The 65.18% average performance achieved by the DFDC winner is hardly the hit rate one would target for the most successful deepfake detection tool. Yet Facebook, which hosted the competition with million-dollar prizes, did not expect even this much beforehand. Mike Schroepfer, Facebook's Chief Technology Officer (CTO) since 2013, said they were pleased with the results achieved. Schroepfer stated that the results will guide their future work and added, "To be honest, the competition was more successful than I expected."
Another striking assessment from Facebook's CTO is that deepfakes are not a major problem for Facebook at the moment. Schroepfer told journalists that they must nevertheless be prepared to detect such content in the future. Facebook announced in January that it had banned deepfake content, although the scope of that ban remains controversial. Emphasizing the importance of being prepared, Schroepfer said, "The lesson, learned the hard way over the last few years, is that I have to be prepared for many bad things that will never happen." It is likely that hardly anyone besides Facebook's management still regards the deepfake threat as merely a possible future problem. The fact that Mark Zuckerberg, known for opposing content restrictions on Facebook, has himself been among the victims of deepfakes proves that such optimism is unwarranted.
Facebook also announced that it will help developers who did not participate in DFDC build effective deepfake detection models, to protect the online world, and of course its digital platform investments, from future danger. Accordingly, it does not consider it sufficient merely to release the special data set that challenged competitor algorithms in the final stage as open source after the competition. Facebook's CTO announced that the successful deepfake detection algorithms ranked in DFDC will also be made accessible as open-source code to all developers working in this field.
The code of the models ranked in DFDC is open, but Facebook's code is not
Facebook seems to be investing in the future of deepfake detection, along with the other giants of the online world. However, it is not placing its hopes only in the models developed by DFDC participants. In his assessment of the DFDC results for journalists, Schroepfer emphasized that Facebook has developed its own deepfake detection technology apart from this competition, saying, "We have deepfake detection technology in production, and we will improve it in this context." Moreover, Facebook announced that it also entered this deepfake detection model in the DFDC race, outside the award competition, and that the challenge directly contributed to its development.
The real bombshell from Facebook's CTO concerned the detection model Facebook developed itself: the company will keep its own detection technology confidential to prevent reverse engineering. Yet Facebook does not show the same sensitivity toward the most successful detection algorithms ranked in DFDC. By stressing that Facebook's own algorithms will stay secret while the DFDC algorithms are published as open source, Schroepfer in effect admits that the latter will be handed over as reverse-engineering opportunities.
Today, the race between deepfake technologies trying not to be caught and deepfake detection models trying to catch them is not run under equal conditions. As detection algorithms improve, reverse engineering will become ever more valuable to malicious deepfake producers. The open-source, global collaboration that the DFDC organization brought to the forefront is intended to advance the development of deepfake detection algorithms, and it does serve that purpose. However, the same strategy will also help deepfake tools evade detection algorithms more easily. Does Facebook not admit as much when it explains that this is exactly why it avoids publishing the algorithms it has developed?