Less than two months before the US Presidential Election, Microsoft announced the launch of two different deepfake detection systems. As pandemic restrictions on physical contact and deepfake-based impersonation increased the risk of cyber fraud, global financial institutions accelerated their investments in new digital authentication technologies.
Facebook’s CTO, Mike Schroepfer, explained that Facebook will keep its own detection technology secret to prevent reverse engineering. However, Facebook does not show the same caution toward the most successful detection algorithms from the Deepfake Detection Challenge (DFDC). While stressing that the company’s own algorithms will remain confidential, Schroepfer acknowledged that the winning DFDC algorithms will be published as open source, thereby creating the very reverse-engineering opportunity Facebook says it wants to avoid.
Those who develop deepfake detection models must move faster than those who build deepfake tools if they are to gain an edge in this algorithmic arms race before it is too late. For this, don’t they deserve some time and positive discrimination? And isn’t it necessary to rein in the open-source platforms that supply unconditional, unregulated ammunition, accelerating the developers of deepfake tools and enabling deepfakes produced by unknown actors?