Knowledge Center
How much footage / images / data is needed to create a deepfake of someone?
Ten seconds of video is enough to generate a passable deepfake. For a more convincing deepfake, five to ten minutes of footage is ideal.
What are the main contexts in which deepfakes are currently used or will soon be used? Who are the victims?
Currently, the vast majority of deepfakes are celebrity porn videos, followed by deepfakes of politicians. We believe that in the near future anyone could become the victim of a deepfake video, as deepfake generation tools become publicly accessible.
How do deepfakes differ from traditional video editing techniques?
Traditional CGI technology is expensive and time-consuming, and often produces sub-par results. What raises the risk is that deepfake generation is up to 100x cheaper and easily accessible, while delivering much more convincing results.
How can consumers identify deepfakes, particularly on social media platforms where media is consumed quickly and in huge quantities?
It’s not realistic to expect social media users, or the public in general, to learn to identify deepfake videos. The solution must come from the social media platforms themselves, or from responsible large organizations and corporations.
How far off are we from deepfakes being used widely for targeted spearphishing attacks?
The technology still requires some manual labor and post-processing to generate highly realistic videos, and in current deepfake videos the voice is usually imitated by an actor. These are the major limitations holding back the wide use of deepfakes in targeted attacks.
How would detection technology work, or need to be adapted, to help ‘normal’ consumers identify deepfakes, particularly on social media platforms where media is consumed quickly and in huge quantities?
The main way to reduce the harm of deepfakes is to educate society. Because deepfakes circulate widely on social media, the platforms should inform their users about what they see online. Some platforms are already educating people about deepfakes.
Is it possible to detect deepfakes without software?
Some types of deepfakes, known as cheapfakes, can be spotted by the human eye. However, as deepfakes become more convincing, technological help is required.
Are there ethical or practical challenges with this approach?
There is no major ethical debate for now, but once technology companies start to collect more biometric information about targeted individuals, storing that data could become a problem in terms of personal data privacy.