Can Facebook’s Ban On Deepfakes Prevent Disinformation?

Facebook appears to be leading the global social media fight against the deepfake threat. It launched the Deepfake Detection Challenge (DFDC), backed by $10 million in awards, to spur the development of tools that can detect manipulated synthetic media. It also announced that it would remove AI-based deepfake content that poses a disinformation threat. This may be dismissed as a temporary, limited response, or praised as a conspicuous stand against a great and imminent cyber-threat. Still, the question remains: are Facebook’s measures, and Instagram’s, enough to slow the spread of fake media designed to shape a false public opinion?

It has become a joke in the US media. As the press put it, “It’s not yet clear who the Democratic nominee is, but deepfake is already set to be the star of the US presidential election.” Facebook announced its new deepfake policy in the first week of the year, perhaps so that this third candidate would not overshadow the two real ones and disrupt democracy by misleading the electorate. Deepfake synthetic media that meets certain criteria will be removed from Facebook and Instagram, though it is unclear whether the move is tied to the election or whether it will remain in force through it. Monika Bickert, vice president of global policy management at Facebook, announced on January 6 that the new policy rests on two basic principles.

The two basic principles for removal. 

First, Facebook will remove media content that has been edited beyond adjustments for clarity and quality in a way that would mislead an average person into thinking that “somebody said something they actually have not said.” Content that is clearly humor, satire, or parody will not be removed.
The second rule covers content that has been turned into a synthetic derivative by synthesizing, editing, or altering media layers with AI and machine learning-based deepfake techniques. Content altered with simple techniques or methods, such as Photoshop, does not qualify for removal. A slowed-down video that made Nancy Pelosi, speaker of the United States House of Representatives, appear hazy would therefore stay up under these rules. Drew Hammill, Pelosi’s spokesperson, objected to that outcome, saying, “Facebook wants you to believe this is a video-editing technology issue, when, in fact, the real issue is that Facebook refuses to stop the dissemination of disinformation.”
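Purely as an illustration, the two criteria and the satire exemption can be sketched as a simple rule check. Every field and function name below is hypothetical and reflects nothing about Facebook's actual systems:

```python
from dataclasses import dataclass

@dataclass
class MediaItem:
    # Hypothetical flags a reviewer might set for a post under review.
    misleads_average_person: bool  # edited so someone appears to say words they never said
    ai_generated_edit: bool        # produced with AI/ML (deepfake) techniques, not simple edits
    is_satire_or_parody: bool      # humor, satire, and parody are exempt

def should_remove(item: MediaItem) -> bool:
    """Both criteria must hold, and the satire exemption must not apply."""
    if item.is_satire_or_parody:
        return False
    return item.misleads_average_person and item.ai_generated_edit

# The slowed-down Pelosi video: misleading, but a simple edit, not an AI one.
pelosi_video = MediaItem(misleads_average_person=True,
                         ai_generated_edit=False,
                         is_satire_or_parody=False)
print(should_remove(pelosi_video))  # prints False: it stays up under the policy
```

Note how narrow the conjunction is: content must be both misleading *and* AI-generated to be removed, which is exactly why the Pelosi video survives.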

If Facebook does not publish it, others will.

Bickert said that the removal policy is only one of the methods Facebook uses in fighting disinformation. She asserted:

For removal, more than 50 independent fact-checkers worldwide review content in over 40 languages to identify videos that do not meet the standards. If a fact-checker rates a photo or video as false or partly false, we drastically reduce its distribution in the news feed and reject it if it is being run as an ad. Critically, people who see it, try to share it, or have already shared it will see a warning that it is false. If we simply removed all the manipulated videos flagged as false by our fact-checkers, those videos would still circulate elsewhere on the internet or in the social media ecosystem. By labeling them as false, we provide people with a great service.
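The workflow Bickert describes boils down to three consequences triggered by a fact-checker's verdict. A minimal sketch, using hypothetical field names (nothing here corresponds to a real Facebook API):

```python
from enum import Enum, auto

class Verdict(Enum):
    # Hypothetical ratings a fact-checker might assign.
    TRUE = auto()
    PARTLY_FALSE = auto()
    FALSE = auto()

def apply_fact_check(post: dict, verdict: Verdict) -> dict:
    """Apply the three consequences described above to a reviewed post."""
    if verdict in (Verdict.FALSE, Verdict.PARTLY_FALSE):
        post["feed_distribution"] = "reduced"        # demoted in the news feed
        post["eligible_as_ad"] = False               # rejected if run as an ad
        post["warning_label"] = "False information"  # shown to viewers and sharers
    return post

post = apply_fact_check({"id": 1}, Verdict.FALSE)
print(post["feed_distribution"])  # prints reduced
```

The key design point is that a "false" verdict never deletes the post: it only demotes and labels it, which is precisely the trade-off Bickert defends.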

Nick Clegg, Facebook’s vice president of global affairs and communications, maintained, “We will share a statement or media content, especially those coming from politicians, even at the cost of breaching the community standards.” He emphasized that, as a general rule, Facebook would treat such posts as content that needs to be seen and heard. Does this policy mean that even an AI-created deepfake video that clearly aims to mislead will stay on social media? And what happens when a politician’s account is compromised? It has long been clear that Facebook wants to avoid becoming the arbiter of what is real or fake, or the judge of the intent behind shared media.

Concerns arise from questions still waiting for answers.

One of the first questions needing an answer concerns the technology Facebook will use to distinguish deepfake videos and photos. Since the policy has already been announced, it cannot rest on the results of the DFDC, which are not expected until April.

Under the new policy, whether shared media is misleading is not, by itself, the criterion for removal. Manipulated content that does not meet Facebook’s removal criteria will still be published and shared; at most, a warning will be attached if reviewers identify it as false. But when all is said and done, is a warning enough to avert the consequences?

Facebook allows its users to keep and share media content. Will the removal policy be limited to deepfake content uploaded directly by users? Will removal or labeling apply when a deepfake video hosted on YouTube is shared on Facebook as a link?

Google, which owns YouTube, also released its own dataset for cyber-security developers last year. But Facebook faces more questions and scrutiny because it stands more at the forefront of the matter than Google. When examining Facebook’s policies, it is worth noting the online world’s profound contradictions over deepfakes. Some argue that when a weapon may be freely developed and produced, the crimes committed with it cannot be prevented merely by limiting its distribution. Others claim that such a weapon would not only twist the facts but also conceal them: as awareness of deepfakes grows, politicians, businesspeople, celebrities, or athletes caught red-handed with dirty secrets may simply cry “deepfake.”

There is no end to speculation and conspiracy theorizing, and Facebook is not the only party in the matter. It is time for the powerful social media companies to agree on joint defense measures against disinformation. Unless the social media giants come up with a model that lets them act in unison, according to shared ethical values, to prevent disinformation, each one’s unilateral measures will amount to little more than an effort to dodge corporate responsibility.