As the world transitioned into the age of social media, hackers followed immediately. Viruses and spyware were smuggled into social media share links, targeting the information and the computers of anyone who clicked them. Security software such as antivirus and antispyware tools attempted to solve the problem. The targets and dangers, however, are now far greater: these attacks threaten not only our computers and information but ourselves. The new generation of cyber attackers belongs to the artificial intelligence (AI) age and threatens us directly: our personality, existence, safety, reputation and everything else about us.
We now face a fundamental question. Earlier in the modern age, when our communication networks were not so extensive, we could protect our small lives by relying on the state's law enforcement to ensure our safety. The question now is how much responsibility social media companies, which gather millions of people and mediate their communication and interaction, will take to keep their platforms safe from such threats.
“Do not make us feel like we are in a horror movie” – a warning to the social media giants.
AI algorithms with machine learning capabilities can exceed the limits of human speed and efficiency. Generative adversarial networks (GANs), which can produce media beyond what humans could fabricate by hand, could turn into an artificial monster if used with bad intentions.
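The adversarial idea behind a GAN can be illustrated with a deliberately simplified sketch. This is a hypothetical toy, not a real GAN: actual GANs pit two neural networks against each other and train both with gradient descent, whereas here the "generator" is a single number improved by random search and the "discriminator" is a fixed scoring function.

```python
import random

random.seed(42)

# "Real" data: samples clustered around 10.0.
real_samples = [10.0 + random.gauss(0, 0.5) for _ in range(100)]
real_mean = sum(real_samples) / len(real_samples)

def discriminator(x):
    """Scores how 'real' a sample looks: close to 1.0 near the real
    data's mean, falling toward 0.0 farther away."""
    return 1.0 / (1.0 + abs(x - real_mean))

def train_generator(steps=2000):
    """The generator starts far from the real data and keeps any random
    perturbation that fools the discriminator better."""
    gen_value = 0.0
    for _ in range(steps):
        candidate = gen_value + random.uniform(-1.0, 1.0)
        if discriminator(candidate) > discriminator(gen_value):
            gen_value = candidate
    return gen_value

fake = train_generator()
```

After enough iterations, the generator's output is statistically hard to distinguish from the real samples under this discriminator. A real GAN plays the same game at scale, which is why its synthetic faces and voices can fool human observers.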
US Senators Mark Warner (D-Va.) and Marco Rubio (R-Fla.) have called on social media companies to adopt new policies and standards to fight the distribution of deepfake videos. The senators sent a letter to eleven of the most popular social networks: Facebook, Twitter, TikTok, YouTube, Reddit, LinkedIn, Tumblr, Snapchat, Imgur, Pinterest and Twitch. The letter described the potential threat that deepfakes on social media pose to American democracy, and urged the networks to act and take precautions. Warner and Rubio expressed concern that deepfake attacks could turn the 2020 elections into a “show arena”. US politicians’ efforts to take precautions against deepfake videos before “it is too late for everything” continue across a variety of fields.
How can social media avoid becoming an accomplice?
Given the speed at which social networks spread content, they are “desperate accomplices of deepfakes” even as they try not to be. Even from the most optimistic perspective, if social media companies do not find a way to prevent this, being used as distribution channels for fake synthetic media will leave them in the position of “unwilling accomplices”. Recent international events such as Black Hat and the RSA Conference show that the deepfake threat has become a problem demanding an urgent solution, one spearheaded by the cybersecurity industry. The social media giants are inevitably in a position to support, demand and encourage the development of security technology against the deepfake risk.
Facebook and Google are searching for a solution.
Social media giant Facebook is taking steps to reduce the amount of synthetic media that it considers disinformation. In September 2019 it announced that it would prepare a deepfake dataset and had started a competition to develop security in this area. A written announcement signed by Facebook CTO Mike Schroepfer stated that a dataset of fake videos featuring volunteer actors had been prepared to capture the quality of deepfake media and to contribute to R&D work. Within this scope, Facebook announced the launch of the Deepfake Detection Challenge (DFDC) to pioneer criteria for detecting the accuracy of information presented in online media and to develop detection technologies based on this dataset. Schroepfer announced that Facebook will award $10 million through the project, which was started in partnership with Microsoft and with the cooperation of scientists from Cornell Tech, MIT, Oxford University, UC Berkeley, the University of Maryland, College Park, and the University at Albany, SUNY. A similar initiative was announced by Google in the same month. Google, which had released a synthetic speech dataset in January, published a large dataset of synthetic deepfake videos together with Jigsaw, which holds the vision of a “freer and safer internet”, in cooperation with the Technical University of Munich and the University of Naples Federico II. Google offered the data through FaceForensics to help technology developers improve synthetic video detection methods.
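The kind of benchmark such labeled datasets enable can be sketched in miniature: tag videos as real or fake, extract a feature, apply a threshold, and measure detection accuracy. Everything below is an illustrative assumption, not the real DFDC or FaceForensics format; the "artifact score" feature and the simulated data are invented for the sketch, and actual detectors use deep networks over raw video frames.

```python
import random

random.seed(0)

def artifact_score(frames):
    """Stand-in feature extractor: mean per-frame artifact value."""
    return sum(frames) / len(frames)

# Simulated labeled dataset of (frames, label) pairs. In this toy,
# fake videos carry higher artifact statistics on average.
dataset = (
    [([random.gauss(0.2, 0.05) for _ in range(30)], "real") for _ in range(50)]
    + [([random.gauss(0.6, 0.05) for _ in range(30)], "fake") for _ in range(50)]
)

def classify(frames, threshold=0.4):
    """Flag a video as fake when its artifact score crosses the threshold."""
    return "fake" if artifact_score(frames) > threshold else "real"

# The benchmark's output: detection accuracy over the labeled dataset.
accuracy = sum(classify(frames) == label for frames, label in dataset) / len(dataset)
```

The point of initiatives like DFDC is precisely to supply the labeled examples and the agreed scoring that make this loop possible at realistic scale, so competing detection models can be compared on a common footing.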
It remains to be seen how far social media networks will go in helping the security industry by shouldering their share of social responsibility against the dangers of deepfakes.