Microsoft and Banks Are Not Letting Their Guard Down Against Deepfakes During the Pandemic

The deepfake challenge continues at full speed on two separate fronts: on one side are those who produce new deepfake technologies, on the other those who try to develop deepfake detection models. While one side manufactures artificial reality, the other tries to distinguish real from fake, as if the two camps had taken on the roles of a generative adversarial network (GAN), the architecture that was initially most widely used to produce deepfakes in the first place. Technology investors are financing both sides, almost in competition with each other. A small number of technology companies with a strong artificial intelligence (AI) infrastructure, Zemana among them, carry out R&D in both areas.
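
For readers unfamiliar with the GAN dynamic this comparison rests on, the toy training loop below is a minimal, hypothetical sketch in PyTorch (made-up dimensions, not any production deepfake model): a generator learns to forge samples while a discriminator learns to flag them, and each side improves only because the other does.

```python
import torch
import torch.nn as nn

# Toy setup: the generator forges samples, the discriminator tells real from fake --
# the same adversarial dynamic the article compares to deepfake creators vs. detectors.
latent_dim, data_dim = 16, 32
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, data_dim)       # stand-in for real media samples
    fake = G(torch.randn(64, latent_dim))  # the generator's forgeries

    # Discriminator step: push real scores toward 1 and fake scores toward 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to fool the discriminator into scoring fakes as real
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```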

In this cat-and-mouse game between generation and security, the pandemic began to shift perceptions slightly. As the possibility of physical contact shrank because of Covid-19, deepfake technology became a way to keep many activities running. Just as we were overcoming our fears and starting to relax, the technology and financial giants threw their weight back behind security.

Efforts to bring deepfake under control

Microsoft came forward in deepfake detection with the Microsoft Video Authenticator, announced on September 1, less than two months before the US presidential election. The most feared aspect of deepfakes is that they can become a disinformation weapon capable of sowing chaos and undermining peace and democracy on a global scale. Microsoft is also leading the Project Origin initiative, a collaboration with international media organizations aimed at “reliable news”. Microsoft has taken a second important step in this area with a new digital watermark technology, announced at the same time.

The second development on the September deepfake agenda, again underlining the importance of security, is the set of new technologies that banks and financial institutions have adopted to counter deepfakes. HSBC partnered with Adobe and Mitek, a maker of digital identity-verification technologies, while ING, Rabobank and Aegon opted for the financial technology company iProov, which opened its new Security Center on 3 September. The other nightmare deepfakes create is that they facilitate impersonation and enable cyber fraud aimed specifically at global companies. The pandemic has greatly increased not only the risk of disinformation but also the risk of cyber fraud, as life has shifted online.

Microsoft takes lead in deepfake detection

In the middle of this year, Microsoft ended its working relationship with dozens of content editors in the UK and the USA, choosing instead to have that content created by AI algorithms, in a kind of deepfake text format. About three months later, Microsoft announced the Microsoft Video Authenticator, a deepfake detector developed for Microsoft users and the international public.

Microsoft’s new Video Authenticator is a deepfake media scanner, much like the Deepware Scanner that we at Zemana made available for free months ago on our website deepware.ai. Unlike Zemana, however, Microsoft is not making its detection tool available to everyone from the start. Microsoft’s tool analyzes a photo or video and returns a percentage, or confidence score, for the probability that it has been manipulated. When scanning video, it can produce this probability in real time for each frame as the video plays. The new detection tool was announced in a blog post signed by Tom Burt, Microsoft’s corporate vice president of Customer Security and Trust, and Eric Horvitz, chief scientific officer. According to the announcement, the tool works by detecting the blending boundary of a deepfake and subtle fading or grayscale elements that may not be detectable by the human eye.
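
Microsoft has not published the Video Authenticator’s internals, so the sketch below is only a hypothetical illustration of what frame-by-frame confidence scoring can look like in practice: frames are read with OpenCV and passed to an assumed classifier, here reduced to a placeholder function called score_frame.

```python
import cv2  # OpenCV, used here only to read video frames


def score_frame(frame) -> float:
    """Placeholder manipulation score in [0, 1].

    A real detector would run a trained classifier on the frame here;
    this stub simply returns 0.0 so the sketch stays runnable."""
    return 0.0


def scan_video(path: str):
    """Yield (frame_index, confidence) pairs as the video is read,
    mirroring the real-time, per-frame scoring described above."""
    cap = cv2.VideoCapture(path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        yield idx, score_frame(frame)
        idx += 1
    cap.release()


# Usage sketch: flag frames whose manipulation confidence exceeds 0.8
for idx, confidence in scan_video("clip.mp4"):
    if confidence > 0.8:
        print(f"frame {idx} looks manipulated (score {confidence:.2f})")
```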

The technology was originally developed by Microsoft Research in coordination with Microsoft’s Responsible AI team and Microsoft’s AI, Ethics and Effects in Engineering and Research (Aether) Committee, according to a statement from Microsoft. Microsoft Video Authenticator was created using the public FaceForensics++ dataset and was also tested on the Deepfake Detection Challenge (DFDC) dataset.

Microsoft does not claim 100% detection

Microsoft expects methods of producing synthetic media to keep evolving and to become even more sophisticated. As the DFDC showed once again, none of the existing AI-based deepfake detection methods achieves 100% accuracy. Microsoft stresses that solutions are needed for deepfake technologies designed to evade detection, and that in the long term stronger methods must be found to verify the authenticity of news articles and other media content. Against this background, Microsoft itself makes no claim of 100% detection against today’s deepfake technologies.

Digital watermark for reliable news

Microsoft also announced a new digital watermark technology as part of its “reliable news” verification platform initiative, run together with media organizations such as the BBC, The New York Times and CBC/Radio-Canada. The technology has two components. The first is a tool on Microsoft’s Azure cloud platform that lets content producers add digital hashes and certificates to the media they create; these travel with the content as metadata wherever it circulates online. The second, delivered as a browser extension or in other forms, checks those hashes and certificates to identify original content. In this way it can report with a high degree of accuracy whether the content is authentic and unchanged, and who produced it.
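
Microsoft has not published the watermarking protocol itself; the sketch below only illustrates the generic hash-sign-verify pattern such a system rests on, using Python’s cryptography package, with the publish and verify helpers invented for the example.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# --- Publisher side (analogue of a cloud-based signing tool) ---
signing_key = ed25519.Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()  # in practice distributed via a certificate


def publish(content: bytes) -> dict:
    """Attach a content hash and a signature over that hash as metadata."""
    digest = hashlib.sha256(content).digest()
    return {"content": content,
            "sha256": digest,
            "signature": signing_key.sign(digest)}


# --- Reader side (analogue of a verifying browser extension) ---
def verify(package: dict) -> bool:
    """Re-hash the content and check the publisher's signature on the hash."""
    digest = hashlib.sha256(package["content"]).digest()
    if digest != package["sha256"]:
        return False  # content was altered after signing
    try:
        verify_key.verify(package["signature"], digest)
        return True
    except InvalidSignature:
        return False


article = b"breaking news ..."
package = publish(article)
print(verify(package))                 # True: untouched content verifies
package["content"] = b"tampered news ..."
print(verify(package))                 # False: any change breaks the hash
```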

Democracy Coalition against the Deepfake threat

In addition, Microsoft is partnering with the AI Foundation, a San Francisco-based non-profit whose mission is to bring the power and protection of AI to everyone in the world. Through this partnership, the AI Foundation’s Reality Defender 2020 (RD2020) initiative will make the Video Authenticator available to organizations involved in democratic processes, including news outlets and political campaigns. Initially, the tool will be available only through RD2020, which will also guide these organizations on deepfake detection technologies and their ethical use.

Risk rises as contact declines in financial sector 

Financial institutions, including banks and financial technology companies, are forming partnerships to combat criminals who use manipulated video and audio content. The Financial Times (FT) reported on 6 September that research indicates deepfake-enabled cyber attacks are the biggest concern among financial sector customers. According to a University College London report published last month, fake audio and video content now ranks first among twenty AI-enabled crime methods, judged by the harm it can cause, the profit it can generate, and how easy it is to use and produce. The Covid-19 pandemic also makes people more vulnerable to impersonation scams: with quarantine limiting face-to-face contact, employees are exposed to more deepfake attacks built around payment-confirmation requests.

The FT reported that at the beginning of September, HSBC, the UK-based multinational investment bank and financial services group, joined the users of a biometric identification system developed by the technology firm Mitek and offered in partnership with Adobe. HSBC has rolled the system, which checks the identities of new customers using live images and electronic signatures, out to its US retail banking operation. Other users of Mitek’s biometric system include Chase, ABN Amro, CaixaBank, Mastercard and Anna Money.

On 3 September 2020, the British financial technology company iProov opened a new security center in Singapore that aims to detect and block deepfake videos used to impersonate customers. Rabobank, ING and Aegon are among the organizations using this technology to make sure they are dealing with real people rather than manipulated recordings. Bank customers are also aware of the dangers: in a survey of 2,000 consumers in the USA and UK conducted for iProov, 85% of respondents said deepfakes will make it harder to trust what they see online, and about one in three said they will make authentication more important. Andrew Bud, founder and CEO of iProov, said: “Very few organizations are likely to face such action because they are not aware of how fast this technology is developing. The latest deepfakes are so good they will convince most people and the system.”

Who makes technology monstrous?

Prejudices acquired through generalization have been among humanity’s most tragic dangers for ages, and one of the hardest obstacles for progress to overcome. The inability to separate the wheat from the chaff has squeezed life into black and white. The easy path of blanket acceptance or blanket rejection has done nothing but waste opportunities. Humanity’s stubborn resistance to science may have cost it centuries.

No doubt, in every era there have been those who tried to turn scientific discoveries and inventions to diabolical purposes. Was the fault in the technology, or in those who used it to commit crimes? Or in those who laid the groundwork for them? What matters is being able to weigh costs against benefits impartially and without prejudice. When science and technology are placed in the service of humanity as a controlled force, the vicious cycle of technology creating its own monster is broken. That applies to AI and to its most controversial product: deepfake technology.