Deepfake-Backed Financial Crimes Spread More Easily During Pandemic

Deepfakes, the most advanced and dangerous type of synthetic media manipulated with artificial intelligence (AI) tools, could soon serve many malicious purposes. It is not difficult to predict that the most common deepfake-enabled cyber-attacks will be financially motivated. IT crimes that appear on judicial records under labels such as cyber extortion or fraud are more dynamic, fast, easy, effective and profitable than their offline counterparts, and harder to catch, so they will increasingly turn to deepfakes. Deepfake-backed financial crimes have found far more favorable conditions for setting cyber traps during the pandemic, when face-to-face communication is restricted.

Test of technology against crime

Before the pandemic accelerated digital transformation, financial criminals had already begun to probe vulnerabilities in technology and to explore ways of using technology for criminal ends. The evolution of financial crime into IT crime comes down to the fact that the opportunities technology provides make it easier to inflict material damage. In the past two years, the first publicly documented cases of deepfakes used for fraud and extortion have begun to appear.

Disinformation is not a new form of attack on the financial world. Crimes of deception, such as fraud, forgery and market manipulation, are threats that exploit the particular conditions of each economy. Attackers, for their part, routinely fold new technologies into their plans. So we cannot ignore how new and effective deception tools like deepfakes will make financial crimes, and attacks that lead to financial harm, more dangerous.

Security starts with attack crisis scenarios

To arrive at an accurate analysis, it is necessary to identify the specific ways in which deepfakes and other synthetic media can facilitate financial damage, and to assess their likely impact. Deceptive synthetic media can be used to inflict financial damage on a wide range of potential targets. Obvious targets include financial institutions such as banks, exchanges, clearing houses and brokerage firms, all of which rely on accurate information to execute transactions. Financial regulators and central banks, which oversee general market conditions and fight harmful misinformation, form another category. But companies and individuals outside the financial sector and regulatory agencies will also become targets of deepfake attacks. Looking at the history of financial crime and at today's synthetic media technology, distinct threat scenarios can therefore be sketched for four possible target groups: individuals, companies, financial institutions and market regulators.

The individual is targeted, but the economy is damaged

Threat scenarios may be aimed at any one of these four groups of potential victims, yet some scenarios will eventually affect more than one group. Identity theft enabled by synthetic media, for instance, does not harm only the people whose identities are stolen. Companies can also be badly damaged by these attacks: banks that issue credit cards to impostors, and retailers that unwittingly process sales charged to those cards, suffer too. In other words, many small losses occurring at the same time can, in theory, snowball into much larger ones.

How do deepfakes add danger to "identity theft"?

The threat of identity theft, which existed long before synthetic media, is the most common type of consumer complaint received by the U.S. Federal Trade Commission (FTC). Artificial intelligence (AI), however, now enables new and more sophisticated forms of digital impersonation. No one can guarantee that the first major financial crime carried out as a deepfake attack will not use AI-generated fake video and audio of people in financially important positions. When deepfakes are used to steal individuals' identities, for example, a phone call in the victim's synthesized voice can trick an executive assistant or financial adviser into initiating a fraudulent bank transfer. Deepfake audio or video can also be used to open bank accounts under false identities and to facilitate money laundering.

Deepfakes can also facilitate identity theft on a larger scale. Criminals can use deepfakes in social engineering operations to gain unauthorized access to large databases of personal information. For example, an e-commerce company employee may receive a deepfake phone call that synthesizes the voice of an IT administrator and asks for their username and password. In this scenario, the deepfake serves as the initial phishing lure for obtaining credentials; with the access rights acquired this way, identity theft then takes place on a much larger scale in a second stage.

As voice cloning evolves, deepfake’s role in identity theft will increase

Voice phishing carried out with deepfakes is technically viable today (see Figure 1). Current technology enables realistic audio cloning that can be controlled in real time from keyboard input. One of the leading developers of commercial voice synthesis technology claims its product can convincingly clone a person's voice from just five minutes of recorded speech, and algorithms that produce rough voice clones from only three seconds of sample audio are also known to exist.

Because only a small voice sample is required, virtually anyone's voice can, in theory, be cloned and used for identity theft or other malicious purposes. Identity thieves can, moreover, clone a victim's voice from a video posted on social media, call the victim by phone or through online voice apps, or secretly record the victim's conversations with others.
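On the defensive side, one basic countermeasure is speaker verification: comparing the audio of an incoming call against a voice sample enrolled when the account was opened. The snippet below is a minimal sketch of that idea using the open-source SpeechBrain toolkit; the toolkit, the pretrained model and the file names are illustrative assumptions, not tools named in this article, and a high-quality voice clone may well defeat such a naive check.

```python
# Minimal speaker-verification sketch (illustrative only).
# Assumes two local WAV files: a voice sample enrolled at account opening
# and the audio of the call being checked. File names are hypothetical.
from speechbrain.pretrained import SpeakerRecognition

verifier = SpeakerRecognition.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb",
    savedir="pretrained_models/spkrec-ecapa-voxceleb",
)

# verify_files returns a similarity score and a same-speaker prediction.
score, same_speaker = verifier.verify_files(
    "enrolled_customer.wav",  # sample captured during onboarding
    "incoming_call.wav",      # audio from the suspicious call
)

print(f"similarity: {float(score):.3f}, same speaker: {bool(same_speaker)}")
```

In practice a bank would treat such a score only as one signal among many, alongside device, behavioral and transaction checks.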

Banks seek defense against deepfake in collaboration with Fintech firms

According to a University College London report published last year, fake audio and video content now ranks first among 20 ways AI can be used for criminal purposes, judged by the harm it can cause, the profit it can generate, and how easy it is to use and produce. The Covid-19 pandemic also makes people more vulnerable to impersonation scams: with lockdowns restricting face-to-face contact, employees are exposed to more deepfake attacks built around fraudulent payment confirmations.

In the fintech industry, security firms that use AI to detect both video and audio deepfakes as part of their fraud-prevention technologies stand out. Many cybersecurity developers offer liveness detection technology to clients in the financial sector to spot artificial representations of real customers. Liveness detection plays an important role in catching identity fraud during new customer onboarding.
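Liveness detection itself relies on vendors' proprietary challenge-response techniques (screen-lit flashes, movement prompts, depth cues) that cannot be reduced to a few lines. The sketch below shows only the adjacent face-matching step that onboarding systems typically pair with it, using the open-source face_recognition library as an illustrative stand-in rather than any product mentioned in this article; the file names and the distance threshold are assumptions.

```python
# Hedged sketch of the face-matching step in digital onboarding (illustrative).
# Compares the photo on an identity document with a live selfie.
# File names and the 0.6 threshold are assumptions, not vendor settings.
import face_recognition

id_photo = face_recognition.load_image_file("id_document_photo.jpg")
selfie = face_recognition.load_image_file("onboarding_selfie.jpg")

id_encodings = face_recognition.face_encodings(id_photo)
selfie_encodings = face_recognition.face_encodings(selfie)

if not id_encodings or not selfie_encodings:
    raise ValueError("No face found in one of the images")

# Lower distance means more similar faces; 0.6 is the library's common default.
distance = face_recognition.face_distance([id_encodings[0]], selfie_encodings[0])[0]
print(f"face distance: {distance:.3f}, match: {distance < 0.6}")
```

In production, a match like this would be combined with liveness signals precisely so that a deepfake video or a printed photo cannot stand in for the live customer.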

Financial institutions, especially banks, are forging new collaborations with fintech companies in the face of the growing danger. According to the Financial Times (FT), at the beginning of September the UK-based multinational investment bank and financial services conglomerate HSBC joined the users of the biometric identification system developed by Mitek and offered in partnership with Adobe. HSBC has integrated the system, which checks the identities of new customers using live images and electronic signatures, into its US retail banking operation. Other users of Mitek's biometric system include Chase, ABN Amro, CaixaBank, Mastercard and Anna Money. British fintech company iProov, meanwhile, has opened a new Security Centre in Singapore that aims to detect and block deepfake videos used to impersonate customers. Rabobank, ING and Aegon are among the organizations that use this technology to make sure they are dealing with real people.

Bank customers are also aware of the dangers. In a survey of 2,000 consumers in the US and UK conducted for iProov, 85% of respondents said deepfakes will make it harder to trust what they see online, and three-quarters said they will make authentication more important.

Disinformation is at the heart of fraud

Fraud, which likewise predates synthetic media, is the second most common type of complaint received by the FTC. Attackers impersonate a "public official, a relative in distress, a well-known business or a technical support professional" to pressure the victim into paying money. Losses caused by impostor scams in the US in 2019 were put at $667 million.

Deepfakes can increase the realism and credibility of fraud. Scammers can clone the voice of a specific person, such as a victim's relative, or of a prominent government official familiar to many victims. For skilled scammers who do extensive online research to map family relationships and develop convincing voice imitations, deepfakes can become a huge opportunity. In fact, the scam does not have to be completely convincing: manipulating victims' emotions and creating a false sense of urgency helps paper over gaps and inconsistencies. That is also why the elderly are so often chosen as victims.

Pandemic creates favorable climate for financial crimes

In the second half of last year, the software group of smartphone maker BlackBerry warned that the pandemic was exposing more people to impersonation scams. Eric Milam, BlackBerry's vice president of research operations, said criminals record real customers' voices and synthesize new speech from them to attempt phone banking scams.

Fintech company Silent Eight has also warned that the "phishing" scams used to obtain personal data can be made entirely convincing with fake audio and video. Recent phishing attempts are estimated to have a 60 to 70 percent success rate, and using AI to personalize the message, with names or references that only friends or family would know, is said to push that rate toward 100 percent. Matthew Leaney, Chief Financial Officer of Silent Eight, said: "If an elderly person sees a video that appears to come from his granddaughter, in which she addresses him the way she always does, that moment is enough for him to believe it. If you grew up believing what you saw, you simply trust it. The societal impact is dire."

Cyber extortion feeds off blackmail

Cyber extortion is blackmail, a crime dating back to ancient times, transformed into an IT crime. In a cyber extortion scheme, criminals claim to hold embarrassing information about the victim and threaten to release it unless they are paid or some other demand is met. The information is, by its nature, often sexual; a common method is blackmail built around nude pictures or videos of the victim that have supposedly been obtained.

In some cases the blackmail material is real and has been obtained by hacking, but more often the scheme is a bluff. To make it more personal and convincing, cyber extortionists sometimes quote a victim's password or phone number in their messages; this information is typically lifted from a publicly available data dump. In 2019, U.S. residents reported $107 million in losses from cyber extortion, not counting ransomware, according to the Federal Bureau of Investigation (FBI).

Fintech agenda of 2021

Aarti Samani, senior vice president of Products and Marketing at iProov, assesses the financial sector's prospects and the fintech agenda for 2021 in light of the digitization process:

Banking regulators in different regions, including Europe and the Far East, will allow automated biometrics to be used instead of video calls in remote know-your-customer (KYC) processes.

Just as audio fraud was carried out in 2019 by cloning the voice of a high-profile CEO, several financial crime and money laundering scandals stemming from the use of deepfakes in video calls will emerge by the end of 2021.

Concrete steps by several countries, including the United States, toward state-sponsored digital identities could come onto the agenda. Effective authentication of this kind could help financial and government agencies significantly reduce risks such as impersonation of bank customers and fraud in government support programs.

Digitally inexperienced users will need simpler authentication methods. Accordingly, three developments will unfold in 2021. First, the password, long the weak point of most people's online interactions, will give way to simpler methods of authentication.

Second, as many as 100 million people over the age of 70 around the world will acquire digital identities, and the concept of "digital surrogacy" will soon become a reality. Third, since older or less experienced people using technology for the first time are also the most vulnerable to online manipulation, developing methods to protect them online will become an important item on the agenda.

The deepfake arms race will intensify in 2021, and we can expect an explosion in both the quality and quantity of deepfakes. Beyond entertainment and satire, they will also be used for disinformation, fraud and trolling. Hordes of realistic-looking "fake people" will share disinformation on an enormous scale, leading society to believe that thousands of people hold a controversial view.

Creating a high-quality, sophisticated deepfake will become ever easier. A highly complex process that was once possible only in Hollywood film studios is turning into something a teenager can do skillfully at home.

A balance between panic and security is essential

Fintech company iProov, which specializes in authentication, has released a report on its survey of 105 cybersecurity decision makers at UK-based financial institutions. 13% of the firms surveyed had never heard the term deepfake; 31% had no plans to address the threat or were unsure; and 28% had implemented countermeasures. 4% of respondents said deepfakes posed no threat to their company, while 40% said they posed only a "slight threat."

Today, synthetic media attacks still fall far short of their enormous potential as a financial threat, so the strategic question is how much this threat will grow over time. The prevailing view holds that action is needed now to head off serious risks.

However, those who speak for responsible circles in the financial world, and who try to calm anxiety and panic in advance, argue that the threat will not cause great harm.

Of course, time will tell who is right, but if those sounding the warnings are right, by then it will be too late.