“Deepfake may give you cancer!” With such a headline, you might not bother reading the rest of the article. You might reduce the message of the article to, “If a false perception about you gets out through a phony video, you will be wrecked to the point that even your immune system may collapse.”
We already know that such a risk exists in the not-so-distant future for people who enjoy some level of fame, wealth, and reputation and have obsessive enemies or rivals. The emotional and physical toll that a deepfake takes on its target could eventually give them cancer. Writing this merely to tell such people what a deepfake may bring upon them might seem pointless, but that is not the aim of this piece.
Deepfake’s relationship with tumors
Deepfake technologies, built on artificial intelligence (AI) and deep learning, are such a sharp double-edged sword that the promised benefits and the resulting pitfalls leave you between a rock and a hard place. The relationship between deepfake and cancer is an example of this. Serious academic studies suggest that deepfake technology may be useful for early and effective diagnosis of cancer. On the other hand, being the target of a deepfake may wear someone down to the point of leaving them vulnerable to cancer. And there is yet another possibility: cyber attackers may go so far as to make a person believe they have cancer, or trick them into believing they do not have cancer when they do.
Deepfake for fast and early detection of cancer
Several academic studies, especially last year, showed that AI and deep learning could serve as revolutionary tools for faster, more accurate, and more detailed clinical diagnosis in radiology. An article published in MIT Technology Review reported that medical diagnosis could benefit from generative adversarial networks (GANs), the deep-learning architecture underlying deepfakes, and the algorithms that give them the ability to synthesize realistic images. The article noted how exceptional deep-learning algorithms are at pattern-matching in images. GANs can be trained to identify different types of cancer in a computed tomography (CT) scan, to detect diseases in magnetic resonance imaging (MRI), and to diagnose anomalies in an x-ray. Due to privacy restrictions, researchers rarely have sufficient training data, and this is where GANs come into the picture: they can effectively multiply a data set to the necessary size by synthesizing medical images that cannot be distinguished from real ones.
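To make the adversarial idea concrete, here is a minimal sketch in PyTorch, assuming 64x64 grayscale scan slices; the architecture, sizes, and training step are illustrative assumptions, not the models of any study cited here.

```python
import torch
import torch.nn as nn

LATENT_DIM = 100  # size of the random noise vector fed to the generator

class Generator(nn.Module):
    """Maps random noise to a synthetic 64x64 single-channel scan slice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(LATENT_DIM, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),  # 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),         # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),           # 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),            # 32x32
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),                                     # 64x64
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores how 'real' a slice looks; trained against the generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),     # 32x32
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, True),   # 16x16
            nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2, True),  # 8x8
            nn.Conv2d(256, 1, 8, 1, 0),                             # single logit
        )
    def forward(self, x):
        return self.net(x).view(-1)

def train_step(gen, disc, real, g_opt, d_opt, loss=nn.BCEWithLogitsLoss()):
    """One adversarial round on a batch of real slices."""
    z = torch.randn(real.size(0), LATENT_DIM, 1, 1)
    fake = gen(z)
    # Discriminator: label real slices 1, synthetic slices 0.
    d_loss = loss(disc(real), torch.ones(real.size(0))) + \
             loss(disc(fake.detach()), torch.zeros(real.size(0)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator: fool the discriminator into scoring fakes as real.
    g_loss = loss(disc(fake), torch.ones(real.size(0)))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Once the two networks reach equilibrium, sampling the generator yields as many synthetic slices as a privacy-constrained data set needs.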
However, deep-learning algorithms have to be trained on high-resolution images to produce the best predictions, and synthesizing such high-resolution images, especially in 3D, requires specialized, highly configured, and expensive hardware. The high cost puts deepfake-supported diagnosis out of reach for most hospitals. Researchers from the Institute of Medical Informatics at the University of Lübeck therefore decided to divide the process into stages to make it less computationally intensive: the GAN first produces the whole image at low resolution, then fills in the high-resolution details in smaller patches. Through experiments, the researchers showed that their method created realistic high-resolution 2D and 3D images with modest computing resources, keeping the cost stable regardless of image size.
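The staged idea can be sketched as follows; `PatchRefiner` and the `low_res_gen` argument are hypothetical stand-ins rather than the Lübeck group's published architecture, and the sizes are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchRefiner(nn.Module):
    """Hypothetical stage-2 model: sharpens one small patch at a time."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, x):
        return x + self.conv(x)  # residual correction of the coarse patch

def generate_high_res(low_res_gen, refiner, z, patch=32, scale=4):
    """Stage 1: synthesize the whole image at low resolution.
    Stage 2: upsample, then refine patch by patch, so peak memory
    scales with the patch size rather than the final image size."""
    coarse = F.interpolate(low_res_gen(z), scale_factor=scale,
                           mode="bilinear", align_corners=False)
    out = torch.zeros_like(coarse)
    _, _, H, W = coarse.shape
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            out[:, :, y:y+patch, x:x+patch] = refiner(
                coarse[:, :, y:y+patch, x:x+patch])
    return out
```

Because the refiner only ever sees one small tile, the hardware cost stays flat no matter how large the final image grows, which is the property the Lübeck experiments reported.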
Medical imaging under threat of deepfake
Deepfake technology strengthening clinical diagnosis through deep learning is only one side of the coin. The dark side came to light with the CT-GAN project by an Israeli research group from Ben-Gurion University of the Negev. The group, consisting of Yisroel Mirsky, Tom Mahler, Ilan Shelef, and Yuval Elovici, conducted a practical experiment for their academic paper, infiltrating the radiology network of an active hospital and manipulating its CT scans.
They got the idea from the multiple cyber attacks that caused serious data breaches and disrupted the medical services of clinics and hospitals in 2018. Their goal was to prove that an infiltrator with access to medical records could do more than hold data for ransom or sell it on the black market. They wanted to show how deep learning could be used to add or remove synthetic evidence of a medical condition in high-resolution 3D medical scans. An infiltrator could do this to hamstring a political candidate or to commit insurance fraud, an act of terror, or even murder. The researchers explained in their paper that they used a 3D conditional GAN to conduct the experimental assault on the radiology system of an active hospital, demonstrating how the CT-GAN framework, the subject of the project, could be automated. Despite the complexity of the human body and the large size of 3D medical scans, CT-GAN produced realistic results in milliseconds. To assess the attack, they focused on injecting and removing lung cancer in CT scans. The experiment established the susceptibility of three radiologists and a state-of-the-art screening system, as well as the insecurity of the modern radiology network, including its internet connections, against such an attack. In the end, their covert incursion succeeded in intercepting and manipulating the CT scans on the radiology network of an active hospital.
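In spirit, the tamper step works like the sketch below; `inpaint_gan` is a hypothetical stand-in for a trained conditional in-painting model, and the coordinates, sizes, and normalization are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def tamper_scan(volume, center, inpaint_gan, size=32):
    """Cut a cuboid around the target location, let the conditional GAN
    in-paint it (injecting or removing a nodule), and paste it back.
    Assumes `center` lies at least size // 2 voxels from every border."""
    z, y, x = center
    h = size // 2
    cuboid = volume[z-h:z+h, y-h:y+h, x-h:x+h].astype(np.float32)
    lo, hi = cuboid.min(), cuboid.max()
    normed = (cuboid - lo) / (hi - lo + 1e-8)  # scale to [0, 1] for the model
    fake = inpaint_gan(normed)                 # hypothetical trained in-painter
    # Restore the original intensity range so the patch blends into the scan.
    volume[z-h:z+h, y-h:y+h, x-h:x+h] = fake * (hi - lo) + lo
    return volume
```

Because only a small cuboid is rewritten, the edit runs in milliseconds and leaves the rest of the scan untouched, which is what makes the manipulation so hard to notice.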
Radiologists fall for it, even knowingly
The CT-GAN project proved how easy it is for health hackers to infiltrate Picture Archiving and Communication Systems (PACS). Moreover, the researchers carried out the experimental attack on the systems of an active hospital using a small single-board computer, a Raspberry Pi, worth less than $50. Although the participating hospital had consented to the experiment beforehand, it was still displeased with how easily the hackers accessed the network. The realistic images produced by the deep-learning models were equally concerning. Three seasoned radiology specialists, unaware that they were looking at fabricated lung cancer images, confirmed a cancer diagnosis at a 99 percent rate; even when told that the images might have been manipulated, they still struggled to spot the fakes.
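One countermeasure these findings point toward is making records tamper-evident. Below is a minimal sketch, assuming an HMAC key kept outside the PACS itself; the file names and key handling are illustrative, not a prescribed hospital workflow.

```python
import hashlib
import hmac

def sign_scan(path: str, key: bytes) -> str:
    """Return an HMAC-SHA256 tag over the scan file's raw bytes."""
    with open(path, "rb") as f:
        return hmac.new(key, f.read(), hashlib.sha256).hexdigest()

def verify_scan(path: str, key: bytes, expected_tag: str) -> bool:
    """True only if the file is byte-identical to the one that was signed."""
    return hmac.compare_digest(sign_scan(path, key), expected_tag)

# Usage: tag = sign_scan("study_001.dcm", key) at acquisition time; later,
# verify_scan("study_001.dcm", key, tag) exposes any in-transit manipulation.
```

A scan altered anywhere between the scanner and the radiologist's workstation would fail verification, so a forged nodule could be caught before it ever shapes a diagnosis.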
The project affirmed that an infiltrator who gains access to scan results has the power to alter a patient's diagnosis, since 3D medical scans provide strong evidence of the patient's state of health. An infiltrator could thus add to, or remove from, the digital medical record evidence of aneurysms, heart disease, blood clots, infections, brain tumors, or other cancers. Attackers of the health system may have several motivations for such an assault. For instance, they may jeopardize democracy by fabricating an illness that changes the outcome of an election or topples a political figure.
A new and different test of cancer with deepfake
Today, cancer is already a prevalent disease among us. Although it can be difficult to know exactly how, we have to protect our digital records against the threat of deepfake so that we do not face the menace of cancer more than we must.