Artificial intelligence (AI) is the ability of computers and machines to perform tasks that traditionally require human intelligence. AI technologies are built as computer systems that carry out specific tasks with the help of artificial neural networks (loosely modeled on the human brain), mathematical optimization, and statistical methods. AI typically relies on large amounts of data to train the algorithms that perform these cognition-related tasks. Most current AI technologies serve a “narrow” purpose: they are designed for one particular task, such as identifying objects in images. Such narrow-purpose AI technologies are increasingly being used, especially in the field of national security.
Most political and scientific articles on AI focus on lethal autonomous weapon systems, such as “killer robots” that detect and destroy targets without human intervention. AI, however, is used across a wide variety of national security missions. It constitutes an important part of the software that operates physical systems such as autonomous aircraft or ships, and it is also embedded in analytical processes, as when machine learning is used to classify targets in satellite images. In either case, AI is not a military element on its own, yet it has become an important source of power and a basis for the efficiency of national security tasks and systems.
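To make the “narrow task” concrete, the sketch below shows, purely as an illustration, how a small convolutional network might classify objects in satellite image tiles. The framework (PyTorch), the tile size, and the class labels are our assumptions, not details of any deployed system.

```python
# Minimal sketch of a "narrow" AI task: classifying objects in satellite
# image tiles with a small convolutional network. The architecture, the
# 64x64 tile size, and the class labels are illustrative assumptions.
import torch
import torch.nn as nn

CLASSES = ["vehicle", "aircraft", "ship", "building", "background"]  # hypothetical

class TileClassifier(nn.Module):
    def __init__(self, num_classes: int = len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = TileClassifier()
tile = torch.rand(1, 3, 64, 64)      # stand-in for one 64x64 RGB satellite tile
probs = model(tile).softmax(dim=1)
print(CLASSES[int(probs.argmax())])  # predicted class (untrained, so arbitrary)
```

A real system would of course be trained on large volumes of labeled imagery; the point here is only how narrowly scoped the task is.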
Is the war won on the field or in the operations center?
You have certainly seen war in action movies and series, or war and terrorism on the news. National security operations are run from a digital operations center. Battles in the field against global terrorists or the military elements of an enemy state are directed from these AI-aided bases, equipped with the most advanced and integrated digital image analysis capabilities available. National security units in the field act on the instructions of their superiors in the digital operations center, and those operators make strategic decisions by using AI-based algorithms to analyze image data obtained from digital sources in the sky, such as satellites or unmanned aerial vehicles. Imagery is a crucial component of intelligence. If the flow of images from the field is interrupted, a national security operation is left blind, deaf, and dumb, and its chance of success shrinks. What happens if the intelligence images are not real but synthetically produced, altered, or manipulated? Wouldn’t the war in the field be lost?
Naturally, national security does not consist only of military operations beyond the country’s borders or outside residential areas. The threats targeting social peace and comfort, which we describe in a broad sense as public order, are diverse. Terrorist actions in cities, social movements that slide into anarchy, protest demonstrations, street clashes, major traffic accidents, natural disasters (e.g., floods, fires, earthquakes), and crimes against public order (such as robbery, extortion, and looting) all pose a great danger to public safety. The public authority responsible for the security of cities and the country monitors, inspects, and controls cities, highways, and all strategic areas, especially critical ones, with CCTV camera security systems. The ability of security forces, health care teams, and firefighters to respond to predicted or unexpected hazards in the fastest and most effective way depends on the video data flowing from security cameras. In the article titled “How safe are security cameras against deepfake?” we discussed the risk of manipulated synthetic video content infiltrating the CCTV security camera network. But the social threat of deepfakes is not limited to the CCTV security camera networks of residential areas.
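One conceivable safeguard against such infiltration, sketched below only as an illustration, is to have each camera cryptographically authenticate its frames so that injected synthetic video fails verification at the monitoring center. The HMAC scheme and the shared per-camera key are our assumptions, not a description of any real CCTV product.

```python
# Illustrative sketch: authenticating CCTV frames so that injected synthetic
# video can be flagged. Assumption (not from the original text): each camera
# signs raw frame bytes with a provisioned secret using HMAC-SHA256, and the
# operations center verifies the tag before any analysis.
import hmac
import hashlib

CAMERA_KEY = b"per-camera-secret-key"  # hypothetical provisioned secret

def sign_frame(frame_bytes: bytes) -> bytes:
    """Camera side: compute an authentication tag for one video frame."""
    return hmac.new(CAMERA_KEY, frame_bytes, hashlib.sha256).digest()

def verify_frame(frame_bytes: bytes, tag: bytes) -> bool:
    """Operations-center side: reject frames whose tag does not match."""
    expected = hmac.new(CAMERA_KEY, frame_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

genuine = b"...raw frame bytes..."
tag = sign_frame(genuine)
print(verify_frame(genuine, tag))                   # True: frame is authentic
print(verify_frame(b"synthetic replacement", tag))  # False: tampering detected
```

Authentication of this kind does not detect deepfakes as such; it only guarantees that the stream reaching the operators is the stream the camera produced.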
Can air defense be defended against deepfakes?
During the COVID-19 pandemic, quarantine, social distancing, and mask compliance in the city centers of many countries have been monitored through the image flow obtained from drone swarms. When the national security network in the sky that keeps life under surveillance turns into a source of disinformation that misleads the public authority, critical security decisions can be derailed. Hyper-realistic synthetic images can eliminate the possibility of a correct and necessary intervention being performed in a timely manner. Deepfakes, the biggest threat to national security’s digital vision in the sky, are likely to become the most dangerous hidden enemy of nation-states in the near future. When air defense came up in the past, the first thing that came to mind was fleet power built on high-tech aircraft. Now the air defense system itself must be defended against deepfakes.
Citadel Defense, which offers AI-backed autonomous drone security technologies to military, government, commercial, and international customers against airborne espionage attempts and attacks, took an important step last month. The company, which serves the U.S. Special Forces, Air Force, and Navy, also provides systems that protect valuable assets, such as merchant marine fleets, oil wells, and refineries, against airborne attacks. Citadel Defense is now building a new line of defense against spoofing tactics. At the end of last month, the company announced the release of new software that turns the artificial neural networks behind deepfake technologies into a strong defense against enemy spoofing. The company explained that the software is designed to fight the growing range of enemy tactics that tamper with security intelligence equipment. The solution, called “Adversarial Networks,” will be deployed on Citadel’s Titan C-UAS drone defense platform.
A signal classification model separates real drone signals from deepfakes.
Christopher Williams, CEO of Citadel Defense, said that just as anti-virus programs have methods for detecting software abuse, Titan has automated methods that proactively defend against spoofing abuse. Williams stated that new deep-learning skills were added to Titan for this purpose, aiming to blind the drone-equipped enemy and to deny it any advantage or opportunity for safe action in contested and complex radiofrequency environments. Classification models developed in the system, which use registered image-creation algorithms, measure the authenticity of a detected signal: they help determine whether it belongs to a real drone or is a fake signal synthetically produced by the enemy.
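Citadel has not published the internals of “Adversarial Networks,” so the following is emphatically not its implementation; it is only a toy sketch of the general idea of scoring a detected signal’s authenticity with a binary classifier. All feature names and numbers are invented.

```python
# Toy sketch of signal-authenticity scoring (NOT Citadel's Titan system):
# a logistic-regression classifier separates genuine drone emissions from
# synthetically generated decoy signals using invented RF features.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training rows: [bandwidth_MHz, hop_rate_kHz, burst_jitter]
real_drones  = rng.normal([2.0, 5.0, 0.1], 0.2, size=(200, 3))
fake_signals = rng.normal([2.5, 4.0, 0.4], 0.2, size=(200, 3))
X = np.vstack([real_drones, fake_signals])
y = np.array([1] * 200 + [0] * 200)  # 1 = genuine drone, 0 = synthetic decoy

# Train by plain gradient descent on the logistic loss.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()

def authenticity_score(features: np.ndarray) -> float:
    """Probability that the detected signal belongs to a real drone."""
    return float(1.0 / (1.0 + np.exp(-(features @ w + b))))

print(authenticity_score(np.array([2.0, 5.0, 0.1])))  # high: looks genuine
print(authenticity_score(np.array([2.5, 4.0, 0.4])))  # low: looks synthetic
```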
The U.S. spends billions of dollars on its Cyber Command.
The U.S. Army established its Cyber Command (ARCYBER) exactly ten years ago. Billions of dollars have been spent every year on ARCYBER’s cyber training and cyber exercises, which build up the country’s digital war power. The U.S. Armed Forces have taken another big step to carry their digital war and sky-intelligence superiority beyond the atmosphere: with U.S. President Donald Trump’s approval of the 2020 defense budget, the National Defense Authorization Act, the U.S. Space Force was officially established this year.
The expanding military use of AI is also changing the way states conduct military operations. AI-based technologies, autonomous drone swarms, and algorithms that rapidly sift through large amounts of information will undoubtedly increase the speed and efficiency of military engagement. Research and development work on AI promises to bring sophisticated accuracy and efficiency to complex and dangerous national security tasks. However, solutions for the potential risks that may arise in the context of national security or multinational military operations are not yet sufficient.
Data-poisoning risk makes AI’s reliability questionable.
As AI-backed warfare becomes widespread, military cooperation looks set to become harder and harder. States that possess AI technology may find it difficult to overcome the political obstacles to sharing, with their allies, the sensitive data required to develop and operate AI-aided systems. In AI-aided international security operations, the data shared by allies can leave them vulnerable, especially to manipulation. They worry that the enemy could break into their data repositories and “poison” the data, threatening their national security by injecting fake records or deliberately corrupting existing ones, and they are not wrong at all.
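A toy experiment illustrates why this worry is justified. In the sketch below (our own construction; the data, labels, and numbers are invented), an adversary who can write to a shared repository injects a handful of mislabeled records, and the accuracy of a simple classifier trained on that repository drops sharply.

```python
# Toy data-poisoning demonstration (invented data, not from the original text).
# An adversary injects a few extreme fake records labeled "civilian", dragging
# the civilian centroid toward the military class, so a nearest-centroid
# classifier starts calling real military targets civilian.
import numpy as np

rng = np.random.default_rng(1)
civilian = rng.normal(0.0, 1.0, (500, 2))
military = rng.normal(4.0, 1.0, (500, 2))
X = np.vstack([civilian, military])
y = np.array([0] * 500 + [1] * 500)   # 0 = civilian, 1 = military (illustrative)

def accuracy(train_X: np.ndarray, train_y: np.ndarray) -> float:
    """Fit class centroids on the training set; score on the clean data."""
    c0 = train_X[train_y == 0].mean(axis=0)
    c1 = train_X[train_y == 1].mean(axis=0)
    pred = np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)
    return float((pred.astype(int) == y).mean())

print("clean accuracy:   ", accuracy(X, y))           # near-perfect

fakes = rng.normal(40.0, 1.0, (50, 2))                # 5% poisoned records
Xp = np.vstack([X, fakes])
yp = np.concatenate([y, np.zeros(50, dtype=int)])     # mislabeled "civilian"
print("poisoned accuracy:", accuracy(Xp, yp))         # visibly degraded
```

Even this crude attack, touching only 5% of the repository, shifts the learned decision boundary enough to misclassify a large share of genuine targets.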
Enemy forces can use AI to launch deception campaigns that interfere with an alliance’s military command and control processes. They can also use AI to run disinformation campaigns designed to stir up disputes among allies. In this way, the enemy can mislead AI-based target recognition systems, leading them to overlook military targets or to classify them as “not military targets.” It can poison image data so that civilian infrastructure is identified as a military facility. Worst of all, it can cause the wrong targets to be hit. Those who plan and manage military operations have tried to deceive their enemies in times of war and crisis for a very long time. In World War II, the Allied forces staged an elaborate deception, complete with a phantom army of inflatable tanks and decoy aircraft, to mislead Nazi planners about the location of the D-Day landings. Such physical deception is no longer necessary in military operations, because AI offers a powerfully persuasive weapon (e.g., deepfakes) for setting digital traps and spreading disinformation.
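The digital analogue of the inflatable tank is the adversarial example. The sketch below shows one textbook evasion technique, the fast gradient sign method (FGSM), applied to a stand-in recognizer; the model, the image, and the labels are placeholders, not any real target recognition system.

```python
# Hedged FGSM sketch (PyTorch; model and data are stand-ins): a small, nearly
# invisible perturbation is crafted to push a classifier's loss up on the true
# label, e.g. away from "military target".
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))  # toy recognizer
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in recon image
true_label = torch.tensor([1])                        # 1 = "military target"

loss = loss_fn(model(image), true_label)
loss.backward()                                       # gradient w.r.t. pixels

epsilon = 0.05                                        # perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

print("clean prediction:      ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

Against a trained model with a small decision margin, a perturbation this size can flip the label while remaining imperceptible to a human analyst.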
Deepfakes are a dirty weapon in AI wars.
Deepfakes can be used in unexpected ways, as a dirty weapon that does not comply with the ethics of war. The enemy can generate a deepfake of a senior commander to give false or contradictory orders to troops in the field, using video or audio recordings of the real commander obtained from the media or through intelligence. Or, to fabricate satellite intelligence images, it can build hyper-realistic synthetic media with generative adversarial networks. Such fake orders and intelligence reports, transmitted via video teleconference, phone, email, or radio, can cause troops to be deployed in ways that assist the enemy.
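The adversarial training loop at the heart of a generative adversarial network can be summarized in a few lines. The sketch below (PyTorch; all sizes and data are arbitrary stand-ins) shows one step of the game: a discriminator learns to separate real images from generated ones, while a generator learns to fool it.

```python
# Minimal GAN training step (illustrative sizes; random data stands in for a
# real image dataset). Repeated over many batches, the generator's outputs
# become increasingly hard to distinguish from real images.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 28 * 28), nn.Tanh())
D = nn.Sequential(nn.Linear(28 * 28, 256), nn.ReLU(), nn.Linear(256, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.rand(16, 28 * 28) * 2 - 1   # stand-in batch of real images in [-1, 1]
noise = torch.randn(16, 64)

# Discriminator step: label real images 1 and generated images 0.
fake = G(noise).detach()
d_loss = bce(D(real), torch.ones(16, 1)) + bce(D(fake), torch.zeros(16, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to make the discriminator call fakes "real".
g_loss = bce(D(G(noise)), torch.ones(16, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```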
Allied military forces can be even more vulnerable to disinformation through deepfakes, because coordination is harder to ensure in multinational command and control processes. The time pressure, stress, and complexity of national security operations increase the probability that personnel without enough experience will carry out deepfaked orders without question.
As the quality of deepfake production increases and falsified content becomes harder to expose, national security problems will grow for every state. Not only our online cybersecurity but also our lives and freedom will be in danger.