The Ethics of AI in Warfare
Author(s): Nimit Bhardwaj
Originally published on Towards AI.
Analyzing the moral dilemmas of using AI in military applications and autonomous weapons.
Artificial Intelligence (AI) continues to develop as a transformative force across many spheres of life, already beginning to revolutionize industries and reshape the way we live and work. As technological changes continue, AI in warfare emerges as a focal point demanding heightened scrutiny from governments, policymakers, and international bodies alike. Central to this are the significant advancements in the development of autonomous weapons systems (AWS), which use algorithms to operate independently and without human supervision on the battlefield. More broadly, AI in its many forms has the potential to enhance a range of military activities, from the likes of robotics and weaponry to intelligence gathering and decision-making.
With such diversity of application comes a distinct set of ethical dilemmas. The potential benefits of AI in warfare include increased precision, reduced human casualties, and even deterrence against entering armed conflict in the first place, akin to the threat of nuclear war. However, realizing these benefits would mean giving machines the ability to make deliberate life-and-death decisions, blurring the lines of accountability and possibly violating the fundamental principles of morality in warfare.
In this article, we will discuss how technology has changed warfare and how AI will now do so too. We will focus on the moral implications of incorporating AI into military scenarios and weapons themselves, as well as regulatory solutions.
A Brief Overview of AI in Warfare
As the Stockholm International Peace Research Institute outlines, AI has become a crucial part of military strategies and budgets, contributing to the wider "arms race" [1]. Set alongside existing nuclear and atomic threats, this forces governments and international bodies to question the ethics of the continued weaponization of technology. Some believe that these advancements will ultimately lead to zero-sum thinking dominating world politics. This logic is not new; Alfred Nobel hoped the destructive power of dynamite would put an end to all wars [2].
AI has already started to be incorporated into warfare technology such as drone swarms, guided missiles, and logistical analysis. Autonomous systems have featured in defensive weaponry for far longer, for example in anti-vehicle and anti-personnel mines. Future developments will continue to push toward greater levels of autonomy. The US is testing AI agents that can fly a modified F-16 fighter jet on their own; Russia is testing autonomous tanks; and China is developing its own AI-powered weapons [3].
The goal is to protect human life by continuing to mechanise and automate battlefields. "I can easily imagine a future in which drones outnumber people in the armed forces pretty considerably," said Douglas Shaw, senior advisor at the Nuclear Threat Initiative [3]. Instead of deploying soldiers on the ground, we saved lives by putting them in planes and arming them with missiles. Now, with AI, militaries hope to keep even more of their personnel out of harm's way.
Moral Implications of AI in Warfare
This sounds great so far. Save lives by using AI to direct drones. Save lives by using AI to launch missiles. The difference between this technological jump in warfare and past innovations is the lack of human input in decision-making. With AWS and lethal autonomous weapons systems (LAWS), we are handing the power to kill a human being over to an algorithm which has no intuitive humanity.
There are several ethical, moral, and legal issues that arise here.
Is it fair that human life should be taken in war without another human being on the other side of that action? Does the programmer of an algorithm in a LAWS bear the same responsibility in representing their country as a fighter pilot does, and/or the same right to contribute to taking enemy life?
As with the ethical dilemmas surrounding autonomous vehicles [4], is it morally justifiable to delegate life-and-death decisions to AI-powered algorithms? From a technological point of view, this will depend in part on the transparency of how AWS are programmed: the training process, the datasets used, any coded preferences, and errors such as bias in these models. Even if we reach an adequate level of accuracy and transparency, should AWS and LAWS be considered moral in warfare?
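To make this more tangible, the short sketch below shows one way an auditor might check for one such error: whether a hypothetical target-classification model misidentifies civilians as combatants at different rates across population subgroups. The function, group labels, and sample data are all invented purely for illustration, not drawn from any real system.

```python
# Purely illustrative: a minimal bias audit over a hypothetical
# target-classification model's predictions on a labelled evaluation set.
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, true_label, predicted_label) tuples,
    where label 1 means 'combatant' and 0 means 'civilian'."""
    fp = defaultdict(int)   # civilians wrongly flagged as combatants, per group
    neg = defaultdict(int)  # total civilians seen, per group
    for group, true_label, predicted in records:
        if true_label == 0:
            neg[group] += 1
            if predicted == 1:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg}

# Invented sample: sharply different error rates between groups would be
# exactly the kind of bias a transparency requirement should surface.
sample = [("group_a", 0, 1), ("group_a", 0, 0),
          ("group_b", 0, 0), ("group_b", 0, 0)]
print(false_positive_rates(sample))  # {'group_a': 0.5, 'group_b': 0.0}
```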
Moral Implications of Just War Theory
Just War Theory, with roots in the writings of St Augustine and later systematised by Thomas Aquinas in the 13th century [5], evaluates the morality of warfare and ethical decision-making in armed conflict. Across guidelines for jus ad bellum (justice of war) and jus in bello (justice in war), the most notable considerations are:
- Proportionality: The use of force must be proportional to the objective being pursued and must not cause excessive harm or suffering relative to the anticipated benefits.
- Discrimination: Also known as non-combatant immunity, this principle requires that combatants distinguish between combatants and non-combatants, and only target the former while minimizing harm to the latter.
It could be argued that the use of AI-powered weapons and LAWS does not guarantee adherence to these principles.
On proportionality, AI-backed weaponry would possess the ability to deliver force with greater speed, power, and precision than ever before. Would this level of force necessarily match the threat posed or the military objective, especially if used against a country with less technologically advanced weaponry? Similarly, what if a LAWS is fed erroneous intel, or hallucinates and produces an inaccurate prediction? This could lead to unnecessary uses of military force and disproportionate actions.
On the point of discrimination, these technologies are not 100% accurate. When firing a missile at an enemy force, what happens if facial recognition [6] technologies cannot distinguish civilians from combatants? This would undermine the moral distinction between legitimate military targets and innocent bystanders.
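To make the stakes concrete, here is a quick back-of-the-envelope calculation with invented numbers (a hypothetical population split and error rates chosen purely for illustration): even a recognition system that sounds highly accurate can flag a large share of civilians when civilians vastly outnumber combatants.

```python
# Illustrative only: invented numbers showing why a 'highly accurate'
# recognition system can still misidentify many civilians when civilians
# vastly outnumber combatants (the base-rate problem).
civilians, combatants = 99_000, 1_000      # hypothetical population
true_positive_rate = 0.99                  # combatants correctly flagged
false_positive_rate = 0.01                 # civilians wrongly flagged

flagged_combatants = combatants * true_positive_rate   # 990
flagged_civilians = civilians * false_positive_rate    # 990

share_civilian = flagged_civilians / (flagged_civilians + flagged_combatants)
print(f"Share of flagged targets who are civilians: {share_civilian:.0%}")  # 50%
```

In this invented scenario, half of everyone flagged as a combatant would in fact be a civilian, which is precisely the kind of failure the discrimination principle is meant to prevent.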
Case Study
A Panel of UN Experts reported the possible use of a LAWS, the STM Kargu-2, in Libya in 2020, deployed by the Turkish military against the Haftar Affiliated Forces (HAF) [7]. Described as being "programmed to attack targets without requiring data connectivity between the operator and the munition" [8], the drone units were eventually neutralised by electronic jamming. The involvement of this remote air technology nonetheless changed the tide of what had previously been "a low-intensity, low-technology conflict in which casualty avoidance and force protection were a priority for both parties" [7].
While the drones inflicted significant casualties, it is not clear whether they caused any fatalities [8]. Still, the incident highlights the problems raised by the unregulated, unmanned use of combat aerial vehicles and drones.
HAF units were not trained to defend against this form of attack, had no protection from the aerial attacks (which occurred despite the drones being offline), and even in retreat continued to be harassed by the LAWS. This alone begins to breach the principle of proportionality, even more so when considering that the STM Kargu-2s changed the dynamic of the conflict. Reports go so far as to suggest that "the introduction by Turkey of advanced military technology into the conflict was a decisive element in the … uneven war of attrition that resulted in the defeat of HAF in western Libya during 2020" [7].
International Cooperation and Regulation of AI in Military Applications
Since 2018, UN Secretary-General António Guterres has maintained that LAWS are both politically and morally unacceptable [9]. In his 2023 New Agenda for Peace, Guterres called for this position to be formalised and actioned by 2026. Under it, he suggests a complete ban on AWS that function without human oversight and do not comply with international law, and regulation of all other AWS.
International cooperation and regulation of this kind will be necessary to help overcome the ethical concerns discussed above. For now, the use of AWS without human oversight poses the most immediate problems. The absence of a human decision-maker raises the question of responsibility: without a chain of command, who takes responsibility for the malfunctioning or general fallibility of an AI-powered system?
Moreover, there would be an ensuing lack of accountability. Whereas traditional warfare operates under defined moral principles such as Just War Theory, here there would be no clearly culpable agent for actions taken by autonomous systems.
Finally, while there are benefits to increasingly adopting AI in military applications, how these technologies end up being used will determine whether they become a utopian solution or simply accelerate an already politically destabilising arms race.
Therefore, continued discussion around international, legally binding frameworks for ensuring accountability in AI warfare will arguably be one of the most crucial areas of AI regulation in the near future.
References
[1] R. Csernatoni, Weaponizing Innovation? Mapping Artificial Intelligence-enabled Security and Defence in the EU (2023), EU Non-Proliferation and Disarmament Consortium
[2] A. Roland, War and Technology (2009), FPRI
[3] M. Hirsh, How AI Will Revolutionize Warfare (2023), Foreign Policy
[4] N. Bhardwaj, Ethical AI and Autonomous Vehicles: Championing Moral Principles in the Era of Self-Driving Cars (2024), Hackernoon
[5] A. Moseley, Just War Theory (n.d.), Internet Encyclopaedia of Philosophy
[6] N. Bhardwaj, Bias in Facial Recognition Tech: Explore How Facial Recognition Systems Can Perpetuate Biases (2024), Hackernoon
[7] P. Reslova, Libya, The Use of Lethal Autonomous Weapon Systems (n.d.), ICRC Law & Policy
[8] H. Nasu, The Kargu-2 Autonomous Attack Drone: Legal & Ethical Dimensions (2021), Lieber Institute
[9] Lethal Autonomous Weapon Systems (LAWS) (2023), United Nations