Introduction
The renowned movie director James Cameron’s iconic film The Terminator presents a bleak future where the evil artificial intelligence (AI) entity known as Skynet triggers a nuclear conflict against humanity to ensure its own survival. Released in 1984, the film demonstrated remarkable foresight by anticipating concerns that now dominate discussions about intelligent computer systems.Footnote 1 The late esteemed scientist Stephen Hawking characterized AI as the foremost threat to human civilizationFootnote 2—an assertion consistent with historical perceptions of scientific and technological advancements, such as the invention of nuclear weapons, which stand as existential risks. In recent years, activists, scientists, and governments have advocated for United Nations (UN)-level prohibitions on “killer robots,” including Lethal Autonomous Weapon Systems (LAWS). While the technology depicted in The Terminator remains speculative, the emergence of “general AI,” a self-aware form of AI, is anticipated in the future.Footnote 3
According to the United States Department of Defense (US DOD), “AI refers to the ability of machines to perform tasks that normally require human intelligence—for example, recognizing patterns, learning from experience, drawing conclusions, making predictions, or taking action […].”Footnote 4 AI has grown significantly during the past decade. It is used in many spheres of our lives, including banking, healthcare, and transport.Footnote 5 However, certain applications of AI remain highly controversial, particularly in the development of LAWS,Footnote 6 often referred to as killer robots. Once fully developed, LAWS would constitute the third revolution in warfare, after the inventions of gunpowder and nuclear weapons.Footnote 7 AI is the principal driver behind LAWS, which can independently select targets and apply deadly force. LAWS are classified as fully autonomous weapons because of their capacity to make targeting decisions independent of immediate human control. These weapon systems can swiftly and appropriately determine whether and how to respond to a specific threat without requiring any intervention from human operators.Footnote 8 The rapid progress of these weapon systems has the potential to transform the way warfare is conducted and could significantly affect law enforcement operations. Furthermore, it raises serious apprehensions regarding the infringement of human rights, potentially compromising the right to life and various other rights.Footnote 9
The rise of autonomous weapons has prompted grave concerns and has begun to receive criticism across international and non-governmental organizations (NGOs).Footnote 10 For instance, Christof Heyns stated that the use of lethal autonomous robots (LARs) “may not be acceptable due to the lack of an effective legal accountability system, and because robots should not possess the authority to make decisions regarding human life and death.”Footnote 11
Additionally, members of the international community, including international organizations and numerous States, are advocating for the explicit prohibition of LAWS. They are urging the adoption of international regulations to govern the development and deployment of these weapon systems. Given the multiple issues that occur with autonomous weapons, such as possible harm to civilians, lack of accountability, prospective arms races, and ethical concerns around granting machines the power to decide matters of life and death, resistance to these weapons is not surprising.Footnote 12 Concerning the deployment of killer robots, UN Secretary-General António Guterres opined that “the prospect of machines with the discretion and power to take human life is morally repugnant.”Footnote 13
However, strong military nations such as the US, the United Kingdom (UK), and Russia oppose new international norms on the regulation of autonomous weapons.Footnote 14 For them, the existing international humanitarian law (IHL) that regulates warfare is sufficient to regulate future weapon systems laden with cutting-edge technologies. They also highlight the potential advantages of the increased autonomy that AI weapon systems can bring, arguing that such weapons could be more selective in their targeting and thereby decrease the threats posed to civilians.Footnote 15 China, Israel, Russia, South Korea, the UK, and the US are among the countries making significant investments in the creation of autonomous weaponry. Furthermore, States such as Australia and Turkey are also allocating resources for the production of LAWS.Footnote 16
This paper proceeds as follows: The second part defines LAWS. The third part discusses the advantages and major concerns of robotic weapon systems. The fourth part analyzes the human rights challenges posed by using LAWS in armed conflict and law enforcement operations. The last part concludes the paper.
Defining LAWS: What are they?
Before defining LAWS, it is important to understand autonomy. Autonomy may be defined as a machine’s ability to function without further human intervention after it has been initiated.Footnote 17 Different degrees of autonomy can be observed in various weapons. Simple weapons like landmines possess a minimal level of “autonomy”: they are planted by a party and subsequently explode automatically when triggered, without any further interaction with the party that deployed them. Semi-autonomous weapon systems, such as the air-launched fire-and-forget missiles commonly used in modern warfare, demonstrate a moderate level of autonomy—that is, after being targeted by the pilot, these missiles lock onto the designated target and carry out the attack without requiring further human involvement.Footnote 18
However, certain existing weapon systems exhibit a higher degree of autonomy. For instance, the US Navy and the Israeli Army employ weapon systems capable of autonomously detecting, tracking, and firing at incoming missiles based on pre-programmed parameters. Similarly, autonomous sentry guns positioned at the border between North and South Korea automatically engage any objects within their range. Nevertheless, the weapon systems currently under design and development by many countries are expected to possess significantly higher levels of autonomy. They are designed to possess the capacity to control their movements, identify targets independently, and make decisions to engage and eliminate these targets without any human intervention. By incorporating robust AI, these future weapon systems might even learn and autonomously determine the most effective strategies for targeting.Footnote 19
There is no single, universally accepted definition of LAWS.Footnote 20 However, the US DOD’s definition is widely discussed globally; it describes autonomous weapon systems as “weapon systems that, once activated, can select and engage targets without further operator intervention.”Footnote 21 The primary requirement of the US DOD definition is that a weapon system must be capable of operating independently and must “select and engage targets” after activation without further human involvement. This is also the opinion of Human Rights Watch, which claims that any robot capable of selecting and striking targets without human assistance is a fully autonomous weapon.Footnote 22
Likewise, in a United Nations (UN) Human Rights Council report, the former UN Special Rapporteur on extrajudicial, summary, or arbitrary executions, Christof Heyns, stated that “the term LARs refers to robotic weapon systems that, once activated, can select and engage targets without further intervention by a human operator. The important element is that the robot has an autonomous ‘choice’ regarding selection of a target and the use of lethal force.”Footnote 23
One of the most vocal critics of LAWS, the Campaign to Stop Killer Robots (an NGO), stated that “[k]iller robots are weapon systems that would automatically select and attack targets without meaningful human control. This means the decision to kill would be made by a machine.”Footnote 24
However, various nations have developed their own definitions of LAWS based on distinct criteria that particularly focus on the technological complexity of the weapon system. According to these definitions, LAWS are considered weapon systems that have the capability to exhibit cognition at a level comparable to humans. Conversely, some argue that a definition of LAWS is unnecessary or even undesirable. Nonetheless, despite these disparities, most participants in the discussions on LAWS generally concur that the defining characteristics of LAWS encompass complete autonomy (without manual human control) and the capacity to cause lethal outcomes.Footnote 25
The Use of LAWS in Armed Conflict: Potential Advantages and Major Concerns
Advantages
Many States are investing significant amounts of money to develop LAWS because of the several advantages these systems offer on the battlefield.Footnote 26 Firstly, LAWS can process data and make complex targeting decisions more quickly than human soldiers. Secondly, these weapon systems are machines; as a result, their decisions would not be clouded by emotions such as fear or anger, and they could respond to stimuli more reliably than human forces.Footnote 27 Thirdly, LAWS would not possess the human instinct of self-preservation, which can cause rash decision-making and a shoot-first psychology. Also, LAWS could perform a military task for a longer duration without becoming bored, stressed, or fatigued.Footnote 28
Additionally, LAWS would be more suitable than conventional weapons for use in remote areas because they would not require a human operator to function.Footnote 29 Furthermore, LAWS would reduce casualties among human soldiers by reducing the number of soldiers sent to the front lines, and they could access dangerous areas of the battlefield that are beyond the reach of armed forces. The use of LAWS can also have economic advantages by lowering the costs of military missions; for example, fewer personnel may be required to complete the same task, freeing soldiers for more complex jobs.Footnote 30 In addition, LAWS would cost less to field than soldiers on the battlefield, increasing the likelihood of their deployment over human soldiers. For instance, an American soldier costs around US$850,000 per year for deployment in Afghanistan, while a TALON robot can cost around US$230,000 annually.Footnote 31
Lastly, the supportersFootnote 32 of autonomous weapons contend that these systems can adhere to international law more effectively than human soldiers in combat situations. They may act more cautiously and judiciously than soldiers because they do not possess the instinct of self-protection.Footnote 33
Major concerns
OpponentsFootnote 34 of LAWS, such as Human Rights Watch, the Campaign to Stop Killer Robots, and others, have argued that autonomous weapons should be outlawed altogether because they cannot be programmed to respect the laws of war; complying with the rules of IHL requires human qualities that are absent in LAWS.Footnote 35 Essentially, critics argue that LAWS cannot comply with the rules of armed conflict and should therefore be banned. Moreover, one of the major concerns linked to the employment of LAWS pertains to the potential lack of accountability for IHL infringements.Footnote 36 Individuals are held accountable under IHL for any unjust killings of civilians. Consequently, any weapon that fails to satisfy the criteria outlined in the laws of war must not be deployed.Footnote 37 Furthermore, any failures or glitches in LAWS might lead to the loss of innocent lives. Even if incorporating autonomy into LAWS enabled them to differentiate between combatants and non-combatants, any catastrophic malfunction in the robots would have devastating consequences.
Indeed, these devices are susceptible to faults, and AI systems struggle to adjust to unanticipated conditions.Footnote 38 Additionally, an adversary might undermine AI weaponry through data corruption or software manipulation. Furthermore, opponents of LAWS contend that not all scenarios that LAWS may encounter during a conflict can be programmed in advance, and any mistake in the weapon systems might result in a significant number of civilian fatalities or an inadvertent attack on the enemy. Moreover, once triggered, a malfunctioning LAWS cannot be turned off; it will continue the attack until its energy or ammunition runs out.Footnote 39 Critics further caution that the employment of LAWS may eradicate the feeling of compassion and human dignity in warfare. A combatant, for example, would not assault an enemy who is asleep or taking a shower, but LAWS would not make similar moral decisions. Giving autonomous robots the authority to decide between life and death also strips troopsFootnote 40 of their human dignity and reduces soldiers to inanimate things.Footnote 41
Human Rights Challenges Posed by LAWS in Armed Conflict and Law Enforcement Operations
This section analyzes the human rights issues raised by LAWS. In doing so, the authors argue that the deployment of LAWS could violate human rights, such as the rights to life, human dignity, and remedy, among others.Footnote 42 The argument rests on the grounds that LAWS are machines that lack the human attributes needed to comply with the rules of international law on the battlefieldFootnote 43 and in law enforcement operations in the domestic sphere.Footnote 44 This is because there are infinite circumstances during war and law enforcement operations that cannot be entirely programmed into machines to comply with the rules of war.Footnote 45 In other words, LAWS cannot be coded for every scenario so as to comply with international law, especially in the area of human rights.
Right to Life and LAWS
In the context of warfare, the principles and provisions of both human rights law and IHL apply. The principles of humanitarian law mandate that belligerent parties consistently differentiate between combatants and civilians during armed conflicts; solely combatants are legitimate targets, while civilians are entitled to protection. The parties must also distinguish between civilian objects (such as schools, hospitals, and churches) and military objectives, meaning that only military objectives can be targeted in attacks. They must take steps to minimize harm to civilians and civilian objects, and they must discriminate between civilians and combatants while attacking. An attack must not inflict unjustified damage on civilians and civilian objects.Footnote 46
However, LAWS would be unlikely to abide by IHL norms such as distinction and proportionality, since assessing conditions during combat is a complicated process that requires human judgment, which LAWS lack. As the former UN Special Rapporteur on extrajudicial, summary, or arbitrary executions, Christof Heyns, observed, the assessment of such situations necessitates human qualities and judgment that lethal robots do not possess, and this absence makes it difficult to meet international standards.Footnote 47 The right to life remains intact even during war owing to its inviolable nature; thus, even in situations of armed conflict, arbitrary killing amounts to unlawful killing.Footnote 48 According to Rule 89 of the International Committee of the Red Cross’s customary IHL rules, “arbitrary deprivation of the right to life under human rights law includes the unlawful killing in the conduct of hostilities.”Footnote 49 As mentioned previously, the application of LAWS risks non-compliance with the rules of distinction and proportionality under IHL. As a result, LAWS could arbitrarily kill civilians and thereby infringe their right to life during warfare.Footnote 50
Likewise, in the realm of law enforcement, it is imperative to strictly adhere to international human rights law (IHRL) and uphold generally accepted policing principles, such as the UN Basic Principles on the Use of Force and Firearms (UNBPUFF). The deployment of LAWS without meaningful human control in such contexts would inevitably lead to unlawful harm and fatalities,Footnote 51 thereby violating human rights law standards. The utilization of LAWS within the law enforcement domain may endanger essential human rights, such as the right to life outlined in article 6(1)Footnote 52 of the International Covenant on Civil and Political Rights (ICCPR). Under the principles of human rights, only a limited number of situations warrant the application of deadly force. Firstly, such force must be legally permitted in conformity with accepted international standards. Secondly, it should be used to safeguard human life. Thirdly, it should only be used as a final option. Fourthly, the use of force should be proportionate to the threat at hand. Lastly, officials must be held responsible for breaches of the law.Footnote 53
Under Principle 9 of the UNBPUFF, law enforcement officials are authorized to employ lethal force solely in situations where there is an immediate and substantial risk of death or serious injury.Footnote 54 This requires a thorough assessment of potential or imminent threats: identifying the source of danger, exploring non-violent alternatives, evaluating whether lethal force is necessary to neutralize the threat, using various communication methods to defuse the situation, determining appropriate weapon or equipment use, and prioritizing the preservation of the right to life. Given the constantly shifting, volatile, and unpredictable nature of law enforcement operations, it is beyond the reach of lethal autonomous robots to adequately reproduce these essential decision-making skills, which are unique to humans.Footnote 55 Using LAWS in domestic law enforcement therefore puts the right to life at risk. Moreover, before resorting to the use of force, law enforcement officials are expected to employ non-violent methods, such as negotiation, persuasion, and de-escalation. These methods rely on human qualities, such as empathy, negotiation skills, an understanding of crowd behavior, and extensive training, to respond effectively to fluid and unpredictable situations. Such attributes cannot be effectively replicated by algorithms.Footnote 56 Thus, such machines may violate the right to life granted under international law.
Human Dignity and LAWS
Many scholars argue that the use of autonomous weapons would violate human dignity.Footnote 57 The preamble of the Universal Declaration of Human Rights (UDHR) states that “recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice, and peace in the world.”Footnote 58 All human beings possess inherent human dignity,Footnote 59 which may be under threat from LAWS, as delegating the power to take human lives to machines devalues human life.Footnote 60 Killer robots can take the lives of human beings, but because such weapons are mere machines, they would be unable to uphold the dignity of the humans being targeted.Footnote 61 Nor would they be able to understand the importance of an individual human life or the significance of its loss.Footnote 62 LAWS not only risk the lives of civilians but also pose a threat to human dignity by giving machines the power to kill humans.Footnote 63 Likewise, Mary Ellen O’Connell argues that the mechanized killing of humans by LAWS raises concerns for human dignity, as killer robots lack a human conscience when taking lives. In the US, a growing number of robots already slaughter cattle,Footnote 64 without a human conscience; in the future, machines could likewise decide who dies and carry out the killing. The Vatican’s Archbishop Silvano Tomasi emphasizes that human qualities like compassion and insight are essential for decision-making over life and death. Machines cannot possess these qualities, whereas even emotionally or psychologically impaired individuals remain humans with dignity. It follows that machines taking human lives violates human dignity.Footnote 65
Additionally, there is a likelihood that the deployment of LAWS could infringe upon the prohibition of torture and other cruel, inhuman, or degrading treatment or punishment. Like the ban on willful killing, the prohibition of torture applies under all circumstances, including war, and cannot be excused or waived. No exceptions are allowed to this basic principle of international law, which imposes responsibilities on all countries regardless of their treaty obligations.Footnote 66 Moreover, under article 17 of the ICCPR,Footnote 67 which covers the right to privacy, LAWS may infringe on people’s right to privacy, as well as their right to equality and freedom from discrimination under article 26 of the ICCPR.Footnote 68
Constructing targeting algorithms and establishing data patterns for LAWS would require massive amounts of data, which these systems would use to decide when and against whom to employ force. Because of this potential for enormous data collection and indiscriminate mass monitoring, LAWS pose a serious threat to people’s right to privacy.Footnote 69 Moreover, the extensive gathering and analysis of individual data may also affect the principle of equality and non-discrimination. Machine-learning systems can significantly and swiftly reinforce or alter existing power dynamics, as the datasets used to train algorithms often encompass historical prejudices that are subsequently replicated and intensified.Footnote 70 As demonstrated in research conducted by the American Civil Liberties Union (ACLU), the facial recognition software known as “Rekognition” produced erroneous results by misidentifying twenty-eight members of the US Congress as individuals who had previously been arrested.Footnote 71 Notably, the inaccuracies were disproportionately skewed towards people of color, with six members of Congress’s Black Caucus among those falsely matched. Hence, LAWS have the capacity to perpetuate systemic discrimination, potentially leading to severe and life-threatening circumstances.Footnote 72
Right to Remedy and LAWS
Recourse to autonomous weapons presents significant obstacles to the realization of the right to seek remedies for human rights infringements in both armed conflicts and domestic law enforcement. Article 8Footnote 73 of the UDHR and article 2(3)Footnote 74 of the ICCPR stipulate that the right to remedy is universally applicable to all instances of human rights violations, and this right is encompassed in several human rights treaties. Under the right to remedy, nations must guarantee personal accountability by prosecuting those implicated in severe breaches of human rights. According to General Comment no. 31 of the UN Human Rights Committee, it is incumbent upon States to investigate allegations of misconduct and, where evidence of specific violations is discovered, to take appropriate measures to hold the offenders accountable as mandated by the ICCPR. The failure to conduct an inquiry and bring the offender(s) to justice may itself amount to a distinct violation of the ICCPR. The UN General Assembly’s 2005 adoption of the Basic Principles and Guidelines on the Right to a Remedy and Reparation reaffirms the duty to conduct investigations and pursue legal remedies. These norms require States to hold liable those proven to have committed human rights infringements.Footnote 75
The responsibility to initiate legal proceedings encompasses activities conducted in the context of law enforcement operations or military hostilities. According to the Basic Principles and Guidelines of 2005, action must be taken against grave IHRL breaches, including arbitrary killings. The principles also encompass “serious violations of international humanitarian law,” which specifically apply to armed conflicts.Footnote 76
The Fourth Geneva ConventionFootnote 77 and Additional Protocol IFootnote 78 are essential legal frameworks for safeguarding civilians under IHL. These agreements mandate that the parties undertake legal proceedings against “grave breaches,” which encompass war crimes that involve intentional attacks on civilians or assaults that are disproportionate.Footnote 79 The right to remedy goes beyond criminal prosecution and includes reparations, which encompass restitution, compensation, rehabilitation, satisfaction, and assurances of non-recurrence. Moreover, the Basic Principles and Guidelines of 2005 impose an obligation on States to implement decisions on claims brought by victims against individuals or entities.
The preceding norms highlight access to legal remedies and reparations for transgressions of IHL and IHRL. As a result, individuals who have suffered injury are eligible to receive some form of remedial action, which may go beyond penal measures in the form of reparations. The right to remedy serves two important purposes. Firstly, it aims to deter future violations by implementing measures to prevent the recurrence of similar breaches. Secondly, it provides retribution, giving victims the satisfaction of seeing someone held accountable for the harm they endured.Footnote 80 Further, punishment conveys to wrongdoers that they have committed an offense, acknowledging the victim’s suffering and upholding their moral claims.
However, when it comes to determining a remedy and establishing accountability for the actions of LAWS, robots lack agency, whether moral or otherwise.Footnote 81 In other words, there is a kind of accountability gap concerning LAWS usage, and as a result, such weapons are deemed illegal and unethical.Footnote 82 Not only does the deployment of LAWS raise concerns about an accountability gap for violations that occur during wartime or law enforcement scenarios, but the victim’s right to remedy would also be violated.
The decision-making process of these autonomous machines, which involves life-and-death determinations without meaningful human intervention, creates an accountability gap: it becomes unclear who should be held liable. Assigning responsibility to the autonomous machine itself is impractical, since it cannot experience physical or psychological pain and cannot be punished like a human being. Other potential candidates for accountability, such as the manufacturer, programmer, superior officer, or commander, also face legal and practical obstacles, because anticipating all possible scenarios and predicting the actions of the autonomous machine is a challenge. Existing laws may not establish accountability fairly, leaving a gap in which no human can be accused or held responsible. The absence of accountability or retribution for their suffering could leave victims frustrated. Furthermore, this could compromise the objectives of deterrence and retribution, which are crucial for attaining justice in such circumstances.Footnote 83
Moreover, persons in authority who use or develop LAWS expressly aimed at serious violations of the right to life would be criminally liable under relevant laws and punishable accordingly. However, proving criminal intent in such cases can be challenging and is likely to be uncommon, especially among representatives of States that generally adhere to international legal standards. The more concerning scenario arises when a fully autonomous weapon causes an arbitrary killing without any evidence of human intent or foreseeability. In such situations, no individual can be directly held accountable for the decision to initiate the attack, making it difficult to establish even indirect liability.Footnote 84
The recent use of drones illustrates this lack of investigations into unlawful killings. For example, an Amnesty International report highlighted grave concerns regarding US drone strikes in Pakistan: the US government failed to conduct proper investigations, and prosecutions for the resulting human rights violations were not pursued.Footnote 85 Even though drones are not considered fully autonomous weapons, investigating associated human rights abuses has proved difficult, and those responsible remain unpunished. As a result, Pakistani victims of the drone attacks (and their families) did not obtain justice, even though humans had control over the US drones.Footnote 86
It is clear from the above that even with human control over the use of armed drones, accountability and prosecution for human rights abuses have remained elusive. Removing human control altogether from weapon systems like LAWS would thus render it almost impossible to establish accountability for IHRL violations and to vindicate the right to remedy.Footnote 87
Conclusion
The advent of AI-powered LAWS marks a transformative yet deeply concerning shift in warfare and law enforcement. While LAWS’ compliance with IHL has been extensively debated, this paper has analyzed the challenges that LAWS poses to IHRL. LAWS, devoid of human qualities needed to interpret complex ethical and legal contexts, pose a grave risk to fundamental rights, including the rights to life, security, privacy, and human dignity. Additionally, LAWS’ lack of accountability mechanisms threatens to undermine affected individuals’ right to remedy.
To address these challenges, the international community must act decisively. This includes maintaining meaningful human control over LAWS to ensure adherence to international legal standards and prevent unlawful civilian harm. Furthermore, the authors underscore the urgency of prohibiting the use of killer robots against humans, advocating for the establishment of robust international regulations to safeguard human rights and dignity in the face of advancing AI technologies. Only through such concerted efforts can humanity ensure that technological progress does not come at the expense of our shared legal and ethical values.