Thursday, November 21, 2024

Will Autonomous Weapons Turn Humans into Passive Participants in Warfare?

Throughout history, nations have consistently harnessed new technologies to secure advantages in warfare. The current global focus on the development and deployment of lethal autonomous weapons systems (AWS) continues this trend. Those who master this technology may wield impressive military capabilities, with advocates arguing that it could foster peace through deterrence. Critics, however, assert that such systems might provoke conflict and strip warfare of its humanity by allowing cold algorithms to dictate life-and-death decisions.

Both viewpoints may hold validity, as the impact of AWS is likely to vary depending on the context in which they are employed. A crucial factor is the extent to which human operators relinquish control to machines, particularly as the pace of conflict accelerates. One point of consensus regarding AWS is their potential to dramatically accelerate the tempo of warfare.

In an open letter to the United Nations (UN) in 2017, over 100 experts in artificial intelligence (AI) and robotics cautioned that AWS could enable warfare at an unprecedented scale and speed, leaving human decision-making behind. This trend raises pressing questions about the effectiveness of arms control, as there is ongoing debate regarding humanity’s ability to regulate lethal technologies that can outpace human thought and, potentially, operate independently.

The Campaign to Stop Killer Robots asserts that stringent international regulations are essential to prevent the misuse and spread of AWS. This stance is supported by numerous smaller nations, Nobel Peace Prize recipients, and several scholars focused on peace and security. In contrast, military powers, including the UK, China, India, Israel, and the US, resist binding legal frameworks in favor of promoting responsible use through human-in-the-loop principles. Under this concept, a human operator must always oversee and authorize any use of force by an autonomous weapons system.

However, advancements in AWS technology are already significantly shortening the OODA loop, military shorthand for the cycle of observation, orientation, decision, and action that governs combat operations. Automation bias, the tendency of users to favor computer-generated decisions over their own judgment, compounds the risks posed by faster decision-making. Consequently, it remains uncertain whether operators will retain meaningful control over the AWS they command.
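To make the shortened loop concrete, here is a minimal Python sketch of a single OODA cycle with a switchable human-in-the-loop gate. Every name in it (observe, orient, human_decision, and so on) is hypothetical and stands in for vastly more complex military systems; the point is purely structural: removing the human decision step collapses the cycle from seconds or minutes of deliberation to microseconds of computation.

```python
import random
import time

def observe():
    # Hypothetical sensor read; a real system would fuse radar, video, etc.
    return {"contact_detected": random.random() > 0.5}

def orient(observation):
    # Hypothetical threat assessment derived from the observation.
    return "threat" if observation["contact_detected"] else "clear"

def human_decision(assessment):
    # Human-in-the-loop gate: an operator must explicitly authorize force.
    answer = input(f"Assessment: {assessment}. Authorize engagement? [y/N] ")
    return answer.strip().lower() == "y"

def machine_decision(assessment):
    # Fully autonomous gate: the algorithm authorizes itself in microseconds.
    return assessment == "threat"

def act(engage):
    print("Engaging target." if engage else "Holding fire.")

def ooda_cycle(human_in_the_loop=True):
    observation = observe()           # Observe
    assessment = orient(observation)  # Orient
    decide = human_decision if human_in_the_loop else machine_decision
    act(decide(assessment))           # Decide, then Act

if __name__ == "__main__":
    start = time.perf_counter()
    ooda_cycle(human_in_the_loop=False)  # machine speed: no pause for judgment
    print(f"Autonomous cycle completed in {time.perf_counter() - start:.6f} s")
```

In the autonomous configuration, nothing in the loop waits for judgment; the only moderating influence left is whatever thresholds the designers fixed in advance, and it is precisely this loss of the human pause that critics fear.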

The danger is compounded by the notion of “addictive intelligence,” wherein AI systems may appear benign while subtly influencing human behavior in unforeseen ways. As MIT technologist Pat Pataranutaporn describes it, this can lead to situations where military AI overrides human reasoning in favor of more aggressive, algorithm-driven strategies, even recommending scorched-earth tactics to secure swift victories.

The landscape of warfare is already undergoing transformation. Russia’s war in Ukraine has coincided with marked advancements in the accessibility of machine learning, visual recognition, and robotics. The conflict has created a uniquely data-rich environment for the testing and validation of AWS. Ukrainian defense strategy increasingly revolves around drone warfare, marked by the establishment of the world’s first Unmanned Systems Forces and collaborations with NATO to secure funding for drone procurement.

However, drones are deployed in conflict zones well beyond Eastern Europe, including Gaza, Myanmar, Sudan, Ethiopia, and Syria. The US military operates AI-controlled vessels that patrol crucial maritime routes, while autonomous sentry guns stand watch in the Korean demilitarized zone. Fully autonomous drone attacks have already been reported in Libya, underscoring how quickly intelligent weapon systems are becoming a reality of modern warfare.

The integration of advanced technology into military operations signals a potential shift in military doctrine, with Silicon Valley startups upending traditional defense strategies. As former Google CEO Eric Schmidt and former Chairman of the US Joint Chiefs of Staff Mark Milley have argued, future confrontations may hinge not on troop numbers or equipment but on AWS and sophisticated algorithms that enable rapid military planning and execution.

Despite the potential benefits, many observers urge caution about the implications of widespread AWS integration. The complex nature of warfare often requires human consideration of casualties, competing viewpoints, and bureaucratic processes, all of which can temper the impetus for military action. The introduction of AWS could eliminate these moderating factors, leading to more hasty and unchecked uses of force.

Concerns around accountability and transparency are paramount, as experts warn that meaningful human oversight becomes difficult once AWS function autonomously. The opacity of AI decision-making can further obscure accountability, complicating efforts to reconstruct and judge wartime actions.

While many military powers assert their commitment to human oversight of AWS, interpretations of this principle are inconsistent and often lack specificity. As the technology advances, human operators may become increasingly removed from the systems they nominally control, making meaningful oversight harder to exercise in real-world scenarios.

Examples abound of emerging autonomous systems that can carry out attacks independently, raising ethical questions about their use in conflict. Critics point to recent conflicts, such as Israel’s operations in Gaza, as evidence that AWS do not inherently lead to more precise or humane warfare. Reforms to ensure rigorous training and adherence to ethical principles are essential, according to Schmidt and Milley, who advocate international standards governing AWS that align with human rights values.

To navigate these challenges, they suggest a proactive approach that involves initiating discussions around the ethical application of AWS, potentially paving the way for regulatory frameworks at both domestic and international levels. Recently, a summit in Seoul saw 61 countries, including the US, endorse a “blueprint for action” aimed at responsibly incorporating AI in military operations, emphasizing the importance of retaining human control over AWS.

As autonomous weapons become integral to 21st-century warfare, the future of human decision-making in military strategy remains uncertain. The coming decades will reveal whether humanity’s role in warfare can endure alongside the rise of self-governing weapon systems.