The debate over autonomous weapons raises significant ethical and legal questions, with calls for regulation amid concerns about civilian safety and the role of AI in warfare.

The ongoing debate surrounding the use of autonomous weapons and artificial intelligence (AI) in modern warfare has garnered significant attention, particularly concerning its ethical, social, and legal ramifications. The dialogue includes insights from eminent figures in the field, such as Peter Asaro, an Associate Professor at The New School in New York and Vice-Chair of the Campaign to Stop Killer Robots, who has dedicated considerable time to examining the implications of advanced technology for human rights and global security.

Asaro defines autonomous weapons as systems capable of targeting and engaging without human intervention once activated. He contrasts these with guided munitions, where a human selects the target, emphasising that the unpredictability associated with autonomous systems raises substantial ethical concerns. His advocacy work at the United Nations focuses on pushing for an international treaty that would prohibit and regulate these weapons to minimise potential harm.

The campaign against autonomous weapons is at a critical juncture, having been discussed at the UN’s Convention on Certain Conventional Weapons (CCW) in Geneva for over a decade. However, progress has been hindered by the CCW’s consensus-based approach, which allows any participating state to block an agreement. Asaro’s team is now seeking to transition the discussion to the UN General Assembly in New York, a venue that facilitates broader participation and a more democratic conversation about the implications of these technologies beyond military applications, including their use in policing and border control.

In terms of desired outcomes, Asaro envisions a legally binding treaty structured in two tiers. The first tier would ban weapons classified as inherently unpredictable or uncontrollable, as well as anti-personnel systems; the second tier would regulate automated targeting systems, establishing controls over deployment and the munitions used, and ensuring that decisions involving civilian risk are always made by humans.

Recent reports from Libya concerning a Turkish-made drone that can operate in both remotely piloted and autonomous modes underscore the urgency of these discussions. While the operational status of this drone remains unclear, it exemplifies the advancing capabilities of autonomous systems in warfare. Other instances, such as the use of “terminal targeting” drones in Ukraine, highlight a shift towards semi-autonomous capabilities in military operations.

Asaro also addresses the philosophical implications of AI tracking and targeting based on visual recognition, drawing parallels with the digital representations found in social media. He further elaborates on the challenges posed by AI “hallucinations”—outputs generated by AI systems that can misrepresent or create false scenarios—emphasising the unreliability of these technologies in high-stakes environments like warfare.

The application of AI in military contexts extends beyond autonomous weaponry. In Gaza, for example, AI systems are employed to analyse extensive surveillance data and produce “prepared targeting lists”, drawing heavily on sources including facial recognition. This dependence on AI for decision-making has raised concerns about the potential for increased civilian casualties and the entrenchment of biased practices.

Over the years, militaries have often promoted technological advances as tools that would render warfare more humane. However, Asaro challenges this narrative, noting research that suggests targeted killings can exacerbate terror recruitment and community resentment, perpetuating cycles of violence rather than addressing root causes.

The rapid advancement of technology has also made warfare more asymmetrical, shifting risk from military personnel to civilian populations. This has prompted discussion of whether the laws of war, increasingly seen as outdated, remain adequate for urbanised and asymmetric conflicts in which distinguishing combatants from civilians is inherently complex.

Asaro expresses concern about society’s growing reliance on machines for moral and legal decision-making, cautioning against the psychological distance that can arise from delegating such responsibilities to technology. Although he acknowledges the potential benefits of technological advancement, provided there is ethical oversight, he warns that relying solely on military frameworks may not address the broader social issues that contribute to conflict.

Looking ahead, Asaro is currently investigating AI’s role in deception and manipulation relating to data collection practices. He aims to develop regulatory frameworks that can protect individuals from exploitation in a landscape where vast amounts of personal data are harnessed for targeted influence.

Finally, Asaro reflects on the evolving relationship between knowledge and power in an era marked by digital media and AI. He highlights a concerning trend where narrative shaping has moved into automated spaces, urging the importance of media literacy and independent dialogue to counteract the challenges posed by automated decision-making processes in society.

Source: Noah Wire Services
