Fifteen Eighty Four

Academic perspectives from Cambridge University Press

17 Jul 2020

Is an Autonomous Weapon System Just a Machine?

Tim McFarland

For some years now, countries around the world have been working to develop increasingly autonomous weapon systems (AWS) for use on future battlefields: that is, weapons which can do some of the job of selecting and attacking targets on their own, relieving humans of the need to manually perform every step of the process (and perhaps also of the need to closely supervise the weapons). The motivations for developing AWS include the possibility of enhanced operational effectiveness, force protection, financial savings and superior legal compliance. However, they are also a source of considerable controversy, with calls for measures to ensure that ‘meaningful’ levels of human control are retained, for moratoria on development, or even for a complete pre-emptive ban. Whatever one’s personal views, it is difficult to deny the significance of this path of development, as a source of disagreement and international tension if nothing else. It is therefore troubling that the question of whether and how to regulate the development and use of AWS is proving very difficult to answer.

The international debate about regulating AWS development has been ongoing for approximately seven years in United Nations forums, and interest among policy-makers, academics and others had been growing for several years prior to the inception of the UN process. Sadly, progress has been sorely lacking. Far from reaching an accord about appropriate regulatory measures, participants remain mired in disagreement even about how to define AWS for the purposes of discussing whether and how to regulate them. Meanwhile, technological progress continues and the precursors to tomorrow’s highly autonomous weapons are gradually being integrated into modern armed forces. There is a real chance that, if current regulatory efforts stall, there will be no further opportunities for comprehensive international regulation beyond what is provided by existing law.

Of course, in a complex international political process, many factors may contribute to a lack of consensus, such as State-specific strategic concerns and differing ethical stances. A striking feature of this debate, though, has been the persistence of conflicting views about the fundamental question of what ‘autonomous’ means in the context of weapon systems and other machines.

Some parties take a mechanistic view, seeing AWS as simply computer-controlled weapon systems which employ highly capable, but still human-designed, control systems to perform complex tasks which would otherwise require manual intervention. From this perspective, AWS are not and will not be greatly different to other weapon systems. Indeed, weapon systems which have been in service for decades may be said to have some capacity for autonomous operation: the various types of point defence systems which can automatically detect and fire upon incoming missiles or aircraft to protect a military base or vessel are a prominent example.

Others see AWS in a more anthropomorphic light. On this view, AWS will ‘step into the shoes’ of human operators, acting with some degree of independence to perform increasingly complex tasks, including tasks to which legal obligations apply; they should therefore be regulated as something more akin to the human decision-makers whom they displace. Can AWS abide by the principles of distinction and proportionality? Is it morally permissible to allow a machine to ‘make decisions’ about the application of lethal force?

The difference is very important in a regulatory discussion. If autonomy simply refers to a capacity for self-management which a weapon system may possess to some degree, the legal questions to be addressed are those which apply to the adoption of any type of weapon. They revolve around whether the State possessing the AWS and its armed forces personnel are able to meet their existing legal obligations by using that weapon in the course of a conflict.

If, on the other hand, AWS are somehow fundamentally different to other weapons, some novel questions arise. Are they still to be assessed according to relevant rules applicable to weapons or means of warfare? Or should they be judged by their ability to meet the legal obligations that would otherwise have been borne by a soldier using a manually operated weapon for the same operation?

It is difficult to see how regulatory efforts can produce a useful outcome until this basic disagreement is overcome. Nor is the problem confined to regulation of weapon systems. It is likely that analogous questions will need to be answered in many fields in which increasingly complex tasks are being undertaken by artificial systems, such as finance and transport. Even if today’s autonomous systems are clearly human-operated machines, might they one day be seen as something else?

Autonomous Weapon Systems and the Law of Armed Conflict by Tim McFarland

About the Author

Tim McFarland

Tim McFarland is a Research Fellow in the University of Queensland Law School’s "Law and the Future of War" Research Group. He has a mixed technical and legal background, earning...

