Fifteen Eighty Four

Academic perspectives from Cambridge University Press

21 Jul 2022

What happened to the Lubanga Case? Between malfunctions and errors of AWS.

Afonso Seixas-Nunes

What does it mean to say that a weapon or a system is autonomous? As simple as this question sounds, the term opens a Pandora’s box, because ‘autonomy’ has different meanings depending on the field of knowledge we refer to. Does ‘autonomy’ mean the acceptance of a new type of agent parallel to humans? That would be the philosophical and legal understanding of the term. However, is that the meaning roboticists give to it? It certainly is not, although some roboticists and engineers envisage that deep machine learning will reach the point of replacing humans even in the most complex tasks on the battlefield, such as distinction and proportionality assessments. How does International Humanitarian Law (IHL) answer these questions? Is the emergence of autonomous weapon systems incompatible with IHL?

Since 2013, academics, NGOs and States have been discussing at the Convention on Certain Conventional Weapons (CCW) how to answer questions such as the definition of autonomous weapon systems (AWS). Five different answers have been offered by different States (Ireland; the Netherlands; the Russian Federation; the UK; and the USA), but to date very little agreement has been reached. For the purposes of this article, an AWS is a weapon system designed and programmed for a mission to be adaptive and to identify, select and engage military targets without human supervision.

The endeavour of understanding AWS demands a technical, philosophical, and legal approach in order to grasp the implications of AI, namely machine learning, for the war of the future. Lawyers cannot remain isolated in the world of legal considerations without understanding the emergent reality of AWS. For that purpose, it is vital to look at the AI paradigm and to distinguish clearly between what is possible today and what remains a hypothetical technical possibility. That AWS do not yet exist makes any analysis difficult, but, be that as it may, embedding AWS with any type of agency will take us to a new legal paradigm. Weapons will no longer be ‘instruments’, totally dependent on human agency. And now Pandora’s box opens and challenges our thinking.

Let us then imagine a situation in which IHL violations are caused by AWS. Who will be held accountable? The programmer? The military commander? The deploying State? The system itself? The latter is excluded even by those who postulate ‘agency’ for AWS. AWS will be programmed, probably with neural networks, and for a specific mission. The fact that an AWS will be designed for a mission may lead us to consider that the programmer or the military commander would be the person liable for violations of IHL caused by AWS. However, it is important to consider the different ways AWS may operate. First, they will be embedded with open algorithms (neural networks) able to collect and process new data unknown to the programmer or to the military commander. Second, AWS will be able to collect and process new data whose variety, volume, and velocity will be impossible for human operators to control. Third, in the event of violations of IHL caused by AWS, there will be no intentionality or mens rea as required by international criminal law (Articles 25 and 30 of the Rome Statute). Fourth, neural networks operate as black boxes that do not allow human operators to understand their decision-making process; moreover, they are unpredictable by definition. Fifth, it is highly questionable whether a human operator will be able to exercise any type of control once the system is activated. AWS will operate according to their ‘independent’ algorithms and may make errors in the way they perceive reality. Such errors cannot be confused with mere malfunctions or attributed to human operators.

Should it therefore be concluded that AWS will lead to a responsibility gap? The ICJ had occasion to explain that the ‘intrinsic character of the legal principles in question which permeate the entire body of law of armed conflict [. . .] apply to all kinds of weapons, those of the past, those of the present and those of the future’, and AWS will be no exception. However, as far as AWS are concerned, the traditional legal framework of accountability requires further reflection.

First, it is important to consider that AWS may give rise to violations on the battlefield caused by their own algorithms, and those violations cannot be attributed to any human operator, but only to the State. Second, AWS will introduce a new dissociation of risk: soldiers will no longer be subject to dull, dirty and dangerous missions. This advantage should not obscure the other side of the coin, that is, the dissociation of communication between human operators and the weapon system. Never before have human operators trusted weapon systems to the point of not intervening in their decision-making process. This feature alone should not be taken lightly; it should raise the question whether a higher level of responsibility should be required not only from the deploying State, but also from those designing and programming AWS. The Lubanga Case opened the door to that possibility when it accepted the category of ‘dolus eventualis’. Would that not be a fair consideration, given the level of responsibility that programmers and designers will have in the future deployment of AWS?

The Legality and Accountability of Autonomous Weapon Systems by Afonso Seixas-Nunes

About The Author

Afonso Seixas-Nunes

Afonso Seixas-Nunes, Jesuit Priest, graduated in Law, Philosophy (Vitorino Sousa Alves Award) and Theology, and holds an MSt in Human Rights from the London School of Economics and...
