Summary: I. Introduction. II. Mens Rea for War Crimes under the Statute of the International Criminal Court. III. Command Responsibility. IV. New Armament and War Crimes. V. Other Options. VI. Conclusion. VII. One Last Comment. VIII. Bibliography.
I. Introduction
Warfare has been an omnipresent phenomenon throughout human history and, as such, is linked to the technological advance of civilizations. From this point of view, it is obvious that the more technologically advanced a civilization is, the more advanced its means of killing the enemy will be.1 This principle has not changed in present times, and the development of new weaponry continues.2
At times the law has seemed to struggle to keep up with this fast-paced development; in the past, however, this struggle proved illusory: one common element persisted from the first rock used as a weapon to the latest piece of weaponry, one that could be repressed and that answered for every unlawful act. This human element remained intact through every phase that warfare has passed through.
For the first time in history, though, technology is beginning to explore real possibilities for reducing the presence of humans on the battlefield, removing this element and changing the nature of war. In this sense, it should surprise no one if current institutions prove insufficient to cope with the new scenarios arising from this development; nevertheless, the discussion still belongs to the province of speculation and might not materialize in reality, or at least not in the way we imagine it.
One of these institutions is war crimes. Given recent developments in weapons technology, questions have emerged about the adequacy of war crimes law to confront possible future conduct in which the role of humans will be diminished. In the following lines, we try to present the current state of the discussion around this topic, focusing on the mental element of these crimes according to the Statute of the International Criminal Court (SICC or Rome Statute)3 and its apparent lack of applicability.
For this purpose, we first briefly analyse the mental element for war crimes as established in the SICC, and then refer to its applicability to the most relevant example of new weaponry, Autonomous Weapons Systems (AWS), to show that some of the problems suggested might originate in a misconception of this armament.
II. Mens Rea for War Crimes under the Statute of the International Criminal Court
We take for granted several of the features of war crimes under the SICC in order to focus on their mental element. In this sense, given the characteristics of the issue, the existence of an armed conflict and the perpetrator's knowledge of it will be assumed.
War crimes are breaches of the laws and customs applicable in armed conflicts that are of such a serious character that they entail criminal responsibility for the perpetrator.4 This implies that not every breach is considered a war crime5 and that, because of the context in which they were born and apply, they are deeply linked to International Humanitarian Law (IHL).6
The legal maxim actus non facit reum nisi mens sit rea plays an essential role in the determination of criminal responsibility for the crimes contained in the SICC, including war crimes. Although it is true that many domestic criminal systems recognize some instances in which the perpetrator's state of mind has no impact on his criminal liability, these are highly limited7 (for example, cases of possession of dangerous objects) and, in the SICC and International Criminal Law (ICL) in general, not recognized at all.8
Regarding mens rea, the Rome Statute took a different path from that followed by prior ICL instruments. Since 1945, no international instrument had contained a general provision establishing the mental element required for the commission of an international crime,9 which led the tribunals working under those instruments to develop their own understanding of mens rea.10 Before the International Criminal Court (ICC), however, a person must act in accordance with article 30 of the SICC to be "criminally responsible and liable for punishment", in addition to fulfilling the material element.
Article 3011 contains the minimum mental requirements for criminal liability before the ICC, and it is also the standard formula to be applied "unless otherwise provided". In this sense, the article functions as a residual rule and, as such, is supposed to apply only in those cases where there is no rule for the specific crime.12
In the case of war crimes, the question whether each war crime should contain its own mental element was debated during the negotiations of the Rome Statute, but given the profound differences between domestic systems the problem proved hard to address.13 As a result, war crimes fall under article 30 for the determination of their mental requirement. It is noticeable that some war crimes contain very specific terms such as "wilfully" or "wantonly",14 which could be interpreted as placing those specific crimes outside the realm of article 30;15 nevertheless, these additional terms are clearly a product of the parties' desire to respect and adopt the language used in the original treaties that contained such crimes16 and consequently do not constitute any kind of special mental element.
Under article 30, for a crime, in this case a war crime, to be brought before the ICC, it must be committed with intent and knowledge. These subjective elements do not cover all the objective ones17 (conduct, consequence and circumstances)18 equally: intent attaches to the conduct and the consequence, and knowledge to the consequence and the circumstances. In this sense, these concepts mean that the conduct is performed voluntarily; that the perpetrator wanted to cause the consequence, or at least was aware that it was going to happen, even if undesired, as a predictable result of his conduct;19 and that the perpetrator had, again, at least to be aware that certain circumstances surrounding his conduct existed (such as excessive loss of civilian life as a consequence, or the protected character of humanitarian personnel as a circumstance), regardless of his legal appreciation of them.
By determining that the crime must be committed with intent and knowledge, and that the latter requires that consequences not only might happen but will happen,20 the SICC sets the mental element bar very high21 and leaves out other forms of culpability, such as recklessness, that might be present in war crimes.22
Now, does this mean that the commission of war crimes happens exclusively with intent and knowledge, or that the ICC will hear only those war crimes committed that way? It is true that the relevant article reads "unless otherwise provided", and that this should not be understood as limited to the SICC but as also including customary international law and other treaties,23 which would open the door to considering new forms of culpability where applicable. Nevertheless, this reasoning does not explain why recklessness, initially considered, was dropped during the negotiations of the Rome Statute.24
In our opinion, nothing in the reading of article 30 and its relation to other sources of international law would prevent the ICC from hearing war crimes committed recklessly; but given the very nature of the Court, which is not exactly the first choice for trying perpetrators, chances are that in the case of war crimes it will be reserved for trying only the worst kind of war criminals, that is, those who acted with intent and knowledge. Moreover, the analysis of war crimes and new weaponry will be based on this form of culpability.
In sum, it is possible to say that war crimes under the SICC must be committed with intent regarding the conduct and the consequences, and with knowledge of the latter and of the circumstances.
III. Command Responsibility
Now, special mention is due to the concept of command responsibility,25 which operates on a different level and for different scenarios. According to this doctrine, already part of customary international law, "…military commanders and other persons occupying positions of superior authority may be held criminally responsible for the unlawful conduct of their subordinates…".26 This responsibility can be divided into two types: direct and indirect responsibility.27
Under direct responsibility, pursuant to article 25.3 (b) of the Rome Statute, it suffices that a person orders the commission of a crime to be considered criminally responsible, as long as the crime is attempted or committed. In this sense, there is not much difficulty in finding the mental element required to consider the commander criminally responsible when he issues such an order; the existence of the order may even be inferred from the circumstances, so there is no need for an express order.28
Things change for indirect responsibility. In this case, under article 28 of the SICC, the subordinates act in the absence of any order while the superior fails to prevent such behaviour, to punish it or to report the issue to the competent authorities when he knew, actively ignored or, given the circumstances, should have known about the conduct of his subordinates.
Indirect responsibility is based on three elements:29
Existence of a superior-subordinate relationship, which is not limited to its formal character, in the sense that it should include the "…material ability to prevent or punish criminal conduct…"30 and not only the legally established ranks.
Mental element. Article 28 establishes two different mens rea standards depending on whether the superior is military or non-military. In the first case, military commanders are criminally responsible if they "…either knew or, owing to the circumstances at the time, should have known…",31 while in the case of non-military superiors the mental element required is that they "…either knew, or consciously disregarded information…",32 which, by eliminating the possibility that non-military superiors be held responsible on the basis of negligent supervision,33 establishes a higher requirement than that for the military.34 Some authors35 have linked this higher requirement to the realm of wilful blindness, where a person claims ignorance of facts that he foresees or even already knows in order to escape criminal liability.36
Failure to prevent, punish or report. This element of superior responsibility determines that the superior will be responsible if, bearing in mind the mental element, he did not stop and repress the conduct of his subordinates. Why "and"? Because each of these two duties is considered to entail criminal responsibility on its own. In this sense:
This duty does not… permit a superior to choose... to either prevent the crimes or to await their commission and then punish. The superior’s obligations are instead consecutive: it is his primary duty to intervene as soon as he becomes aware of crimes about to be committed, while taking measures to punish may only suffice… if the superior became aware of these crimes only after their commission. Consequently, a superior’s failure to prevent the commission of the crime by a subordinate, where he had the ability to do so, cannot simply be remedied by subsequently punishing the subordinate for the crime.37
Regarding liability for failure to report, it was added in acknowledgement of a situation already recognized in case law,38 that is, that superiors often may not be in a position to effectively investigate and prosecute offences committed by their subordinates.39
These elements are required to separate command responsibility from other sources of liability, in the sense that here the superior is not held responsible because of his position as such, but for his failure to act as a result of not meeting his duty or of having actual intent.40
It is interesting that, while not strictly speaking a war crime,41 command responsibility shows up in the discussion; as will be seen, some authors consider that it might offer a good basis for closing the accountability gap supposedly opened by new weaponry.
IV. New Armament and War Crimes
As weaponry becomes more complex and advanced, voices have been raised to question whether these developments are susceptible to regulation by international law. In most publicly known cases it is hard to affirm that new weapons pose real threats to the legal framework, in the sense of rendering it unable to provide regulations and answers.
Although a broad gamut of weaponry is under development or has recently been developed,42 we will refer to a very specific type, characterized by increased autonomy from human operators through the use of advanced computers to execute some of the tasks habitually assigned to humans.
There seems to be an agreed nomenclature to distinguish the level of involvement that operators have in relation to these systems. It is thus possible to distinguish three basic types:43
Semi-autonomous weapons systems: Where the system is capable of selecting targets but requires active input from humans in order to attack.
Human-supervised weapons systems: Characterized by the ability of the system to select and attack targets, always under the supervision of an operator capable of cancelling the system's actions.
Autonomous weapons systems: Systems that can select and attack targets without further human involvement.
Other authors44 implicitly reject this classification and focus on the weapon's own capacity to adapt rather than on the level of "intelligent" autonomy, distinguishing between:
Automated weapons: Designed to open fire, without requiring further human instruction, once certain conditions are present. These weapons may not even be new, since they include automated sentry guns45 and even common landmines.
Autonomous weapons systems: Weapons systems that are able to adapt to the conditions surrounding them. They are capable of searching for targets and of identifying, selecting and attacking them, or requesting authorization to attack; they function as truly autonomous, however, only when no such authorization is required.
Within this categorization it is necessary to distinguish these from unmanned combat systems (which may in fact include the first group above), on the argument that the latter are remotely operated by humans, so the problems they may present arise more from the methods in which they are used than from the characteristics of the weapons themselves.46 Whichever classification is chosen, attention is drawn to autonomous weapons systems given their capacity to operate without human involvement.
A few observations are in order before analysing the problems posed by these systems. First, although there is no agreement on how long it will take to actually deploy Autonomous Weapons Systems (AWS) on the battlefield, there is agreement on the fact that they do not yet exist;47 consequently it is important to keep in mind, as said in the introduction, that any difficulties are hypothetical. The discussion is about framing a nonexistent situation with the legal resources now available, which is in a way a good exercise for locating potential deficiencies, but one that should be treated cautiously.
Second, the entire discussion is based on the premise that AWS will fail. This is not a product of pessimism but a necessary presumption: if it is assumed that AWS will have an absolute 0% margin of error, then there is no discussion at all and States should promptly replace all humans on the battlefield.
There are several definitions of what an AWS is. For instance, Human Rights Watch (HRW) considers them to be "…systems that would select and engage targets without meaningful human control";48 for the International Committee of the Red Cross (ICRC), AWS "…can learn or adapt [their] functioning in response to changing circumstances in the environment in which [they are] deployed";49 for the United States (US) Government they are systems "…that, once activated, can select and engage targets without further intervention by a human operator".50 The core feature is that AWS do not need further human instructions to select and attack their targets (although it is true that this does not necessarily preclude human intervention).51
It is not clear how AWS will operate on the terrain. Some experts point out that the concept of "autonomy" should not be overplayed and that AWS will function under pre-issued instructions that will allow them to select their next move as long as that move exists in their program;52 in this sense, AWS will not be able to do what their human programmers do not want them to do and will consequently remain predictable.53 For others, this resembles the automated behaviour of the automated weapons explained above; they argue that even though AWS will be programmed, they will operate using stochastic reasoning,54 which introduces uncertainty into the system and makes it unpredictable,55 although capable of learning from its mistakes.56
Although how these systems will operate is certainly important, the relevant consideration is that their use must comply with IHL.57 This might be the greatest obstacle to overcome in order to deploy these systems in the field, especially regarding the difficulties they might have in identifying and discriminating between legitimate targets and protected ones.58 Again noting that no solid answers can be provided, some authors have suggested that this might not be an absolute impediment to the use of AWS. In their opinion it would not matter that AWS could not properly identify their targets, as long as they were used only in very controlled conditions where the presence of protected persons and civilian objects would be highly unlikely. They thus suggest using them in "areas of desert, remote steppe lands, and remote maritime areas…"59 or in "…anti-submarine warfare… [since] war at sea more often -if not always- occurs far from civilian shipping…".60 These authors presuppose that, because some battle scenarios contain no civilians, AWS would be exempted from identifying targets and able to engage anyone.
Nevertheless, what they forget is that even in these isolated and improbable situations, AWS would have to distinguish between combatants and personnel hors de combat.61
Now, what should be the level of compliance of AWS with IHL? Some opine that the technology to build a fully autonomous weapons system, capable of selecting and attacking targets with no human instructions, already exists today, as long as we are prepared to accept a high rate of failures and accidents.62 In contrast, others consider that AWS must comply with IHL at least at the same rate humans do,63 so that until AWS respect the law at the level humans are capable of, the systems should not be deployed. In our opinion, an exceptionally good reason to expect, and demand, a performance that clearly surpasses that of humans is the apparent accountability gap from which current legal regulations suffer.
This whole idea of machines learning and reasoning has prompted some unusual opinions about the nature of AWS. Some64 seem to advance the position that AWS are something between mere armament and combatants, since on the battlefield they may behave more like soldiers, and in doing so they imply certain consequences that tend to anthropomorphize AWS.65 These opinions are misguided and lead to bizarre scenarios. An AWS, no matter how "smart" or what level of legal compliance it attains, is an object. Combatants and AWS cannot be equated, since they are not in the same category;66 the harm AWS may cause comes from weapons developers, manufacturers or users,67 so it is not possible to ascribe human characteristics to these weapons or, ultimately, to impose on them legal obligations that belong to humans.
After this clarification, it is important to remember, as said above, that not every breach of IHL is equal. For instance, a violation of the right of prisoners of war to receive mail68 might not be considered as serious a breach as a direct attack on civilians.69 In this sense, it is important to prioritize which legal rules and prohibitions AWS must comply with.
War crimes are certainly an essential part of this package. Considering the role AWS are expected to play, there are probably no rules more important than these. Fears have been expressed about what will happen once AWS are involved in the commission of war crimes: who will be responsible then?
This question may vary in difficulty depending not only on the specific factual situation but also on the legal one, and it should be recalled that this work takes as its basis the approach of the SICC.
First it is necessary to single out the war crimes that can actually be relevant to AWS. It is hard to imagine a scenario in which an AWS declares that "…no quarter will be given"70 or compels "…a prisoner of war or other protected person to serve in the forces of a hostile Power".71 On the other hand, the commission of crimes like killing a combatant who has surrendered,72 attacking a museum73 or attacking humanitarian personnel74 may not be far from reality.
In sum, it is not difficult to picture an AWS performing conduct considered criminal during an armed conflict under the Rome Statute, as long as some direct use of force is involved.
As said above, most crimes, and certainly war crimes, follow the principle that there is no crime unless the conduct is accompanied by a certain mental state; so, in order to claim that an AWS committed a war crime, it is first required to establish that the AWS intended to act the way it did and, furthermore, that it was aware of the consequences and circumstances. Assuming for a moment that AWS will be able to have intentions and to act on them after having considered the consequences, what would happen then?
Ideas have been expressed about holding the machine itself responsible, going as far as proposing the punishment to be imposed, which in the case of war crimes committed by AWS would in all probability amount to dismantling the system75 or perhaps turning it off. This result would require the premise that "smart" enough machines be considered legal persons.76 Given the speculative nature of the argument, it is hard to provide valid reasons why the machine could not be held responsible. In our opinion, and probably for the moment, we should acknowledge that this answer requires more assumptions than desired and that it also overstretches the discussion, changing its focus and raising other speculative questions that are not helpful at all.77
Now, the fact that the AWS is incapable of having the required mental element does not mean that such an element should not be sought in the humans acting behind the machine. The first obvious choice would be the military commander in charge of the AWS. Three scenarios should be distinguished here.
First, if the commander instructs the weapons system to attack an illegitimate target, there would be no need to look further, since he would be using the AWS as a mere rifle or any other weapon. Whether AWS should be programmed with IHL rules impossible to override is of no relevance here; the commander would be the war criminal.78
Second, let us assume that the commander orders an attack on an airbase located near a village. He does not want to attack the village, but he knows that his AWS unit has been showing erratic behaviour: during the last combat the AWS selected, and was about to attack, a hospital, but was stopped at the last moment. With this in mind, the commander decides to attack the nearby enemy airbase, and the AWS misidentifies the village and launches its missiles against it. In this case, the commander would be liable on the argument that, given the irrational conduct of his AWS, there was a real probability that the system might fail again, which did not stop the commander from deciding to act. Of course, this deserves the comment that, since this work proceeds on the basis of the SICC criteria, such reckless conduct of the superior would not amount to a war crime before the ICC, which puts the argument at a dead end.
The third possibility holds the commander liable on the basis of superior responsibility,79 whose elements were explained above. Nevertheless, it does not seem right, for several reasons. It requires the assumption that the AWS is a subordinate of the commander, an idea not in line with the fact that weapons systems are objects. In addition, the mental element required of the commander is that he "…either knew or, owing to the circumstances at the time, should have known…",80 which is problematic since it seems unreasonable to require a commander to know that his AWS is about to commit a crime. Finally, the question arises again: how is a person supposed to repress the conduct of a machine? For the remaining scenario the question answers itself: even if the commander knew that his machine was committing a crime and successfully stopped it, he is unable to "punish" it; on the other hand, if he could not possibly know that the AWS was about to commit a crime, how reasonable is it to expect that he could prevent it?
Other humans behind the AWS who have been suggested as candidates for responsibility are the programmers and manufacturers (P/M) of AWS.81 Their participation in a war crime is less clear than that of the commander, but they share one thing in common, although it is difficult to say whether it would be enough to hold them responsible. As with the commander, if a P/M actively writes the AWS's code, or manufactures the machine, in a way that, for instance, makes it misidentify targets, he would be responsible.82 It has been pointed out, though, that the temporal element may work against this case, since the Elements of Crimes establish that "the conduct took place in the context of and was associated with an… armed conflict",83 and a long time may pass between the point of fabrication and the point of actual use.
The situation where the P/M intentionally builds a defective AWS may give rise to his criminal responsibility under article 25.3 (c). This provision holds criminally responsible those persons who facilitated the commission of the crime. Although it does not demand that the facilitator know precisely what crime is going to be committed,84 the fact that the responsibility of the actual perpetrator must still be established renders the argument unnecessary for now.
On the other hand, would it be possible to hold the programmer or manufacturer responsible without the commander committing a crime? Some are of the opinion that, if the commander acts in good faith, the P/M who willingly fabricates a defective system could be responsible for war crimes.85
In sum, given that only in very specific conditions would commanders and other personnel involved with AWS be held responsible, it is possible to conclude that the ICL framework, at least as regards the ICC, is not well suited to face the challenges glimpsed in the use of this kind of technology.
V. Other Options
Several alternatives have been proposed to face this situation, such as making the P/M act as a guarantor of the AWS,86 but this does not seem satisfactory, since it is most likely that no single person will be responsible for the entire development of the system,87 so one person would be answering for the conduct of another. Other alternatives are not related to the specific problem.88
Now, would a person be responsible for using an AWS knowing that the system is incapable of discriminating between targets? According to article 8.2 (b) (xx), it is a war crime to employ weapons which are "inherently indiscriminate", provided certain other conditions exist. Some authors do not consider that AWS will be indiscriminate weapons as such, but until AWS achieve levels of identification comparable to humans', it is reasonable to regard them as being in that condition.
Since we are already taking nonexistent conditions for granted, let us assume that, at least in customary international law,89 there is a "comprehensive prohibition" on the use of AWS and that they are included in an annex to the SICC as required by the mentioned article. In that case there would be no need to search so hard for the mental element, since the mere employment of such weaponry would constitute a war crime.
In addition, this would open other possibilities for prosecuting the responsible person, since on some occasions the use of this kind of armament may qualify as a direct attack on protected persons.90
VI. Conclusion
In general, the adoption of new armament can be coped with within the existing legal framework; however, weaponry with certain characteristics, such as AWS, presents new challenges that may require the mental requirement for war crimes under the Rome Statute to be expanded in order to accept more situations, especially those arising from dolus eventualis. Of course, the fact that the SICC is narrowly defined does not preclude some other jurisdictions from proving more functional in this regard.
In order to attribute criminal responsibility for conduct perpetrated through AWS, it is necessary to look for the mental requirement in the humans surrounding the operation of these systems, given the impossibility that a weapons system forms the mens rea necessary for the commission of the crime. Nevertheless, we believe that the current ICC legal framework might not be suitable for punishing criminal conduct derived from the use of AWS when even the attribution of the conduct requires so many assumptions.
VII. One Last Comment
Although the idea of robots killing humans should not be foreign to us,91 and the participation of autonomous machines in hostilities will eventually happen, the concept of a robot actively choosing to disregard the law and, in consequence, to kill people based on its own assessment of the situation seems unlikely. We do not pretend to go as far as some experts92 who argue that no State would use a weapon incapable of distinguishing targets, a position naively optimistic given the examples of the 20th century.
In our view, we should not worry about what to do when an AWS commits a crime, since the lack of mens rea challenges the very existence of the crime, and the situation would be better understood as a malfunction or as negligence.93 The real danger is providing potential war criminals with more methods to hide their actions behind machines that appear to break the law.