
War crimes in virtual worlds, virtual war crimes in the real world

My interests in gaming, technology and large-scale atrocity appear to be colliding with increasing regularity, and two recent pieces of writing munge them together like badgers caught in a motorway pileup:

  1. A little Swiss NGO called Track Impunity Always (“TRIAL” – genius, right?) and Pro Juventute released a report called Playing by the Rules, which applies the rules of international humanitarian law to the conduct of players in the fantasy worlds of first-person shooter games. It looks at a blind spot in public policy concerning the content of violent video games: their failure to reflect the accepted legal restraints on modern warfare.
  2. In the Cornell International Law Journal appears an article by Stephen White called Brave New World: Neurowarfare and the Limits of International Humanitarian Law, which applies the rules of international humanitarian law to new weapons controlled directly by thought, an innovation that sounds like something that has escaped from the fantasy worlds of video games. Nobody except Peter Watts took much interest in this scholarly piece, which looks at an area where exotic technologies may leapfrog some of the general concepts underpinning international humanitarian law.

The wide arc of the TRIAL report is that because military-themed first-person shooters rely on hyper-realism for much of their appeal, the gameplay should also incorporate the laws that bind real militaries, such as the Geneva Conventions. TRIAL aim to prove that some of the things players can do in games would, if committed in real life, violate the laws of war, and that this is not well understood by game makers. The researchers recruited some young gamers and watched them play twenty military-themed first-person shooter games, recording gameplay clips whenever, in the opinion of a team of legal experts (including Professor Marco Sassoli), a situation arose with the potential for a virtual war crime. The researchers easily find examples where a gamer is able, and in some cases encouraged (here’s looking at you, 24: The Game), to commit virtual war crimes. Such acts include executing hostages, torturing captured combatants, using heavy weapons without regard for proportionality, using mercenaries and stealing property from civilians. They find a couple of examples where a gamer somehow loses for performing these acts, but dismiss them as insufficient. Annoyingly, the report doesn’t analyse America’s Army, even though that game has strong in-game penalties for failing to follow the rules of engagement. Anyhow, TRIAL’s legal analysis seems precise, is incredibly earnest and firmly makes the point that in-game conduct can indeed contravene the laws of war.

So what do they think are the consequences of this? The authors aren’t really clear, but they think it must be bad:

The message of the scenes [in games] should never be that everything is allowed, or that it is up to the player to decide what is right and what is wrong. In real life, this is not the way it works. In real life, there are rules and there are sanctions for violations of these rules. It is not up to the soldier or to the law enforcement agent to decide what is right and what is wrong. The events in Abu Ghraib have shown, what such “private justice”, even if carried out by well trained and high ranking officers, may lead to. (p. 43, TRIAL Report)

Did you see the sleight of hand? If gamers do not follow the laws of war in game, then in the blink of an eye they will all be taking snapshots of themselves dragging Iraqi prisoners around on leads alongside Staff Sergeant Ivan Frederick‘s unit. I suspect this innuendo was inserted by the authors in desperation on realising their research fails the “So what?” test: they never articulate or evidence the harm done when a player can choose to make their in-game character perform acts that resemble a war crime.

The strong inference, as noted, is that gamers who perform virtual war crimes might go on to commit real ones, but this is far too bonkers to be taken seriously. A lighter inference could be that committing war crimes in games contributes to an erosion of public sympathy and support for the idea of restraint in armed conflict. Still too strong. What we’re really left with is a more basic inference the authors hope we make for them: that virtual violence, of which this is one form, makes those experiencing it generally more aggressive, and that this is detrimental to their personal development. This general conclusion is the current high-water mark in peer-reviewed psychological research on violence and gaming (see Anderson, 2007), though it is dismissed without analysis as shrill by gamers and the games industry. I would have liked to see TRIAL anchor their conclusions to this research more explicitly, which would have made the report an interesting and relevant contribution. It might also have helped them avoid the mangling they received from teh unforgiving Internetz.

Stephen White’s article on neurowarfare is a different sort of beast, looking at the work done by the Defense Advanced Research Projects Agency (DARPA) on making humans control weapons with their minds:

Resolving the issue of whether a pilot remotely connected by a brain-machine interfaced UAV could incur criminal liability for having killed a person that the Geneva Conventions protect would prove particularly problematic because of the uncertain status of the role that the actus reus requirement plays in determining criminal responsibility. Before the existence of this type of weapons system, courts had no occasion to resolve whether the condition exists merely because of the evidentiary impossibility of proving a thought crime … (p. 196)

Drawing on psychological and systems research, White argues that a brain-machine interface can interpret and act on a human thought before the human connected to it is capable of choosing to act on that thought. If this is the case, then the problem for international criminal law is proving that a person acted of their own will and intended to commit a prohibited act. A second strand of the article argues that these systems may become so staggeringly complicated that it would be impossible for any military commander to appreciate the sources and margins of error involved in using them. Both of these problems present challenges for the current legal regime, perhaps creating opportunities for users of these systems to act with impunity. White reckons that the solution is to expand the concept of command responsibility to include the civilian engineers of these new technologies, and the companies that employ them.

There are some linking themes between these two pieces, but they are quite dicey. The first is the relevance of moral norms where a person is strongly detached from the consequences of their actions. The second is the responsibility of new groups of actors for strengthening the norms of international human rights, humanitarian and criminal law. TRIAL want game developers to take a greater role in standard setting, which is something initiatives like the Council of Europe’s Human rights guidelines for online games providers look to be working towards. However, a consequence of White’s article could be that the current standards may soon prove structurally inadequate as technological innovation challenges their foundational concepts.

Anyhow, interesting, no?
