Friday 23 November 2012

Losing Humanity: The Case against Killer Robots

On 19 November 2012, Human Rights Watch issued a report: Losing Humanity. The report follows very much in the footsteps of a debate hosted by Human Rights Watch and Harvard Law School’s International Human Rights Clinic. It aims to engage the public in view of the anticipated evolution of current drone technology into fully autonomous warfare systems. Specifically, it attempts to analyze:
 whether the technology would comply with international humanitarian law and preserve other checks on the killing of civilians. It finds that fully autonomous weapons would not only be unable to meet legal standards but would also undermine essential non-legal safeguards for civilians. Our research and analysis strongly conclude that fully autonomous weapons should be banned and that governments should urgently pursue that end.
This is a worrying and rather depressing prognosis.
The Report takes us through a taxonomy of autonomous weapons systems (illustrated in the sketch after the list):
  • Human-in-the-Loop Weapons: Robots that can select targets and deliver force only with a human command;
  • Human-on-the-Loop Weapons: Robots that can select targets and deliver force under the oversight of a human operator who can override the robots’ actions; and
  • Human-out-of-the-Loop Weapons: Robots that are capable of selecting targets and delivering force without any human input or interaction.
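For readers who think in code, a minimal sketch of these three oversight models might look like the following. To be clear, the AutonomyLevel enum and the authorize_engagement gate are my own hypothetical illustration of the taxonomy, not anything specified in the Report:

    from enum import Enum

    class AutonomyLevel(Enum):
        HUMAN_IN_THE_LOOP = 1      # force delivered only on a human command
        HUMAN_ON_THE_LOOP = 2      # human supervisor may override the machine
        HUMAN_OUT_OF_THE_LOOP = 3  # no human input or interaction at all

    def authorize_engagement(level: AutonomyLevel,
                             human_command: bool = False,
                             human_veto: bool = False) -> bool:
        # In-the-loop: nothing happens without an affirmative human command.
        if level is AutonomyLevel.HUMAN_IN_THE_LOOP:
            return human_command
        # On-the-loop: the machine proceeds unless a human vetoes in time.
        if level is AutonomyLevel.HUMAN_ON_THE_LOOP:
            return not human_veto
        # Out-of-the-loop: no human check remains anywhere in the control flow.
        return True

The last branch is the point: under the third model nothing in the decision path depends on a human at all, which is precisely the gap that worries HRW.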
The real fear concerns the third category. HRW is concerned that, left to their own devices, states will engage in an arms race, and we may end up with William Hertling’s AI Apocalypse moving from science fiction to military theatre. The governance concern is the lack of regulatory oversight of military strategy in this sphere. The concepts of personhood and the attribution of liability are not far from HRW’s thoughts:
By eliminating human involvement in the decision to use lethal force in armed conflict, fully autonomous weapons would undermine other, non-legal protections for civilians. First, robots would not be restrained by human emotions and the capacity for compassion, which can provide an important check on the killing of civilians. Emotionless robots could, therefore, serve as tools of repressive dictators seeking to crack down on their own people without fear their troops would turn on them. While proponents argue robots would be less apt to harm civilians as a result of fear or anger, emotions do not always lead to irrational killing. In fact, a person who identifies and empathizes with another human being, something a robot cannot do, will be more reluctant to harm that individual. Second, although relying on machines to fight war would reduce military casualties—a laudable goal—it would also make it easier for political leaders to resort to force since their own troops would not face death or injury. The likelihood of armed conflict could thus increase, while the burden of war would shift from combatants to civilians caught in the crossfire.
Finally, the use of fully autonomous weapons raises serious questions of accountability, which would erode another established tool for civilian protection. Given that such a robot could identify a target and launch an attack on its own power, it is unclear who should be held responsible for any unlawful actions it commits. Options include the military commander that deployed it, the programmer, the manufacturer, and the robot itself, but all are unsatisfactory. It would be difficult and arguably unfair to hold the first three actors liable, and the actor that actually committed the crime—the robot—would not be punishable. As a result, these options for accountability would fail to deter violations of international humanitarian law and to provide victims meaningful retributive justice.
HRW makes the following recommendations:
To All States
  • Prohibit the development, production, and use of fully autonomous weapons through an international legally binding instrument.
  • Adopt national laws and policies to prohibit the development, production, and use of fully autonomous weapons.
  • Commence reviews of technologies and components that could lead to fully autonomous weapons. These reviews should take place at the very beginning of the development process and continue throughout the development and testing phases.
To Roboticists and Others Involved in the Development of Robotic Weapons
  • Establish a professional code of conduct governing the research and development of autonomous robotic weapons, especially those capable of becoming fully autonomous, in order to ensure that legal and ethical concerns about their use in armed conflict are adequately considered at all stages of technological development.
What interests me particularly is the call for roboticists to assume a more proactive role. As many may know, Alan Winfield has given considerable thought to the ethics of design in this area, and the EPSRC has a working draft of principles. Readers will also be interested in HRW’s reference to the article published in June 2011, International Governance of Autonomous Military Robots.
The authors of that article advocated an audit trail mechanism and responsible innovation as measures to promote transparency and accountability in the fields of synthetic biology and nanotechnology; HRW regards this as a strategy that is clearly needed in the sphere of military warfare as well. I have yet to review the article and assess its value against the EPSRC Principles.
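Pending that review, here is a minimal sketch of what such an audit trail mechanism could look like in software, assuming a simple hash-chained, append-only log of targeting decisions. The DecisionAuditLog class and its field names are my own invention for illustration, not drawn from the article:

    import hashlib
    import json
    import time

    class DecisionAuditLog:
        """Append-only, hash-chained log of targeting decisions (sketch).

        Each entry embeds the hash of its predecessor, so any
        retroactive edit breaks the chain and is detectable.
        """
        def __init__(self):
            self.entries = []

        def record(self, operator_id: str, decision: str, rationale: str) -> None:
            prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
            entry = {
                "timestamp": time.time(),
                "operator_id": operator_id,  # who (or what) made the call
                "decision": decision,
                "rationale": rationale,
                "prev_hash": prev_hash,
            }
            payload = json.dumps(entry, sort_keys=True).encode()
            entry["hash"] = hashlib.sha256(payload).hexdigest()
            self.entries.append(entry)

        def verify(self) -> bool:
            """Recompute every hash; returns False if any entry was altered."""
            prev_hash = "0" * 64
            for entry in self.entries:
                if entry["prev_hash"] != prev_hash:
                    return False
                body = {k: v for k, v in entry.items() if k != "hash"}
                payload = json.dumps(body, sort_keys=True).encode()
                if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                    return False
                prev_hash = entry["hash"]
            return True

Because each entry commits to the hash of the one before it, any retrospective tampering is exposed on verification, giving commanders, programmers, and investigators a shared record to argue over. That speaks directly to the accountability gap HRW identifies.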