I am a Senior Lecturer in Law at the Liverpool Law School, University of Liverpool. This blog reflects a work in progress on the legal, ethical, social and technological challenges posed by autonomous systems and robotics as they relate to aging, healthcare and warfare. The views expressed are my own and should not be attributed to my employer, the University of Liverpool.
Sunday, 2 December 2012
Onwards!
Now that my last research paper on Smart Meters is being proofread before being sent off to the publishers, I have quite a lot of reading to catch up on. I am mulling over a seminar at the Law School on Legal and Moral Issues Raised by Military Drones. Two items of work have caught my eye: Steve Fuller's book Humanity 2.0 and Ian Kerr's excellent keynote here. You may well wonder how Fuller's vision links with Kerr's article and Peter-Paul Verbeek's work. I think there is something about the "technological condition" we appear to be faced with - a sort of vita activa/vita contemplativa in the age of algorithms, with regard to military drones and International Humanitarian Law. My virtual friends (I hope) are also highlighting how antediluvian my thinking has been - so it is a real challenge to use this blog to put their nuggets of wisdom into operation.
Friday, 23 November 2012
Military Drones: Point, Click and Kill
There is a thoughtful post by Chris Newman here. The title provides a clue to the author's focus: 'Moralization' of Technologies - Military Drones: A Case Study.
During the next couple of days I want to undertake a literature review with this particular issue in mind: how do we articulate the boundaries between legal, ethical and moral considerations? Increasingly, the literature I have examined does not do this with the kind of clarity that aids my understanding.
There is no doubt that Chris does a good job in setting out the context in which military drones are used and the controversies raised as a consequence.
Chris’s source of inspiration is Peter-Paul Verbeek. The idea that ethics and technology are indivisible is very much a theme pursued in Moralizing Technology: Understanding and Designing the Morality of Things (2011). Many will agree that designers of technologies cannot insulate themselves from social, legal and ethical implications resulting from their artefacts. As Chris acknowledges, the policy issues raised are not easy to disentangle:
“The ‘moralization of technology’ is a complex and difficult task that requires the anticipation of mediations. In addition to the fact that the future social impacts of technologies are notoriously difficult to predict, the designers and engineers are not the only ones contributing to the materialization of mediations. The future mediating roles of technologies are also shaped by the interpretation of users and emergent characteristics of the technologies themselves (Verbeek, 2009). This means that designers of artifacts cannot simply ‘inscribe’ a certain morality into technologies but that the capacity for ‘moralizing’ a specific technology will depend on the dynamics of the interplay between the engineers, the users and the technologies themselves.”
Chris questions the viability of ‘mediation analysis’ as a heuristic and worries about the democratic deficit. He has a point. The ‘Constructive Technology Assessment’ he alludes to is seen as overcoming this shortcoming since:
“As such, all relevant actors have a stake in the moral operation of the socio-technical ensemble and therefore the democratization of the technology design process contributes to the ‘moralization of technologies’ in a broader sense (Verbeek, 2009). This is precisely what STS scholars intend to achieve by opening the black box of technology and analyzing the complex dynamics of its design. In the following, some important moral challenges with regard to military drones will be analyzed utilizing the theoretical concepts presented thus far and possible ways to address these challenges will be discussed.”
The article proceeds to set out the activities of military drones and highlights some of the ethical challenges raised by the disintermediation of warfare. Chris concludes:
"However, recalling the quote at the beginning of this paper and presuming that we do not want drone pilots making life and death decisions with the feeling that they are merely playing a video game, it appears that much work remains to be done in ‘moralizing’ drone technology design in order to promote more ethical behavior on the remote battlefield."
True - but in a later post I want to deal with the work done by Professor Gillespie on the systems engineering approach being used for autonomous unmanned aircraft. You can read his article, co-authored with Robin West, Requirements for Autonomous Unmanned Air Systems set by Legal Issues (2010), published in the International C2 Journal.
Drones: The Modern Prometheus
When Mary Shelley penned her thoughts on Frankenstein, she was not merely drawing attention to autonomous systems; her attention was focused on creators. Drones can be regarded as an allegory for this tale. Thousands have been killed by unmanned aerial vehicles. The artefact is seen by some as a form of naked capitalism. It comes as little surprise that Apple has deemed it acceptable to block a software application that provides alerts to deaths caused by drone air strikes. There is now an escalating “drone race” - China has unveiled its latest military drone. It is true, of course, that “[n]ew technology does not change the moral truth.” Alexandra Gibb and Cameron Tulk have produced a field guide.
Drones also redefine the way we engage with each other on the evolving battlefield. Researchers from Stanford and NYU produced a report, Living Under Drones, which can be consulted here. Is there a concept of a “Just Drone War”? Scholars are rightly turning their attention to the morality of drones - and not before time, as autonomous systems will be the next phase in the military prosecution of wars.
Promotional photo of Boris Karloff from The Bride of Frankenstein as Frankenstein’s monster. (Photo credit: Wikipedia)
Losing Humanity: The Case against Killer Robots
On 19 November 2012, Human Rights Watch issued a report: Losing Humanity. The report follows very much in the footsteps of a debate hosted by Human Rights Watch and Harvard Law School’s International Human Rights Clinic. The Report aims to engage the public in view of the anticipated evolution of current drone technology into fully autonomous warfare systems. It specifically attempts to analyze:
whether the technology would comply with international humanitarian law and preserve other checks on the killing of civilians. It finds that fully autonomous weapons would not only be unable to meet legal standards but would also undermine essential non-legal safeguards for civilians. Our research and analysis strongly conclude that fully autonomous weapons should be banned and that governments should urgently pursue that end.
This is a worrying and rather depressing prognosis.
The Report takes us through a taxonomy of autonomous systems:
- Human-in-the-Loop Weapons: Robots that can select targets and deliver force only with a human command;
- Human-on-the-Loop Weapons: Robots that can select targets and deliver force under the oversight of a human operator who can override the robots’ actions; and
- Human-out-of-the-Loop Weapons: Robots that are capable of selecting targets and delivering force without any human input or interaction.
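Purely by way of illustration - this is my own sketch, not anything found in the Report - the operational difference between the three categories can be captured in a few lines of Python. All the names here (AutonomyLevel, may_engage) are hypothetical:

```python
from enum import Enum

class AutonomyLevel(Enum):
    HUMAN_IN_THE_LOOP = 1      # human must issue the command to engage
    HUMAN_ON_THE_LOOP = 2      # system may engage, but a human can override
    HUMAN_OUT_OF_THE_LOOP = 3  # system selects and engages with no human input

def may_engage(level: AutonomyLevel,
               human_command: bool,
               human_override: bool) -> bool:
    """Toy model of the HRW taxonomy: the only variable is where
    (or whether) a human sits in the decision chain."""
    if level is AutonomyLevel.HUMAN_IN_THE_LOOP:
        # Force only follows an affirmative human command.
        return human_command
    if level is AutonomyLevel.HUMAN_ON_THE_LOOP:
        # System initiates; a supervising human can veto.
        return not human_override
    # Out of the loop: no human input or interaction at all.
    return True
```

As the sketch makes visible, moving down the taxonomy shrinks the human contribution from a necessary condition, to a veto, to nothing at all - which is precisely the Report's worry.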
By eliminating human involvement in the decision to use lethal force in armed conflict, fully autonomous weapons would undermine other, non-legal protections for civilians. First, robots would not be restrained by human emotions and the capacity for compassion, which can provide an important check on the killing of civilians. Emotionless robots could, therefore, serve as tools of repressive dictators seeking to crack down on their own people without fear their troops would turn on them. While proponents argue robots would be less apt to harm civilians as a result of fear or anger, emotions do not always lead to irrational killing. In fact, a person who identifies and empathizes with another human being, something a robot cannot do, will be more reluctant to harm that individual. Second, although relying on machines to fight war would reduce military casualties—a laudable goal—it would also make it easier for political leaders to resort to force since their own troops would not face death or injury. The likelihood of armed conflict could thus increase, while the burden of war would shift from combatants to civilians caught in the crossfire.
Finally, the use of fully autonomous weapons raises serious questions of accountability, which would erode another established tool for civilian protection. Given that such a robot could identify a target and launch an attack on its own power, it is unclear who should be held responsible for any unlawful actions it commits. Options include the military commander that deployed it, the programmer, the manufacturer, and the robot itself, but all are unsatisfactory. It would be difficult and arguably unfair to hold the first three actors liable, and the actor that actually committed the crime—the robot—would not be punishable. As a result, these options for accountability would fail to deter violations of international humanitarian law and to provide victims meaningful retributive justice.
HRW makes the following recommendations:
To All States
- Prohibit the development, production, and use of fully autonomous weapons through an international legally binding instrument.
- Adopt national laws and policies to prohibit the development, production, and use of fully autonomous weapons.
- Commence reviews of technologies and components that could lead to fully autonomous weapons. These reviews should take place at the very beginning of the development process and continue throughout the development and testing phases.
To Roboticists and Others Involved in the Development of Robotic Weapons
- Establish a professional code of conduct governing the research and development of autonomous robotic weapons, especially those capable of becoming fully autonomous, in order to ensure that legal and ethical concerns about their use in armed conflict are adequately considered at all stages of technological development.
What interests me particularly is the call to roboticists to assume a more proactive role. As many may know, Alan Winfield has given considerable thought to the ethics of design in this area. The EPSRC has a working draft of principles. Readers will be interested in HRW's reference to the article published in June 2011, International Governance of Autonomous Military Robots.
The authors advocated an audit trail mechanism and responsible innovation as measures to promote transparency and accountability in the fields of synthetic biology and nanotechnology. HRW regards this as a strategy that is clearly needed in the area of military warfare. I have yet to review the article and assess its value against the EPSRC Principles.
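Since I have not yet read the article, the following is only my own guess at what an "audit trail mechanism" might look like in the abstract: an append-only, hash-chained log of decisions and the actors responsible for them. The Python sketch below (the class name AuditTrail and all of its fields are my invention, not the authors' proposal) simply makes the transparency-and-accountability idea concrete:

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only log: each entry is chained to its predecessor by hash,
    so after-the-fact tampering is detectable on review."""

    def __init__(self):
        self._entries = []
        self._last_hash = "0" * 64

    def record(self, actor: str, decision: str, rationale: str) -> None:
        entry = {
            "timestamp": time.time(),
            "actor": actor,          # who took the decision
            "decision": decision,    # what was decided
            "rationale": rationale,  # why, for later accountability
            "prev_hash": self._last_hash,
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._entries.append(entry)

    def verify(self) -> bool:
        """Recompute the hash chain to confirm no entry was altered."""
        prev = "0" * 64
        for entry in self._entries:
            if entry["prev_hash"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
        return True

# Hypothetical usage:
# trail = AuditTrail()
# trail.record("reviewer", "approve design change", "complies with Article 36 review")
# assert trail.verify()
```

The design choice doing the work here is the hash chain: whoever reviews the log later can establish not only what was decided and by whom, but that the record itself was not quietly rewritten - which is the point of an audit trail as an accountability measure.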
Of Frightened Horses and Autonomous Vehicles: Tort Law and its Assimilation of Innovations
Kyle Graham has a really good article here:
Abstract:
This symposium contribution considers five recurring themes in the application of tort law to new technologies. First, the initial batch of cases presented to courts may be atypical of later lawsuits that implicate the innovation, yet relate rules with surprising persistence. Second, these cases may be identified, and resolved, by reference to analogies that rely on similarities in form, and which do not wear well over time. Third, it may be difficult to isolate the unreasonable risks generated by an innovation from the benefits it is perceived to offer. Fourth, potential claims by early adopters of the technology may be more difficult to identify and recover upon than those that arise later, once the technology develops a mainstream audience. Fifth, and finally, uncertainty among potential plaintiffs (and their counsel) regarding the existence of a cause of action and the likelihood of recovery may beget a dearth of claims that involve an innovation for a significant period of time after its initial appearance. In introducing and explaining these themes, this article considers the initial application of tort law to technologies such as automobiles, airplanes, radio and television, and Tasers.
Why? The Morality of Law and the Materiality of Things: Autonomous Systems as a Case Study
Dear Reader
Thanks for dropping by.
My research principally involves analysing the regulatory challenges posed by new and emerging communication technologies for traditional approaches to governance. Issues examined in the past include surveillance, identity theft, child online safety, peer-to-peer file sharing controversies, online dispute resolution and the management of personal and corporate identities. My current project explores the legal, ethical, social and technological challenges posed by autonomous systems and robotics as they relate to aging, healthcare and warfare.
There are a number of reasons for this tangential shift. First, I am overseeing a project on autonomous systems. That is in itself never a good reason. Second, I was fortunate to work with, and meet, some delightful and inspiring colleagues at the Legal, Ethical, and Social Autonomous Systems Symposium (14th November). I am particularly indebted to Michael Fisher, Alan Winfield, Kerstin Dautenhahn and Kerstin Eder for articulating so clearly and powerfully why and how autonomous systems challenge our well-established assumptions about human flourishing, responsibility, accountability and personhood. “Scientists think. Engineers make. Philosophers, well, philosophize.” Lawyers? This leads me to the third reason: lawyers are meant to help solve problems, whether that means drafting rules, interpreting laws or simply helping parties order their activities in an efficient and just manner.
Lon Fuller’s Morality of Law has long held a fascination for me, both as an undergraduate and as a lecturer. Fuller reminds me that the institution of law is socially constituted - when we forget this, and the lawmaking institution prefers markets to justice, we have chaos. It is a coincidence that autonomous systems have entered the crosshairs of public discussion - military drones, carebots, affective computing and, more recently, driverless automobiles. The issues raised at the Symposium were not only challenging and complex; they also posed an intellectual challenge for me. I needed a narrative to frame these developments. I am unsure what the end result will be, but the path I propose to take is gradually becoming less opaque.
A number of friends on Twitter have given me useful pointers and encouragement. These have fitted in with the framework advocated by Peter-Paul Verbeek in his book, Moralizing Technology: Understanding and Designing the Morality of Things. I have also been inspired by Gunther Teubner and Niklas Luhmann. Teubner’s article, Rights of Non-humans? Electronic Agents and Animals as New Actors in Politics and Law (2006), resonated. I have long taken the legal construct of personhood as a given, albeit with the usual caveats. Teubner, Bruno Latour and now Thomas White showed me that we need an “interpretive turn” that integrates the morality of law with the materiality of things. Our relations with each other are embodied in the materiality of things. I suspect that during this journey of discovery I will have to venture into the uncharted territory of Science Fiction, Heidegger and Nietzsche, and the cognitive world so eloquently captured by Michael Wheeler in Reconstructing the Cognitive World. All this, whilst reminding myself that I am only a Lawyer.
As I sat in the Symposium, I forgot that I was a Lawyer and felt more like, as Oliver Sacks put it, “an anthropologist” making house calls - hoping to return to where it all started: what can or should Law (and Lawyers) do when faced with military drones, independent living, carebots for dementia patients and driverless automobiles? Whilst you are here, these are some of the links that I follow when I am not on Twitter (@J_Savim).