Friday, 5 April 2013

Robots: A Crisis of Expectations?

Alan Winfield is a highly respected scholar whose views are grounded in experience and long-held convictions. I was particularly drawn to his recent post, A Crisis of Expectations, presented at a Robot Ethics Workshop. According to Alan:
...robotics is facing a crisis of expectations. As a community we face a number of expectation gaps - significant differences between what people think robots are and do, and what robots really are and really do, and (more seriously) might reasonably be expected to do in the near future. I will argue that there are three expectation gaps at work here: public expectations, press and media expectations and funder or stakeholder expectations, and that the combined effect of these amounts to a crisis of expectations. A crisis we roboticists need to be worried about.




Readers will discover much in Alan's erudite prosecution of his case. The echoes of "moral panic" will undoubtedly point to media affirmation of the love-hate relationship that has long plagued science and technology. I was particularly struck by the examples Alan gives, and he is right to draw the inferences he does. That said, I wondered whether there was an alternative way to approach Alan's case, for example through the trust deficit that surrounds any innovation that runs ahead of cultural norms and attitudes. In subsequent posts, I want to examine whether the examples and responses have their origins in our attitudes towards living in the risk society, and consequently how the mass media, law, industry and society construct and respond to these risks (Garland, 2003; Giddens, 2002).


Alex Leveringhaus on Military Drones


Alex Leveringhaus and Tjerk de Greef have written an excellent paper that has got me thinking about the relationship between cognitive engineering, international humanitarian law and drones. Alex will be presenting the paper at BILETA. There is also a Plenary, which is a must-attend event. I thought it a good idea for a couple of us who have read the paper to engage with Alex in a series of responses.

Abstract

The paper provides a critical analysis of the arguments made in favour of as well as against operationally autonomous targeting systems. It argues that critics and advocates of these systems both raise important points. However, neither of the two camps, the paper maintains, manages to present an unequivocal case for or against autonomous targeting systems. In response, we outline a more nuanced position that combines elements from both camps. Furthermore, we propose an approach to autonomous weapons systems that transcends the current debate. This approach, we contend, has significant potential to enhance the quality of targeting decisions. 


The paper begins by making clear the distinction between moral and operational autonomy. This distinction is critical to appreciating the thrust of the argument (and it also avoids the discussion becoming engulfed by moral and humanitarian law issues):

Playing a major role in contemporary moral and political philosophy, the former concept denotes that an agent is capable of pursuing a conception of the good life based on his/her own reasons and motives, rather than those of another agent. By contrast, operational autonomy, with which we are concerned in this paper, denotes that a machine is capable of carrying out specific tasks without a human operator and without being subject to certain regulatory constraints.
Why is this distinction important? One value of the distinction is that it brings to the forefront the role of cognitive engineering. Without delving into the complex aspects of the concept, the authors' emphasis on its value for understanding "operational autonomy" appears to focus on the cognitive demands of the war space and its implications for the socio-technical domain incorporating autonomous and human interactions. A system is self-sufficient in the sense of being programmed, let us say, to capture information about the landscape from an aerial viewpoint. The same system may also be constrained at the design phase, so that we end up with programs that facilitate what I would describe as overlaying machine autonomy with a human interface:

To use a contemporary example, the Rules of Engagement state that jet fighter pilots are not allowed to use the jet’s fire radar system during the first phase of the information processing loop. During subsequent phases, though, this restriction might be lifted. Similarly, operationally autonomous machines will be subject to specific restrictions at different stages of the information processing loop.

This brief overview is necessary for understanding the conclusions reached by the authors:
Returning to the debate over the role of operational autonomy in targeting systems, we think it is fair to say that critics and advocates of operationally autonomous targeting systems (abbreviated as OATS hereinafter) have in mind systems that (1) are wholly self-sufficient with regard to all four stages of the information processing loop (Self-Sufficiency) and (2) are permitted to apply force to certain targets (Self-Direction). We call these types of targeting systems fully operationally autonomous (we refer to them as OATS+). In this case, human operators are either taken out of the loop or remain on the loop. Yet, since operational autonomy is a matter of degree and therefore highly dependent on the quality of the governing software of a system, there can be OATS that have less than full operational autonomy [see picture 1]. For the sake of simplicity, we continue to refer to less than fully operationally autonomous systems as OATS. With this conceptual background in place, let us now scrutinise some of the arguments that have been made for and against OATS+.
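
To fix the distinction in my own mind, the following is a minimal sketch (in Python) of how Self-Sufficiency, Self-Direction and phase-dependent restrictions might be modelled. The four stage names, the fire-radar rule and all of the class and function names are my own assumptions for the purposes of illustration; none of them is taken from the paper.

```python
# Illustrative sketch only: the stage names and the restriction rules below
# are hypothetical, not taken from Leveringhaus & de Greef's paper.
from dataclasses import dataclass, field

STAGES = ["collect", "analyse", "decide", "act"]  # assumed labels for the four stages


@dataclass
class TargetingSystem:
    # Self-Sufficiency: which stages the machine can perform without an operator
    self_sufficient_stages: set = field(default_factory=set)
    # Self-Direction: whether the machine is permitted to apply force itself
    self_directed: bool = False
    # Regulatory constraints, per capability and stage (cf. the fire-radar example)
    restricted_capabilities: dict = field(default_factory=dict)

    def is_oats_plus(self) -> bool:
        """Fully operationally autonomous: self-sufficient at all four stages
        AND permitted to apply force (Self-Direction)."""
        return self.self_sufficient_stages == set(STAGES) and self.self_directed

    def may_use(self, capability: str, stage: str) -> bool:
        """A capability may be barred during some stages and lifted later,
        mirroring phase-dependent Rules of Engagement."""
        return stage not in self.restricted_capabilities.get(capability, set())


# A less-than-fully autonomous OATS: it can collect and analyse on its own,
# but decisions and the application of force remain with a human operator.
oats = TargetingSystem(
    self_sufficient_stages={"collect", "analyse"},
    self_directed=False,
    restricted_capabilities={"fire_radar": {"collect"}},  # barred in the first phase only
)
assert not oats.is_oats_plus()
assert not oats.may_use("fire_radar", "collect")
assert oats.may_use("fire_radar", "analyse")
```

On this toy reading, an OATS+ is a system that is self-sufficient at every stage and self-directed; anything short of that is simply an OATS whose degree of autonomy depends on how the governing software allocates the stages and restrictions.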

Let us, for present purposes, accept this framing of the debate. In this part, I briefly set out the authors' middle ground, which envisages a collaborative cognitive solution. It has one benefit:

This move, we contend, potentially enhances situational understanding and subsequent decision-making. The notion of cooperation between human and artificial agents requires a paradigm shift, from fully autonomous weapons systems to e-Partnership-based systems. The concept of e-Partnerships is not entirely new. E-Partnerships are already being prototyped in the domains of health care, space missions, and naval warfare. For now, however, we are content with briefly outlining how e-Partnerships might work in the context of OATS.
It is not difficult to see what the authors have in mind and their appreciation of the need to move away from the stark characterisation of the debates on the use of drones in military warfare. As I interpret the discussion of the e-Partnership model, the authors are asking us to engage more fully with the role and value of complex autonomous systems, and urging us to be mindful of the need to observe and better understand the situational domain of warfare and its human dimension. I think this is a legitimate goal. The authors have undertaken substantial research before embarking on the "cognitive engineering" route, and it is right to create a decision-making framework which integrates autonomous adaptive capability across the various interpretive domains.

There is a less explicit dimension, which I hope to explore more fully with Alex and colleagues: the role of e-Partnership/cognitive engineering in auditing the limits of both humans and autonomous systems. Whether algorithmic decision making can be easily rationalised into moral and operational components is something I am still a little unsure about. I am also uncertain whether the e-Partnership model can be used to parse the seemingly irreconcilable differences that appear to exist between the "moral" and the "operational" camps. I also wonder whether those who are vehemently opposed to the use of drones in the military theatre would readily subscribe to a partnership model, in whatever form, even if it addressed the shortcomings of the technology or of the human operator. In short, is the e-Partnership model anything more than an extremely ingenious way of creating a distinction between moral and operational responsibility? My thinking on this is at best fuzzy. As I read media and press accounts of military drones, or reports from human rights organisations and lawyers, I wonder at what point we cede "responsibility" (which was not defined, and, to be fair, was not the subject of the paper) to machines. Do we need a sentient being that oversees both autonomous systems and human operators?




Sunday, 2 December 2012

Onwards!

Now that my last research paper on Smart Meters is being proofread before being sent off to the publishers, I have quite a lot of reading to catch up on. I am mulling over a seminar at the Law School on Legal and Moral Issues Raised by Military Drones. Two items of work have caught my eye: Steve Fuller's book on Humanity 2.0 and Ian Kerr's excellent Keynote here. You may well wonder how Fuller's vision links with Kerr's article, and Peter-Paul's work. I think there is something about the "technological condition" we appear to be faced with: sort of vita activa/vita contemplativa in the age of algorithms, with regard to Military Drones and International Humanitarian Law. My virtual friends (I hope) are also highlighting how antediluvian my thinking/reflection has been, so it is a real challenge to use the blog to put into operation their nuggets of wisdom.

Friday, 23 November 2012

"Toward the Human-Robot Co-Existence Society: On Safety Intelligence fo" by Yueh-Hsuan Weng, et al.

"Toward the Human-Robot Co-Existence Society: On Safety Intelligence fo" by Yueh-Hsuan Weng, et al.

Military Drones: Point, Click and Kill

There is a thoughtful post by Chris Newman here. The title provides a clue to the author's focus: 'Moralization' of Technologies - Military Drones: A Case Study.

During the next couple of days I want to undertake a literature review with this particular issue in mind: how do we articulate the legal, ethical and moral boundaries? Increasingly, the literature I have examined does not do this with the kind of clarity that aids my understanding.

There is no doubt that Chris does a good job of setting out the context in which military drones are used and the controversies raised as a consequence.

Chris’s source of inspiration is Peter-Paul Verbeek. The idea that ethics and technology are indivisible is very much a theme pursued in Moralizing Technology: Understanding and Designing the Morality of Things (2011). Many will agree that designers of technologies cannot insulate themselves from social, legal and ethical implications resulting from their artefacts. As Chris acknowledges, the policy issues raised are not easy to disentangle:

“The ‘moralization of technology’ is a complex and difficult task that requires the anticipation of mediations. In addition to the fact that the future social impacts of technologies are notoriously difficult to predict, the designers and engineers are not the only ones contributing to the materialization of mediations. The future mediating roles of technologies are also shaped by the interpretation of users and emergent characteristics of the technologies themselves (Verbeek, 2009). This means that designers of artifacts cannot simply ‘inscribe’ a certain morality into technologies but that the capacity for ‘moralizing’ a specific technology will depend on the dynamics of the interplay between the engineers, the users and the technologies themselves.”

Chris questions the viability of ‘mediation analysis’ as a heuristic and worries about the democratic deficit. He has a point. The ‘Constructive Technology Assessment’ he alludes to is seen as overcoming this shortcoming since:
“As such, all relevant actors have a stake in the moral operation of the socio-technical ensemble and therefore the democratization of the technology design process contributes to the ‘moralization of technologies’ in a broader sense (Verbeek, 2009). This is precisely what STS scholars intend to achieve by opening the black box of technology and analyzing the complex dynamics of its design. In the following, some important moral challenges with regard to military drones will be analyzed utilizing the theoretical concepts presented thus far and possible ways to address these challenges will be discussed.”
The article proceeds to set out the activities of military drones and highlights some of the ethical challenges raised by the disintermediation of warfare. Chris concludes:
"However, recalling the quote at the beginning of this paper and presuming that we do not want drone pilots making life and death decisions with the feeling that they are merely playing a video game, it appears that much work remains to be done in ‘moralizing’ drone technology design in order to promote more ethical behavior on the remote battlefield."
True, but in a later post I want to deal with the work done by Professor Gillespie on the systems engineering approach being used for autonomous unmanned aircraft. You can read his article co-authored with Robin West, Requirements for Autonomous Unmanned Air Systems set by Legal Issues (2010), published in The International C2 Journal.

Drones: The Modern Prometheus

WHEN MARY SHELLEY penned her thoughts on Frankenstein, she was not merely drawing attention to autonomous systems. Her attention was focused on creators. Drones can be regarded as an allegory of this tale. Thousands have been killed by unmanned aerial vehicles. This artefact is seen as a form of naked capitalism. It comes as not too much of a surprise that Apple has deemed it acceptable to block a software application that provides alerts to deaths caused by drone air strikes. There is now an escalating “drone race”: China has unveiled its latest military drone. It is true, of course, that “[n]ew technology does not change the moral truth.” Alexandra Gibb and Cameron Tulk have produced a field guide.
Drones also redefine the way we engage with each other on the evolving battlefield. Researchers from Stanford and NYU have produced a report, Living Under Drones, which can be consulted here. Is there a concept of a “Just Drone War”? Scholars are rightly turning their attention to the morality of drones, and not before time, as autonomous systems will be the next phase in the military prosecution of wars.
Promotional photo of Boris Karloff from The Bride of Frankenstein as Frankenstein’s monster. (Photo credit: Wikipedia)

Losing Humanity: The Case against Killer Robots

On 19 November 2012, Human Rights Watch issued a report: Losing Humanity. The report follows very much in the footsteps of a debate hosted by Human Rights Watch and Harvard Law School’s International Human Rights Clinic. The Report aims to engage the public in view of the anticipated evolution of current drone technology into fully autonomous warfare systems. It specifically attempts to analyze:
 whether the technology would comply with international humanitarian law and preserve other checks on the killing of civilians. It finds that fully autonomous weapons would not only be unable to meet legal standards but would also undermine essential non-legal safeguards for civilians. Our research and analysis strongly conclude that fully autonomous weapons should be banned and that governments should urgently pursue that end.
This is a worrying and rather depressing prognosis.
The Report takes us through autonomous systems taxonomies:
  • Human-in-the-Loop Weapons: Robots that can select targets and deliver force only with a human command;
  • Human-on-the-Loop Weapons: Robots that can select targets and deliver force under the oversight of a human operator who can override the robots’ actions; and
  • Human-out-of-the-Loop Weapons: Robots that are capable of selecting targets and delivering force without any human input or interaction.
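To keep the three categories straight, here is a rough sketch of the taxonomy as a simple decision gate. The enum values, the function and the override logic are my own illustrative assumptions rather than anything set out in the Report; the point is only to show why the third category removes every human check.

```python
# A rough sketch of the HRW taxonomy, for my own purposes only; the names
# and the authorisation logic are illustrative assumptions, not HRW's.
from enum import Enum


class HumanRole(Enum):
    IN_THE_LOOP = "in"        # force delivered only on a human command
    ON_THE_LOOP = "on"        # human oversight with the power to override
    OUT_OF_THE_LOOP = "out"   # no human input or interaction


def engagement_permitted(role: HumanRole,
                         human_command: bool,
                         human_override: bool) -> bool:
    """Whether a weapon in each category may deliver force, given the state of human input."""
    if role is HumanRole.IN_THE_LOOP:
        return human_command        # nothing happens without a positive command
    if role is HumanRole.ON_THE_LOOP:
        return not human_override   # proceeds unless a supervisor intervenes
    return True                     # out of the loop: no human check at all


# The third category is the troubling one: force may be applied
# regardless of any human command or attempted override.
assert engagement_permitted(HumanRole.OUT_OF_THE_LOOP,
                            human_command=False, human_override=True)
```

Put this way, the asymmetry is plain: for the first two categories some form of human input can stop the delivery of force, while for the third nothing can.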
The real fear concerns the third category. Left to their own devices, HRW is concerned, such systems could fuel an arms race in which we end up with William Hertling’s AI Apocalypse moving from science fiction to the military theatre. The governance concern is the lack of regulatory oversight over military strategy in this sphere. The concept of personhood and the attribution of liability are not far from HRW's thoughts:
By eliminating human involvement in the decision to use lethal force in armed conflict, fully autonomous weapons would undermine other, non-legal protections for civilians. First, robots would not be restrained by human emotions and the capacity for compassion, which can provide an important check on the killing of civilians. Emotionless robots could, therefore, serve as tools of repressive dictators seeking to crack down on their own people without fear their troops would turn on them. While proponents argue robots would be less apt to harm civilians as a result of fear or anger, emotions do not always lead to irrational killing. In fact, a person who identifies and empathizes with another human being, something a robot cannot do, will be more reluctant to harm that individual. Second, although relying on machines to fight war would reduce military casualties—a laudable goal—it would also make it easier for political leaders to resort to force since their own troops would not face death or injury. The likelihood of armed conflict could thus increase, while the burden of war would shift from combatants to civilians caught in the crossfire.
Finally, the use of fully autonomous weapons raises serious questions of accountability, which would erode another established tool for civilian protection. Given that such a robot could identify a target and launch an attack on its own power, it is unclear who should be held responsible for any unlawful actions it commits. Options include the military commander that deployed it, the programmer, the manufacturer, and the robot itself, but all are unsatisfactory. It would be difficult and arguably unfair to hold the first three actors liable, and the actor that actually committed the crime—the robot—would not be punishable. As a result, these options for accountability would fail to deter violations of international humanitarian law and to provide victims meaningful retributive justice.
HRW makes the following recommendations:
To All States
  • Prohibit the development, production, and use of fully autonomous weapons through an international legally binding instrument.
  • Adopt national laws and policies to prohibit the development, production, and use of fully autonomous weapons.
  • Commence reviews of technologies and components that could lead to fully autonomous weapons. These reviews should take place at the very beginning of the development process and continue throughout the development and testing phases.
To Roboticists and Others Involved in the Development of Robotic Weapons
  • Establish a professional code of conduct governing the research and development of autonomous robotic weapons, especially those capable of becoming fully autonomous, in order to ensure that legal and ethical concerns about their use in armed conflict are adequately considered at all stages of technological development.
What interests me particularly is the call for roboticists to assume a more proactive role. As many may know, Alan Winfield has given considerable thought to the ethics of design in this area. The EPSRC has a working draft of principles. Readers will be interested in HRW's reference to the article published in June 2011, International Governance of Autonomous Military Robots.
The authors advocated an audit trail mechanism and responsible innovation as measures to promote transparency and accountability in the fields of synthetic biology and nanotechnology. HRW regards this as a strategy that is clearly needed in the sphere of military warfare. I have yet to review the article and assess its value relative to the EPSRC Principles.