Friday, 5 April 2013

Robots: A Crisis of Expectations?

Alan Winfield is a highly respected scholar; his views are based on experience and long-held beliefs. I was particularly attracted by his recent post, titled A Crisis of Expectations, which was presented at a Robot Ethics Workshop. According to Alan:
...robotics is facing a crisis of expectations. As a community we face a number of expectation gaps - significant differences between what people think robots are and do, and what robots really are and really do, and (more seriously) might reasonably be expected to do in the near future. I will argue that there are three expectation gaps at work here: public expectations, press and media expectations and funder or stakeholder expectations, and that the combined effect of these amounts to a crisis of expectations. A crisis we roboticists need to be worried about.




Readers will discover much in Alan's erudite prosecution of his case. The echoes of "moral panic" will undoubtedly point to media affirmation of the love-hate relationship that has long plagued science and technology. I was particularly struck by the examples Alan gives, and he is right to draw the inferences he does. That said, I wondered whether there is an alternative route into Alan's case: for example, the trust deficit that surrounds any innovation that runs ahead of cultural norms and attitudes. In subsequent posts, I want to examine whether the examples and responses have their origins in our attitudes towards living in the risk society, and consequently, how the mass media, law, industry and society construct and respond to these risks (Garland, 2003; Giddens, 2002).


Alex Leveringhaus on Military Drones


Alex Leveringhaus and Tjerk de Greef have written an excellent paper that has got me thinking about the relationship between cognitive engineering, international humanitarian law and drones. Alex will be presenting the paper at BILETA. There is also a Plenary, which is a must-attend event. I thought it a good idea for a couple of us who have read the paper to engage with Alex in a series of responses.

Abstract

The paper provides a critical analysis of the arguments made in favour of as well as against operationally autonomous targeting systems. It argues that critics and advocates of these systems both raise important points. However, neither of the two camps, the paper maintains, manages to present an unequivocal case for or against autonomous targeting systems. In response, we outline a more nuanced position that combines elements from both camps. Furthermore, we propose an approach to autonomous weapons systems that transcends the current debate. This approach, we contend, has significant potential to enhance the quality of targeting decisions. 


The paper begins by making clear the distinction between moral and operational autonomy. This distinction is critical to appreciating the thrust of the argument (and it also avoids engulfing the discussion in moral and humanitarian-law issues):

Playing a major role in contemporary moral and political philosophy, the former concept denotes that an agent is capable of pursuing a conception of the good life based on his/her own reasons and motives, rather than those of another agent.iii By contrast, operational autonomy, with which we are concerned in this paper, denotes that a machine is capable of carrying out specific tasks without a human operator and without being subject to certain regulatory constraints.
Why is this distinction important? One virtue of the distinction is that it brings to the forefront the role of cognitive engineering. Without delving into the complex aspects of the concept, the authors' emphasis on its value for understanding "operational autonomy" appears to focus on the cognitive demands of the war space and their implications for the socio-technical domain incorporating autonomous and human interactions. A system is self-sufficient in the sense of being programmed, let us say, to capture information about the landscape from an aerial viewpoint. The same system may also be designed so that we end up with programs that facilitate what I would describe as overlaying machine autonomy with a human interface:

To use a contemporary example, the Rules of Engagement state that jet fighter pilots are not allowed to use the jet’s fire radar system during the first phase of the information processing loop. During subsequent phases, though, this restriction might be lifted. Similarly, operationally autonomous machines will be subject to specific restrictions at different stages of the information processing loop.

This brief overview is necessary for the conclusions reached by the authors:
Returning to the debate over the role of operational autonomy in targeting systems, we think it is fair to say that critics and advocates of operationally autonomous targeting systems (abbreviated as OATS hereinafter) have in mind systems that (1) are wholly self-sufficient with regard to all four stages of the information processing loop (Self-Sufficiency) and (2) are permitted to apply force to certain targets (Self-Direction). We call these types of targeting systems fully operationally autonomous (we refer to them as OATS+). In this case, human operators are either taken out of the loop or remain on the loop. Yet, since operational autonomy is a matter of degree and therefore highly dependent on the quality of the governing software of a system, there can be OATS that have less than full operational autonomy [see picture 1]. For the sake of simplicity, we continue to refer to less than fully operationally autonomous systems as OATS. With this conceptual background in place, let us now scrutinise some of the arguments that have been made for and against OATS+.
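The authors' conceptual framework lends itself to a small sketch. The Python below models operational autonomy as a matter of degree: which stages of the information processing loop a system may perform without an operator, and which capabilities are restricted at each stage (as in the fire-radar example above). The four stage names and the capability strings (`fire_radar`, `apply_force`) are my own illustrative assumptions; the paper does not name the stages.

```python
from enum import Enum, auto
from dataclasses import dataclass, field


class Stage(Enum):
    # Four stages of the information processing loop. The paper does not
    # name them, so an OODA-style labelling is assumed here.
    OBSERVE = auto()
    ORIENT = auto()
    DECIDE = auto()
    ACT = auto()


@dataclass
class AutonomyProfile:
    """Degree of operational autonomy: which stages run without a human
    operator, and which capabilities are restricted at each stage."""
    autonomous_stages: set                       # stages performed without an operator
    restricted: dict = field(default_factory=dict)  # Stage -> set of banned capabilities

    def may_use(self, stage, capability):
        # Stage-specific regulatory constraint, analogous to the Rules of
        # Engagement forbidding the fire radar in the first phase.
        return capability not in self.restricted.get(stage, set())

    def is_oats_plus(self):
        # OATS+: (1) self-sufficient across all four stages and
        # (2) permitted to apply force at the ACT stage (self-direction).
        return (self.autonomous_stages == set(Stage)
                and self.may_use(Stage.ACT, "apply_force"))


# A less-than-fully-autonomous OATS: autonomous surveillance, but the fire
# radar is banned in the first phase and applying force requires a human.
drone = AutonomyProfile(
    autonomous_stages={Stage.OBSERVE, Stage.ORIENT},
    restricted={Stage.OBSERVE: {"fire_radar"},
                Stage.ACT: {"apply_force"}},
)

assert not drone.may_use(Stage.OBSERVE, "fire_radar")  # restriction in force
assert drone.may_use(Stage.ORIENT, "fire_radar")       # restriction lifted later
assert not drone.is_oats_plus()                        # only partial autonomy
```

The point of the sketch is simply that "operational autonomy" here is not a binary property but a profile of permissions that can be tightened or relaxed stage by stage, which is what makes room for the authors' middle ground.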

Let us, for present purposes, accept this framing of the debate. In this part, I briefly set out the authors' middle ground, which envisages a collaborative cognitive solution. It has one clear benefit:

This move, we contend, potentially enhances situational understanding and subsequent decision-making. The notion of cooperation between human and artificial agents requires a paradigm shift, from fully autonomous weapons systems to e-Partnership-based systems.xi The concept of e-Partnerships is not entirely new. E-Partnerships are already being prototyped in the domains of health care, space missions, and naval warfare.xii For now, however, we are content with briefly outlining how e-Partnerships might work in the context of OATS.
It is not difficult to see what the authors have in mind, and their appreciation of the need to move away from the stark characterisation of the debates on the use of drones in military warfare. As I interpret the discussion of the e-Partnership model, the authors are asking us to engage more fully with the role and value of complex autonomous systems, and urging us to be mindful of the need to observe and better understand the situational domain of warfare and its human dimension. I think this is a legitimate goal. The authors have undertaken substantial research before embarking on the "cognitive engineering" route, and it seems right to create a decision-making framework which integrates autonomous adaptive capability across the various interpretive domains.

There is a less explicit dimension, which I hope to explore more fully with Alex and colleagues, namely, the role of e-Partnership/cognitive engineering in auditing the limits of both humans and autonomous systems. Whether algorithmic decision-making can be so neatly rationalised into moral and operational components is something I am still a little unsure about. I am also uncertain whether the e-Partnership model can be used to parse the seemingly irreconcilable differences between the "moral" and the "operational" camps. Nor is it obvious that those who are vehemently opposed to the use of drones in the military theatre would readily subscribe to a partnership model, in whatever form, simply because the technological or human operator's shortcomings had been removed. In short, is the e-Partnership model nothing more than an extremely ingenious way of creating a distinction between moral and operational responsibility? My thinking on this is at best fuzzy.
As I read media and press accounts of military drones, or reports from human rights organisations and lawyers, I wonder at what point we cede "responsibility" (a term the authors do not define, though, to be fair, it is not the subject of the paper) to machines. Do we need a sentient being that oversees both autonomous systems and human operators?


