Archive | February, 2016

Aaron Mannes on ‘Institutional Options for Robot Governance’

Aaron Mannes

As robots change daily life and commerce, governments will also need to change in response to this new technological challenge. This paper examines the kinds of government institutions U.S. federal policy-makers will need in order to develop and implement policy for the revolution in robotics. (The institutions that will be established after the robot revolution to govern humanity will be discussed in a subsequent paper.)

Broadly, the American people will want their government to support research in robotics, regulate robotics, manage robotic crises (such as dangerous autonomous behavior), and help society adapt to the broader changes wrought by robotics. This paper, using the organizational theory and bureaucratic politics paradigms, provides a menu of institutional options for dealing with this emerging technology.

Aaron Mannes will present Institutional Options for Robot Governance on Saturday, April 2nd at 10:00 AM with discussant Harry Surden at the University of Miami Newman Alumni Center in Coral Gables, Florida.

Aurelia Tamò and Christoph Lutz on ‘Privacy and Healthcare Robots – An ANT Analysis’

Aurelia Tamò

Artificial intelligence and robots reach higher capacity levels every year and are increasingly prevalent. Robots are already heavily used in industrial settings, but increasingly also in healthcare, for service tasks, and in households. Social robots register our habits and attitudes, affecting our sense of intimacy, privacy, bonding, and emotional support. Studies in the field of human-robot interaction have shown that humans tend to anthropomorphize social robots, which substantially increases the pervasiveness of such technology. In addition, robots by definition possess real-life agency: they not only collect and process information, they also act upon it by physically reaching out into the world. This further increases their pervasiveness and creates the potential for physical damage. With such real-life agency comes an unprecedented potential for access to private spaces and for surveillance. Taken together, and coupled with a lack of awareness of how such technology works, these aspects threaten to endanger consumers’ privacy and to substantially limit their control of sensitive data (such as emotional states, health information, and intimate relationships) when they interact with robots. In sum, the privacy implications of social robots are far-reaching and concern both informational and physical privacy.

Christoph Lutz

This article addresses the topic of healthcare robots and privacy. The choice of healthcare robots comes from the fact that they often deal with extremely sensitive information and very vulnerable population groups: elderly and/or severely ill individuals. In this sense, they present a “worst case scenario” for privacy, where potential privacy intrusions are especially severe. The authors use actor-network theory (ANT) to shed light on the privacy implications of healthcare robots from a specific theoretical point of view. ANT is a descriptive, constructivist approach that takes into account the relationality of technology and the social, as well as the agency of objects, concepts, and ideas. It has been applied to complex technological innovations, such as e-health systems. The authors use some of the main concepts of ANT (actants, translations, tokens/quasi-objects, punctualization, and the obligatory passage point) to “map” the privacy ecosystem in robotic healthcare technology, thereby analyzing the complex interplay of robots and humans in that context.

Aurelia Tamò and Christoph Lutz will present Privacy and Healthcare Robots – An ANT Analysis on Saturday, April 2nd at 8:30 AM with discussant Matt Beane at the University of Miami Newman Alumni Center in Coral Gables, Florida.

Ryan Calo on ‘Robots in American Law’

Ryan Calo

“Robots again.” Thus begins Judge Alex Kozinski’s 1997 dissent from the Ninth Circuit’s decision not to rehear the case of Wendt v. Host International en banc. “Robots” because Wendt involved an allegation by the actors who played Cliff and Norm on the television show Cheers that a bar violated their rights of publicity by creating animatronic versions of these characters. “Again” because, just four years earlier, the Ninth Circuit permitted another suit to go forward in which Vanna White sued Samsung over an advertisement featuring her robot replica.

Robotics feels new and exciting. And it is: the field has seen enormous advancement and investment in recent years. But robots have also been with us for decades. And like practically any other artifact, robots have been involved in legal disputes.

These disputes vary widely, as does the role of robots themselves. Often the involvement is incidental: perhaps a robot figures into a movie plot that results in a copyright claim or a contract dispute arises over robotic equipment.

Other times, however, the robot matters. It could be a dispute at maritime law over who first discovered a shipwreck; a question of whether a robot represents something “animate” for purposes of tariff schedules; or an issue of whether an all-robot band is “performing” for purposes of an entertainment tax on food and beverage service. In these real cases and others, courts have already begun to grapple with the arguably unique issues robots tend to raise in society.

This project canvasses over fifty years of state and federal case law involving robots or close analogs in an effort to predict how courts will react to the mainstreaming of robotics taking place today. The research adds clarity to academic and policy debates in at least three ways:

First, the project retroactively tests one or more theses regarding the challenges robots are likely to pose for law and legal institutions.

Second, the project tends to refute the view that the field of robotics law has no advanced inkling of how law will react to robots. Unlike cyberlaw in the 1990s, which had little prior experience with the Internet to draw upon, American society in 2015 already has decades of experience with robots in some sense.

Third, the project reveals a common mental model judges appear to possess around robots, which is that a robot is capable of acting only as exactly directed. If this view were ever true, it no longer is. Disabusing jurists of the idea that robots are incapable of spontaneity or other, human-like qualities is crucial to the development of a satisfying robotics law and policy.

Ryan Calo will present Robots in American Law on Friday, April 1st at 4:30 PM with discussant Michael Froomkin at the University of Miami Newman Alumni Center in Coral Gables, Florida.

Françoise Gilbert and Raffaele Zallone on ‘Connected Cars – Recent Legal Developments’

This paper looks at recent developments in three areas: (1) regulation; (2) privacy and data protection; and (3) liability. Examples of developments affecting the connected car / intelligent car market in 2015 include:

1 – Regulatory Issues

USA

Françoise Gilbert

California, the District of Columbia, Florida, Michigan, and Nevada have laws that allow experimental autonomous vehicles to be operated for testing purposes, provided that an experienced driver is at the wheel. The District of Columbia has authorized autonomous cars more broadly, without limiting their use to testing.

Europe
In January 2015, Germany announced that the A9 highway connecting Munich to Berlin would be equipped with technology that allows autonomous cars to communicate with other vehicles. Finland is preparing an amendment to its Road Traffic Act to allow autonomous vehicles to be used in certain places and at certain times.

eCall
In April 2015, the European Parliament adopted rules making the “eCall” system, which automatically dials the single European emergency number (112), mandatory in new vehicles. Starting in spring 2018, the eCall system will be installed in vehicles to automatically alert emergency services to serious road accidents. It will allow road-safety services to decide immediately on the type and number of emergency vehicles needed, helping them arrive faster, saving lives, reducing the severity of injuries, and cutting the cost of traffic jams.
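
To make the mechanism concrete, here is a minimal sketch of the kind of crash payload an in-vehicle system might assemble alongside the automatically dialed call. The field names, JSON encoding, and `build_msd` helper are illustrative assumptions; the actual eCall Minimum Set of Data is defined by European standards (EN 15722) and is more extensive.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class MinimumSetOfData:
    """Rough, illustrative stand-in for an eCall-style crash payload."""
    timestamp: float         # when the crash was detected (Unix epoch)
    latitude: float          # last known GNSS position of the vehicle
    longitude: float
    heading_degrees: float   # direction of travel, to help responders locate the car
    vin: str                 # vehicle identification number
    automatic_trigger: bool  # True if fired by crash sensors, False if triggered manually

def build_msd(lat: float, lon: float, heading: float, vin: str) -> str:
    """Assemble the payload transmitted alongside the automatic emergency call."""
    msd = MinimumSetOfData(
        timestamp=time.time(),
        latitude=lat,
        longitude=lon,
        heading_degrees=heading,
        vin=vin,
        automatic_trigger=True,
    )
    return json.dumps(asdict(msd))

if __name__ == "__main__":
    # Example: crash sensors fire on a motorway near Munich.
    print(build_msd(48.1351, 11.5820, 92.0, "WVWZZZ1JZXW000001"))
```

Even this toy payload makes the privacy stakes visible: location, heading, and the VIN are exactly the kinds of data whose collection the next section interrogates.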

2 – Data Privacy and Security Issues

Raffaele Zallone

Personal Data?
To what extent are data collected from a vehicle “personal data,” i.e., attributable to a specific individual? And how can operators comply with basic privacy principles? Assuming that some of the data collected from or through the intelligent vehicle qualify as “personal data,” numerous issues arise, such as the following (a schematic sketch follows the list):

  1. Notice: How to inform individuals of the nature of the collection, processing or dissemination of personal data?
  2. Consent: How can the driver and the passengers express their consent or objection to the collection of data produced by their vehicle?
  3. Choice: How can individuals control whether or when their personal data is collected or used?
  4. Access by third parties: When and in which circumstances may the personal data be shared with third parties or reused for purposes other than the original purpose?
  5. Security: How will the security of personal data be protected when in use, in storage or in transit?
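
As a way of grounding the first four questions, here is a minimal sketch of a per-person privacy-preference record a connected-car platform might keep. The schema, category names, and `allow` check are hypothetical illustrations, not any real standard or product API.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class PrivacyPreferences:
    """Hypothetical per-person record tying collection to the principles above."""
    subject_id: str                                         # driver or passenger
    notice_acknowledged: bool = False                       # 1. Notice shown and acknowledged?
    consent: Dict[str, bool] = field(default_factory=dict)  # 2./3. Consent and choice, per data category
    third_party_sharing: bool = False                       # 4. May data be shared beyond the original purpose?

    def allow(self, category: str) -> bool:
        """Permit collection only after notice plus an explicit opt-in."""
        return self.notice_acknowledged and self.consent.get(category, False)

prefs = PrivacyPreferences(subject_id="driver-42")
prefs.notice_acknowledged = True
prefs.consent["location"] = True       # opted in to location collection
prefs.consent["biometrics"] = False    # opted out of in-cabin biometrics

print(prefs.allow("location"))     # True
print(prefs.allow("biometrics"))   # False
```

The fifth question, security, is a property of the whole data pipeline rather than of any single record; it is taken up below.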

What will be the effect of the adoption of the EU General Data Protection Regulation? Location information could pose a significant problem: it could reveal intimate details about a person, such as a medical condition.

Security
Connected vehicles and intelligent vehicles rely on a vast ecosystem of specialized entities (vendors, service providers, outsourcers, hosting companies, and internet service providers) that furnish the content, data, and connections necessary for the vehicle to move safely, interact with traffic, suggest or make decisions, and perform its primary functions. How can data security be ensured across this ecosystem?
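
As one narrow illustration of that question, the sketch below authenticates vehicle telemetry with an HMAC so that tampering by any intermediary in the ecosystem is detectable. The shared-secret model, payload fields, and function names are assumptions for illustration; in practice confidentiality would additionally require encryption in transit (e.g., TLS).

```python
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key-rotate-me"  # assumption: vehicle and backend share a secret

def sign_telemetry(payload: dict) -> dict:
    """Attach an HMAC-SHA256 tag so the receiver can detect tampering in transit."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"body": payload, "tag": tag}

def verify_telemetry(message: dict) -> bool:
    """Recompute the tag and compare in constant time."""
    body = json.dumps(message["body"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = sign_telemetry({"speed_kmh": 88, "lat": 25.72, "lon": -80.28})
print(verify_telemetry(msg))   # True
msg["body"]["speed_kmh"] = 20  # tampering by an intermediary
print(verify_telemetry(msg))   # False
```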

3 – Liability Issues

In the US, the regime of liability for defective products (product liability law) is very complex. It involves concepts from both contract law and tort law. Contract law is involved, for example, in the context of warranty breaches. Tort law often uses the concept of “negligence”: the plaintiff must show that the defendant was negligent, for example, in the design or testing of the product. In some circumstances, US law uses the concept of “strict liability,” which can make the manufacturer liable even absent negligence.

The paper also examines recent physical accidents involving test vehicles and the related litigation (e.g., class action suits).

Françoise Gilbert and Raffaele Zallone will present Connected Cars – Recent Legal Developments on Friday, April 1st at 3:00 PM with discussant Dan Siciliano at the University of Miami Newman Alumni Center in Coral Gables, Florida.

Harry Surden and Mary-Anne Williams on ‘Autonomous Vehicles, Predictability, and Law’

Harry Surden

Fully autonomous or “self-driving” automobiles are vehicles “which can drive themselves without human supervision or input.” Because of improvements in driving safety and efficiency, fully autonomous vehicles are likely to become an increasing presence in our physical environment in the 5–15 year time frame. An important point is that, for the first time, people will be moving throughout a physical environment shared not just with other people (e.g. pedestrians), machines controlled by other people (e.g. automobiles), or constrained automated machines (e.g. elevators), but also with computer-controlled, self-directed systems that have freedom of movement and the ability to direct their own activities. Free ranging, computer-directed autonomous movement is a novel phenomenon that is likely to challenge certain basic assumptions embedded in our existing legal structure.

Today, a great deal of physical harm that might otherwise occur is likely avoided through humanity’s collective ability to predict the movements of others in our nearby physical environment. In anticipating the behavior of others in this way, we employ, in part, what psychologists call a “theory of mind.” A “theory of mind” refers to our ability to extrapolate from our own internal mental states and project them onto others in order to estimate what others are thinking, feeling, or likely to do. The internal cognitive mechanisms involved in theory-of-mind assessment allow us to make instantaneous, unconscious judgments about the likely actions of those around us, and therefore, to keep ourselves safe.

Mary-Anne Williams

Problematically, the movements of autonomous cars (and other autonomous moving systems like robots and drones) tend to be less predictable to ordinary people than the comparable movements of devices controlled by humans. The core theory-of-mind mechanisms that allow us to accurately model the minds of other people and interpret their communicative signals of attention and intention will be challenged in the context of non-human, autonomous moving entities such as self-driving cars. The argument is not that autonomous vehicles are less safe than human-driven cars, nor that autonomous vehicles are inherently unpredictable systems. To the contrary, most experts expect autonomous driving to be safer than human driving, and the behavior of autonomous vehicles is quite predictable to the engineers who designed them. Rather, the argument is that we must focus upon making the movements of autonomous vehicles more predictable to the ordinary people, such as pedestrians, who will share their physical environment. To the extent that certain areas of law are concerned with avoiding harm (e.g. tort and regulatory law), this potential diminishment in predictability will bring new challenges that should be addressed.
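
To give the predictability argument a concrete shape, the sketch below imagines an autonomous vehicle announcing its imminent maneuver in a human-legible form (say, on an external display), standing in for the eye contact and gestures a pedestrian would normally read off a human driver. The maneuver enumeration, messages, and `announce` helper are hypothetical, not any deployed interface.

```python
from enum import Enum, auto

class Maneuver(Enum):
    YIELDING_TO_PEDESTRIAN = auto()
    TURNING_LEFT = auto()
    TURNING_RIGHT = auto()
    PROCEEDING_STRAIGHT = auto()

# Hypothetical mapping from planner state to a human-legible cue.
SIGNALS = {
    Maneuver.YIELDING_TO_PEDESTRIAN: "WAITING FOR YOU TO CROSS",
    Maneuver.TURNING_LEFT: "TURNING LEFT",
    Maneuver.TURNING_RIGHT: "TURNING RIGHT",
    Maneuver.PROCEEDING_STRAIGHT: "PROCEEDING",
}

def announce(planned: Maneuver) -> str:
    """Return the message an external display would show before the vehicle
    begins the maneuver, giving bystanders time to form expectations."""
    return SIGNALS[planned]

print(announce(Maneuver.YIELDING_TO_PEDESTRIAN))
```

The design point is that predictability to bystanders is a property that can be engineered for deliberately, not merely a side effect of safe planning.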

Harry Surden will present Autonomous Vehicles, Predictability, and Law on Friday, April 1st at 3:00 PM with discussant Dan Siciliano at the University of Miami Newman Alumni Center in Coral Gables, Florida.

Jason Millar and AJung Moon on ‘How to Engage the Public on the Ethics and Governance of Lethal Autonomous Weapons’

Jason Millar

The ethics and governance of lethal autonomous weapons systems (LAWS)—robots that can kill without direct human intervention or oversight—are the subject of active international discussions. It is imperative that we critically examine the role and nature of public engagement intended to inform decision makers. The Martens Clause, included in the additional protocols of the Geneva Conventions, makes explicit room for the public to have a say on what is deemed permissible in matters of armed conflict, especially where new technologies are concerned. However, many measures of public opinion, using methods such as surveys and polls, have been designed in a way that is subject to bias. For example, some only consider specific drone use cases instead of general ethical aspects of the technology under consideration. This paper surveys studies that have been conducted to gauge public opinion on the use of military drones (autonomous and remotely operated), including the recent international poll conducted by the Open Roboethics initiative. By drawing on evidence from moral psychology, the authors highlight the effects that particular question framings have on the measured outcomes, and outline considerations that should be taken into account when designing and determining the applicability of public opinion measures to questions of the governance of LAWS. Such considerations can help public engagement objectives live up to the spirit of the Martens Clause.

Military drones have recently emerged as one of the most controversial new military technologies. Unmanned and often weaponised, these robotic systems can be relatively inexpensive, can patrol the skies continuously, and have the potential to do much of the work of traditional manned military aircraft without putting pilots at risk. For these reasons and others, drones have come to occupy a central role in the overall military strategy of those nations that have them.

Currently, military drones, along with other robotic weapons systems such as ground-based platforms, are remotely operated, and are sometimes referred to as Remotely Operated Weapons Systems (ROWS). With ROWS, the decision to use lethal force remains a human decision. However, technology that could support the ability of military drones to autonomously make the decision to use lethal force is under development. That is, eventually, military robots could kill without human intervention. The very real prospect of these new Lethal Autonomous Weapons Systems (LAWS) raises important ethical questions that have been taken up by the public media, governments, civil society, and the United Nations.

The decisions whether or not to build or use LAWS are a matter of democratic and humanitarian concern. International law underscores the importance of public engagement in such matters. The Martens Clause, included in the additional protocols of the Geneva Conventions, makes explicit room for the public to have a say on what is, and is not, deemed permissible in matters of armed conflict, especially where new technologies are concerned. It reads:

AJung Moon

“Recalling that, in cases not covered by the law in force, the human person remains under the protection of the principles of humanity and the dictates of the public conscience.” (Additional Protocol II to the Geneva Conventions)

Though legal scholars often disagree on how best to interpret and implement the Martens Clause, from the perspective of the Clause the public is called upon to help shape international laws of armed conflict that have yet to be established. Public engagement is one way to support the requirements set out in the Clause.

Public opinion polls have been conducted to gauge people’s attitudes towards drones. Most recently, the Open Roboethics initiative conducted one such international public opinion poll in the spirit of the Martens Clause. Prior to that poll, survey work on the topic had mostly been limited to English-speaking, often US-based, perspectives. In addition, most of that polling focused on very specific drone use cases, often asking whether or not people supported their use in fighting terrorists.

This paper examines the various drone-related public opinion polls that have been conducted to date, and examines the kinds of framings used in their question design. By drawing on evidence from moral psychology, we critique the applicability of certain question types for supporting the kind of governance objectives intended by the Martens Clause. If we are to understand where the “public conscience” stands on the issue of LAWS, we need to design questions that probe individuals’ opinions about the nature of LAWS (for example, whether a machine should ever be permitted to decide to use lethal force), rather than their specific use cases (for example, whether drones should be used against terrorists).

This paper sets out some general considerations that policymakers can use when designing or interpreting public opinion polls related to drones. As such, it is intended for use in ethics, law, and policy discussions.

Jason Millar and AJung Moon will present How to Engage the Public on the Ethics and Governance of Lethal Autonomous Weapons on Friday, April 1st at 11:45 AM with discussant Peter Asaro at the University of Miami Newman Alumni Center in Coral Gables, Florida.

Matthew Rueben and William D. Smart on ‘Privacy in Human-Robot Interaction: Survey and Future Work’

Matthew Rueben

This paper introduces the emerging subfield of privacy-sensitive robotics. It contains two in-depth surveys, one of the concept of privacy and one of robotics techniques that could be used for privacy protection. The survey of privacy begins with definitions, then outlines the history of privacy in philosophy and U.S. law. Next, an array of studies in the social sciences is presented, before closing with a review of privacy in the technology literature. The survey of robotics techniques is divided into three parts—perception constraints, navigation constraints, and manipulation constraints—and is presented in light of a need for privacy-based restrictions on robot behavior. The paper also suggests future work in privacy-sensitive robotics, including both basic research, which addresses questions relevant to any concern within privacy-sensitive robotics, and applied research, which develops and tests concrete solutions in specific scenarios.
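
For a flavor of what a navigation constraint might look like, here is a minimal sketch that filters a robot’s candidate waypoints against rooms a user has marked private. The map representation, room names, and `filter_waypoints` helper are hypothetical illustrations, not techniques taken from the survey itself.

```python
from typing import List, Tuple

# Hypothetical floor plan: each room is an axis-aligned box
# (x_min, y_min, x_max, y_max) in map coordinates.
ROOMS = {
    "living_room": (0.0, 0.0, 5.0, 4.0),
    "bedroom": (5.0, 0.0, 8.0, 4.0),
}
PRIVATE_ROOMS = {"bedroom"}  # marked off-limits by the user

def in_room(point: Tuple[float, float],
            box: Tuple[float, float, float, float]) -> bool:
    x, y = point
    x0, y0, x1, y1 = box
    return x0 <= x <= x1 and y0 <= y <= y1

def filter_waypoints(path: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    """Drop waypoints inside user-designated private rooms: a crude
    navigation constraint of the kind the survey discusses."""
    return [
        p for p in path
        if not any(in_room(p, ROOMS[r]) for r in PRIVATE_ROOMS)
    ]

path = [(1.0, 1.0), (6.0, 2.0), (4.0, 3.0)]
print(filter_waypoints(path))  # the waypoint inside the bedroom is removed
```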

William D. Smart

Several themes emerge. First, that the word “privacy” is variously defined: There is no unanimously accepted theory of privacy, but most theorists acknowledge that “privacy” refers to more than one idea. Hence, it is very important for privacy-sensitive robotics researchers to give a specific definition for each privacy-related construct being used. Second, we see that privacy research has been done in many different fields—e.g., law, psychology, economics, and computer science. Privacy-sensitive robotics researchers will benefit from connecting with several of these existing trees of research as they begin making their own contributions. Third, most privacy constructs are subjective; the same scenario might violate some people’s privacy, but not others’. Thus, user studies are necessary, followed by careful analysis. Making broad generalizations is especially dangerous in privacy research. Fourth, privacy-sensitive robotics is only just beginning to be explored by researchers, and it appears that many well-defined and useful research projects can be started right away.

Matthew Rueben and Bill Smart will present Privacy-Sensitive Robotics: Initial Survey and Future Directions on Friday, April 1st at 10:45 AM with discussant Ashkan Soltani at the University of Miami Newman Alumni Center in Coral Gables, Florida.

Madeleine Elish on ‘Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction’

Madeleine Elish

A prevailing rhetoric in human-robot interaction is that automated systems will help humans do their jobs better. Robots will not replace humans, but rather work alongside and supplement human work. Even when most of a system is automated, the concept of keeping a “human in the loop” assures that human judgment will always be able to trump automation. This rhetoric emphasizes fluid cooperation and shared control. In practice, the dynamics of shared control between human and robot are more complicated, especially with respect to issues of accountability. While control has become distributed across multiple actors, our social and legal conceptions of responsibility still generally center on an individual. If there is an accident, we intuitively—and our laws, in practice—want someone to take the blame.

The result of this ambiguity is that humans may emerge as “liability sponges” or “moral crumple zones.” Just as the crumple zone in a car is designed to absorb the force of impact in a crash, the human in a robotic system may become simply a component—accidentally or intentionally—that is intended to bear the brunt of the moral and legal penalties when the overall system fails.

Madeleine Elish’s paper uses the concept of “moral crumple zones” within human-machine systems as a lens through which to think about the limitations of current design paradigms and frameworks for accountability in human-robot systems. It begins by examining historical instances of “moral crumple zones” in the fields of aviation, nuclear energy, and automated warfare. For instance, through an analysis of the technical, social, and legal histories of aviation autopilots, which can be seen as an early or proto-autonomous technology, we observe a counter-intuitive focus on human responsibility even while human action is increasingly replaced by automated control. From the perspective of both legal liability and social perception, the makers of autopilots and other flight management systems have remained remarkably unaccountable in cases of accidents, even while these systems are primarily in control of flight.

In all of the systems discussed, the paper analyzes the dimensions of distributed control at stake, while also mapping the degree to which control of, and responsibility for, an action are proportionate. It argues that an analysis of the dimensions of accountability in automated and robotic systems must contend with how and why accountability may be misapplied, and how structural conditions enable this misunderstanding. How do non-human actors in a system effectively deflect accountability onto human actors? And how might future models of robotic accountability require this deflection to be controlled? At stake is the potential, ultimately, to protect against new forms of consumer and worker harm.

This paper presents the concept of the “moral crumple zone” as both a challenge to and an opportunity for the design and regulation of human-robot systems. By articulating mismatches between control and responsibility, we argue for an updated framework of accountability in human-robot systems, one that can contend with the complicated dimensions of cooperation between human and robot.

Madeleine Elish will present Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction on Friday, April 1st at 8:45 AM with discussant Rebecca Crootof at the University of Miami Newman Alumni Center in Coral Gables, Florida.

Call for Posters: Present Your Research at We Robot 2016

Applications are now open for the first-ever We Robot poster session – proposals will be accepted on a rolling basis until March 8, 2016.

We seek late-breaking and cutting-edge projects. This session is ideal for researchers seeking feedback on a work in progress; professionals, academics, and graduate students are all encouraged to participate. At least one of the authors of each accepted poster should plan to be present at the poster during the entire poster session on the afternoon of April 1, 2016, and for a “lightning round” of one-minute presentations.

How to propose a poster. Please send a description of up to 400 words of your completed or ongoing work, with links to any relevant photos or audiovisual information, as well as your C.V., via the conferencing system at https://cmt.research.microsoft.com/ROBOT2016/. Please be sure to choose the “Posters” track for your upload. Submissions are due by March 8, 2016, and we will accept poster proposals on a rolling basis. Remember, at least one author of an accepted poster must register for the conference to submit the final version – but we’ll waive the conference fee for that person.

About the Conference. We Robot 2016 will be held in Coral Gables, Florida on April 1-2, 2016 at the University of Miami School of Law, with a special day of workshops on March 31. We Robot is the premier US conference on law and policy relating to robotics. It began at the University of Miami School of Law in 2012, and has since been held at Stanford and the University of Washington. Attendees include lawyers, engineers, philosophers, robot builders, ethicists, and regulators who are on the front lines of robot theory, design, or development. The We Robot 2016 conference web site is http://robots.law.miami.edu/2016.
