Archive | Presentations – April 1

Jason Millar and AJung Moon on ‘How to Engage the Public on the Ethics and Governance of Lethal Autonomous Weapons’

Jason Millar

The ethics and governance of lethal autonomous weapons systems (LAWS)—robots that can kill without direct human intervention or oversight—are the subject of active international discussions. It is imperative that we critically examine the role and nature of public engagement intended to inform decision makers. The Martens Clause, included in the additional protocols of the Geneva Conventions, makes explicit room for the public to have a say on what is deemed permissible in matters of armed conflict, especially where new technologies are concerned. However, many measures of public opinion, using methods such as surveys and polls, have been designed in a way that is subject to bias. For example, some only consider specific drone use cases instead of general ethical aspects of the technology under consideration. This paper surveys studies that have been conducted to gauge public opinion on the use of military drones (autonomous and remotely operated), including the recent international poll conducted by the Open Roboethics initiative. By drawing on evidence from moral psychology, the authors highlight the effects that particular question framings have on the measured outcomes, and outline considerations that should be taken into account when designing and determining the applicability of public opinion measures to questions of the governance of LAWS. Such considerations can help public engagement objectives live up to the spirit of the Martens Clause.

Military drones have recently emerged as one of the most controversial new military technologies. Unmanned and often weaponised, these robotic systems can be relatively inexpensive, can patrol the skies continuously, and have the potential to do much of the work of traditional manned military aircraft without putting pilots at risk. For these reasons and others, drones have come to occupy a central role in the overall military strategy of those nations that have them.

Currently, military drones, along with various ground-based robotic weapons systems, are remotely operated, and are sometimes referred to as Remotely Operated Weapons Systems (ROWS). With ROWS, the decision to use lethal force remains a human decision. However, technology that could allow military drones to autonomously make the decision to use lethal force is under development. That is, eventually, military robots could kill without human intervention. The very real prospect of these new Lethal Autonomous Weapons Systems (LAWS) raises important ethical questions that have been taken up by the public media, governments, civil society, and the United Nations.

The decisions whether or not to build or use LAWS are a matter of democratic and humanitarian concern. International law underscores the importance of public engagement in such matters. The Martens Clause, included in the additional protocols of the Geneva Conventions, makes explicit room for the public to have a say on what is, and is not, deemed permissible in matters of armed conflict, especially where new technologies are concerned. It reads:

AJung Moon

“Recalling that, in cases not covered by the law in force, the human person remains under the protection of the principles of humanity and the dictates of the public conscience.” (Additional Protocol II to the Geneva Conventions)

Though legal scholars often disagree on how best to interpret and implement the Martens Clause, it is clear that, from the perspective of the Clause, the public is called upon to help shape international laws of armed conflict that have yet to be established. Public engagement is one way to meet the requirements set out in the Clause.

Public opinion polls have been conducted to gauge people’s attitudes towards drones. Most recently, the Open Roboethics initiative conducted one such international public opinion poll in the spirit of the Martens Clause. Prior to that poll, public surveys on the topic had mostly been limited to English-speaking, often US-based, perspectives. In addition, most of that polling focused on very specific drone use cases, often asking whether or not people supported their use in fighting terrorists.

This paper examines the various drone-related public opinion polls conducted to date and the kinds of framings used in their question design. Drawing on evidence from moral psychology, we critique the applicability of certain question types for supporting the kind of governance objectives intended by the Martens Clause. If we are to understand where the “public conscience” stands on the issue of LAWS, we need to design questions that probe individuals’ opinions about the nature of LAWS, rather than about specific use cases.

This paper sets out some general considerations that policymakers can use when designing or interpreting public opinion polls related to drones. As such, it is intended for use in ethics, law and policy.

Jason Millar and AJung Moon will present How to Engage the Public on the Ethics and Governance of Lethal Autonomous Weapons on Friday, April 1st at 11:45 AM with discussant Peter Asaro at the University of Miami Newman Alumni Center in Coral Gables, Florida.


Matthew Rueben and William D. Smart on ‘Privacy in Human-Robot Interaction: Survey and Future Work’

Matthew Rueben

This paper introduces the emerging subfield of privacy-sensitive robotics. It contains two in-depth surveys, one of the concept of privacy and one of robotics techniques that could be used for privacy protection. The survey of privacy begins with definitions, then outlines the history of privacy in philosophy and U.S. law. Next, an array of studies in the social sciences is presented before closing with a review of privacy in the technology literature. The survey of robot constraints is divided into three parts—perception constraints, navigation constraints, and manipulation constraints—and is presented in light of the need for privacy-based restrictions on robot behavior. The paper also suggests future work in privacy-sensitive robotics, including both basic research, which addresses questions relevant to any concern within privacy-sensitive robotics, and applied research, which develops and tests concrete solutions in specific scenarios.

William D. Smart

Several themes emerge. First, the word “privacy” is variously defined: there is no unanimously accepted theory of privacy, but most theorists acknowledge that “privacy” refers to more than one idea. It is therefore important for privacy-sensitive robotics researchers to give a specific definition for each privacy-related construct they use. Second, privacy research has been done in many different fields—e.g., law, psychology, economics, and computer science. Privacy-sensitive robotics researchers will benefit from connecting with several of these existing trees of research as they begin making their own contributions. Third, most privacy constructs are subjective; the same scenario might violate some people’s privacy, but not others’. Thus, user studies are necessary, followed by careful analysis, and making broad generalizations is especially dangerous in privacy research. Fourth, privacy-sensitive robotics is only just beginning to be explored by researchers, and many well-defined and useful research projects can be started right away.

Matthew Rueben and Bill Smart will present Privacy-Sensitive Robotics: Initial Survey and Future Directions on Friday, April 1st at 10:15 AM with discussant Ashkan Soltani at the University of Miami Newman Alumni Center in Coral Gables, Florida.


Madeleine Elish on ‘Moral Crumple Zones: Cautionary Tales in Human Robot Interaction’

Madeleine Elish

A prevailing rhetoric in human-robot interaction holds that automated systems will help humans do their jobs better. Robots will not replace humans, but rather work alongside and supplement human work. Even when most of a system is automated, the concept of keeping a “human in the loop” assures that human judgment will always be able to trump automation. This rhetoric emphasizes fluid cooperation and shared control. In practice, the dynamics of shared control between human and robot are more complicated, especially with respect to accountability. While control has become distributed across multiple actors, our social and legal conceptions of responsibility remain generally focused on an individual. If there is an accident, we intuitively—and our laws, in practice—want someone to take the blame.

The result of this ambiguity is that humans may emerge as “liability sponges” or “moral crumple zones.” Just as the crumple zone in a car is designed to absorb the force of impact in a crash, the human in a robotic system may become simply a component—accidentally or intentionally—that is intended to bear the brunt of the moral and legal penalties when the overall system fails.

Madeleine Elish’s paper uses the concept of “moral crumple zones” within human-machine systems as a lens through which to think about the limitations of current design paradigms and frameworks for accountability in human-robot systems. It begins by examining historical instances of “moral crumple zones” in the fields of aviation, nuclear energy and automated warfare. For instance, through an analysis of technical, social and legal histories of aviation autopilots, which can be seen as an early or proto-autonomous technology, we observe a counter-intuitive focus on human responsibility even while human action is increasingly replaced by automated control. From the perspective of both legal liability and social perception, the systems which govern autopilots and other flight management systems have remained remarkably unaccountable in the case of accidents even while these autopilot systems are primarily in control of flight.

In all of the systems discussed, the paper analyzes the dimensions of distributed control at stake while also mapping the degree to which control of, and responsibility for, an action are proportionate. It argues that an analysis of the dimensions of accountability in automated and robotic systems must contend with how and why accountability may be misapplied, and with the structural conditions that enable this misunderstanding. How do non-human actors in a system effectively deflect accountability onto other human actors? And how might future models of robotic accountability require this deflection to be controlled? At stake is the potential, ultimately, to protect against new forms of consumer and worker harm.

This paper presents the concept of the “moral crumple zone” as both a challenge to and an opportunity for the design and regulation of human-robot systems. By articulating mismatches between control and responsibility, we argue for an updated framework of accountability in human-robot systems, one that can contend with the complicated dimensions of cooperation between human and robot.

Madeleine Elish will present Moral Crumple Zones: Cautionary Tales in Human Robot Interaction on Friday, April 1st at 8:45 AM with discussant Rebecca Crootof at the University of Miami Newman Alumni Center in Coral Gables, Florida.


Call for Posters: Present Your Research at We Robot 2016

Applications are now open for the first-ever We Robot poster session – proposals will be accepted on a rolling basis until March 8, 2016.

We seek late-breaking and cutting-edge projects. The session is ideal for researchers seeking feedback on work in progress; professionals, academics, and graduate students are all encouraged to participate. At least one author of each accepted poster should plan to be present at the poster during the entire poster session on the afternoon of April 1, 2016, and for a “lightning round” of one-minute presentations.

How to propose a poster. Please send a description of up to 400 words of your completed or ongoing work, with links to any relevant photos or audiovisual material, as well as your C.V., via the conferencing system. Please be sure to choose the “Posters” track for your upload. We will accept poster proposals on a rolling basis until the March 8, 2016 deadline. Remember, at least one author of an accepted poster must register for the conference to submit the final version – but we will waive the conference fee for that person.

About the Conference. We Robot 2016 will be held in Coral Gables, Florida on April 1-2, 2016 at the University of Miami School of Law, with a special day of workshops on March 31. We Robot is the premier US conference on law and policy relating to robotics. It began at the University of Miami School of Law in 2012, and has since been held at Stanford and the University of Washington. Attendees include lawyers, engineers, philosophers, robot builders, ethicists, and regulators who are on the front lines of robot theory, design, or development.


We Robot Preliminary Program

Registration for We Robot 2016 is now open. Please check the official We Robot 2016 Program for any changes to this preliminary program.

Thursday, March 31


9:00am Check-in & breakfast

9:30am Juris Machina: Legal Aspects of Robotics
Organizer: Woody Hartzog, Cumberland School of Law at Samford University

11:00am Break

11:15am Electronic Love, Trust, & Abuse: Social Aspects of Robotics
Organizer: Kate Darling, Research Specialist, MIT Media Lab; Fellow, Harvard Berkman Center for Internet & Society; Affiliate, Institute for Ethics and Emerging Technologies

12:45pm Lunch

2:00pm “The Robot Revolution has been Rescheduled (until we can debug the sensors)”: Technical Aspects of Robotics
Organizer: William D. Smart, Robotics Program, Oregon State University

3:30pm Break

3:45pm Funding the Future: Financial Aspects of Robotics
Organizer: Dan Siciliano, Rock Center for Corporate Governance, Stanford Law School

5:15pm Wrap up

Friday, April 1st


Check-in and Breakfast



Welcome Remarks: Patricia White, University of Miami School of Law
Introductory Remarks and Introduction of Sponsors: A. Michael Froomkin, University of Miami School of Law, Program Chair


Moral Crumple Zones: Cautionary Tales in Human Robot Interaction
Madeleine Elish, The Intelligence & Autonomy Initiative, Data & Society
Discussant: Rebecca Crootof, The Information Society Project, Yale Law School

10:00am Break


Privacy in Human-Robot Interaction: Survey and Future Work
Matthew Rueben, Robotics Program, Oregon State University
William D. Smart, Robotics Program, Oregon State University
Discussant: Ashkan Soltani, White House Office of Science and Technology Policy

11:30am Break


How to Engage the Public on the Ethics and Governance of Lethal Autonomous Weapons
Jason Millar, Philosophy, Queen’s University
AJung Moon, Engineering, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
Discussant: Peter Asaro, School of Media Studies, The New School for Public Engagement, Stanford Law School, International Committee for Robot Arms Control

12:30pm Lunch


Demonstration: Legal and Ethical Implications for Robots in our Life
Olivier Guihelm, Aldebaran, SoftBank Robotics

3:45pm Break


Hot Topic: Autonomous Vehicles

Autonomous Vehicles, Predictability, and Law
Harry Surden, University of Colorado Law School
Connected Cars – Recent Legal Developments
Françoise Gilbert, Greenberg Traurig LLP, Palo Alto, California
Raffaele Zallone, IT Law, Bocconi University; ITC Committee, European Lawyers Association
Discussant: Dan Siciliano, Rock Center for Corporate Governance, Stanford Law School


Robots In American Law
Ryan Calo, University of Washington School of Law
Discussant: A. Michael Froomkin, University of Miami School of Law, Program Chair


Poster Session & Reception

7:00pm Birds of a Feather Sessions @ local restaurants

Saturday, April 2nd


Registration and Breakfast


Privacy and Healthcare Robots – An ANT analysis
Aurelia Tamo, The Chair for Information and Communication Law and Visiting Researcher, The Institute for Pervasive Computing, Swiss Federal Institute of Technology
Christoph Lutz, Institute for Media and Communications Management, University of St. Gallen
Discussant: Matt Beane, MIT Sloan School of Management

9:45am Break


Institutional Options for Robot Governance
Dr. Aaron Mannes, Apex Data Analytics Engine, HSARPA Department of Homeland Security
Discussant: Harry Surden, University of Colorado Law School

11:15am Break


Will #BlackLivesMatter to RoboCop?
Peter Asaro, School of Media Studies, The New School for Public Engagement, Stanford Law School, International Committee for Robot Arms Control
Discussant: Mary Anne Franks, University of Miami School of Law


Special Event: Autonomous Technologies and their Societal Impact
Raj Madhavan, Future Directions Committee, Institute of Electrical and Electronics Engineers; Founder & CEO, HumRobTech, LLC; Distinguished Visiting Professor of Robotics, Amrita University, India

12:30pm Lunch


Demonstration: OpenROV and OpenROV Trident: Democratizing Exploration, Conservation, and Marine Science Through Low-Cost Open-Source Underwater Robots
Andrew Thaler, OpenROV
David Lang, OpenROV

3:00pm Break


Siriously? Free Speech Rights for Artificial Intelligence
Helen Norton, University of Colorado School of Law
Toni Massaro, University of Arizona James E. Rogers College of Law
Discussant: Margot E. Kaminski, Ohio State University

4:15pm Break


What do We Really Know About Robots and the Law?
William D. Smart, Robotics Program, Oregon State University
Discussant: Ian Kerr, University of Ottawa, Faculty of Law, Faculty of Medicine, and Department of Philosophy.


Final Remarks: A. Michael Froomkin, University of Miami School of Law

All events April 1-2 at University of Miami Newman Alumni Center except Birds of a Feather Sessions.

Workshops March 31 will be held at the University of Miami School of Law.

You can register for the main conference only, or for both the conference and the workshops.
