Deadline for We Robot Poster Proposals Extended to March 11

Applications are now open for the first-ever We Robot poster session – and the deadline for proposals has been extended to March 11, 2016 due to planned maintenance and downtime of the submission site scheduled for, wouldn’t you know it, March 8, the original deadline. A list of posters that have already been accepted is on our program – but there’s still room for more.

We seek late-breaking and cutting-edge projects. This session is ideal for researchers to get feedback on a work in progress; professionals, academics, and graduate students are all encouraged to participate. At least one author of each accepted poster should plan to be present at the poster during the entire poster session on the afternoon of April 1, 2016, and for a “lightning round” of one-minute presentations.

How to propose a poster session. Please send a description of up to 400 words of what you have done or are doing, with links to any relevant photos or audiovisual information, as well as your C.V., via the conferencing system at https://cmt.research.microsoft.com/ROBOT2016/. Please be sure to choose the “Posters” track for your upload. Submissions are due by March 11, 2016, and we’ll accept poster proposals on a rolling basis until then. Remember, at least one author of an accepted poster must register for the conference to submit the final version – but we’ll waive the conference fee for that person.

About the Conference. We Robot 2016 will be held in Coral Gables, Florida on April 1-2, 2016 at the University of Miami School of Law, with a special day of workshops on March 31. We Robot is the premier US conference on law and policy relating to robotics. It began at the University of Miami School of Law in 2012 and has since been held at Stanford and the University of Washington. Attendees include lawyers, engineers, philosophers, robot builders, ethicists, and regulators who are on the front lines of robot theory, design, or development. The We Robot 2016 conference web site is https://robots.law.miami.edu/2016.

Helen Norton and Toni Massaro on ‘Siriously? Free Speech Rights for Artificial Intelligence’

Computers with communicative artificial intelligence (AI) are pushing First Amendment theory and doctrine in profound and novel ways. They are becoming increasingly self-directed and corporeal in ways that may one day make it difficult to call the communication “ours” versus “theirs.” This, in turn, invites questions about whether the First Amendment ever will (or ever should) protect AI speech or speakers even absent a locatable and accountable human creator. The authors explain why current free speech theory and doctrine pose surprisingly few barriers to this counterintuitive result; their elasticity suggests that speaker humanness may no longer be a logically essential part of the First Amendment calculus.

The authors also observe, however, that free speech theory and doctrine provide a basis for regulating, as well as protecting, the speech of nonhuman speakers to serve the interests of their human listeners, should strong AI ever evolve to this point. Finally, they note that the futurist implications they describe are possible, but not inevitable. Moreover, contemplating these outcomes for AI speech may inspire rethinking of the free speech theory and doctrine that make them plausible.

Helen Norton and Toni Massaro will present Siriously? Free Speech Rights for Artificial Intelligence on Saturday, April 2nd at 3:15 PM with discussant Margot E. Kaminski at the University of Miami Newman Alumni Center in Coral Gables, Florida.

Peter Asaro on ‘Will #BlackLivesMatter to RoboCop?’

This paper examines the possible future application of robotics to policing, on the assumption that such systems will be controlled by programmed computers rather than cyborgs. In particular, it examines the legal and moral requirements for the use of force by police, and asks whether robotic systems of the foreseeable future could meet these requirements, or whether those laws may need to be revised in light of robotic technologies, as some have argued.

Beyond this, the paper considers the racial dimensions of the use of force by police, and how such automation might affect the discriminatory nature of police violence. Many people believe that technologies are politically neutral, and might expect a future RoboCop to be similarly neutral, and consequently to lack racial prejudice and bias. In this way, RoboCop might be seen as a technological solution to racist policing. Yet many scholars have argued that technologies embody the values of the society that produces them, and often amplify that society’s power disparities and biases. On this view, RoboCop might be seen as an even more powerful, dangerous, and unaccountable embodiment of racist policing.

The paper proceeds by examining the problems of racist policing from a number of perspectives. These include the national and international legal standards for the use of force by police, the guidelines issued by the UN Human Rights Council, the ICRC, and Amnesty International, and the legal implications of designing robotic systems to use violent and lethal force, whether remotely operated or autonomous.

From another perspective, the paper considers the ways in which digital technologies are not racially neutral, but can actually embody forms of racism by design, both intentionally and unintentionally. This includes simple examples such as automatic faucets that fail to recognize dark-skinned hands, the intentional tuning of color film stock to give greater dynamic range to white faces at the expense of black faces, and the numerous challenges of adapting facial recognition technologies to racially diverse faces. In other words, how might automated technologies that are intended to treat everyone equally fail to do so? And further, how might automated technologies be expected to make special considerations for particularly vulnerable populations? The paper also considers the challenges of recognizing individuals in need of special consideration during police encounters, such as the elderly, children, pregnant women, people experiencing health emergencies, the mentally ill, and the physically handicapped, including the deaf, the blind, and those using wheelchairs, canes, prosthetics, and other medical aids and devices.
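To make the bias-by-design point concrete, here is a deliberately simplified sketch – a hypothetical illustration, not drawn from the paper – of how a single design parameter can encode the population it was calibrated on. An infrared faucet triggers when enough emitted light reflects back; if its threshold was tuned only on light-skinned test hands, darker skin can fall below it.

```python
# Hypothetical toy model of unintentional bias by design (illustrative only).
# An IR faucet turns on when the reflected signal clears a fixed threshold.

def faucet_triggers(reflected_signal: float, threshold: float = 0.5) -> bool:
    """Turn the water on when enough emitted IR light bounces back."""
    return reflected_signal >= threshold

EMITTED_IR = 1.0  # normalized strength of the emitted infrared pulse

# The reflected signal scales with skin reflectance (albedo). If the 0.5
# threshold was "calibrated" only on high-reflectance hands, it silently
# fails for lower-reflectance skin -- no malice required, just a narrow
# test population baked into one number.
for label, albedo in [("lighter skin", 0.65), ("darker skin", 0.35)]:
    signal = EMITTED_IR * albedo
    state = "water on" if faucet_triggers(signal) else "no response"
    print(f"{label}: reflected signal {signal:.2f} -> {state}")
```

In this toy model the faucet applies one rule to everyone, yet responds only to the population it was tuned on; the remedy is not a different rule for different users but a calibration set that represents everyone.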

The paper also considers the systemic nature of racism. The automation of policing might fail to address systemic racism even if it succeeded in eliminating racial bias in individual police encounters. In particular, the paper considers the likely applications of data-driven policing. Given the efficiency aims of automation, automated patrols would likely be shaped by data from previous police calls and encounters. As is already the case with human policing, robotic police would likely be deployed more heavily in the communities of racial minorities and the poor and disenfranchised, where they would generate more interactions and more arrests, and thus provide data to further justify a greater robotic police presence in those communities. That is, automated policing could easily reproduce the racist effects of existing practices and their explicit and implicit forms of racism.
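The feedback loop described above can be made concrete with a small simulation. The following sketch is a hypothetical toy model, not from the paper: two neighborhoods have identical true offense rates, but one starts with slightly more recorded arrests, and patrols are allocated in proportion to past arrest data.

```python
# Toy model of a data-driven patrol feedback loop (illustrative only).
# Both neighborhoods have the SAME true offense rate; "B" merely starts
# with a few more recorded arrests in the historical data.
import random

TRUE_OFFENSE_RATE = 0.3  # chance a patrol-hour yields an arrest, everywhere
TOTAL_PATROLS = 100      # patrol-hours allocated each year

random.seed(1)
recorded = {"A": 10, "B": 12}  # historical arrest counts

for year in range(1, 11):
    total = sum(recorded.values())
    allocation = {
        # The "efficiency" heuristic: patrol where past data shows arrests.
        hood: round(TOTAL_PATROLS * count / total)
        for hood, count in recorded.items()
    }
    for hood, patrols in allocation.items():
        # Arrests are only recorded where patrols are present, so the data
        # measures patrol placement, not the underlying offense rate.
        recorded[hood] += sum(
            random.random() < TRUE_OFFENSE_RATE for _ in range(patrols)
        )
    share_b = recorded["B"] / sum(recorded.values())
    print(f"year {year:2d}: B receives {allocation['B']} patrols, "
          f"holds {share_b:.0%} of recorded arrests")
```

Because new data arrives only where patrols are sent, the initial disparity is locked in: the skewed record keeps justifying the skewed allocation, even though neither neighborhood offends more. Any rule that allocated patrols more aggressively toward high-arrest areas would amplify the gap rather than merely preserve it.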

Finally, the paper reflects on the need for greater community involvement in establishing police use-of-force standards, in enforcing those standards, and in setting other norms governing policing. Moreover, as policing becomes increasingly automated through both data-driven and robotic technologies, it is increasingly important to involve communities in the design and adoption of the technologies used to keep the peace in their communities. Failing to do so will only deepen the adversarial stance between communities and their police forces.

Peter Asaro will present Will #BlackLivesMatter to RoboCop? on Saturday, April 2nd at 3:15 PM with discussant Mary Anne Franks at the University of Miami Newman Alumni Center in Coral Gables, Florida.

Aaron Mannes on ‘Institutional Options for Robot Governance’

As robots change daily life and commerce, governments will also need to change in response to this new technological challenge. This paper examines the kinds of government institutions U.S. federal policy-makers will need to develop and implement policy for the revolution in robotics. (The institutions that will be established after the robot revolution to govern humanity will be discussed in a subsequent paper.)

Broadly, the American people will want their government to support research in robotics, regulate robotics, manage robotic crises (such as dangerous autonomous behavior), and help society adapt to the broader changes wrought by robotics. This paper, using the organizational theory and bureaucratic politics paradigms, provides a menu of institutional options for dealing with this emerging technology.

Aaron Mannes will present Institutional Options for Robot Governance on Saturday, April 2nd at 10:00 AM with discussant Harry Surden at the University of Miami Newman Alumni Center in Coral Gables, Florida.

Aurelia Tamò and Christoph Lutz on ‘Privacy and Healthcare Robots – An ANT Analysis’

Artificial intelligence and robots reach higher and higher capability levels every year and are increasingly prevalent. Robots are already heavily used in industrial settings, but increasingly also in healthcare, for service tasks, and in households. Social robots register our habits and attitudes, affecting our sense of intimacy, privacy, bonding, and emotional support. Studies in the field of human-robot interaction have shown that humans tend to anthropomorphize social robots, which substantially increases the pervasiveness of such technology. In addition, robots by definition possess real-life agency: they not only collect and process information, they also act upon it by physically reaching out into the world. This further increases their pervasiveness and creates the potential for physical damage. With such real-life agency comes an unprecedented potential for access to personal spaces and for surveillance. Taken together, and coupled with a lack of awareness of how such technology works, these aspects threaten to endanger consumers’ privacy and to substantially limit their control over sensitive data (such as emotional states, health information, and intimate relationships) when they interact with robots. In sum, the privacy implications of social robots are far-reaching and concern both informational and physical privacy.

This article addresses the topic of healthcare robots and privacy. Healthcare robots were chosen because they often deal with extremely sensitive information and very vulnerable population groups: elderly and/or severely ill individuals. In this sense, they present a “worst case scenario” for privacy, in which potential privacy intrusions are especially severe. The authors use actor-network theory (ANT) to shed light on the privacy implications of healthcare robots from a specific theoretical point of view. ANT is a descriptive, constructivist approach that takes into account the relationality of technology and the social, and the agency of objects, concepts, and ideas. It has been applied to complex technological innovations such as e-health systems. The authors use some of the main concepts of ANT – actants, translations, tokens/quasi-objects, punctualization, obligatory passage points – to “map” the privacy ecosystem in robotic healthcare technology, thereby analyzing the complex interplay of robots and humans in that context.

Aurelia Tamò and Christoph Lutz will present Privacy and Healthcare Robots – An ANT Analysis on Saturday, April 2nd at 8:30 AM with discussant Matt Beane at the University of Miami Newman Alumni Center in Coral Gables, Florida.

Ryan Calo on ‘Robots In American Law’

“Robots again.” Thus begins Judge Alex Kozinski’s 1997 dissent from the Ninth Circuit’s decision not to rehear the case of Wendt v. Host International en banc. “Robots” because Wendt involved an allegation by the actors who played Cliff and Norm on the television show Cheers that a bar violated their rights of publicity by creating animatronic versions of these characters. “Again” because, just four years earlier, the Ninth Circuit permitted another suit to go forward in which Vanna White sued Samsung over an advertisement featuring her robot replica.

Robotics feels new and exciting. Which it is: the field has seen enormous advancement and investment in recent years. But robots have also been with us for decades. And like practically any other artifact, robots have been involved in legal disputes.

These disputes vary widely, as does the role of the robots themselves. Often the involvement is incidental: perhaps a robot figures into a movie plot that results in a copyright claim, or a contract dispute arises over robotic equipment.

Other times, however, the robot matters. It could be a dispute at maritime law over who first discovered a shipwreck; a question of whether a robot represents something “animate” for purposes of tariff schedules; or an issue of whether an all-robot band is “performing” for purposes of an entertainment tax on food and beverage service. In these real cases and others, courts have already begun to grapple with the arguably unique issues robots tend to raise in society.

This project canvasses over fifty years of state and federal case law involving robots or close analogs in an effort to predict how courts will react to the mainstreaming of robotics taking place today. The research adds clarity to academic and policy debates in at least three ways:

First, the project retroactively tests one or more theses regarding the challenges robots are likely to pose for law and legal institutions.

Second, the project tends to refute the view that the field of robotics law has no advance inkling of how law will react to robots. Unlike cyberlaw in the 1990s, which had little prior contact with the Internet to draw upon, American society in 2015 already has decades of experience with robots in some sense.

Third, the project reveals a common mental model that judges appear to hold about robots: that a robot is capable of acting only exactly as directed. If this view were ever true, it no longer is. Disabusing jurists of the idea that robots are incapable of spontaneity or other human-like qualities is crucial to the development of a satisfying robotics law and policy.

Ryan Calo will present Robots In American Law on Friday, April 1st at 4:30 PM with discussant Michael Froomkin at the University of Miami Newman Alumni Center in Coral Gables, Florida.

Françoise Gilbert and Raffaele Zallone on ‘Connected Cars – Recent Legal Developments’

This paper looks at recent changes in (1) the regulatory landscape, (2) privacy and data protection, and (3) liability. Examples of developments affecting the connected car / intelligent car market in 2015 include:

1 – Regulatory Issues

USA

California, the District of Columbia, Florida, Michigan, and Nevada have laws that allow experimental autonomous vehicles to be used for testing, provided that an experienced driver is at the wheel. The District of Columbia has gone further, authorizing autonomous cars without limiting their use to testing.

Europe
In January 2015, Germany announced that the A9 highway connecting Munich to Berlin would be equipped with technology that allows autonomous cars to communicate with other vehicles. Finland is preparing an amendment to its Road Traffic Act to allow autonomous vehicles to be used in certain places and at certain times.

E-Call
In April 2015, the European Parliament adopted the mandatory “e-Call” system, under which all new vehicles automatically dial the emergency number. Starting in spring 2018, the e-Call system will be installed in vehicles to automatically alert emergency services to serious road accidents. It will allow road safety services to immediately determine the type and number of emergency vehicles needed, helping them arrive faster, save lives, reduce the severity of injuries, and cut the cost of traffic jams.

2 – Data Privacy and Security Issues

Personal Data?
To what extent are the data collected from a vehicle “personal data,” i.e., attributable to a specific individual?

How to Comply with Basic Privacy Principles?
Assuming that some of the data collected from or through the intelligent vehicle qualify as “personal data,” numerous issues arise, such as:

  1. Notice: How to inform individuals of the nature of the collection, processing or dissemination of personal data?
  2. Consent: How can the driver and the passengers express their consent or objection to the collection of data produced by their vehicle?
  3. Choice: How can individuals control whether or when their personal data is collected or used?
  4. Access by third parties: When and in which circumstances may the personal data be shared with third parties or reused for purposes other than the original purpose?
  5. Security: How will the security of personal data be protected when in use, in storage or in transit?

What will be the effect of the adoption of the EU General Data Protection Regulation? Location information could pose a significant problem: it could reveal intimate details about a person, such as a medical condition.

Security
Connected and intelligent vehicles rely on a vast ecosystem of specialized entities – vendors, service providers, outsourcers, hosting companies, internet service providers – that furnish the content, data, and connections necessary for the vehicle to move safely, interact with traffic, suggest or make decisions, and perform its primary functions. How can the security of that data be ensured?

3 – Liability Issues

In the US, the regime of liability for defective products (product liability law) is very complex. It involves concepts from both contract law and tort law. Contract law is involved, for example, when warranties are breached. Tort law often turns on the concept of “negligence”: the plaintiff must show that the defendant was negligent, for example, in the design or testing of the product. In some circumstances, US law applies “strict liability,” which can make the manufacturer liable even if it was not negligent.

The paper also examines recent accidents involving test vehicles and the resulting litigation (e.g., class action suits).

Françoise Gilbert and Raffaele Zallone will present Connected Cars – Recent Legal Developments on Friday, April 1st at 3:00 PM with discussant Dan Siciliano at the University of Miami Newman Alumni Center in Coral Gables, Florida.

Harry Surden and Mary-Anne Williams on ‘Autonomous Vehicles, Predictability, and Law’

Fully autonomous or “self-driving” automobiles are vehicles “which can drive themselves without human supervision or input.” Because of improvements in driving safety and efficiency, fully autonomous vehicles are likely to become an increasing presence in our physical environment in the 5–15 year time frame. An important point is that, for the first time, people will be moving through a physical environment shared not just with other people (e.g. pedestrians), with machines controlled by other people (e.g. automobiles), or with constrained automated machines (e.g. elevators), but also with computer-controlled, self-directed systems that have freedom of movement and the ability to direct their own activities. Free-ranging, computer-directed autonomous movement is a novel phenomenon that is likely to challenge certain basic assumptions embedded in our existing legal structure.

Today, a great deal of physical harm that might otherwise occur is likely avoided through humanity’s collective ability to predict the movements of others in our nearby physical environment. In anticipating the behavior of others in this way, we employ, in part, what psychologists call a “theory of mind.” A “theory of mind” refers to our ability to extrapolate from our own internal mental states and project them onto others in order to estimate what others are thinking, feeling, or likely to do. The internal cognitive mechanisms involved in theory-of-mind assessment allow us to make instantaneous, unconscious judgments about the likely actions of those around us, and therefore, to keep ourselves safe.

Problematically, the movements of autonomous cars (and other autonomous moving systems like robots and drones) tend to be less predictable to ordinary people than the comparable movements of devices controlled by humans. The core theory-of-mind mechanisms that allow us to accurately model the minds of other people and interpret their communicative signals of attention and intention will be challenged in the context of non-human, autonomous moving entities such as self-driving cars. The argument is not that autonomous vehicles are less safe than human-driven cars, nor that autonomous vehicles are inherently unpredictable systems. To the contrary, most experts expect autonomous driving to be safer than human driving, and their behavior is quite predictable overall to the engineers who designed them. Rather, the argument is that we must focus upon making the movements of autonomous vehicles more predictable relative to the ordinary people—such as pedestrians—who will share their physical environment. To the extent that certain areas of law are concerned with avoiding harm (e.g. tort and regulatory law), this potential diminishment in predictability will bring new challenges that should be addressed.

Harry Surden will present Autonomous Vehicles, Predictability, and Law on Friday, April 1st at 3:00 PM with discussant Dan Siciliano at the University of Miami Newman Alumni Center in Coral Gables, Florida.

Jason Millar and AJung Moon on ‘How to Engage the Public on the Ethics and Governance of Lethal Autonomous Weapons’

The ethics and governance of lethal autonomous weapons systems (LAWS)—robots that can kill without direct human intervention or oversight—are the subject of active international discussions. It is imperative that we critically examine the role and nature of public engagement intended to inform decision makers. The Martens Clause, included in the additional protocols of the Geneva Conventions, makes explicit room for the public to have a say on what is deemed permissible in matters of armed conflict, especially where new technologies are concerned. However, many measures of public opinion, using methods such as surveys and polls, have been designed in a way that is subject to bias. For example, some only consider specific drone use cases instead of general ethical aspects of the technology under consideration. This paper surveys studies that have been conducted to gauge public opinion on the use of military drones (autonomous and remotely operated), including the recent international poll conducted by the Open Roboethics initiative. By drawing on evidence from moral psychology, the authors highlight the effects that particular question framings have on the measured outcomes, and outline considerations that should be taken into account when designing and determining the applicability of public opinion measures to questions of the governance of LAWS. Such considerations can help public engagement objectives live up to the spirit of the Martens Clause.

Military drones have recently emerged as one of the most controversial new military technologies. Unmanned and often weaponised, these robotic systems can be relatively inexpensive, can patrol the skies continuously, and have the potential to do much of the work of traditional manned military aircraft without putting pilots at risk. For these reasons and others, drones have come to occupy a central role in the overall military strategy of those nations that have them.

Currently, military drones, as well as ground-based robotic weapons systems, are remotely operated, and are sometimes referred to as Remotely Operated Weapons Systems (ROWS). With ROWS, the decision to use lethal force remains a human decision. However, technology that could enable military drones to make the decision to use lethal force autonomously is under development. That is, eventually, military robots could kill without human intervention. The very real prospect of these new Lethal Autonomous Weapons Systems (LAWS) raises important ethical questions that have been taken up by the public media, governments, civil society, and the United Nations.

The decisions whether or not to build or use LAWS are a matter of democratic and humanitarian concern. International law underscores the importance of public engagement in such matters. The Martens Clause, included in the additional protocols of the Geneva Conventions, makes explicit room for the public to have a say on what is, and is not, deemed permissible in matters of armed conflict, especially where new technologies are concerned. It reads:

“Recalling that, in cases not covered by the law in force, the human person remains under the protection of the principles of humanity and the dictates of the public conscience.” (Additional Protocol II to the Geneva Conventions)

Though legal scholars often disagree on how best to interpret and implement the Martens Clause, it remains a fact that, from the perspective of the Clause, the public is called upon to help shape international laws of armed conflict that have yet to be established. Public engagement is one way to support the requirements set out in the Clause.

Public opinion polls have been conducted to gauge people’s attitudes towards drones. Most recently, the Open Roboethics initiative conducted one such international public opinion poll in the spirit of the Martens Clause. Prior to their work, the public survey work that had been conducted on the topic was mostly limited to English-speaking, often US-based, perspectives. In addition, most of that polling focused on very specific drone use cases, often asking whether or not people supported their use in fighting terrorists.

This paper examines the various drone-related public opinion polls that have been conducted to date and the kinds of framings used in their question design. Drawing on evidence from moral psychology, the authors critique the applicability of certain question types for supporting the kind of governance objectives intended by the Martens Clause. If we are to understand where the “public conscience” stands on the issue of LAWS, we need to design questions that probe individuals’ opinions about the nature of LAWS, rather than their specific use cases.

This paper sets out some general considerations that policymakers can use when designing or interpreting public opinion polls related to drones. As such, it is intended for use by those working in ethics, law, and policy.

Jason Millar and AJung Moon will present How to Engage the Public on the Ethics and Governance of Lethal Autonomous Weapons on Friday, April 1st at 11:45 AM with discussant Peter Asaro at the University of Miami Newman Alumni Center in Coral Gables, Florida.

Matthew Rueben and William D. Smart on ‘Privacy in Human-Robot Interaction: Survey and Future Work’

This paper introduces the emerging subfield of privacy-sensitive robotics. It contains two in-depth surveys: one of the concept of privacy, and one of robotics techniques that could be used for privacy protection. The survey of privacy begins with definitions, then outlines the history of privacy in philosophy and U.S. law. Next, an array of studies in the social sciences is presented, before closing with a review of privacy in the technology literature. The survey of robot constraints is divided into three parts – perception constraints, navigation constraints, and manipulation constraints – and is presented in light of a need for privacy-based restrictions on robot behavior. The paper also suggests future work in privacy-sensitive robotics, including both basic research, which addresses questions relevant to any concern within privacy-sensitive robotics, and applied research, which develops and tests concrete solutions in specific scenarios.
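As a concrete illustration of what a navigation constraint might look like, here is a minimal, hypothetical sketch – the names and structure are ours, not from the paper – of a guard that rejects any planned path crossing a user-designated private zone, forcing the planner to route around it.

```python
# Hypothetical sketch of a privacy-based navigation constraint: the robot
# must not plan a path through user-designated private zones (e.g., a
# bedroom). Illustrative only; not an API from the paper.
from dataclasses import dataclass

@dataclass(frozen=True)
class Zone:
    """Axis-aligned rectangle (in map coordinates) the robot must avoid."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

def check_path(path, private_zones):
    """Return the path if it avoids all private zones, else raise.

    A full implementation would hand the violated zone back to the motion
    planner as an extra obstacle and re-plan; this guard only enforces
    the constraint at the interface.
    """
    for x, y in path:
        for zone in private_zones:
            if zone.contains(x, y):
                raise PermissionError(
                    f"waypoint ({x:.1f}, {y:.1f}) enters a private zone; re-plan"
                )
    return path

# Usage: the bedroom is declared private; the candidate path clips it.
bedroom = Zone(2.0, 0.0, 4.0, 3.0)
try:
    check_path([(0.0, 0.0), (3.0, 1.5), (6.0, 2.0)], [bedroom])
except PermissionError as err:
    print("navigation constraint fired:", err)
```

Perception and manipulation constraints could be enforced at the same interface level, for example by discarding camera frames captured inside a declared zone or by refusing grasp targets on designated personal objects.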

Several themes emerge. First, the word “privacy” is variously defined: there is no unanimously accepted theory of privacy, but most theorists acknowledge that “privacy” refers to more than one idea. Hence, it is very important for privacy-sensitive robotics researchers to give a specific definition for each privacy-related construct they use. Second, privacy research has been done in many different fields – e.g., law, psychology, economics, and computer science – and privacy-sensitive robotics researchers will benefit from connecting with several of these existing trees of research as they begin making their own contributions. Third, most privacy constructs are subjective: the same scenario might violate some people’s privacy, but not others’. Thus, user studies are necessary, followed by careful analysis; making broad generalizations is especially dangerous in privacy research. Fourth, privacy-sensitive robotics is only just beginning to be explored by researchers, and many well-defined and useful research projects can be started right away.

Matthew Rueben and Bill Smart will present Privacy-Sensitive Robotics: Initial Survey and Future Directions on Friday, April 1st at 10:05 AM with discussant Ashkan Soltani at the University of Miami Newman Alumni Center in Coral Gables, Florida.
