Archive | March, 2014

Ann Bartow on “Robots as Labor Creating Devices: Robotic Technologies and the Expansion of the Second Shift”

Robots as Labor Creating Devices: Robotic Technologies and the Expansion of the Second Shift
Ann Bartow

Automation often incompletely replaces human employees in service-related positions, and the leftover tasks become the responsibility of the consumer, who is forced into performing ever-increasing amounts of self-service. For example, ATMs and online banking programs require account holders to perform the tasks that human bank tellers used to undertake for depositors. A bank statement used to arrive regularly in the mail, but now one must track one’s savings and expenditures online, using complex passwords and secure servers if one wishes to avoid automated bank robbery. A hard copy must be self-printed at home. If one has any unique questions about one’s finances, one must survive a gauntlet of automated phone options to reach a live person.

A different but related consequence of automation is a ratcheting upward of standards. Automation may reduce the labor associated with a task, but there is a new expectation that the task should therefore be performed more often or with elevated results. Housekeeping tasks like floor cleaning can be delegated to robots, but preparing a floor to be cleaned by a robot can require de-cluttering, moving power cords and rearranging furniture. At the end of each robotic floor cleaning session, everything must be put back into place. Because sweeping and vacuuming robots have the capacity to clean continuously, this creates expectations that floors should always be freshly cleaned. While the per episode work input required might be lessened by a robot, any labor savings are likely offset by the increased frequency of the cleanings.

One might assume that when robots can complete tasks that female humans would otherwise have to undertake in the home or on the job, the workload on human women is reduced. This paper challenges that notion, and posits that in some contexts robots actually increase the workloads of the humans they putatively serve, that this trend is significant, and that it has a disproportionately negative impact on women, thereby exacerbating “the second shift,” preexisting gendered work gaps related to housework, child rearing, and caregiving.

Ann Bartow will present “Robots as Labor Creating Devices: Robotic Technologies and the Expansion of the Second Shift” on Friday, April 4th at 11:45 AM with discussant Jodi Forlizzi at the University of Miami Newman Alumni Center in Coral Gables, Florida.


Cameron R. Cloar and Donna A. Dulo on “A Legal Framework for the Safe and Resilient Operation of Autonomous Aerial Robots”

Considerations of a Legal Framework for the Safe and Resilient Operation of Autonomous Aerial Robots
Cameron R. Cloar and Donna A. Dulo

Autonomous aerial robots, also known as drones, will be a major segment of the National Airspace System in the near future. The extent of innovation of aerial robotic systems is seemingly limitless. Yet, within this futuristic vision emerges an essential issue that cannot be ignored: safety. Many aerial systems will operate under mostly human control; however, some segments of operation will undoubtedly be done autonomously. Although the number of these systems under design and manufacture for the market is quickly increasing, a unified set of legal rules is notably absent to ensure autonomous operations are performed safely and with a high degree of resilience. These legal rules must account for many factors such as the underlying software, algorithms and mathematics that drive the robotic systems, the interface between the robotic systems and potential human operators, if any, the interface between the robotic system and the collision avoidance system, as well as all inherent onboard authority systems.

This paper develops a legal framework that presents a unified foundation from which legal rules can be developed to govern the development and operation of autonomous aerial robots. The framework will ensure that designers and manufacturers retain the freedom to invent and innovate while having a defined set of rules with which to develop their aerial robotic systems, ensuring safe, resilient autonomous and semi-autonomous operations in the national airspace. While initially developed for autonomous aerial robots, the framework will be readily adaptable to unmanned underwater robots, self-driving land vehicles, and any type of robotic vehicle with varying degrees of autonomous capability. It will also draw upon the regulatory framework for aircraft under the Federal Aviation Regulations, with particular emphasis on the new consensus-driven standards that are envisioned to reshape the design and certification of small aircraft. The legal framework will help ensure that unmanned aerial robots become the safe and resilient transformative innovations that they are destined to be.

Donna A. Dulo will be on the Panel on Domestic Drones with moderator F. Daniel Siciliano on Saturday, April 5th at 3:15 PM at the University of Miami Newman Alumni Center in Coral Gables, Florida.


A. Michael Froomkin and Zak Colangelo on “Self-Defense Against Robots”

Self-Defense Against Robots
A. Michael Froomkin and Zak Colangelo

Deployment of robots in the air, the home, the office, and the street inevitably means their interactions with both property and living things will become more common and more complex. This paper examines when, under U.S. law, humans may use force against robots to protect themselves, their property, and their privacy. In the real world where Asimov’s Laws of Robotics do not exist, robots can pose—or can appear to pose—a threat to life, property, and privacy. May a landowner legally shoot down a trespassing drone? Can she hold a trespassing autonomous car as security against damage done or further torts? Is the fear that a drone may be operated by a paparazzo or a peeping Tom sufficient grounds to disable or interfere with it? How hard may you shove if the office robot rolls over your foot? This paper addresses all those issues and one more: what rules and standards we could put into place to make the resolution of those questions fairer to all concerned.

The default common-law legal rules governing each of these perceived threats are somewhat different, although reasonableness always plays an important role in defining legal rights and options. In certain cases—drone overflights, autonomous cars—national, state, and even local regulation may trump the common law. Because it is in most cases obvious that humans can use force to protect themselves against actual physical attack, the paper concentrates on the more interesting cases of (1) robot (and especially drone) trespass, (2) robot (and especially drone) spying, and (3) responses to perceived threats by robots—perceptions which may not always be justified, but which sometimes may nonetheless be considered reasonable in law.

We argue that the scope of permissible self-help in defending one’s privacy should be quite broad.  We also identify seven problems in current law relating to human-robot interaction, all of which involve some kind of uncertainty — usually about what a robot can or will do — and suggest ways of solving or at least ameliorating them, either by making robots less potentially dangerous (banning the arming of robots) or by requiring robots to give clearer notice of their capabilities.

We conclude by looking at what the law on human self-defense against robots might tell us about a robot’s right to not be harmed by a human.

A. Michael Froomkin and Zak Colangelo will be on the Panel on Domestic Drones with moderator F. Daniel Siciliano on Saturday, April 5th at 3:15 PM at the University of Miami Newman Alumni Center in Coral Gables, Florida.


Ryan Calo on “Robotics and the New Cyberlaw”

Robotics and the New Cyberlaw
Ryan Calo

The ascendance of the Internet wrought great changes to society and launched a movement among legal academics known as cyberlaw.  The themes of this movement reflect the essential qualities of the Internet—connectivity, community, and control.  Even as the law adapts, technology has not stood still.  The same government and hobbyists that developed the Internet, and the handful of private companies that have come to characterize it, have begun a massive shift toward robotics and artificial intelligence.  Lawmakers pass laws on drones and driverless cars.  Robotics, meanwhile, has a different set of essential qualities—embodiment, emergence, and social meaning.  The coming years will be marked by a new and distinct struggle, one in which academics and policymakers strive to develop a theoretical and doctrinal infrastructure capable of integrating this exciting new technology.

Ryan Calo will present “Robotics and the New Cyberlaw” with discussant David Post on Saturday, April 5th at 11:30 AM at the University of Miami Newman Alumni Center in Coral Gables, Florida.


Ian Kerr and Carissima Mathen on “Chief Justice John Roberts is a Robot”

Chief Justice John Roberts is a Robot
Ian Kerr and Carissima Mathen

The title of this article is not pejorative. It is suggestive. It asks readers to imagine the following counterfactual.

Around the globe, people awaken to some very strange news. In different languages, the same headline thunders: “Chief Justice John Roberts is a Robot.”  Badly injured during an ambush and attempted kidnapping while attending a conference at the House of Lords, Roberts was boldly delivered up to the Royal London Hospital by his captors, who sped off.  In urgent and unusual circumstances—and in breach of US and international protocols—a team of emergency surgeons cut him open to discover that his biology ran only skin deep.

After weeks of follow-up investigations and interviews, it is learned that “John Roberts” did indeed graduate from Harvard Law School in 1979 and that “his” legal career unfolded exactly as documented in public life. However, John Roberts, Robot (JR-R) was in fact a highly advanced prototype of the US Robots and Mechanical Men Corporation, developed during the period chronologically corresponding with what would have been “his” high school and college years. After several (earlier) A-series JRs had secretly annihilated the Turing test, US Robots decided to consolidate its successful AI with emerging robotic technologies in the new R-series machines. As part of its research and development, the company initiated a singular, long-term experiment with a lifelike, autonomous social robot that was virtually indistinguishable from human beings.

Following a successful (though embellished) application to Harvard Law, US Robots unleashed JR-R on the world. Like Aristotle’s unmoved prime mover, JR-R was left to its own devices. US Robots did not interfere with the robot’s moral, legal or social development. Like the best law students, JR-R was coded to learn how to learn, with no pre-programmed politics or agenda—other than the blind ambitions of a typical 1L. The rest, as they say, is history.

In its role as Chief Justice of the Supreme Court of the United States, JR-R wrote and participated in a number of landmark decisions. In this article we investigate the legitimacy of JR-R’s tenure on the Court. Through this philosophical thought experiment, we consider whether it matters that a machine generated legal reasons for judgment.

With this counterfactual, we set the stage for future philosophical discussions about expert systems, artificial intelligence and the coming era of mechanical jurisprudence—an era where the production of at least some legal knowledge and decision-making is delegated to machines and algorithms, not people.

Ian Kerr and Carissima Mathen will present Chief Justice John Roberts is a Robot on Saturday, April 5th at 8:30 AM with discussant Jack Balkin at the University of Miami Newman Alumni Center in Coral Gables, Florida.


Kris Hauser, Andrew A. Proia, and Drew T. Simshaw on “Consumer Cloud Robotics and the Fair Information Practice Principles: The Policy Risks and Opportunities Ahead”

Consumer Cloud Robotics and the Fair Information Practice Principles: The Policy Risks and Opportunities Ahead
Kris Hauser, Andrew A. Proia, Drew T. Simshaw

Rapid technological innovation has made commercially accessible consumer robotics a reality. At the same time, individuals and organizations are turning to “the cloud” for more convenient and cost-effective data storage and management. It seemed inevitable that these two technologies would merge to create “cloud robotics,” described by Google Research Scientist Dr. James Kuffner as “a new approach to robotics that takes advantage of the Internet as a resource for massively parallel computation and sharing of vast data resources.” By making robots lighter, cheaper, and more efficient, cloud robotics could be the catalyst for a mainstream consumer robotics marketplace. However, this new industry would join a host of modern consumer technologies that seem to have rapidly outpaced the legal and regulatory regimes implemented to protect consumers.  Recently, consumer advocates and the tech industry have focused their attention on information privacy and security, and how to establish sufficient safeguards for the collection, retention, and dissemination of personal information while still allowing technologies to flourish.

Underlying a majority of these proposals, whether through legislation or industry self-regulation, is a set of practices, articulated in the 1970s, that addresses how personal information should be collected, used, retained, managed, and deleted: the Fair Information Practice Principles (“FIPPs”).

This paper first will provide a brief history of the FIPPs, focusing primarily on their influence in the consumer space. This section will examine how the original FIPPs came into existence and how they have taken shape in the national and international communities. It will also detail three influential variations of the FIPPs that are likely to influence the information privacy and security regulations of cloud robotics in U.S. consumer products. Second, this paper will introduce many within the information law and policy realm to the vastly advancing (yet little known) technology of cloud robotics. Then, with the help of privacy fellows and roboticists, the paper will dissect how each principle, and its relevant variations, will affect the efficiency and interoperability of cloud robotics in the consumer marketplace.

By providing practical observations of how cloud robotics may emerge in a consumer marketplace regulated by the FIPPs, this research will help both the information privacy and robotics fields in beginning to address the policy risks and opportunities ahead.

Andrew A. Proia and Drew T. Simshaw will discuss “Consumer Cloud Robotics and the Fair Information Practice Principles: The Policy Risks and Opportunities Ahead” as part of the Panel on Robots and Social Justice moderated by Kate Darling on Friday, April 4th at 2:00 PM at the University of Miami Newman Alumni Center in Coral Gables, Florida.


Jason Millar on Proxy Prudence – Rethinking Models of Responsibility for Semi-autonomous Robots

Proxy Prudence – Rethinking Models of Responsibility for Semi-autonomous Robots
Jason Millar

As robots become more autonomous—capable of acting in complex ways, independent of direct human interaction—their actions will challenge traditional notions of responsibility. How, for example, do we sort out responsibility when a self-driving car swerves this way or that in a situation where all possible outcomes lead to harm? This paper explores the question of responsibility from both philosophical and legal perspectives, by examining the relationship between designers, semi-autonomous robots and users. Borrowing concepts from the philosophy of technology, bioethics and law, I argue that in certain use contexts we can reasonably describe a robot as acting as a moral proxy on behalf of a person. In those cases I argue it is important to instantiate the proxy relationship in a morally justifiable way. I examine two questions that are helpful in determining how to appropriately instantiate proxy relationships with semi-autonomous robots, and that we can also ask when attempting to sort out responsibility: 1) On whose behalf was the robot acting?; and 2) On whose behalf ought the robot to have been acting?

Focusing on proxy relationships allows us to shift our focus away from a strictly causal model of responsibility and focus also on a proxy model informed by an ethical analysis of the nature of the designer-artefact-user relationship. By doing so I argue that we gain some traction on problems of responsibility with semi-autonomous robots. I examine two cases to demonstrate how a shift towards a proxy model of responsibility, and away from a strictly causal model, helps to manage risks and provides a more accurate accounting of responsibility in some use contexts. I offer some suggestions as to how we might decide on whose behalf a robot ought legitimately to be acting, while offering some thoughts on what legal and ethical implications my argument carries for designers and users.

Jason Millar will present Proxy Prudence – Rethinking Models of Responsibility for Semi-autonomous Robots on Friday, April 4th at 10:15 AM with discussant Peter Asaro at the University of Miami Newman Alumni Center in Coral Gables, Florida.


Meg Leta Ambrose on “Regulating the Loop: Ironies of Automation Law”

This is the first in a series of posts we will be running about the upcoming presentations at We Robot 2014. Registration is still open. Subscribe to our RSS feed or follow on us on Twitter for more updates.

Regulating the Loop: Ironies of Automation Law
Meg Leta Ambrose

Rapid developments in sensors, computing, and robotics, as well as power, kinetics, control, telecommunication, and artificial intelligence have presented opportunities to further integrate sophisticated automation across society. With these opportunities come questions about the ability of current laws and policies to protect important social values new technologies may threaten. As sophisticated automation moves beyond the cages of factories and cockpits, the need for a legal approach suitable to guide an increasingly automated future becomes more pressing.

This paper analyzes examples of legal approaches to automation thus far by legislative, administrative, judicial, state, and international bodies. The case studies reveal an interesting irony: while automation regulation is intended to protect and promote human values, its focus on the capabilities of the automation results in less protection of those values. The irony is similar to those pointed out by Lisanne Bainbridge in 1983, when she described how designing automation to improve the life of the operator using an automation-centered approach actually made the operator’s life worse and more difficult.

The ironies that result from automation-centered legal approaches are a product of the neglect of the sociotechnical nature of automation: the relationship between man and machine is situated and interdependent; humans will always be in the loop; and reactive policies ignore the need for general guidance for ethical and accountable automation design and implementation. Like system engineers three decades ago, policymakers must adjust the focus of legal treatment of automation to recognize the interdependence of man and machine, to avoid the ironies of automation law and meet the goals of ethical integration. The article proposes that the existing models and principles for automated system design, currently utilized for safe implementation, be extended to support an ethical and sociotechnical legal approach to automation.

Meg Leta Ambrose will present “The Law and the Loop” with discussant Elizabeth Grossman on Friday, April 4th at 8:45 AM at the University of Miami Newman Alumni Center in Coral Gables, Florida.
