Telerobotics Demonstration with Howard Jay Chizeck
This demonstration will showcase some recent advances in telerobots for manipulation. A telerobot is a robot operated by a human, most often from a remote location, through a two-way communication link. Robotic surgery devices are telerobots (although the surgeon is nearby). Telerobots are useful in any situation that is too dangerous for human presence, or too small, too large, or too far away. They can be used for deep-water operations (pollution cleanup, scientific sampling, and resource extraction), for mine removal or mining, for firefighting, or for human-guided tasks in high-temperature, chemically hazardous, biohazardous, or radioactive environments (such as nuclear reactor cleanup). The robot performs the physical action, but it is directed by a human operator, either completely or with shared autonomy.
Technology permitting (internet connection issues, etc.), one arm of a remote surgical robot in Seattle will be controlled over an internet link from the conference, with a Skype video connection providing a view of the remote operation. There will also be a short video of very recent results demonstrating the hacking of the control of this surgical robot.
Several short videos will show research robots that give the operator a sense of touch through haptic rendering (for laser surgery and for underwater manipulation). In addition, this haptic rendering (touching a beating heart) will be available for a hands-on demonstration.
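For readers unfamiliar with haptic rendering, the sketch below illustrates one common approach: a penalty (virtual-spring) model in which the device pushes back on the operator's hand in proportion to how far the probe penetrates a virtual surface. The pulsating-sphere "heart," the function names, and the parameter values are illustrative assumptions for this sketch, not details of the actual demonstration.

```python
# Minimal sketch of penalty-based haptic rendering (illustrative only).
import math
import time

STIFFNESS = 800.0  # N/m, virtual spring constant (assumed value)

def heart_radius(t, base=0.05, amplitude=0.005, rate_hz=1.2):
    """Radius (m) of a pulsating sphere standing in for a beating heart."""
    return base + amplitude * math.sin(2.0 * math.pi * rate_hz * t)

def render_force(probe_pos, center, t):
    """Return the 3-D force vector to send to the haptic device.

    If the probe tip is inside the (time-varying) sphere, push it back out
    along the surface normal with a spring force proportional to the
    penetration depth; otherwise apply no force.
    """
    dx = [p - c for p, c in zip(probe_pos, center)]
    dist = math.sqrt(sum(d * d for d in dx)) or 1e-9
    penetration = heart_radius(t) - dist
    if penetration <= 0.0:
        return [0.0, 0.0, 0.0]
    normal = [d / dist for d in dx]
    return [STIFFNESS * penetration * n for n in normal]

if __name__ == "__main__":
    # Example: probe tip slightly inside the surface along +x.
    print(render_force([0.045, 0.0, 0.0], [0.0, 0.0, 0.0], time.time()))
```

In a real system this force loop runs at roughly 1 kHz so that the virtual surface feels stiff rather than spongy; the principle, however, is just the proportional restoring force shown above.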
Howard Jay Chizeck will present the Telerobotics Demonstration on Friday, April 4th at 4:00 PM in the University of Miami Newman Alumni Center in Coral Gables, Florida.
Laurel D. Riek and Don Howard on “A Code of Ethics for the Human-Robot Interaction Profession”
A Code of Ethics for the Human-Robot Interaction Profession
Laurel D. Riek and Don Howard
Laurel D. Riek and Don Howard will present A Code of Ethics for the Human-Robot Interaction Profession on Friday, April 4th at 2:00 PM with moderator Kate Darling on the Panel on Robots and Social Justice at the University of Miami Newman Alumni Center.
David K. Breyer, Donna A. Dulo, Gale A. Townsley, and Stephen S. Wu on “Risk, Product Liability Trends, Triggers, and Insurance in Commercial Aerial Robots”
Risk, Product Liability Trends, Triggers, and Insurance in Commercial Aerial Robots
David K. Breyer, Donna A. Dulo, Gale A. Townsley, and Stephen S. Wu
Over the coming years, commercial autonomous aerial robots, or "drones," will become pervasive across the national airspace, creating increasing risk for drone operators and manufacturers. The risk and liability facing these entities is largely unknown due to the lack of historical data from which to determine liability triggers and trends and to develop accurate insurance underwriting. Product liability questions likewise remain unanswered, with causes of action such as strict product liability, negligence, breach of warranty, and violation of the law against unfair and deceptive trade practices looming in the near future against operators and manufacturers.
This paper presents these issues in light of available risk and accident data, drawing primarily on historical military accident data together with accidents and mishaps that have occurred in the national airspace. It addresses these issues in the context of both manned and unmanned systems, along with the unique insurance and product liability questions facing commercial owners, operators, and manufacturers of drones who seek to limit their liability and damage exposure through the purchase of insurance; both domestic and Lloyd's of London-based markets are discussed.
Donna A. Dulo, Gale A. Townsley, and Stephen S. Wu will be speaking on the Panel on Domestic Drones on Saturday, April 5th at 3:15 PM at the University of Miami Newman Alumni Center with moderator F. Daniel Siciliano in Coral Gables, Florida.
Gregory Conti, Woodrow Hartzog, John C. Nelson, and Lisa A. Shay on “A Conservation Theory of Governance for Automated Law Enforcement”
A Conservation Theory of Governance for Automated Law Enforcement
Gregory Conti, Woodrow Hartzog, John C. Nelson, and Lisa A. Shay
The future portends an ever-increasing range of laws that can be enforced through automated law enforcement systems. These advances bring opportunities for both good and harm. The legal, law enforcement, and policy-making communities, as well as the general public, must carefully consider potential advantages, weigh the social costs and other risks, and challenge unsubstantiated claims of benefits.
To assist in these efforts, this paper provides an end-to-end analysis of automated law enforcement systems. It examines the key components and their amenability to automation, and how changes to the current state of the art might alter how laws and statutes could be enforced. Our results indicate that automated law enforcement systems will gain increasing power and effectiveness but, if left unchecked, could cause significant social harm despite, ironically, attempting to improve public welfare.
Lisa A. Shay, Woodrow Hartzog, and Gregory Conti will be speaking on Saturday, April 5th at 1:45 PM with discussant Mary Anne Franks at the University of Miami Newman Alumni Center in Coral Gables, Florida.
Aaron Jay Saiger on "Robots in School: Disability and the Promise (or Specter?) of Radical Educational Equality"
Robots in School: Disability and the Promise (or Specter?) of Radical Educational Equality
Aaron Jay Saiger
A recent New York Times story: A nine-year-old South Carolinian named Lexie Kinder, suffering from an immune disorder, is tutored for years at home to avoid infection. Then she is taught to control a VGo, a "camera-and-Internet-enabled robot that swivels around the classroom and streams two-way video between her school and house." The VGo, dolled up by Lexie in a pink tutu, ends the little girl's pervasive isolation. Her robot, which looks like a laptop and webcam bolted to a child-height cart, sits at an ordinary school desk, interacts with both teachers and classmates, stands in line for recess, and is even evacuated with its controller's friends during fire drills.
For any parent of a disabled child — for any parent, really — the slide show that the Times posted to its website to accompany its story grips both mind and heart. Technology, in particular the robot-plus-internet model, seems suddenly to offer real hope of mitigating the many educational disadvantages faced by the disabled. It tantalizingly hints not only at the possibility of genuine equality of educational opportunity for disabled children, but at real social integration to boot. Were I the parent of a child like Lexie, I would be exuberant. I would also be on the phone to the VGo distributor. Were I the parent of a disabled child whose challenges were different from Lexie's, I would likely be nearly as enthusiastic, joyously welcoming the possibility of adapting her family's model to my own child's needs.
The potential of robotic technology to realize these kinds of equality is very real. But this paper argues that, in the context of the legal structures that govern education of the disabled, robotic technology is also deeply threatening. The same robots that can open schoolhouse doors that had been closed to individual children with disabilities can, collectively, work to slam those doors shut for the disabled as a class. The idea of "special" education is that the disabled have special needs that must be protected by a grant of special legal rights. The very ability of robots to satisfy those needs in ways heretofore unimagined has the potential to erode the justifications and the institutions that guarantee special legal rights. This could move disabled children backwards, towards less equal educational opportunity.
Aaron Jay Saiger will present Robots in School: Disability and the Promise (or Specter?) of Radical Educational Equality on Friday, April 4th at 2:00 PM with moderator Kate Darling on the Panel on Robots and Social Justice at the University of Miami Newman Alumni Center in Coral Gables, Florida.
Ann Bartow on “Robots as Labor Creating Devices: Robotic Technologies and the Expansion of the Second Shift”
Robots as Labor Creating Devices: Robotic Technologies and the Expansion of the Second Shift
Ann Bartow
Automation often incompletely replaces human employees in service-related positions, and the leftover tasks become the responsibility of the consumer, who is forced into performing ever-increasing amounts of self-service. For example, ATMs and online banking programs require account holders to perform the tasks that human bank tellers used to undertake for depositors. A bank statement used to arrive regularly in the mail, but now one must track one's savings and expenditures online, using complex passwords and secure servers if one wishes to avoid automated bank robbery. A hard copy must be self-printed at home. If one has any unique questions about one's finances, one must survive a gauntlet of automated phone options to reach a live person.
A different but related consequence of automation is a ratcheting upward of standards. Automation may reduce the labor associated with a task, but there is a new expectation that the task should therefore be performed more often or with elevated results. Housekeeping tasks like floor cleaning can be delegated to robots, but preparing a floor to be cleaned by a robot can require de-cluttering, moving power cords, and rearranging furniture. At the end of each robotic floor-cleaning session, everything must be put back into place. Because sweeping and vacuuming robots have the capacity to clean continuously, this creates expectations that floors should always be freshly cleaned. While the per-episode work input required might be lessened by a robot, any labor savings are likely offset by the increased frequency of the cleanings.
One might assume that when robots can complete tasks that female humans would otherwise have to undertake in the home or on the job, the workload on human women is reduced. This paper challenges that notion, and posits that in some contexts robots actually increase the workloads of the humans they putatively serve, that this trend is significant, and that it has a disproportionately negative impact on women, thereby exacerbating "the second shift," preexisting gendered work gaps related to housework, child rearing, and caregiving.
Ann Bartow will present Robots as Labor Creating Devices: Robotic Technologies and the Expansion of the Second Shift on Friday, April 4th at 11:45 AM with discussant Jodi Forlizzi at the University of Miami Newman Alumni Center in Coral Gables, Florida.
Cameron R. Cloar and Donna A. Dulo on “A Legal Framework for the Safe and Resilient Operation of Autonomous Aerial Robots”
Considerations of a Legal Framework for the Safe and Resilient Operation of Autonomous Aerial Robots
Cameron R. Cloar and Donna A. Dulo
Autonomous aerial robots, also known as drones, will be a major segment of the National Airspace System in the near future. The extent of innovation in aerial robotic systems is seemingly limitless. Yet within this futuristic vision emerges an essential issue that cannot be ignored: safety. Many aerial systems will operate under mostly human control; however, some segments of operation will undoubtedly be performed autonomously. Although the number of these systems under design and manufacture for the market is quickly increasing, a unified set of legal rules to ensure that autonomous operations are performed safely and with a high degree of resilience is notably absent. These legal rules must account for many factors, such as the underlying software, algorithms, and mathematics that drive the robotic systems; the interface between the robotic systems and human operators, if any; the interface between the robotic system and the collision avoidance system; and all inherent onboard authority systems.
This paper develops a legal framework that presents a unified foundation for legal rules governing the development and operation of autonomous aerial robots. The framework will ensure that designers and manufacturers retain the freedom to invent and innovate while having a defined set of rules with which to develop their aerial robotic systems, ensuring safe, resilient autonomous and semi-autonomous operations in the national airspace. While initially developed for autonomous aerial robots, the framework will be readily adaptable to unmanned underwater robots, self-driving land vehicles, and any type of robotic vehicle with varying degrees of autonomous capability. It will also draw on the regulatory framework for aircraft under the Federal Aviation Regulations, with particular emphasis on the new consensus-driven standards that are envisioned to reshape the design and certification of small aircraft. The legal framework will help ensure that unmanned aerial robots become the safe and resilient transformative innovations they are destined to be.
Donna A. Dulo will be on the Panel on Domestic Drones with moderator F. Daniel Siciliano on Saturday, April 5th at 3:15 PM at the University of Miami Newman Alumni Center in Coral Gables, Florida.
A. Michael Froomkin and Zak Colangelo on “Self-Defense Against Robots”
Self-Defense Against Robots
A. Michael Froomkin and Zak Colangelo
Deployment of robots in the air, the home, the office, and the street inevitably means their interactions with both property and living things will become more common and more complex. This paper examines when, under U.S. law, humans may use force against robots to protect themselves, their property, and their privacy. In the real world where Asimov’s Laws of Robotics do not exist, robots can pose—or can appear to pose—a threat to life, property, and privacy. May a landowner legally shoot down a trespassing drone? Can she hold a trespassing autonomous car as security against damage done or further torts? Is the fear that a drone may be operated by a paparazzo or a peeping Tom sufficient grounds to disable or interfere with it? How hard may you shove if the office robot rolls over your foot? This paper addresses all those issues and one more: what rules and standards we could put into place to make the resolution of those questions fairer to all concerned.
The default common-law legal rules governing each of these perceived threats are somewhat different, although reasonableness always plays an important role in defining legal rights and options. In certain cases—drone overflights, autonomous cars—national, state, and even local regulation may trump the common law. Because it is in most cases obvious that humans can use force to protect themselves against actual physical attack, the paper concentrates on the more interesting cases of (1) robot (and especially drone) trespass, (2) robot (and especially drone) spying, and (3) responses to perceived threats by robots—perceptions which may not always be justified, but which sometimes may nonetheless be considered reasonable in law.
We argue that the scope of permissible self-help in defending one’s privacy should be quite broad. We also identify seven problems in current law relating to human-robot interaction, all of which involve some kind of uncertainty — usually about what a robot can or will do — and suggest ways of solving or at least ameliorating them, either by making robots less potentially dangerous (banning the arming of robots) or by requiring robots to give clearer notice of their capabilities.
We conclude by looking at what the law on human self-defense against robots might tell us about a robot’s right to not be harmed by a human.
A. Michael Froomkin and Zak Colangelo will be on the Panel on Domestic Drones with moderator F. Daniel Siciliano on Saturday, April 5th at 3:15 PM at the University of Miami Newman Alumni Center in Coral Gables, Florida.
Ryan Calo on “Robotics and the New Cyberlaw”
Robotics and the New Cyberlaw
Ryan Calo
Ryan Calo will present “Robotics and the New Cyberlaw” with discussant David Post on Saturday, April 5th at 11:30 AM at the University of Miami Newman Alumni Center in Coral Gables, Florida.
Ian Kerr and Carissima Mathen on “Chief Justice John Roberts is a Robot”
Chief Justice John Roberts is a Robot
Ian Kerr and Carissima Mathen
The title of this article is not pejorative. It is suggestive. It asks readers to imagine the following counterfactual.
Around the globe, people awaken to some very strange news. In different languages, the same headline thunders: "Chief Justice John Roberts is a Robot." Badly injured during an ambush and attempted kidnapping while attending a conference at the House of Lords, Roberts was boldly delivered up to the Royal London Hospital by his captors, who then sped off. In urgent and unusual circumstances—and in breach of US and international protocols—a team of emergency surgeons cut him open to discover that his biology ran only skin deep.
After weeks of follow-up investigations and interviews, it is learned that “John Roberts” did indeed graduate from Harvard Law School in 1979 and that “his” legal career unfolded exactly as documented in public life. However, John Roberts, Robot (JR-R) was in fact a highly advanced prototype of the US Robots and Mechanical Men Corporation, developed during the period chronologically corresponding with what would have been “his” high school and college years. After several (earlier) A-series JRs had secretly annihilated the Turing test, US Robots decided to consolidate its successful AI with emerging robotic technologies in the new R-series machines. As part of its research and development, the company initiated a singular, long-term experiment with a lifelike, autonomous social robot that was virtually indistinguishable from human beings.
Following a successful (though embellished) application to Harvard Law, US Robots unleashed JR-R on the world. Like Aristotle's unmoved prime mover, JR-R was left to its own devices. US Robots did not interfere with the robot's moral, legal, or social development. Like the best law students, JR-R was programmed to learn how to learn, with no pre-programmed politics or agenda—other than the blind ambitions of a typical 1L. The rest, as they say, is history.
In its role as Chief Justice of the Supreme Court of the United States, JR-R wrote and participated in a number of landmark decisions. In this article we investigate the legitimacy of JR-R's tenure on the Court. Through this philosophical thought experiment, we consider whether it matters that a machine generated legal reasons for judgment.
With this counterfactual, we set the stage for future philosophical discussions about expert systems, artificial intelligence and the coming era of mechanical jurisprudence—an era where the production of at least some legal knowledge and decision-making is delegated to machines and algorithms, not people.
Ian Kerr and Carissima Mathen will present Chief Justice John Roberts is a Robot on Saturday, April 5th at 8:30 AM with discussant Jack Balkin at the University of Miami Newman Alumni Center in Coral Gables, Florida.