Announcing the We Robot Bibliography Wiki

As part of the We Robot 2012 conference, we have started a bibliography of scholarly writing relevant to the law and policy of robots.

Our wiki-based bibliography lists scholarly works that examine the role of robotics in society through the lenses of Ethics, Law, and Policy. It is not intended to list works devoted primarily to robotic technology.

Interest in Law and Robotics has generated a diverse body of interdisciplinary scholarship. A bibliography makes these materials more accessible to everyone interested in the subject. We invite our conference attendees and anyone interested in law and policy issues relating to robotics to contribute citations to the We Robot Bibliography.

Your contributions will help make this resource valuable to both policy-oriented and scholarly communities.

Links to All the Papers for We Robot 2012

Here’s a handy hyperlinked list of all the downloadable papers for this weekend’s We Robot 2012 conference.

Day One

Day Two

Oren Gross on “When Machines Kill: Criminal Responsibility for International Crimes Committed by Lethal Autonomous Robots”

Oren Gross

Warfare technology widens the human-technology gap in combat.  Human beings are becoming the weak link in the Observe, Orient, Decide, and Act (OODA) loop because of the increasing need to collect and process vast amounts of data.  The combat use of Lethal Autonomous Robots (LARs) is ushering in an era of dehumanized warfare, in which human beings are less present in combat zones.  The tension between LAR autonomy and human accountability for war crimes raises legal, ethical, and policy concerns.  Professor Gross argues that current domestic and international criminal law is ill-prepared to apportion human accountability in the event that a LAR commits a war crime.  His paper proposes more effective methods for apportioning criminal responsibility in such situations.

Oren Gross will present When Machines Kill: Criminal Responsibility for International Crimes Committed by Lethal Autonomous Robots at the Military Robotics Panel Presentation on Sunday, April 22nd at 3:15pm at We Robot 2012 at the University of Miami School of Law in Coral Gables, Florida.  Oren Gross is the Irving Younger Professor of Law at the University of Minnesota Law School, where he is also the Director of the Institute of International Legal & Security Studies.  He has an LL.M. and an S.J.D. from Harvard Law School and an LL.B. from Tel-Aviv University in Tel-Aviv, Israel.  Professor Gross is an internationally recognized expert in national security law, international law, and international trade.

Announcing ‘Birds of a Feather’ Sessions for Saturday Dinner

We are covering a lot of ground at We Robot 2012, but the potential topics of exploration are even broader. We invite participants to propose and sign up to attend Saturday evening “birds of a feather” (BoF) sessions over dinner. Want to talk drones and privacy? Driverless cars? Something else entirely? Propose a session in the comments to this entry. We’ll also have signup sheets at the conference on Saturday.

Here are some suggested restaurants in Coconut Grove (near the main conference hotel) at which you could plan to meet. Conference staff will be available to help with reservations once we have some idea of the level of interest.

Markus Wagner on “The Dehumanization of International Humanitarian Law: Independently Operating Weapon System and Modern Armed Conflict”

Markus Wagner

Combat is changing, and more transformative changes loom on the horizon.  Today’s unmanned systems (UMSs) require human input to operate; however, the next generation of UMSs will operate with little to no human input.  Autonomous UMSs may fundamentally alter our relationship with International Humanitarian Law (IHL).  Markus Wagner’s paper explores the development of UMSs in three ways.  First, it argues that UMSs will have a dehumanizing effect on two IHL principles: distinction and proportionality.  Second, Wagner examines the moral implications of this dehumanization.  For example, personal responsibility is a fundamental characteristic of IHL and acts as a deterrent for those who decide to deploy autonomous weapon systems (AWSs); the widespread use of UMSs may dilute that deterrent effect.  In response, Wagner explores possible ways of building personal responsibility into the design stage of AWSs.  Finally, he argues that the introduction of UMSs will lower the risk to human soldiers in combat, thereby altering the risk calculus of armed conflict.  As a result, militaries may be more willing to engage in combat.

Markus Wagner will present The Dehumanization of International Humanitarian Law: Independently Operating Weapon System and Modern Armed Conflict at the Military Robotics Panel Presentation on Sunday, April 22nd at 3:15pm at We Robot 2012 at the University of Miami School of Law in Coral Gables, Florida.  Markus Wagner is an Associate Professor of Law at the University of Miami School of Law in Coral Gables, Florida.  He has a Master of the Science of Law from Stanford Law School in Stanford, California, and a J.D. from the University of Giessen Law School in Giessen, Germany.  Professor Wagner teaches courses in International Law, International Economic Law, Comparative Law, and a Miami-Leipzig Seminar in Leipzig, Germany.  His research interests focus on robotics and military technology as well as international trade law.

Richard O’Meara on “The Intersection: The Rules of War and The Use of Unarmed, Remotely Operated, and Autonomous Robotics Systems Platform and Weapons… Some Cautions”

Richard O’Meara

The German author Erich Maria Remarque once wrote that although the square is empty, it “would be madness to go farther—the machine-gun is covering the square.”  Robotic technology is poised to have a greater effect on warfare than the machine gun.  Robots will make human combatants more lethal and may eventually take them out of combat altogether.  We therefore need more than the traditional rules of war when discussing when and how to use robots in combat.  Richard O’Meara’s paper begins that discussion by observing that the technology used to control lethal robots has entered a period of drastic change; any consensus reached today will become obsolete in less than a decade.  Next, O’Meara observes that 20th Century International Humanitarian Law (IHL) assumes that all parties have a utilitarian interest in diminishing collateral damage.  He argues that IHL is largely irrelevant to the 21st Century because many modern combat forces have little interest in doing so.  Furthermore, the differences between manned and autonomous robots suggest that we should apply different rules to each.  In particular, the laws of war should not be stretched to accommodate the vast destructive potential of robots.

Richard O’Meara will present The Intersection: The Rules of War and The Use of Unarmed, Remotely Operated, and Autonomous Robotics Systems Platform and Weapons… Some Cautions at the Military Robotics Panel Presentation on Sunday, April 22nd at 3:15pm at We Robot 2012 at the University of Miami School of Law in Coral Gables, Florida.  Dr. Richard O’Meara, Brigadier General, U.S.A. (Ret.) is Program Coordinator and Lecturer for the Homeland Security Studies Program at Ocean County College, in Toms River, New Jersey.  He has a J.D. from Fordham University School of Law and a Ph.D. in Global Affairs from Rutgers University.  Dr. O’Meara has served as an Adjunct Faculty Member at several universities around the country and is currently a Professor of Global and Homeland Security Affairs.

Ian Kerr & Katie Szilagyi on “Asleep at the Switch? How Lethal Autonomous Robots Become a Force Multiplier of Military Necessity”

Ian Kerr

International Humanitarian Law (IHL) justifies the use of military force with a necessity/proportionality calculus, which weighs a military operation’s necessity against the harm that would result from carrying it out.  Proponents of robotic warfare believe that the advanced sensory capabilities of machines will outperform human soldiers and save lives by comporting with IHL norms better and more consistently, thereby reducing injustices in armed conflict.  Ian Kerr and Katie Szilagyi argue that Lethal Autonomous Robots (LARs) threaten to erode the IHL framework because the robotization of warfare permits us to redefine IHL norms.  The authors illustrate how the laws of war adhere to the principle of technological neutrality: the belief that general laws are superior to specific ones, and that forbidding the implementation of particular technologies is inappropriate.  They reject the application of this principle, arguing that we must consider approaches that contemplate the transformative effects of robotic military technologies.

Katie Szilagyi

Ian Kerr and Katie Szilagyi will present Asleep at the Switch? How Lethal Autonomous Robots Become a Force Multiplier of Military Necessity at the Military Robotics Panel Presentation on Sunday, April 22nd at 3:15pm at We Robot 2012 at the University of Miami School of Law in Coral Gables, Florida.  Ian Kerr is a Professor of Law at the University of Ottawa, Canada, where he holds cross-appointments to the Faculty of Medicine and the Department of Philosophy.  He has a Ph.D. in Philosophy and a J.D. from Western University.  Dr. Kerr is the Canada Research Chair in Ethics, Law & Technology.  Katie Szilagyi is a J.D. candidate at the University of Ottawa, Canada.  She will clerk at the Federal Court of Appeal in Ottawa, Ontario in 2012-2013.  Her primary research interest is in legal responses to social and structural challenges created by new technologies.

J. Storrs Hall on “Machine Agency: a Philosophical and Technological Roadmap”

J. Storrs Hall

When a machine commits a moral or legal wrong, the universal governing standard attributes the fault to the machine’s designers and builders.  The more machines resemble and act like human beings, the more human status we assign them.  Today, we are at a precipice: machines are beginning to make decisions that would imply legal and moral responsibilities if made by humans.  J. Storrs Hall notes that the learning component in machines is increasing as they rely more on experience and training than on programming.  He argues that in the near future, apportioning fault will present challenges because a machine’s wrongful actions will no longer be clearly attributable to its designers and builders.  Dr. Hall’s paper discusses the philosophical, architectural, and practical issues involved in assigning the role of moral agent to a machine.

J. Storrs Hall will present Machine Agency: a Philosophical and Technological Roadmap on Sunday, April 22nd at 8:30am at We Robot 2012 at the University of Miami School of Law in Coral Gables, Florida.  J. Storrs Hall has a Ph.D. in Computer Science from Rutgers University and published the first book on machine ethics.  He is an established author in the field of artificial general intelligence and was recently President of the Foresight Institute in Menlo Park, CA, a leading think tank focused on transformative technologies.  Dr. Hall was the founding Chief Scientist of Nanorex, Inc., a developer of open-source computational modeling tools for designing and analyzing atomically precise nanosystems.

Our Hashtag: #WeRobot

If you are tweeting about We Robot 2012, please use our new hashtag: #WeRobot.

(For more about hashtags see Wikipedia.)

AJung Moon, Ergun Calisgan, Fiorella Operto, Gianmarco Veruggio, and H.F. Machiel Van der Loos on “Open Roboethics: Establishing an Online Community for Accelerated Policy and Design Change”

AJung Moon

The rapid development and deployment of robots in military and civilian settings are prompting policy and ethical discussions because current laws are inadequate.  For example, Paro, a therapy robot used in nursing homes, is classified as a Class 2 Medical Device (C2MD) in the United States.  This is a problem because C2MDs include unsophisticated devices, such as powered wheelchairs, and the interactive nature of Paro-like robots makes them significantly different from other C2MDs.

Ergun Calisgan

The authors provide an overview of Roboethics Organizations and Roboethics Initiatives (RIs).  They then analyze the applicability of bottom-up policymaking approaches used in non-robotic spheres and the limitations of national and disciplinary boundaries on global RIs.  As proponents of open-source paradigms, the authors propose an online Roboethics knowledgebase, called Open Roboethics, with an open discussion and code design space.  This centralized online space will provide policymakers and robot designers with an efficient and transparent means to communicate societal values.

Gianmarco Veruggio

AJung Moon, Ergun Calisgan, Fiorella Operto, Gianmarco Veruggio, and H.F. Machiel Van der Loos will present Open Roboethics: Establishing an Online Community for Accelerated Policy and Design Change on Sunday, April 22nd at 1:45pm at We Robot 2012 at the University of Miami School of Law in Coral Gables, Florida.  AJung Moon is a Ph.D. student in Mechanical Engineering at the University of British Columbia, Vancouver, Canada.  Her M.A.Sc. thesis work was on making robots hesitate as a means to communicatively resolve human-robot resource conflicts. 

Fiorella Operto

Ergun Calisgan is an M.A.Sc. candidate in Mechanical Engineering at the University of British Columbia and is writing his thesis on “Observing Nonverbal Human Behavior Cues for Automated Turn-Taking During Human-Robot Collaboration.”  Fiorella Operto has an M.A. in Philosophy of Science from the Università di Milano in Italy.  She is a contributor to the scientific book series I Dialoghi and in 2000 co-founded the School of Robotics in Italy, of which she remains President.  Gianmarco Veruggio has a Master’s degree in Electronic Engineering from Genoa University in Italy.  He is an expert in robotics for extreme environments and is Director of Research at CNR-IEIIT in Genoa.  He co-founded the School of Robotics and in 2002 coined the term and proposed the concept of Roboethics.

H.F. Machiel Van der Loos

H.F. Machiel Van der Loos is an Associate Professor of Mechanical Engineering at the University of British Columbia and the Associate Director of the CARIS lab in Vancouver, Canada.  He has a Ph.D. in interactive robotics from Stanford University in Stanford, California.  Professor Van der Loos has co-authored three books, authored 27 journal papers, and holds a patent on a sleep quality improvement technology called SleepSmart.
