There has been recent debate over how to regulate autonomous robots that operate in spaces where they must interact with members of the public. Sidewalk delivery robots, drones, and autonomous vehicles, among other examples, are pushing this conversation forward. Those involved in the law are often not well-versed in the intricacies of the latest sensor or AI technologies, and robot builders often do not have a deep understanding of the law. How can we bridge these two sides to form appropriate law and policy around autonomous robots, law and policy that provide robust protections for people without placing an undue burden on robot developers? This paper proposes a framework for thinking about how law and policy interact with the practicalities of autonomous mobile robotics. We discuss how it is possible to start from the two extremes, regulation written without regard to implementation and robot design pursued without regard to regulation, and iteratively bring these viewpoints together to form a holistic understanding of the most robust set of regulations that still results in a viable product. We also focus particular attention on the case where this is not possible because of a gap between the minimal set of realistic regulation and a robot's ability to autonomously comply with it. In this case, we show how that gap can drive scholarship in the legal and policy world and innovation in technology. By shining a light on these gaps, we can focus our collective attention on them and close them faster.
As a concrete example of how to apply our framework, we consider the case of sidewalk delivery robots on public sidewalks. This example has the additional benefit of letting us compare the outcomes of applying the framework with regulations that are already emerging. Starting with the “ideal” regulation and the most elegant robot design, we look at what it would take to implement or enforce the ideal rules and dig down into the technologies involved, discussing their practicality, their cost, and the risks that arise when they do not work perfectly.
Do imperfect sensors cause the robot to stop working out of an abundance of caution, or do they cause it to violate the law or policy? If there is a violation, how sure can we be that it was caused by a faulty piece of technology rather than by a purposeful act of the designer? Does implementing the law or policy require technology so expensive that the robot is no longer a viable product? In the end, the laws and policies that will govern autonomous robots as they work in our public spaces need to be designed with the technology in mind. We must strive for fair, equitable laws that are practically and realistically enforceable with current or near-future technologies.
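To make the first question concrete, the following is a minimal, illustrative sketch (not drawn from any real sidewalk-robot system; the names CrosswalkDetector, MIN_CONFIDENCE, and decide_to_cross are hypothetical) of how a single confidence threshold in the control logic turns sensor imperfection into one failure mode or the other: set the threshold high and a noisy sensor makes the robot freeze, set it low and the same noise produces behavior that violates the rule and is hard to distinguish from a deliberate design choice.

```python
import random

# Hypothetical threshold chosen by the designer: below it, the robot refuses to act.
MIN_CONFIDENCE = 0.90


class CrosswalkDetector:
    """Stand-in for an imperfect perception module."""

    def detect_clear_crossing(self) -> float:
        # Returns the detector's confidence that the crossing is clear.
        # Real perception noise is modeled here with a random draw.
        return random.uniform(0.0, 1.0)


def decide_to_cross(detector: CrosswalkDetector) -> str:
    confidence = detector.detect_clear_crossing()
    if confidence >= MIN_CONFIDENCE:
        return "cross"
    # A high MIN_CONFIDENCE means noisy readings mostly land here:
    # the robot stalls out of an abundance of caution rather than
    # risking a rule violation. Lowering the threshold trades that
    # caution for a higher chance of crossing when it should not.
    return "wait"


if __name__ == "__main__":
    detector = CrosswalkDetector()
    print([decide_to_cross(detector) for _ in range(10)])
```

The point of the sketch is that a regulator inspecting only the robot's outward behavior cannot easily tell whether a violation traces back to sensor noise or to a threshold the designer deliberately set low, which is exactly the attribution problem raised above.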