Archive | August, 2021

Driving Into the Loop: Mapping Automation Bias & Liability Issues for Advanced Driver Assistance Systems

Katie Szilagyi

Katie Szilagyi, Jason Millar, Ajung Moon, and Shalaleh Rismani will present their paper, Driving Into the Loop: Mapping Automation Bias & Liability Issues for Advanced Driver Assistance Systems, on Friday, September 24th at 5:15pm at #werobot 2021. Meg Leta Jones will lead the discussion.

Advanced Driver Assistance Systems (ADAS) are transforming the modern driving experience with technologies including emergency braking, blind spot monitoring, and audible warnings to ward off driver errors.

The assumption appears straightforward: automation will improve driver safety because it reduces the occurrence of human driving errors. But is this a safe assumption? Our current regulatory reality, within which this assumption operates, demands that drivers be able to monitor and operate ADAS effectively without any formal training or licensing to do so. This is premised on the outdated notion that drivers remain in full control of critical driving tasks. Meanwhile, ADAS now asks drivers both to drive the vehicle and to simultaneously monitor these complex systems. This significant shift is not yet reflected in driver licensing regimes or transportation liability regulations.

Jason Millar

Ajung Moon

Conversations about liability and automated vehicles often jump straight to the pesky problems posed by hardcoding ethical decisions into machines. By focusing on tomorrow’s automated driving technologies, such investigations overlook today’s mundane, yet still instructive, driving automation: blind spot monitoring, adaptive cruise control, and automated parking. These are the robotic co-pilots truly transforming human-robot interaction in the cabin, which could have serious legal consequences for transportation statutes, regulations, and liability schemes. Robotics scholarship has effectively teased out a distinction between humans in-the-loop and on-the-loop for automation. ADAS start to blur these lines by automating some tasks but not others, by varying system behaviours between vehicle manufacturers, and by expecting the driver to serve as an advanced systems monitor without ever receiving appropriate training.

Meg Leta Jones (discussant)

Shalaleh Rismani

In Part I, we explain the technical aspects of today’s most common assistive driving technologies and show how they are best situated in the litany of automation concerns documented by the current SAE framework. In Part II, we offer theoretical framing through automation bias, explaining some of the key concerns that arise when untrained people take control of dangerous automated machinery. In Part III, we provide an overview of driver licensing regimes, demonstrating the paucity of regulations designed to ensure the effective use of ADAS on public roads. In Part IV, we compare the automation challenges generated by ADAS to legal accounts of robotics, asking how liability accrues for operators of untrained robotic systems in other social spheres. Finally, in Part V, we offer clear policy advice on how to better integrate assistive driving technologies with today’s schemes for driver regulation.


Social Robots and Children’s Fundamental Rights: A Dynamic Four-Component Framework for Research, Development, and Policy

Vicky Charisi

Vicky Charisi, Urs Gasser, Randy Gomez, and Selma Šabanović will present their paper, Social Robots and Children’s Fundamental Rights: A Dynamic Four-Component Framework for Research, Development, and Policy, on Friday, September 24th at #werobot 2021. Veronica Ahumada-Newhart will lead the discussion at 3:45pm.

Urs Gasser

Social robots now reach vulnerable populations, notably children, who are in a critical period of their development. These systems, algorithmically mediated by Artificial Intelligence (AI), can successfully supplement children’s learning and entertainment because children can engage with them effectively and cognitively. Demand for social robots that interact with children is likely to increase in the coming years. As a result, there are growing concerns that need to be addressed, given the profound impact this technology can have on children.

Selma Šabanović

To date, the majority of AI policies, strategies and guidelines make only a cursory mention of children. To help fill this gap, UNICEF is currently exploring approaches to uphold children’s rights in the context of AI and to create opportunities for children’s participation. Robots bring unique opportunities but also robot-specific considerations for children. This combination calls into question how existing protection regimes might be applied; at present it remains unclear what the rules for children’s protection relating to their interaction with robots should look like.

Veronica Ahumada-Newhart (discussant)

Children develop dynamically and rapidly through social interactions, and evidence shows that they can perceive robots as part of their social groups, which means that robots can affect their development. However, the AI literacy and related skills that would support children’s critical reflection on robotic technology are currently missing from the majority of formal educational systems. This paper elaborates on the development of a dynamic framework that identifies the key risks and opportunities of robot adoption for children, the key actors, and a set of operationalizable actions that might shape the future implications of robots for children.


On the Practicalities of Robots in Public Spaces

Cindy Grimm

Cindy Grimm and Kristen Thomasen will present their paper, On the Practicalities of Robots in Public Spaces, on Friday, September 24th at #werobot 2021. Edward Tunstel will lead the 1:45pm – 3:15pm panel on Field Robotics.

There has been recent debate over how to regulate autonomous robots that enter into spaces where they must interact with members of the public. Sidewalk delivery robots, drones, and autonomous vehicles, among other examples, are pushing this conversation forward. Those involved in the law are often not well-versed in the intricacies of the latest sensor or AI technologies, and robot builders often do not have a deep understanding of the law. How can we bridge these two sides so that we can form appropriate law and policy around autonomous robots that provide robust protections for people, but do not place an undue burden on robot developers?

Kristen Thomasen

This paper proposes a framework for thinking about how law and policy interact with the practicalities of autonomous mobile robotics. We discuss how it is possible to start from the two extremes, in regulation (without regard to implementation) and in robot design (without regard to regulation), and iteratively bring these viewpoints together to form a holistic understanding of the most robust set of regulations that still results in a viable product. We also focus particular attention on cases where this is not possible because of a gap between the minimal set of realistic regulations and the ability to comply with them autonomously. In such cases, we show how that gap can drive scholarship in the legal and policy world and innovation in technology. By shining a light on these gaps, we can focus our collective attention on them and close them faster.

Edward Tunstel (moderator)

As a concrete example of how to apply our framework, we consider the case of sidewalk delivery robots on public sidewalks. This specific example has the additional benefit of comparing the outcomes of applying the framework to emergent regulations. Starting with the “ideal” regulation and the most elegant robot design, we look at what it would take to implement or enforce the ideal rules and dig down into the technologies involved, discussing their practicality, cost, and the risks involved when they do not work perfectly.

Do imperfect sensors cause the robot to stop working due to an abundance of caution, or do they cause it to violate the law or policy? If there is a violation, how sure can we be that it was a faulty piece of technology, rather than a purposeful act by the designer? Does implementing the law or policy mean a technology so expensive that the robot is no longer a viable product? In the end, the laws and policies that will govern autonomous robots as they do their work in our public spaces need to be designed with consideration of the technology. We must strive for fair, equitable laws, that are practically and realistically enforceable with current or near-future technologies.


Smart Farming and Governing AI in East Africa: Taking Gendered Relations and Vegetal Beings into Account

Jeremy de Beer, Laura Foster, Chidi Oguamanam, Katie Szilagyi, and Angeline Wairegi will present their paper, Smart Farming and Governing AI in East Africa: Taking Gendered Relations and Vegetal Beings into Account, on Friday, September 24th at #werobot 2021. Edward Tunstel will moderate the 1:45pm – 3:15pm panel on Field Robotics.

Laura Foster

Robots are on their way to East African farms. Deploying robotic farm workers and corresponding smart systems is lauded as the solution for improving crop yields, strengthening food security, generating GDP growth, and combating poverty. These optimistic narratives mimic those of the Green Revolution and activate memories of its underperformance in Africa. Expanding upon previous contributions on smart farming and the colonial aspects of AI technology, this paper investigates how AI-related technologies are deployed across East Africa.

Edward Tunstel (moderator)

Chidi Oguamanam

The creation of AI algorithms and datasets is a process driven by human judgements; the resultant technology is shaped by society. Cognizant of this, this paper provides an overview of emerging smart farming technologies across the East African region, situated within contemporary agricultural industries and the colonial legacies that inform women’s lives in the region.

Angeline Wairegi

After establishing the gendered implications of smart farming as a central concern, this paper provides rich analysis of the state-of-the-art scholarly and policy literature on smart farming in the region, as well as key intergovernmental agricultural AI initiatives being led by national governments, the United Nations, the African Union, and other agencies. This enables an understanding of how smart farming is being articulated across multiple material-discursive sites—media, government, civil society, and industry.

Katie Szilagyi

What becomes apparent is that smart farming technologies are being articulated through four key assumptions: not only techno-optimism, as above, but also ahistoricism, ownership, and human exceptionalism. These assumptions, and the multiple tensions they reveal, limit possibilities for governing smart farming to benefit small-scale women farmers. Using these four frames, our interdisciplinary author team identifies the key ethical implications for adopting AI technologies for East African female farmers.


Robots in the Ocean

Annie Brett

Annie Brett will present her paper, Robots in the Ocean, on Friday, September 24th at #werobot 2021. Edward Tunstel will lead the 1:45pm – 3:15pm panel on Field Robotics.

Academics (and particularly legal academics) have not paid much attention to robots in the ocean. The small amount of existing work is focused on relatively narrow questions, from whether robots qualify as vessels under the Law of the Sea to whether robotic telepresence can be used to establish a salvage claim on shipwrecks.

This paper looks at how two major robotic advances are creating fundamental challenges for current ocean governance frameworks. The first is a proliferation of robots actively altering ocean conditions, through both exploitative alteration, such as deep sea mining, and alteration with conservation goals, such as waste removal. This is best illustrated by The Ocean Cleanup, which defied warnings from scientists by deploying an ocean waste capture prototype that became irreparable just six months into its voyage. The second is in observational robots that are being used, primarily by scientific and defense entities, to further understanding of ocean ecosystems and human activities in them.

Edward Tunstel (moderator)

Annie Brett focuses on the regulatory grey area of international law implicated by robots with the capacity to actively alter ocean conditions. She also draws on analogues in terrestrial environmental law and the climate geoengineering literature to propose a mechanism for regulating robotic interventions in the ocean. Specifically, she argues for a modified form of environmental impact review that attempts to strike a balance between allowing innovation in ocean robots and providing a measure of oversight for interventions that have the potential to permanently alter ocean ecosystems.


Being “Seen” vs. “Mis-seen”: Tensions Between Privacy and Fairness in Computer Vision

Alice Xiang

Alice Xiang will present her paper, Being “Seen” vs. “Mis-seen”: Tensions Between Privacy and Fairness in Computer Vision, on Friday, September 24th at 11:30am at #werobot 2021. Daniel Susser will lead the discussion.

The rise of AI technologies has caused growing anxiety that AI may create mass surveillance systems and entrench societal biases. Major facial recognition systems are less accurate for women and individuals with darker skin tones due to a lack of diversity in the training datasets. Efforts to diversify datasets can raise privacy issues; plaintiffs can argue that they had not consented to having their images used in facial recognition training datasets.

This highlights the tension that AI technologies create between representation and surveillance: we want AI to “see” and “recognize” us, but we are uncomfortable with the idea of AI having access to personal data about us. This tension is further amplified when the need for sensitive attribute data to detect or mitigate bias is considered. Existing privacy law addresses this area primarily by erring on the side of hiding people’s sensitive attributes unless there is explicit informed consent. While some have argued that not being “seen” by AI is preferable—that being under-represented in training data might allow one to evade mass surveillance—incomplete datasets may result in detrimental false-positive identification. Thus, not being “seen” by AI does not protect against being “mis-seen.”

Daniel Susser (discussant)

The first contribution of this article is to characterize this tension between privacy and fairness in the context of algorithmic bias mitigation. In particular, this article argues that the irreducible paradox underlying current efforts to design less biased algorithms is the simultaneous desire to be both “seen” yet “unseen” by AI. Second, this article reviews the viability of strategies that have been proposed for addressing the tension between privacy and fairness and evaluates whether they adequately address the associated technical, operational, legal, and ethical challenges. Finally, this article argues that solving the tension between representation and surveillance requires considering the importance of not being “mis-seen” by AI rather than simply being “unseen.” Untethering these concepts (being seen, unseen, vs. mis-seen) can bring greater clarity around what rights relevant laws and policies should seek to protect. Given that privacy and fairness are both critical objectives for ethical AI, it is vital to address this tension head-on. Approaches that rely purely on visibility or invisibility will likely fail to achieve either objective.


The Legal Construction of Black Boxes

Elizabeth Kumar

Elizabeth Kumar, Andrew Selbst, and Suresh Venkatasubramanian will present their paper, The Legal Construction of Black Boxes, on Saturday, September 25th at 10:00am at #werobot 2021. Ryan Calo will lead the discussion.

Abstraction is a fundamental technique in computer science. Formal abstraction treats a system as defined entirely by its inputs, outputs, and the relationship that transforms inputs to outputs. If a system’s user knows those details, they need not know anything else about how the system works; the internal elements can be hidden from them in a “black box.” Abstraction also entails abstraction choices: What are the relevant inputs and outputs? What properties should the transformation between them have? What constitutes the “abstraction boundary?” These choices are necessary, but they have implications for legal proceedings that involve the use of machine learning (ML).

Andrew Selbst

This paper makes two arguments about abstraction in ML and legal proceedings. The first is that abstraction choices can be treated as normative and epistemic claims made by developers, claims that compete with judgments properly belonging to courts. Abstraction constitutes a claim as to the division of responsibility: what is inside the black box is the province of the developer; what is outside belongs to the user. Abstraction is also a factual definition, rendering the system an intelligible and interrogable object. Yet the abstraction that defines the boundary of a system is itself a design choice. When courts treat technology as a black box with a fixed outer boundary, they unwittingly abdicate their responsibility to make normative judgments about the division of responsibility for certain wrongs, and abdicate part of their fact-finding role by taking the abstraction boundaries as given. We demonstrate these effects in discussions of foreseeability in tort law, liability in discrimination law, and evidentiary burdens more broadly.

Suresh Venkatasubramanian

Our second argument builds from that observation. By interpreting the abstraction as one of many possible design choices, rather than a simple fact, courts can surface those choices as evidence to draw new lines of responsibility without necessarily interrogating the interior of the black box itself. Courts may draw on evidence about the system as presented to support these alternative lines of responsibility, but by analyzing the construction of the implied abstraction boundary of a system, they can also consider the context around its development and deployment.

Ryan Calo

Courts can rely on experts to compare a designer’s choices with emerging standard practices in the field of ML or assign a burden to a user to justify their use of off-the-shelf technology. After resurfacing the normative and epistemic contentions embedded in the technology, courts can use familiar lines of reasoning to assign liability as proper.


We Robot 2021 Will Be Virtual After All

We had hoped very much to have a live event, but circumstances make it clear that it’s not to be. We’d looked forward to welcoming you back to Coral Gables, but we’ve decided that due to safety concerns we have to take We Robot to a fully virtual format again.

Starting with its first edition here in Miami, We Robot has sought — we think successfully — to create and encourage interdisciplinary conversations about robotics (and AI) law and policy. We now have a decade’s worth of success at evolving a common vocabulary and a body of work that includes bedrock scholarship for the rapidly expanding fields represented at the conference. We have fostered, and continue to foster, connections between a diverse, international, and interdisciplinary group of scholars, ranging from graduate students to senior professors to people in government and industry. And — not least — we’ve had a lot of fun doing it.

We’re currently exploring various conference tools that we hope will make it easy not only to have an engaging event with significant audience participation, but also will facilitate the side conversations that are part of what makes We Robot the exciting event it has always been. Watch our homepage for the latest news.

We will soon be posting drafts of the papers that will be presented at We Robot. We may be going virtual, but we’re not changing the format: you will have a chance to read the papers before the conference, and indeed we hope that you will do so and come armed with your thoughts and questions. Other than on panels, authors will not present their own papers – instead our discussant will give a quick summary and critique, and then we’ll open it up to questions from the audience. For the panels, the authors speak briefly, then we go to Q&A. Links to the papers will appear on the program page of the website and in a series of blog posts on the front page of the site.

The good news is that by going virtual we are no longer capacity-constrained. We’re also reducing the price structure of the event. Registration for the workshop day will be only $25; registration for the two-day main conference will be $49 for everyone except students and UMiami faculty, for whom it will be $25 including the workshop. We do have some fee waivers available if these fees are a hardship for you. If you have already registered, you will be notified directly about processing any refunds that may be due.

Although we will not be able to see you in person, we look forward very much to your virtual participation in We Robot 2021. The heart of We Robot has always been in participation by its attendees, and we will do all we can to preserve that.

See you soon–virtually.
