Prescribing Exploitation

Charlotte Tschider

Charlotte Tschider will present her paper, Prescribing Exploitation, on Saturday, September 25th at #werobot 2021. Michelle Johnson will moderate the 4:30pm – 5:30pm panel on Health Robotics.

Patients increasingly rely on connected wearable medical devices that combine artificial intelligence infrastructures with physical housing that directly interacts with the human body. Many people who have traditionally relied on compulsory medical wearables belong to groups specifically protected by anti-discrimination law, such as people with disabilities. As the population ages and average lifespans lengthen, the field of medical wearables is about to encounter a patient population explosion that will force the medical industry, lawyers, and advocates to balance vastly larger volumes of patient health data against a continued focus on patient dignity.

Michelle Johnson (moderator)

Health data discrimination results from a combination of factors essential to effective medical device AI operation: 1) the existence, or approximation, of a fiduciary relationship; 2) a technology-user relationship independent of the expertise of the fiduciary; 3) the existence of a critical health event or status requiring use of a medical device; 4) ubiquitous sensitive data collection essential to AI functionality, and the exceptional nature of health data; 5) the lack of reasonably similar analog technology alternatives; and 6) compulsory reliance on a medical device. Each of these factors increases the probability of inherent discrimination, a deontological privacy risk resulting from healthcare AI use.

We conclude that health technologies introduce a unique combination of circumstances that create a new conception of discrimination: discrimination created by technology reliance, rather than automated or exacerbated by it. Specific groups are protected under anti-discrimination laws because there is an inherent risk of potential injury due to an individual’s status. If individuals who are compulsorily dependent on AI-enabled healthcare technologies are uniquely vulnerable relative to their non-technology-dependent peers, they are owed some additional duties.

Diverse Patient Perspectives on the Role of AI and Big Data in Healthcare

Kelly Bergstrand

Kelly Bergstrand, Jess Findley, Christopher Robertson, Marv Slepian, Cayley Balser, and Andrew Woods will present their paper, Diverse Patient Perspectives on the Role of AI and Big Data in Healthcare, on Saturday, September 25th at #werobot 2021. Michelle Johnson will moderate the 4:30pm – 5:30pm panel on Health Robotics.

Jess Findley

Artificial intelligence and big data (AIBD) are being used to find tumors in chest images, to regulate implanted devices, and to select personalized courses of care. Such new technologies have the potential to transform healthcare. But it is also unclear how these technologies will affect the doctor-patient relationship.

Christopher Robertson

Prior research suggests that patients’ trust in their physicians is an essential component of effective healing, but research also shows lower trust among Black, Hispanic, and Native American patients. Moreover, prior research suggests broad public skepticism about computer agents, which appears especially salient in under-served and marginalized communities. These groups may be concerned that AIBD systems will be biased against them; however, research also shows racial bias among human providers.

Marv Slepian

Ultimately, patient bias against computer-aided decisions may cause adverse health outcomes when automated systems are actually more accurate and less biased. These facts could make effective disclosure of AIBD’s role in treatment material to securing patient consent, especially with diverse patient populations. Our study examines diverse patient populations’ views about automated medicine. Our project has qualitative and quantitative phases. In the qualitative phase, we have conducted structured interviews with 20 patients from a range of racial, ethnic, and socioeconomic backgrounds to understand their reactions to current and future AIBD technologies.

Andrew Woods

For the quantitative phase, we have purchased a large (n=2600) and diverse sample of American respondents from YouGov, with oversampling of Black, Hispanic, and Native American populations in particular.

Cayley Balser

Our randomized, blinded survey experiments place respondents as mock patients into clinical vignettes, and manipulate whether the physician uses AIBD for diagnosis or treatment of the patient’s condition, whether that fact is disclosed, and how it is communicated to the patient. Importantly, we manipulate the distinction between the physician deferring to versus relying upon (and potentially overriding) the AIBD system.
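
To make the factorial design concrete, here is a minimal sketch of how such random assignment might be implemented; the factor names and levels are hypothetical stand-ins for illustration, not the study’s actual instrument.

```python
import random

# Sketch of randomized factorial assignment for a vignette experiment.
# Factor names and levels below are hypothetical, for illustration only.
FACTORS = {
    "aibd_used": [True, False],              # does the physician use AIBD?
    "disclosed": [True, False],              # is the AIBD role disclosed?
    "physician_role": ["defers_to_aibd",     # physician defers to the system
                       "relies_may_override"],  # relies on it, may override it
}

def assign_conditions(respondent_id: int) -> dict:
    """Randomly assign one level of each factor to a mock patient."""
    rng = random.Random(respondent_id)  # seeded for a reproducible assignment
    return {factor: rng.choice(levels) for factor, levels in FACTORS.items()}

# Example: assign the first three respondents to vignette conditions.
for rid in range(3):
    print(rid, assign_conditions(rid))
```

Each respondent sees exactly one cell of the design, which is what lets differences in reported trust or consent be attributed to the manipulated factors.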

Michelle Johnson (moderator)

Our findings will be useful for the development of theory-based and evidence-driven recommendations for how physicians and patients might integrate AIBD into the informed consent process. We suspect that the era of AIBD may change what it means to be a human physician, as expertise gives way to an ambassadorial function between human patients and expert systems.

Somebody That I Used to Know: The Risks of Personalizing Robots for Dementia Care

Alyssa Kubota

Alyssa Kubota, Maryam Pourebadi, Sharon Banh, Soyon Kim, and Laurel D. Riek will present their paper, Somebody That I Used to Know: The Risks of Personalizing Robots for Dementia Care, on Saturday, September 25th at #werobot 2021. Michelle Johnson will moderate the 4:30pm – 5:30pm panel on Health Robotics.

Maryam Pourebadi

People with dementia (PwD) often live at home with a full-time caregiver. That caregiver is often overburdened and stressed, and is frequently an older adult with health problems of their own. Research has explored the use of robots that can aid both PwD and their caregivers with a range of daily living tasks, conduct household chores, provide companionship, and deliver cognitive stimulation.

Sharon Banh

A key concept discussed for these assistive robots is personalization: a measure of how well the robot adapts to the person over time. Personalization offers many benefits, including improved treatment effectiveness, adherence to treatment, and goal-oriented health management. However, it can also jeopardize the safety and autonomy of PwD, or exacerbate their social isolation and abuse. Thus, as roboticists continue to develop algorithms that adapt robot behavior, they must critically consider the unintended consequences that personalizing robot behavior might have.

Laurel D. Riek

Soyon Kim

Michelle Johnson (moderator)

As robot designers who also work in community health, our team is uniquely positioned to explore open technical challenges and raise ethical concerns about personalizing robot behavior for people with cognitive impairments. In this paper, we propose key technical and policy concepts to enable robot designers, lawmakers, and others to develop safe and ethical approaches for longitudinal interactions with socially assistive robots, particularly those designed for people with cognitive impairments. We hope that our work will inspire roboticists to consider the potential risks and benefits of robot personalization, and support future ethically focused robot design.

Anti-Discrimination Law’s Cybernetic Black Hole

Marc Canellas

Marc Canellas will present his paper, Anti-Discrimination Law’s Cybernetic Black Hole, on Saturday, September 25th at 3:00pm at #werobot 2021. Cynthia Khoo will lead the discussion.

The incorporation of machines into American systems (e.g., crime, housing, family regulation, welfare) represents the peak of evolution, the perfect design, for our devotion to a colorblind society at the expense of a substantively equitable society. Machines increase the speed, scale, and efficiency of operations, while their complex inner-workings and our complex interactions with them effectively shield our consciousness and our laws from the reality that their failures disproportionately affect protected groups.

When investigating alleged discrimination, anti-discrimination law – especially Title VII’s protections against employment discrimination – is premised on two flawed assumptions: first, that discrimination always has a single, identifiable, and fixable source; second, that discrimination is exclusively the result of either a human or a machine – a human villain with discriminatory animus or a faulty machine with algorithmic bias. These assumptions are fundamentally incompatible with the reality of how humans and machines work together in what the engineering community calls “cybernetic systems.” Cybernetic systems are characterized by interdependence and complexity: they involve numerous, dynamic, uncertain factors whose contributions to performance or failure are hard to predict or identify. Because the law and its commentators have failed to understand and address this conflict between how discrimination is assumed to occur and how it actually occurs within cybernetic systems, there is no liability for cybernetic discrimination produced by the interaction of humans and machines within organizations.

Cynthia Khoo (discussant)

This cybernetic black hole within anti-discrimination law has set up a system of perverse effects and incentives. When humans and machines make decisions together, it is almost impossible for plaintiffs to identify a single, identifiable source of discrimination in either the human or the machine. As machines increasingly mediate human decisions in criminal, housing, family regulation, and welfare systems, plaintiffs will increasingly lose what little protection they had from intentional discrimination (disparate treatment) or discriminatory effects (disparate impact). Decisionmakers who want to reduce liability have no need to reduce discrimination; instead, they are incentivized to increase their adoption of complex, opaque machines to recommend and inform human decisions – making the burden on plaintiffs as heavy as possible.

Endless solutions have been proposed for the problems of anti-discrimination law and Title VII, but on review the truth emerges that no tweak to anti-discrimination law or pinch of technological magic will solve the problem of cybernetic discrimination. The only meaningful solution is the one that our modern jurisprudence seems to fear most: a strict liability standard in which outcomes for protected classes are explicitly used to identify and remedy discrimination.

Predicting Consumer Contracts

Noam Kolt

Noam Kolt will present his paper, Predicting Consumer Contracts, on Saturday, September 25th at 1:30pm at #werobot 2021. Meg Mitchell will lead the discussion.

This paper empirically examines whether a computational language model can read and understand consumer contracts. Language models are able to perform a wide range of complex tasks by predicting the next word in a sequence. In the legal domain, language models can summarize laws, translate legalese into plain English, and, as this paper will explore, inform consumers of their contractual rights and obligations.
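
For readers unfamiliar with that mechanic, here is a toy sketch of next-word prediction. The bigram table is a deliberately simplified, hypothetical stand-in: GPT-3 itself predicts over subword tokens with a large neural network, not a lookup table.

```python
# Toy next-word prediction, the core mechanic behind models like GPT-3.
# The bigram counts below are hypothetical, as if learned from a tiny
# corpus of contract language.
bigram_counts = {
    "may": {"terminate": 3, "collect": 2},
    "terminate": {"your": 4, "this": 1},
    "your": {"account": 5, "data": 2},
}

def predict_next(word: str) -> str:
    """Return the most frequent continuation of the current word."""
    candidates = bigram_counts.get(word, {})
    if not candidates:
        return "<unknown>"
    return max(candidates, key=candidates.get)

# Greedily generate a short continuation, one word at a time.
sequence = ["may"]
for _ in range(3):
    sequence.append(predict_next(sequence[-1]))

print(" ".join(sequence))  # -> "may terminate your account"
```

Scaled up by many orders of magnitude, this same predict-the-next-token objective is what lets a model answer questions about a terms-of-service clause.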

Meg Mitchell (discussant)

To showcase the opportunities and challenges of using language models to read consumer contracts, this paper studies the performance of GPT-3, a powerful language model released in June 2020. The case study employs a novel dataset of questions about the terms of service of popular U.S. websites. Although the results are not definitive, they offer several important insights. First, owing to its immense training data, the model can exploit subtle informational cues embedded in questions. Second, the model performed poorly on contractual provisions that favor the rights and interests of consumers, suggesting that it may contain an anti-consumer bias. Third, the model is brittle in unexpected ways: performance was highly sensitive to the wording of questions, but surprisingly indifferent to variations in contractual language.

While language models could potentially empower consumers, they could also provide misleading legal advice and entrench harmful biases. Leveraging the benefits of language models in reading consumer contracts and confronting the challenges they pose requires a combination of engineering and governance. Policymakers, together with developers and users of language models, should begin exploring technical and institutional safeguards to ensure that language models are used responsibly and align with broader social values.

Debunking Robot Rights: Metaphysically, Ethically and Legally

Abeba Birhane

Abeba Birhane, Jelle van Dijk, and Frank Pasquale will present their paper, Debunking Robot Rights: Metaphysically, Ethically and Legally, on Saturday, September 25th at 10:00am at #werobot 2021. Deb Raji will lead the discussion.

In this work we challenge the argument for robot rights on metaphysical, ethical and legal grounds. Metaphysically, we argue that machines are not the kinds of things that could be denied or granted rights. Ethically, we argue that, given machines’ current and potential harms to the most marginalized in society, limits on (rather than rights for) machines should be at the centre of current AI ethics debates. From a legal perspective, the best analogy to robot rights is not human rights but corporate rights, rights which have undermined the US electoral process as well as workers’ and consumers’ rights. The idea of robot rights, we conclude, acts as a smokescreen, allowing theorists to fantasize about benevolently sentient machines while so much of current AI and robotics is fuelling surveillance capitalism, accelerating environmental destruction, and entrenching injustice and human suffering.

Jelle van Dijk

Building on theories of phenomenology, post-Cartesian approaches to cognitive science, and critical race studies, we ground our position in the lived reality of actual humans in an increasingly connected, controlled, and surveilled society. What we find is the seamless integration of machinic systems into daily life in the name of convenience and efficiency. The last thing these systems need is legally enforceable “rights” to ensure that persons defer to them. Conversely, the ‘autonomous intelligent machine’ is a sci-fi fantasy, a meme that masks the environmental costs and human labour that are the backbone of contemporary AI. The robot rights debate further mystifies and obscures these problems, and it could easily provide a normative rationale for allowing the powerful entities that develop and sell AI to be absolved of accountability and responsibility, given the general association of rights with responsibility.

Frank Pasquale

Existing robotic systems (from chatbots to humanoid robots) are often portrayed as fully autonomous systems, and that is part of the appeal of granting them rights. However, these systems are never fully autonomous: they are always human-machine systems that run on human labour and environmental resources and are necessarily embedded in social systems from conception to development to deployment and beyond. Yet the “rights” debate proceeds from the assumption that the entity in question is somewhat autonomous or, worse, that it is devoid of exploited human labour. Approaching ethics here requires reimagining it from the perspective, needs, and rights of the most marginalized and underserved. This means that any robot rights discussion that overlooks the underpaid and exploited populations who serve as the backbone of “robots,” as well as the environmental cost of creating AI, risks being disingenuous. The question should not be whether robotic systems deserve rights, but rather: if we grant or deny rights to a robotic system, what consequences and implications arise for the people owning, using, developing, and affected by actual robots?

Deb Raji (discussant)

The time has come to change the narrative, from “robot rights” to the duties of the corporations and powerful persons now profiting from sociotechnical systems (including, but not limited to, robots). Damages, harm and suffering have been repeatedly documented as a result of the creation and integration (into the social world) of AI systems. Rather than speculating about the desert of hypothetical machines, the far more urgent conversation concerns robots and AI as concrete artifacts built by powerful corporations, further invading our private, public, and political space, and perpetuating injustice. A purely intellectual and theoretical debate is at risk of obscuring the real threat here: that many of the actual robots that corporations are building are doing people harm both directly and indirectly, and that a premature and speculative robot rights discourse risks even further unravelling our frail systems of accountability for technological harms.

Driving Into the Loop: Mapping Automation Bias & Liability Issues for Advanced Driver Assistance Systems

Katie Szilagyi

Katie Szilagyi, Jason Millar, Ajung Moon, and Shalaleh Rismani will present their paper, Driving Into the Loop: Mapping Automation Bias & Liability Issues for Advanced Driver Assistance Systems, on Friday, September 24th at 5:15pm at #werobot 2021. Meg Leta Jones will lead the discussion.

Advanced Driver Assistance Systems (ADAS) are transforming the modern driving experience with technologies such as emergency braking, blind-spot monitoring, and warning sirens that ward off driver error.

The assumption appears straightforward: automation will improve driver safety because it reduces the occurrence of human driving errors. But is this a safe assumption? Our current regulatory reality, within which this assumption operates, demands that drivers be able to effectively monitor and operate ADAS without any formal training or licensing to do so. It is premised on the outdated notion that drivers remain in full control of critical driving tasks. Meanwhile, ADAS now asks drivers both to drive the vehicle and to simultaneously monitor these complex systems. This significant shift is not yet reflected in driver licensing regimes or transportation liability regulations.

Jason Millar

Ajung Moon

Conversations about liability and automated vehicles often jump straight to the pesky problems posed by hardcoding ethical decisions into machines. By focusing on tomorrow’s automated driving technologies, such investigations overlook today’s mundane, yet still instructive, driving automation: blind-spot monitoring, adaptive cruise control, and automated parking. These are the robotic co-pilots truly transforming human-robot interaction in the cabin, with potentially serious legal consequences for transportation statutes, regulations, and liability schemes. Robotics scholarship has effectively teased out a distinction between humans in-the-loop and on-the-loop for automation. ADAS starts to blur these lines by automating some tasks but not others, by varying system behaviours between vehicle manufacturers, and by expecting the driver to serve as an advanced systems monitor without ever receiving appropriate training.

Meg Leta Jones (discussant)

Shalaleh Rismani

In Part I, we explain the technical aspects of today’s most common assistive driving technologies and show how they are best situated in the litany of automation concerns documented by the current SAE framework. In Part II, we offer theoretical framing through automation bias, explaining some of the key concerns that arise when untrained people take control of dangerous automated machinery. In Part III, we provide an overview of driver licensing regimes, demonstrating the paucity of regulations designed to ensure the effective use of ADAS on public roads. In Part IV, we compare the automation challenges generated by ADAS to legal accounts of robotics, asking how liability accrues for operators of untrained robotic systems in other social spheres. Finally, in Part V, we offer clear policy advice on how to better integrate assistive driving technologies with today’s schemes for driver regulation.

Social Robots and Children’s Fundamental Rights: A Dynamic Four-Component Framework for Research, Development, and Policy

Vicky Charisi

Vicky Charisi, Urs Gasser, Randy Gomez, and Selma Šabanović will present their paper, Social Robots and Children’s Fundamental Rights: A Dynamic Four-Component Framework for Research, Development, and Policy, on Friday, September 24th at #werobot 2021. Veronica Ahumada-Newhart will lead the discussion at 3:45pm.

Urs Gasser

Social robots now reach vulnerable populations, notably children, who are in a critical period of their development. These systems, algorithmically mediated by Artificial Intelligence (AI), can successfully supplement children’s learning and entertainment because children engage with them affectively and cognitively. Demand for social robots that interact with children is likely to increase in the coming years. As a result, there are growing concerns that need to be addressed, given the profound impact that this technology can have on children.

Selma Šabanović

To date, the majority of AI policies, strategies and guidelines make only a cursory mention of children. To help fill this gap, UNICEF is currently exploring approaches to uphold children’s rights in the context of AI and to create opportunities for children’s participation. Robots bring unique opportunities but also robot-specific considerations for children. This combination calls into question how existing protection regimes might be applied; at present it remains unclear what the rules for children’s protection relating to their interaction with robots should look like.

Veronica Ahumada-Newhart (discussant)

Children develop dynamically and rapidly through social interactions, and evidence shows that they can perceive robots as part of their social groups, which means that robots can affect their development. However, the AI literacy and related skills that would support children’s critical reflection on robotic technology are currently missing from the majority of formal educational systems. This paper elaborates on the development of a dynamic framework that identifies the key risks and opportunities of robot adoption for children, the key actors, and a set of actionable steps that might shape the future implications of robots for children.

On the Practicalities of Robots in Public Spaces

Cindy Grimm

Cindy Grimm and Kristen Thomasen will present their paper, On the Practicalities of Robots in Public Spaces, on Friday, September 24th at #werobot 2021. Edward Tunstel will moderate the 1:45pm – 3:15pm panel on Field Robotics.

There has been recent debate over how to regulate autonomous robots that enter into spaces where they must interact with members of the public. Sidewalk delivery robots, drones, and autonomous vehicles, among other examples, are pushing this conversation forward. Those involved in the law are often not well-versed in the intricacies of the latest sensor or AI technologies, and robot builders often do not have a deep understanding of the law. How can we bridge these two sides so that we can form appropriate law and policy around autonomous robots that provides robust protections for people, but does not place an undue burden on robot developers?

Kristen Thomasen

This paper proposes a framework for thinking about how law and policy interact with the practicalities of autonomous mobile robotics. We discuss how it is possible to start from the two extremes, regulation (without regard to implementation) and robot design (without regard to regulation), and iteratively bring these viewpoints together to form a holistic understanding of the most robust set of regulations that still results in a viable product. We also focus particular attention on the case where this is not possible because of a gap between the minimal set of realistic regulations and a robot’s ability to autonomously comply with them. In that case, we show how the gap can drive scholarship in the legal and policy world and innovation in technology. By shining a light on these gaps, we can focus our collective attention on them, and close them faster.

Edward Tunstel (moderator)


As a concrete example of how to apply our framework, we consider the case of sidewalk delivery robots on public sidewalks. This specific example has the additional benefit of comparing the outcomes of applying the framework to emergent regulations. Starting with the “ideal” regulation and the most elegant robot design, we look at what it would take to implement or enforce the ideal rules and dig down into the technologies involved, discussing their practicality, cost, and the risks involved when they do not work perfectly.

Do imperfect sensors cause the robot to stop working out of an abundance of caution, or do they cause it to violate the law or policy? If there is a violation, how sure can we be that it was a faulty piece of technology rather than a purposeful act by the designer? Does implementing the law or policy require technology so expensive that the robot is no longer a viable product? In the end, the laws and policies that will govern autonomous robots as they do their work in our public spaces need to be designed with consideration of the technology. We must strive for fair, equitable laws that are practically and realistically enforceable with current or near-future technologies.

Smart Farming and Governing AI in East Africa: Taking Gendered Relations and Vegetal Beings into Account

Jeremy de Beer, Laura Foster, Chidi Oguamanam, Katie Szilagyi, and Angeline Wairegi will present their paper, Smart Farming and Governing AI in East Africa: Taking Gendered Relations and Vegetal Beings into Account, on Friday, September 24th at #werobot 2021. Edward Tunstel will moderate the 1:45pm – 3:15pm panel on Field Robotics.

Laura Foster


Robots are on their way to East African farms. Deploying robotic farm workers and corresponding smart systems is lauded as the solution for improving crop yields, strengthening food security, generating GDP growth, and combating poverty. These optimistic narratives mimic those of the Green Revolution and activate memories of its underperformance in Africa. Expanding upon previous contributions on smart farming and the colonial aspects of AI technology, this paper investigates how AI-related technologies are deployed across East Africa.

Edward Tunstel (moderator)

Chidi Oguamanam


The creation of AI algorithms and datasets is a process driven by human judgement; the resultant technology is shaped by society. Cognizant of this, the paper provides an overview of emerging smart farming technologies across the East African region, situated within contemporary agricultural industries and the colonial legacies that inform women’s lives in the region.

Angeline Wairegi


After establishing the gendered implications of smart farming as a central concern, this paper provides rich analysis of the state-of-the-art scholarly and policy literature on smart farming in the region, as well as key intergovernmental agricultural AI initiatives being led by national governments, the United Nations, the African Union, and other agencies. This enables an understanding of how smart farming is being articulated across multiple material-discursive sites—media, government, civil society, and industry.

Katie Szilagyi

What becomes apparent is that smart farming technologies are being articulated through four key assumptions: not only techno-optimism, as above, but also ahistoricism, ownership, and human exceptionalism. These assumptions, and the multiple tensions they reveal, limit possibilities for governing smart farming to benefit small-scale women farmers. Using these four frames, our interdisciplinary author team identifies the key ethical implications of adopting AI technologies for East African women farmers.
