Author Archive | We-Robot-2021

Recording of #WeRobot 2021 Sessions Now Available

If you missed any part of We Robot 2021, or you just want to enjoy it again, you’ll be pleased to know we’ve got recordings of the sessions available online. If you want to read the paper before hearing the discussion (highly recommended!) see our program page for links to everything.

#WeRobot Tenth Anniversary: Virtual but Still Vital

#WeRobot had a great Workshop day; now the heavy lifting begins.

See the Program page for the next two days’ schedule and for links to all the papers, demos, and more.

Our conference software allows a healthy back-channel discussion, and this was in full form yesterday–expect even more today.

#WeRobot Login Reminder

If you are using Whova for the first time, after you have registered for We Robot, create an account here.

If you have a Whova account, and are already registered for We Robot, you can go straight to the WeRobot 2021 Whova login.

#WeRobot Features Workshops Today

Looking behind the curtain: What robots can (and cannot) do and how that influences the types of policies that can (and cannot) be written

Bill Smart

Bill Smart and Cindy Grimm will lead this workshop session at 11:00am on Thursday, September 23rd at #werobot.

Cindy Grimm

Machine learning is <gasp> just a glorified spreadsheet. Robots do not (and will not) fail in the same way that people do. We’ll use a mix of demonstrations and your own experience with common apps to highlight those differences. Our intention is to help you build up a better mental model of what robots can do, and how they can fail. We’ll close with an open discussion on how law and policy language might be adapted to account for this difference. (What is a “reasonable good-faith effort” for a deployed robotic system?)


if(goingToTurnEvil), {don’t();}: Creating Legal Rules for Robots

Evan Selinger

Evan Selinger and Woody Hartzog will lead this workshop session at 1:00pm on Thursday, September 23rd at #werobot.

Woody Hartzog

A lawyer, a roboticist, and a sociologist (or other discipline) walk into a bar…to form multidisciplinary teams attempting to craft some hypothetical legislation. Drafting laws can be just as frustrating as getting your code to work. Combine emergent issues in robotics with social dynamics and you’ve got quite a challenge.

This experiential session combines law, robotics, and group fun. We’ll work our way through how the legal sausage is made and try our hand at crafting some legal rules to solve some not-so-hypothetical problems.


Finding Your Path, Your People, and Your Conference Program–Networking Break

Ryan Calo

Kristen Thomasen

Sue Glueck

Ryan Calo, Sue Glueck, and Kristen Thomasen will lead this workshop session at 2:00pm on Thursday, September 23rd at #werobot.

New this year: the workshop will feature three simultaneous networking/mentoring sessions during one of the breaks: (1) How to do interdisciplinary research in this space; (2) What do I want to be when I grow up; (3) Welcome to We Robot for newbies. We hope you’ll take advantage of this opportunity to connect!


Why Call Them Robots? 100 Years of R.U.R.

Robin Murphy

Robin Murphy, Joanne Pransky, and Jeremy Brett will lead this workshop session at 3:00pm on Thursday, September 23rd at #werobot.

A roboticist, a science fiction librarian, and a robot psychiatrist walk into a … We Robot panel to present multidisciplinary perspectives on R.U.R., the 1921 Czech play that gave the world the word “robot” and the robot-uprising meme. Robin Murphy, author of Robotics Through Science Fiction, Jeremy Brett, curator of one of the largest and most respected collections of science fiction in the world, and Joanne Pransky, the world’s first robotic psychiatrist (who admittedly works more with industrialists than their equipment), are excited to engage the audience in a lively discussion about R.U.R. The panel intends to tackle questions such as: Why should we care about R.U.R. today? Is it a dated play that got lucky in creating memes, or was it a hit like Hamilton that shaped popular opinion about policy? Is it relevant for today’s discussions of robots replacing workers, universal basic income, and responsible robotics innovation? The panel will be open for questions and discussion from the audience and is likely to spawn a spirited chat session.

Joanne Pransky

If you aren’t familiar with R.U.R., there’s a good synopsis on Wikipedia, and the full play is available at Project Gutenberg. For other fun reading, see “The Czech Play That Gave Us the Word Robot” at The MIT Press Reader.

YouTube is full of many adaptations as well.


I’ll Take Robot Geeks for $1000, Alex: An Afternoon of Robot Trivia

Jason Millar

Join us for an afternoon of testing your knowledge of all things robot-related. Trivia-master Jason Millar will close our pre-conference workshop with some fun and good-natured competition. The Trivia session will be held at 4:15pm on Thursday, September 23rd at #werobot. Winner gets bragging rights until the next We Robot.

Important #WeRobot Meeting Information

At We Robot we ask (and expect) that everyone reads the papers scheduled for Days One and Two in advance of those sessions. (The Workshops do not have advance papers.) In most cases, authors do not deliver their papers. Instead we go straight to the discussant’s wrap-up and appreciation/critique. The authors respond briefly, and then we open it up to Q&A from our fabulous attendee/participants. Download the program to your calendar. Download a zip file of Friday’s papers and Saturday’s papers.

We Robot 2021 will be hosted on Whova. We’ve prepared a We Robot 2021 Attendee Guide. You can also Get Whova Now.

If you have not yet registered, the Registration Page awaits you.

Autonomous Vehicle Fleets as Public Infrastructure

Roel Dobbe

Roel Dobbe and Thomas Gilbert will present their paper, Autonomous Vehicle Fleets as Public Infrastructure, on Saturday, September 25th at 11:30am at #werobot 2021. Madeleine Clare Elish will lead the discussion.

The promise of ‘autonomous vehicles’ (AV) to redefine public mobility makes their development political — across a variety of stakeholders. This politics may not be obvious. In their ability to optimize the local safety and efficiency of individual vehicles, AVs promise to make transportation more predictable and reliable. Trips that people find too tedious to make could become trips worth taking, and as this change is reflected through the broader population it has the potential to fundamentally change the relationship consumers have with transportation. AV fleets also make it possible to centralize and coordinate the routing of vehicles. At the most local level we can see coordinated routing in the large body of work on platooning, which alleviates traffic congestion. Such work represents only the beginning of what could be possible. Centralized route planning could allow load-balancing between routes on the scale of cities, the predictive placement of vehicles for the purposes of ride-sharing, special routing considerations for emergency vehicles, and the management of interactions between these considerations.

Thomas Gilbert

At the same time, AVs are disrupting legacy processes for vehicle safety certification. We are witnessing regulatory capture as AV companies hire federal and state contractors to ensure their design certifications meet legacy thresholds for liability. Companies now craft their own Operational Design Domains to meet proprietary definitions of road features (streets, lanes, city regions) that purport to be technically safe, without requisite validation by the human factors community. Finally, there is the metaphorical frame through which AVs are likely to be understood, as private companies, consulting firms, and municipal entities craft public surveys as they see fit and thereby shape the types of consumer demand that suit their own organizational priorities.

Madeleine Clare Elish (discussant)

We create a framework for mapping concrete AV development choices to current and emerging forms of sociotechnical politics, and suggest what more responsible and stakeholder-sensitive design commitments would look like. We summarize three dimensions of AV politics: jaying (which places certain mobility stakeholders “out of scope”), wearing (which damages road infrastructure in a predictable fashion), and moral crumple zoning (which allocates responsibility for accidents to the most vulnerable). Despite the common label of AVs as “autonomous”, they will be shaped by human interests and expectations, and their status as public infrastructure must be decided through ongoing normative deliberation.

Empirically, we examine the emerging regulatory landscape of AV development, based on 50 semi-structured interviews with AI theory researchers, human factors experts, and AV policymakers. To our knowledge, this comprises the first qualitative dataset of insights and expert judgment from every stage of AV development, from design to training to physical deployment.

Prescribing Exploitation

Charlotte Tschider

Charlotte Tschider will present her paper, Prescribing Exploitation, on Saturday, September 25th at #werobot 2021. Michelle Johnson will moderate the 4:30pm – 5:30pm panel on Health Robotics.

Patients increasingly rely on connected wearable medical devices that use artificial intelligence infrastructures and physical housings that interact directly with the human body. Many people who have traditionally relied on compulsory medical wearables belong to groups specifically protected under anti-discrimination law, such as people with disabilities. As the population ages and average lifespans lengthen, the field of medical wearables is about to encounter a patient population explosion that will force the medical industry, lawyers, and advocates to find ways of balancing immensely larger scales of patient health data with maintaining a focus on patient dignity.

Michelle Johnson (moderator)

Health data discrimination results from a combination of factors essential to effective medical device AI operation: 1) existence, or approximation, of a fiduciary relationship, 2) a technology-user relationship independent of the expertise of the fiduciary, 3) existence of a critical health event or status requiring use of a medical device, 4) ubiquitous sensitive data collection essential to AI functionality and the exceptional nature of health data, 5) lack of reasonably similar analog technology alternatives, and 6) compulsory reliance on a medical device. Each of these factors increases the probability of inherent discrimination, or a deontological privacy risk resulting from healthcare AI use.

We conclude that health technologies introduce a unique combination of circumstances that create a new conception of discrimination: discrimination created by technology reliance, rather than automated or exacerbated by it. Specific groups are protected under anti-discrimination laws because there is an inherent risk of potential injury due to an individual’s status. If individuals who are compulsorily dependent on AI-enabled healthcare technologies are uniquely vulnerable relative to their non-technology-dependent peers, they are owed some additional duties.

Diverse Patient Perspectives on the Role of AI and Big Data in Healthcare

Kelly Bergstrand

Kelly Bergstrand, Jess Findley, Christopher Robertson, Marv Slepian, Cayley Balser, and Andrew Woods will present their paper, Diverse Patient Perspectives on the Role of AI and Big Data in Healthcare, on Saturday, September 25th at #werobot 2021. Michelle Johnson will moderate the 4:30pm – 5:30pm panel on Health Robotics.

Jess Findley

Artificial intelligence and big data (AIBD) are being used to find tumors in chest images, to regulate implanted devices, and to select personalized courses of care. Such new technologies have the potential to transform healthcare. But it is also unclear how these technologies will affect the doctor-patient relationship.

Christopher Robertson

Prior research suggests that patients’ trust in their physicians is an essential component of effective healing, but research also shows lower trust among Black, Hispanic, and Native American patients. Moreover, prior research suggests broad public skepticism about computer agents, which appears especially salient in under-served and marginalized communities. These groups may be concerned that AIBD systems will be biased against them; however, research also shows racial bias among human providers.

Marv Slepian

Ultimately, patient bias against computer-aided decisions may cause adverse health outcomes where automated systems are actually more accurate and less biased. These facts could make effective disclosure of AIBD’s role in treatment material to securing patient consent, especially so with diverse patient populations. Our study examines diverse patient populations’ views about automated medicine. Our project has qualitative and quantitative phases. In the qualitative phase, we have conducted structured interviews with 20 patients from a range of racial, ethnic, and socioeconomic perspectives to understand their reactions to current and future AIBD technologies.

Andrew Woods

For the quantitative phase, we have purchased a large (n=2600) and diverse sample of American respondents from YouGov, with oversampling of Black, Hispanic, and Native American populations in particular.

Cayley Balser

Our randomized, blinded survey experiments place respondents as mock patients into clinical vignettes, and manipulate whether the physician uses AIBD for diagnosis or treatment of the patient’s condition, whether that fact is disclosed, and how it is communicated to the patient. Importantly, we manipulate the distinction between the physician deferring to versus relying upon (and potentially overriding) the AIBD system.

Michelle Johnson (moderator)

Our findings will be useful for the development of theory-based and evidence-driven recommendations for how physicians and patients might integrate AIBD into the informed consent process. We suspect that the era of AIBD may change what it means to be a human physician, as expertise gives way to an ambassadorial function between the human patients and the expert systems.

Somebody That I Used to Know: The Risks of Personalizing Robots for Dementia Care

Alyssa Kubota

Alyssa Kubota, Maryam Pourebadi, Sharon Banh, Soyon Kim, and Laurel D. Riek will present their paper, Somebody That I Used to Know: The Risks of Personalizing Robots for Dementia Care, on Saturday, September 25th at #werobot 2021. Michelle Johnson will moderate the 4:30pm – 5:30pm panel on Health Robotics.

Maryam Pourebadi

People with dementia (PwD) often live at home with a full-time caregiver. That caregiver is often overburdened, stressed, and frequently an older adult with health problems of their own. Research has explored the use of robots that can aid both PwD and their caregivers with a range of daily living tasks, conduct household chores, provide companionship, and deliver cognitive stimulation.

Sharon Banh

A key concept discussed for these assistive robots is personalization, a measure of how well the robot adapts to the person over time. Personalization offers many benefits, including improved treatment effectiveness, adherence to treatment, and goal-oriented health management. However, it can also jeopardize the safety and autonomy of PwD, or exacerbate their social isolation and abuse. Thus, as roboticists continue to develop algorithms to adapt robot behavior, they must critically consider what the unintended consequences of personalizing robot behavior might be.

Laurel D. Riek

Soyon Kim

Michelle Johnson (moderator)

As robot designers who also work in community health, our team is uniquely positioned to explore open technical challenges and raise ethical concerns of personalizing robot behavior to people with cognitive impairments. In this paper, we propose key technical and policy concepts to enable robot designers, law-makers, and others to develop safe and ethical approaches for longitudinal interactions with socially assistive robots, particularly those designed for people with cognitive impairments. We hope that our work will inspire roboticists to consider the potential risks and benefits of robot personalization, and support future ethically-focused robot design.

Anti-Discrimination Law’s Cybernetic Black Hole

Marc Canellas

Marc Canellas will present his paper, Anti-Discrimination Law’s Cybernetic Black Hole, on Saturday, September 25th at 3:00pm at #werobot 2021. Cynthia Khoo will lead the discussion.

The incorporation of machines into American systems (e.g., crime, housing, family regulation, welfare) represents the peak of evolution, the perfect design, for our devotion to a colorblind society at the expense of a substantively equitable society. Machines increase the speed, scale, and efficiency of operations, while their complex inner-workings and our complex interactions with them effectively shield our consciousness and our laws from the reality that their failures disproportionately affect protected groups.

When investigating alleged discrimination, anti-discrimination law – especially Title VII’s protections against employment discrimination – is premised on two flawed assumptions: First, that discrimination always has a single, identifiable, and fixable source; second, that discrimination is exclusively the result of a human or a machine – a human villain with discriminatory animus or a faulty machine with algorithmic bias. These assumptions within anti-discrimination law are completely incompatible with the reality of how humans and machines work together, referred to as “cybernetic systems” in the engineering community. Cybernetic systems are characterized by interdependence and complexity, meaning that they have numerous, dynamic, uncertain factors whose contributions to performance or failure are hard to predict or identify. The failure of the law and its commentators to understand and address fundamental conflicts between how discrimination is assumed to occur and how discrimination actually occurs within cybernetic systems means that there is no liability for cybernetic discrimination produced from the interaction of humans and machines within organizations.

Cynthia Khoo (discussant)

This cybernetic black hole within anti-discrimination law has set up a system of perverse effects and incentives. When humans and machines make decisions together, it is almost impossible for plaintiffs to identify a single, identifiable source of discrimination in either the human or the machine. As machines increasingly mediate human decisions in criminal, housing, family regulation, and welfare systems, plaintiffs will increasingly lose what little protection they had from intentional discrimination (disparate treatment) or discriminatory effects (disparate impact). Decisionmakers who want to reduce liability have no need to reduce discrimination; instead, they are incentivized to increase adoption of complex, opaque machines to recommend and inform their human decisions, making the burden on plaintiffs as heavy as possible.

There have been endless proposed solutions to the problems of discrimination law and Title VII, but on review the truth emerges that no tweak to anti-discrimination law or pinch of technological magic will solve the problem of cybernetic discrimination. The only meaningful solution is one that our modern jurisprudence seems to fear most: a strict liability standard where outcomes for protected classes are explicitly used to identify and remedy discrimination.
