Archive | September, 2021

Deb Raji Will Lead Discussion on “Debunking Robot Rights: Metaphysically, Ethically and Legally”

Deb Raji

Deb Raji will discuss Debunking Robot Rights: Metaphysically, Ethically and Legally at 10:00am on Saturday, September 25th at #werobot.

Deb Raji is a computer scientist and activist whose work centers on algorithmic bias, AI accountability, and algorithmic auditing. She received her degree in Engineering Science from the University of Toronto in 2019. In 2015, she founded Project Include, a nonprofit providing increased student access to engineering education, mentorship, and resources in low-income and immigrant communities in the Greater Toronto Area.

She has previously worked with Joy Buolamwini, Timnit Gebru, and the Algorithmic Justice League on researching gender and racial bias in facial recognition technology. She has also worked with Google’s Ethical AI team and served as a research fellow at the Partnership on AI and AI Now Institute at New York University. There, she worked on how to operationalize ethical considerations in machine learning engineering practice. A current Mozilla fellow, she has been recognized by MIT Technology Review and Forbes as one of the world’s top young innovators.

Cutting-Edge Posters: The Forefront of Robotics Research

The #WeRobot poster session will take place at 6:15pm on Friday, September 24. Short video previews of each poster will be available during the Lightning Poster Session at 12:30pm. The session will showcase some of the latest research developments and projects in robotics.

Privacy’s Algorithmic Turn

By Maria P. Angel

Maria P. Angel

  • The increasing relevance of algorithms has created a pivot in American legal scholars’ privacy discourse, broadening the scope of privacy’s values and rights and pushing scholars to rethink the very nature of privacy.
  • My research aims to trace how American legal scholars’ conception of the right to privacy has changed in the last 30 years.
  • I intend to conduct document analysis of materials from the Privacy Law Scholars Conferences (PLSC), and to use the Science and Technology Studies (STS) theory of “sociotechnical imaginaries” as my theoretical framework for making sense of the changing nature of privacy.


Egalitarian Machine Learning

By Clinton Castro, David O’Brien and Ben Schwan

Clinton Castro

David O’Brien

Ben Schwan

  • The increased reliance on prediction-based decision making has been accompanied by growing concerns about the fairness of the use of this technology, made more difficult by the lack of a consensus definition of “fairness” in this context.
  • Fairness, as used in the fair machine learning community, is best understood as a placeholder term for a variety of normative egalitarian considerations, chief among them the requirement not to be wrongfully discriminatory.
  • We are interested in exploring how to choose a fairness measure within a context. We present a general picture for thinking about the choice of a measure and talk about the choiceworthiness of three measures (“fairness through unawareness”, “equalised odds”, and “counterfactual fairness”).

Exploring Robotic Technologies to Mediate and Reduce Risk in Domestic Violence

By Mark Juszczak

Mark Juszczak

  • I am researching applications of robotic technologies to reduce domestic violence using two different perspectives: a problem-based perspective and a robotics-platform based perspective.
  • The problem-based perspective seeks to classify the spatial-temporal conditions under which a quantifiable threat or hazard of domestic violence occurs for women.
  • The robotics-platform based perspective seeks to determine the functional limits of embodied AI in providing an enhanced security function to mediate and reduce domestic violence.

Examining Correlations between Human Empathy and Vicbots

By Catherine McWhorter

Catherine McWhorter

  • This project focuses on robots capable of fulfilling victim roles – “vicbots” – and defines them as anthropomorphic bots with advanced AI that plead for the cessation of harm.
  • Whether or not vicbots negatively affect their human agent’s capacity for human-to-human empathy and compassion has implications for the health of the human agent, as well as the overall safety and well-being of communities.
  • Federal regulation is difficult due to a lack of consensus in research and discourse, so it is important to first categorize these bots and understand their impacts before moving on to appropriate regulation.

Reported Ethical Concerns Over Use of Robots for COVID-19 and Recommendations for Responsible Innovation for Future Pandemics

By Robin Murphy, Paula Dewitte, Jason Moats, Angela Clendenin, Vignesh Gandudi, Henry Arendes and Shawn Abraham

  • The coronavirus pandemic has led to new robots for healthcare, public safety, continuity of work and education, and social interactions.
  • As with any new application of technology, this may pose new ethical challenges for civil and administrative law, policy, and professional ethics.
  • While responsible innovation typically requires lengthy engagement with direct and indirect stakeholders, disasters demand immediate action, so we propose a short-term framework for stakeholders and roboticists to perform a proactive demand analysis.

Artificial Intelligence: The Challenges For Criminal Law In Facing The Passage From Technological Automation To Artificial Autonomy

By Beatrice Panattoni

Beatrice Panattoni

  • The project aims to analyze possible future criminal policies for regulating harms related to the use and functioning of AI systems.
  • A possible technically oriented classification of “AI crimes” will be suggested, organized into three groups:
    • (1) Cases where the AI system is used by a criminal agent as the means to realize the crime;
    • (2) Cases where the AI system is the “object” against which the crime is committed; and
    • (3) Cases where the realization of a crime is caused by the emergent behavior of an AI system.
  • The main issue is whether there is still space for criminal law when it comes to harms related to emergent behavior of an AI system, and, if so, what kind of criminal policies are better suited in this context; we outline possible scenarios in this presentation.

Roboethics to Design & Development Competition: Translating Moral Values Into Practice

By Jimin Rhim, Cheng Lin and Ajung Moon

Jimin Rhim

Cheng Lin

Ajung Moon

  • As robots enter our everyday spaces, human-robot interactions with ethically sensitive situations are bound to occur. For instance, designing a robot to evaluate whether to obey a teenager’s request to fetch alcohol remains a socio-technical challenge.
  • Our proposed project addresses this by hosting a first-of-its-kind global robotics design competition to explore new ways of considering human values and translating them for robots.
  • In addition to illuminating the translation process, the accumulated competition results will form the basis for an in-depth ethics audit framework to evaluate interactive robotic systems.

How Do AI Systems Fail Socially? Social Failure Mode and Effect Analysis (FMEA) for Artificial Intelligence Systems

By Shalaleh Rismani and Ajung Moon

Shalaleh Rismani

Ajung Moon

  • Developers of Artificial Intelligence Systems (AIS) have unearthed various sociotechnical failures in many applications, including inappropriate use of language in chatbots and discriminatory automated decision support systems.
  • Our open-ended research question is: how can AIS developers use FMEAs as one of the tools for creating accountability and improving design for sociotechnical failures?
  • In this work, we build on Raji et al.’s end-to-end auditability framework and develop a novel FMEA process that allows developers to effectively discover AIS’s social and ethical failures.

Nudging Robot Engineers To Do Good: Developing Standards for Ethical AI and Robot Nudges

By John Sullins, Sean Dougherty, Vivek Nallur and Ken Bell

John Sullins

Ken Bell

Vivek Nallur

  • HyperNudging, or A/IS (Autonomous Intelligent Systems) Nudging, allows programmers to actively change the behavior of users, such as encouraging exercise when a user has been sedentary, rather than simply predicting the value of some variable.
  • Soon, we will see more robot systems designed to do similar things on a larger scale, such as promoting safe behavior in public spaces, helping officials monitor public health or enforce quarantines, or encouraging people to stay longer in shops, museums and malls.
  • In this poster, we describe two use case scenarios of A/IS Nudging and show how new standards are being designed to help engineers build systems that are attuned to producing more ethical outcomes.

Machine Learning Algorithms in the Administrative State: The New Frontier for Democratic Experimentalism

By Amit Haim

Amit Haim

  • Administrative agencies are utilizing machine learning (ML) algorithms to ameliorate inaccuracies, inconsistencies, and inefficiencies. Due to the leeway these agencies have, especially at the local level, there is significant variation in agencies’ procedures, which may lead to reduced transparency and accountability.
  • Nevertheless, prescriptive approaches fail to recognize that flexible schemes are important for enhancing the values the administrative state often lacks; rigid schemes are likely to stifle innovation and push agencies to stick to the status quo.
  • I argue that internal governance processes (e.g., partnerships, independent evaluations) can promote transparency while addressing problems in algorithms such as disparities and opacity.



Important #WeRobot Meeting Information

At We Robot we ask (and expect) that everyone read the papers scheduled for Days One and Two in advance of those sessions. (The Workshops do not have advance papers.) In most cases, authors do not deliver their papers. Instead we go straight to the discussant’s wrap-up and appreciation/critique. The authors respond briefly, and then we open it up to Q&A from our fabulous attendee/participants. Download the program to your calendar. Download a zip file of Friday’s papers and Saturday’s papers.

We Robot 2021 will be hosted on Whova. We’ve prepared a We Robot 2021 Attendee Guide. You can also Get Whova Now.

If you have not yet registered, the Registration Page awaits you.


Meg Leta Jones Will Lead Discussion on Regulating Driving Assistance Software

Meg Leta Jones

Meg Leta Jones will discuss Driving Into the Loop: Mapping Automation Bias & Liability Issues for Advanced Driver Assistance Systems at 5:15pm on Friday, September 24th at #werobot.

Meg Leta Jones is an Associate Professor in the Communication, Culture & Technology program at Georgetown University where she researches rules and technological change with a focus on privacy, memory, innovation, and automation in digital information and computing technologies. She is also a core faculty member of the Science, Technology, and International Affairs program in Georgetown’s School of Foreign Service, a faculty affiliate with the Institute for Technology Law & Policy at Georgetown Law Center, a faculty fellow at the Georgetown Ethics Lab, and visiting faculty at the Brussels Privacy Hub at Vrije Universiteit Brussel.

Meg Leta Jones’s research covers comparative information and communication technology law, critical information and data studies, governance of emerging technologies, and the legal history of technology. Ctrl+Z: The Right to be Forgotten, Meg’s first book, is about the social, legal, and technical issues surrounding digital oblivion. Her second book project, The Character of Consent: The History of Cookies and Future of Technology Policy, tells the transatlantic history of digital consent through the lens of a familiar technical object. She is also editing a volume with Amanda Levendowski called Feminist Cyberlaw that explores how gender, race, sexuality and disability shape cyberspace and the laws that govern it. More details about her work can be found on her website.


Veronica Ahumada-Newhart Will Lead Discussion on How Child-Robot Interactions Can Affect Social Development

Veronica Ahumada-Newhart (discussant)

Veronica Ahumada-Newhart will discuss Social Robots and Children’s Fundamental Rights: A Dynamic Four-Component Framework for Research, Development, and Deployment at 3:45pm on Friday, September 24th at #werobot.

Dr. Newhart received her M.A. and Ph.D. in Education from the University of California, Irvine. She completed her M.Ed. in Adult Education from the University of Georgia and her B.A. in English Language and Literature from Loma Linda University. Prior to beginning her doctoral work, Dr. Newhart was a public health leader in her role as Director of Oral Health programs for the state of Montana. Her work in oral health supported key measures of Montana’s Title V Maternal and Child Health block grant and developed strong collaborations with the Centers for Disease Control and Prevention (CDC) as well as the Health Resources and Services Administration (HRSA).

She is an NIH-funded postdoctoral fellow in UC Irvine’s Institute for Clinical Translational Science. Her research is focused on the use of interactive technologies (e.g., telepresence robots) to establish or augment social connectedness for improved health, academic, and social outcomes. Her research encompasses strong interdisciplinary efforts between UCI’s School of Medicine, School of Education, Department of Informatics, and Department of Cognitive Sciences. Her research interests include child health and human development, virtual inclusion, human-computer interaction, human-robot interaction, and emerging technologies that facilitate learning, human development, and social connectedness.


Edward Tunstel Will Moderate the Field Robotics Panel

Edward Tunstel

Edward Tunstel is the moderator for the #WeRobot Field Robotics panel at 1:45pm on Friday, September 24th. The panel will feature the following papers:

Robots in the Ocean
Annie Brett

Smart Farming and Governing AI in East Africa: Taking Gendered Relations and Vegetal Beings into Account
Jeremy de Beer, Laura Foster, Chidi Oguamanam, Katie Szilagyi, and Angeline Wairegi

On the Practicalities of Robots in Public Spaces
Cindy Grimm and Kristen Thomasen

Edward Tunstel received his B.S. and M.E. degrees in Mechanical Engineering, with a concentration in robotics, from Howard University. His thesis addressed the use of AI-based symbolic computation for automated modeling of robotic manipulators and arms. In 1989 he joined the Robotic Intelligence Group at the NASA Jet Propulsion Laboratory (JPL), supporting research and development activities on NASA planetary rover projects. As a JPL Fellow, he received his Ph.D. in Electrical Engineering from the University of New Mexico. His dissertation addressed distributed fuzzy logic and knowledge-based control of adaptive hierarchical behavior-based systems, with application to mobile robot navigation.

After 18 years at JPL, Dr. Tunstel joined the Space Department of the Johns Hopkins Applied Physics Laboratory (APL) in 2007 as its Space Robotics and Autonomous Control Lead and later served as Senior Roboticist in its Research & Exploratory Development Department and Intelligent Systems Center. After a decade with APL, Dr. Tunstel directed robotics R&D at the United Technologies Research Center for several years before joining Motiv Space Systems, Inc., where he is currently the CTO. He is a Fellow of IEEE and Jr. Past President of the IEEE SMC Society, having previously served as its President, in several of its VP roles, and as General Chair of the 2011 IEEE SMC conference. He is an active member of the IEEE SMC Technical Committees on Robotics & Intelligent Sensing, on Brain-Inspired Cognitive Systems, and on Model-Based Systems Engineering, IEEE RAS Technical Committee on Space Robotics, and the AIAA Space Automation and Robotics Technical Committee. He is an Associate Editor or Editorial Board Member of five international engineering journals. He previously served as Chief Technologist of NSBE Space, a special interest group of NSBE Professionals, and held memberships in the Sigma Xi Scientific Research Society, the New York Academy of Sciences, and ASME.

In academia, he is an adjunct faculty member of Deakin University in Australia, holds the distinction of Honorary Professor at Obuda University in Hungary, chairs an advisory board for an autonomy center of excellence (TECHLAV) at N.C. A&T State University, and has also served as NASA Technical Monitor for undergraduate student research programs and for NASA Faculty Awards for Research as well as co-advisor and committee member for graduate thesis and dissertation research at several universities. He has authored over 170 journal, book chapter and conference publications, and has edited or co-authored 5 books in his areas of expertise.


Daniel Susser Will Lead Discussion on the Balance Between Representation and Surveillance with A.I. Facial Recognition

Daniel Susser

Daniel Susser will discuss Being “Seen” vs. “Mis-seen”: Tensions Between Privacy and Fairness in Computer Vision at 11:30am on Friday, September 24th at #werobot.

Daniel Susser is a philosopher by training and works at the intersection of technology, ethics, and policy. His research aims to highlight normative issues in the design, development, and use of digital technologies, and to clarify conceptual issues that stand in the way of addressing them through law and other forms of governance. He currently focuses on questions about privacy, online influence, and automated decision-making.

He is the Haile Family Early Career Professor and assistant professor in the College of Information Sciences & Technology, research associate in the Rock Ethics Institute, and affiliated faculty member in the Philosophy Department at Penn State University. From 2016-18, he was an assistant professor in the Philosophy Department at San Jose State University. Before that, he was a postdoctoral research fellow at the Information Law Institute at New York University’s School of Law, a member of the Institute’s Privacy Research Group, and a visiting scholar in NYU’s Department of Media, Culture, and Communication.


Ryan Calo Will Lead Discussion of “The Legal Construction of Black Boxes”

Ryan Calo

Ryan Calo will discuss The Legal Construction of Black Boxes at 10:00am on Friday, September 24th at #werobot.

Ryan Calo is the Lane Powell and D. Wayne Gittinger Professor at the University of Washington School of Law. He is a founding co-director (with Batya Friedman and Tadayoshi Kohno) of the interdisciplinary UW Tech Policy Lab and (with Chris Coward, Emma Spiro, Kate Starbird, and Jevin West) the UW Center for an Informed Public. Professor Calo holds adjunct appointments at the University of Washington Information School and the Paul G. Allen School of Computer Science and Engineering.

Ryan Calo’s research on law and emerging technology appears in leading law reviews (California Law Review, University of Chicago Law Review, UCLA Law Review, and Columbia Law Review) and technical publications (MIT Press, Nature, Artificial Intelligence) and is frequently referenced by the national media. His work has been translated into at least four languages. Ryan Calo has testified three times before the United States Senate and organized events on behalf of the National Science Foundation, the National Academy of Sciences, and the Obama White House. He has been a speaker at President Obama’s Frontiers Conference, the Aspen Ideas Festival, and NPR‘s Weekend in Washington.

Ryan Calo is a board member of the R Street Institute and an affiliate scholar at the Stanford Law School Center for Internet and Society (CIS), where he was a research fellow, and the Yale Law School Information Society Project (ISP). He serves on numerous advisory boards and steering committees, including University of California’s People and Robots Initiative, the AI Now Initiative at NYU, the Electronic Frontier Foundation (EFF), the Center for Democracy and Technology (CDT), the Electronic Privacy Information Center (EPIC), Without My Consent, the Foundation for Responsible Robotics, and the Future of Privacy Forum. In 2011, Ryan Calo co-founded the premier North American annual robotics law and policy conference We Robot with Michael Froomkin and Ian Kerr.

Ryan Calo worked as an associate in the Washington, D.C. office of Covington & Burling LLP and clerked for the Honorable R. Guy Cole, Chief Judge of the U.S. Court of Appeals for the Sixth Circuit. Prior to law school at the University of Michigan, Ryan Calo investigated allegations of police misconduct in New York City. He holds a B.A. in Philosophy from Dartmouth College.

Professor Calo won the Phillip A. Trautman 1L Professor of the Year Award in 2014 and 2017 and was awarded the Washington Law Review Faculty Award in 2019.


Autonomous Vehicle Fleets as Public Infrastructure

Roel Dobbe

Roel Dobbe and Thomas Gilbert will present their paper, Autonomous Vehicle Fleets as Public Infrastructure, on Saturday, September 25th at 11:30am at #werobot 2021. Madeleine Clare Elish will lead the discussion.

The promise of ‘autonomous vehicles’ (AV) to redefine public mobility makes their development political — across a variety of stakeholders. This politics may not be obvious. In their ability to optimize the local safety and efficiency of individual vehicles, AVs promise to make transportation more predictable and reliable. Trips that people find too tedious to make could become trips worth taking, and as this change spreads through the broader population it has the potential to fundamentally change the relationship consumers have with transportation. AV fleets also make it possible to centralize and coordinate the routing of vehicles. At the most local level, coordinated routing appears in the large body of work on platooning to alleviate traffic congestion. Such work represents only the beginning of what could be possible. Centralized route planning could allow load-balancing between routes on the scale of cities, the predictive placement of vehicles for the purposes of ride-sharing, special routing considerations for emergency vehicles, and the management of interactions between these considerations.

Thomas Gilbert

At the same time, AVs are disrupting legacy processes for vehicle safety certification. We are witnessing regulatory capture as AV companies hire federal and state contractors to ensure their design certifications meet legacy thresholds for liability. Companies now craft their own Operational Design Domains to meet proprietary definitions of road features (streets, lanes, city regions) that purport to be technically safe, without requisite validation by the human factors community. Finally, there is the metaphorical frame through which AVs are likely to be understood, as private companies, consulting firms, and municipal entities craft public surveys as they see fit and thereby shape the types of consumer demand that suit their own organizational priorities.

Madeleine Clare Elish (discussant)

We create a framework for mapping concrete AV development choices to current and emerging forms of sociotechnical politics, and suggest what more responsible and stakeholder-sensitive design commitments would look like. We summarize three dimensions of AV politics: jaying (which places certain mobility stakeholders “out of scope”), wearing (which damages road infrastructure in a predictable fashion), and moral crumple zoning (which allocates responsibility for accidents to the most vulnerable). Despite the common label of AVs as “autonomous”, they will be shaped by human interests and expectations, and their status as public infrastructure must be decided through ongoing normative deliberation.

Empirically, we examine the emerging regulatory landscape of AV development, based on 50 semi-structured interviews with researchers in AI theory and human factors, and with AV policymakers. To our knowledge, this comprises the first qualitative dataset of insights and expert judgment from every stage of AV development, from design to training to physical deployment.


Prescribing Exploitation

Charlotte Tschider

Charlotte Tschider will present her paper, Prescribing Exploitation, on Saturday, September 25th at #werobot 2021. Michelle Johnson will moderate the 4:30pm – 5:30pm panel on Health Robotics.

Patients increasingly rely on connected wearable medical devices that use artificial intelligence infrastructures and physical housings that interact directly with the human body. Many patients who have traditionally relied on compulsory medical wearables belong to legally protected groups specifically enumerated in anti-discrimination law, such as those defined by disability status. As the population ages and average lifespans lengthen, the field of medical wearables is about to encounter a patient population explosion that will force the medical industry, lawyers, and advocates to find ways of balancing immensely larger scales of patient health data with maintaining a focus on patient dignity.

Michelle Johnson (moderator)

Health data discrimination results from a combination of factors essential to effective medical device AI operation: 1) existence, or approximation, of a fiduciary relationship, 2) a technology-user relationship independent of the expertise of the fiduciary, 3) existence of a critical health event or status requiring use of a medical device, 4) ubiquitous sensitive data collection essential to AI functionality and the exceptional nature of health data, 5) lack of reasonably similar analog technology alternatives, and 6) compulsory reliance on a medical device. Each of these factors increases the probability of inherent discrimination, or a deontological privacy risk resulting from healthcare AI use.

We conclude that health technologies introduce a unique combination of circumstances that create a new conception of discrimination: discrimination created by technology reliance, rather than automated or exacerbated by it. Specific groups are protected under anti-discrimination laws because there is an inherent risk of potential injury due to an individual’s status. If individuals who are compulsorily dependent on AI-enabled healthcare technologies are uniquely vulnerable relative to their non-technology-dependent peers, they are owed some additional duties.
