Author Archive | WeRobot2021

Daniella DiPaola Will Answer Questions on Skills from Students – Artifacts from a Robot Interaction Design Curriculum for Fifth Grade Students

Daniella DiPaola

Daniella DiPaola will lead two live Demo Q&A sessions on Skills from Students – Artifacts from a Robot Interaction Design Curriculum for Fifth Grade Students at #werobot. The first session will be held at 3:15pm on Friday, September 24th; the second session will be held at 11:00am on Saturday, September 25th. We suggest viewing the recorded demo in advance of the Q&A.

Many applications of social robots have been created with children in mind. For example, we have used social robots to teach children new languages, help them be more creative, and develop a growth mindset. However, these applications are typically developed by professional designers and developers of robotic systems. In this demonstration, Daniella will share conversational skills that students in grades 4 and 5 created for the Jibo robotic platform. The students developed these skills while living with Jibo in their homes for two months during the Spring of 2021. Skills will consist of multi-turn dialogue and robot animations. The demonstration will center the ideas and thoughts of these students and provide insight into how children believe robot skills should be developed.


Miles Brundage Will Answer Questions on the Societal Implications of Large Language Models

Miles Brundage

Miles Brundage will lead two live Demo Q&A sessions on Societal Implications of Large Language Models at #werobot. The first session will be held at 11:00am on Friday, September 24th; the second session will be held at 2:30pm on Saturday, September 25th. We suggest viewing the recorded demo in advance of the Q&A.

Large language models, which learn to process and produce language by consuming gigabytes of text, present a number of societal risks and opportunities. They have raised concerns about disinformation, bias, privacy, and more, and have been applied in settings ranging from search to question answering to entertainment. This demo will show the performance of GPT-3, a state-of-the-art language model debuted last year by OpenAI, and is intended to spark a discussion about the legal and broader societal implications of such systems. In addition to demonstrating the system interactively with conference participants, the demo may also feature commentary from the demonstrators on legal issues they have researched, such as intellectual property, and on possible legal interventions to mitigate some risks associated with language models, such as bot disclosure laws.


Michelle Johnson Will Moderate the Health Robotics Panel

Michelle Johnson

Michelle Johnson is the moderator for the Health Robotics panel at 4:30pm on Saturday, September 25th at #werobot. The panel will feature the following papers:

Somebody That I Used to Know: The Risks of Personalizing Robots for Dementia Care by Alyssa Kubota, Maryam Pourebadi, Sharon Banh, Soyon Kim, and Laurel D. Riek

Diverse Patient Perspectives on the Role of AI and Big Data in Healthcare
by Kelly Bergstrand, Jess Findley, Christopher Robertson, Marv Slepian, and Andrew Woods

Prescribing Exploitation
by Charlotte Tschider

Her research centers on robot-mediated rehabilitation, focusing on the investigation and rehabilitation of dysfunction due to aging, neural disease, and neural injury. In particular, she is interested in 1) exploring the relationships between brain plasticity and behavioral/motor control changes after robot-assisted interventions; 2) quantifying motor impairment and motor control of the upper limb in real-world tasks such as drinking; and 3) defining methods to maintain therapeutic effectiveness while administering local and remote, robot-mediated interventions.

She directs the Rehabilitation Robotics Lab at the University of Pennsylvania Perelman School of Medicine, a new lab within the Department of Physical Medicine and Rehabilitation. The Rehabilitation Robotics Lab’s mission is to use robotics, rehabilitation, and neuroscience techniques to translate research findings into assistive and therapeutic rehabilitation robots capable of functioning in real-world rehabilitation environments. Michelle and the Lab aim to improve the quality of life and function in activities of daily living (ADLs) for their target population in supervised or under-supervised settings.


Cynthia Khoo Will Lead Discussion of the Problems With Liability in Anti-Discrimination Systems

Cynthia Khoo

Cynthia Khoo will discuss Anti-Discrimination Law’s Cybernetic Black Hole at 3:00pm on Saturday, September 25th at #werobot.

Cynthia Khoo is a digital rights lawyer and founder of Tekhnos Law. She is also a full-time Associate at the Center on Privacy & Technology at Georgetown Law Center, a Research Fellow at the Citizen Lab (Munk School of Global Affairs & Public Policy, University of Toronto), and a member of the Board of Directors of Open Privacy Research Society.

She has extensive experience representing clients in proceedings before the Canadian Radio-television and Telecommunications Commission (CRTC), and has represented clients as interveners before the Supreme Court of Canada. She regularly researches and writes policy submissions to government consultations and advises on legal, policy, advocacy, and campaign strategies.

In April 2021, she completed research funded by the Women’s Legal Education and Action Fund (LEAF), resulting in the publication of the landmark report, Deplatforming Misogyny: Report on Platform Liability for Technology-Facilitated Gender-Based Violence. The report provides recommendations for legislative and other reforms and will inform LEAF’s future litigation and legal reform strategy concerning technology-facilitated gender-based violence, abuse, and harassment (TFGBV).

Cynthia Khoo earned her J.D. from the University of Victoria and her B.A. (Honours English) from the University of British Columbia, including exchange semesters at Université Jean-Moulin Lyon III and the National University of Singapore, Faculty of Law (NUS Law). She also holds an LL.M. (Concentration in Law and Technology) from the University of Ottawa, where she specialized in online platform regulation and platform liability for harms to marginalized communities. Her paper based on this work was presented at We Robot 2020, where she received the inaugural Ian R. Kerr Robotnik Memorial Award for the Best Paper by an Emerging Scholar.


Meg Mitchell Will Lead Discussion on Understanding Consumer Contracts with Computational Language Models

Meg Mitchell

Meg Mitchell will discuss Predicting Consumer Contracts at 1:30pm on Saturday, September 25th at #werobot.

Meg Mitchell’s research primarily involves vision-language and grounded language generation, focusing on how to evolve artificial intelligence towards positive goals. This includes research on helping computers to communicate based on what they can process, as well as projects to create assistive and clinical technology from the state of the art in AI. Her work combines computer vision, natural language processing, social media, many statistical methods, and insights from cognitive science.

Before founding Ethical AI and co-founding ML Fairness at Google Research, she was a founding member of Microsoft Research’s “Cognition” group, focused on advancing artificial intelligence, and a researcher in Microsoft Research’s Natural Language Processing group. She was a postdoctoral researcher at The Johns Hopkins University Center of Excellence, where she focused on structured prediction, semantic role labeling, and sentiment analysis, working under Benjamin Van Durme. Before that, she was a postgraduate (PhD) student in the natural language generation (NLG) group at the University of Aberdeen, where she focused on how to naturally refer to visible, everyday objects, working primarily with Kees van Deemter and Ehud Reiter.

In 2008, she received a Master’s in Computational Linguistics at the University of Washington, studying under Emily Bender and Fei Xia. From 2005 to 2012, she worked on and off at the Center for Spoken Language Understanding, part of OHSU, in Portland, Oregon. She worked on technology that leverages syntactic and phonetic characteristics to aid those with neurological disorders.


Madeleine Clare Elish Will Lead Discussion on the Political Implications of Autonomous Vehicles

Madeleine Clare Elish

Madeleine Clare Elish will discuss Autonomous Vehicle Fleets as Public Infrastructure at 11:30am on Saturday, September 25th at #werobot.

Madeleine Clare Elish previously led the AI on the Ground Initiative at Data & Society, where she and her team investigated the promises and risks of integrating AI technologies into society. Through human-centered and ethnographic research, AI on the Ground sheds light on the consequences of deploying AI systems beyond the research lab, examining who benefits, who is harmed, and who is accountable. The initiative’s work has focused on how organizations grapple with the challenges and opportunities of AI, from changing work practices and responsibilities to new ethics practices and forms of AI governance.

As a researcher and anthropologist, Madeleine has worked to reframe debates about the ethical design, use, and governance of AI systems. She has conducted field work across varied industries and communities, ranging from the Air Force, the driverless car industry, and commercial aviation to precision agriculture and emergency healthcare. Her research has been published and cited in scholarly journals as well as publications including The New York Times, Slate, The Guardian, Vice, and USA Today. She holds a PhD in Anthropology from Columbia University and an S.M. in Comparative Media Studies from MIT.


Deb Raji Will Lead Discussion on “Debunking Robot Rights: Metaphysically, Ethically and Legally”

Deb Raji

Deb Raji will discuss Debunking Robot Rights: Metaphysically, Ethically and Legally at 10:00am on Saturday, September 25th at #werobot.

Deb Raji is a computer scientist and activist whose work centers on algorithmic bias, AI accountability, and algorithmic auditing. She received her degree in Engineering Science from the University of Toronto in 2019. In 2015, she founded Project Include, a nonprofit providing increased student access to engineering education, mentorship, and resources in low-income and immigrant communities in the Greater Toronto Area.

She has previously worked with Joy Buolamwini, Timnit Gebru, and the Algorithmic Justice League on researching gender and racial bias in facial recognition technology. She has also worked with Google’s Ethical AI team and served as a research fellow at the Partnership on AI and AI Now Institute at New York University. There, she worked on how to operationalize ethical considerations in machine learning engineering practice. A current Mozilla fellow, she has been recognized by MIT Technology Review and Forbes as one of the world’s top young innovators.

Cutting-Edge Posters: The Forefront of Robotics Research

The #WeRobot poster session will take place at 6:15pm on Friday, September 24. Short video previews of each poster will be available during the Lightning Poster Session at 12:30pm. The session will showcase some of the latest research developments and projects in robotics.

Privacy’s Algorithmic Turn

By Maria P. Angel

Maria P. Angel

  • The increasing relevance of algorithms has created a pivot in American legal scholars’ privacy discourse, broadening the scope of privacy’s values and rights and pushing scholars to rethink the very nature of privacy.
  • My research aims to trace how American legal scholars’ conception of the right to privacy has changed in the last 30 years.
  • I intend to conduct document analysis of materials from the Privacy Law Scholars Conference (PLSC) and to use the Science and Technology Studies (STS) concept of “sociotechnical imaginaries” as my theoretical framework to make sense of the changing nature of privacy.

Egalitarian Machine Learning

By Clinton Castro, David O’Brien and Ben Schwan

Clinton Castro

David O’Brien

Ben Schwan

 

 

 

 

 

  • The increased reliance on prediction-based decision making has been accompanied by growing concerns about the fairness of this technology’s use, concerns made harder to address by the lack of a consensus definition of “fairness” in this context.
  • Fairness, as used in the fair machine learning community, is best understood as a placeholder term for a variety of normative egalitarian considerations, chiefly the requirement not to be wrongfully discriminatory.
  • We are interested in exploring how to choose a fairness measure within a given context. We present a general picture for thinking about the choice of a measure and discuss the choiceworthiness of three measures (“fairness through unawareness”, “equalised odds”, and “counterfactual fairness”); a rough illustration of one measure appears after this list.
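
To make one of those measures concrete, here is a minimal, hypothetical sketch (not taken from the poster) of how an “equalised odds” gap might be computed for binary predictions across two groups: the criterion asks that true-positive and false-positive rates be similar across groups, so the gap reports the larger of the two disparities.

import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in true-positive and false-positive rates between two
    groups (coded 0/1). Smaller values mean closer to equalised odds.
    Illustrative only; not the authors' implementation."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    gaps = []
    for label in (1, 0):           # label 1 -> compare TPRs, label 0 -> compare FPRs
        rates = []
        for g in (0, 1):
            mask = (group == g) & (y_true == label)
            rates.append(y_pred[mask].mean() if mask.any() else 0.0)
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

# Toy data: eight predictions split across two demographic groups.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(equalized_odds_gap(y_true, y_pred, group))   # ~0.33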

Exploring Robotic Technologies to Mediate and Reduce Risk in Domestic Violence

By Mark Juszczak

Mark Juszczak

  • I am researching applications of robotic technologies to reduce domestic violence from two different perspectives: a problem-based perspective and a robotics-platform-based perspective.
  • The problem-based perspective seeks to classify the spatial-temporal conditions under which a quantifiable threat or hazard of domestic violence occurs for women.
  • The robotics-platform-based perspective seeks to determine the functional limits of embodied AI in providing an enhanced security function to mediate and reduce domestic violence.

Examining Correlations between Human Empathy and Vicbots

By Catherine McWhorter

Catherine McWhorter

  • This project focuses on robots capable of fulfilling victim roles – “vicbots” – and defines them as anthropomorphic bots with advanced AI that plead for the cessation of harm.
  • Whether or not vicbots negatively affect their human agent’s capacity for human-to-human empathy and compassion has implications for the health of the human agent, as well as the overall safety and well-being of communities.
  • Federal regulation is difficult due to a lack of consensus in research and discourse, so it is important to first categorize these bots and understand their impacts before moving on to appropriate regulation.

Reported Ethical Concerns Over Use of Robots for COVID-19 and Recommendations for Responsible Innovation for Future Pandemics

By Robin Murphy, Paula Dewitte, Jason Moats, Angela Clendenin, Vignesh Gandudi, Henry Arendes and Shawn Abraham

  • The coronavirus pandemic has led to new robots for healthcare, public safety, continuity of work and education, and social interactions.
  • As with any new application of technology, this may pose new ethical challenges for civil and administrative law, policy, and professional ethics.
  • While responsible innovation typically requires lengthy engagement with direct and indirect stakeholders, disasters require immediate action, so we propose a short-term framework for stakeholders and roboticists to perform a proactive demand analysis.

Artificial Intelligence: The Challenges For Criminal Law In Facing The Passage From Technological Automation To Artificial Autonomy

By Beatrice Panattoni

Beatrice Panattoni

  • The project aims to analyze the possible and future criminal policies regarding the regulation of harms related to the use and functioning of AI systems.
  • A possible technically oriented classification of “AI crimes” will be suggested, organized into three groups:
    • (1) Cases where the AI system is used by a criminal agent as the means to realize the crime;
    • (2) Cases where the AI system is the “object” against which the crime is committed; and
    • (3) Cases where the realization of a crime is caused by the emergent behavior of an AI system.
  • The main issue is whether there is still space for criminal law when it comes to harms related to emergent behavior of an AI system, and, if so, what kind of criminal policies are better suited in this context; we outline possible scenarios in this presentation.

Roboethics to Design & Development Competition: Translating Moral Values Into Practice

By Jimin Rhim, Cheng Lin and Ajung Moon

Jimin Rhim

Cheng Lin

Ajung Moon

  • As robots enter our everyday spaces, human-robot interactions involving ethically sensitive situations are bound to occur. For instance, designing a robot that can evaluate whether to obey a teenager’s request to fetch alcohol remains a socio-technical challenge.
  • Our proposed project addresses this by hosting a first-of-its-kind global robotics design competition to explore new ways of considering human values and translating this information for robots.
  • In addition to illuminating the translation process, the accumulated competition results will form the basis for an in-depth ethics audit framework to evaluate interactive robotic systems.

How Do AI Systems Fail Socially? Social Failure Mode and Effect Analysis (FMEA) for Artificial Intelligence Systems

By Shalaleh Rismani and Ajung Moon

Shalaleh Rismani

Ajung Moon

  • Developers of artificial intelligence systems (AIS) have unearthed various sociotechnical failures across many applications, including inappropriate language use in chatbots and discriminatory automated decision-support systems.
  • Our open-ended research question is: how can AIS developers use FMEAs as one of the tools for creating accountability and improving design for sociotechnical failures?
  • In this work, we build on Raji et al.’s end-to-end auditability framework and develop a novel FMEA process that allows developers to effectively discover AIS’s social and ethical failures.

Nudging Robot Engineers To Do Good: Developing Standards for Ethical AI and Robot Nudges

By John Sullins, Sean Dougherty, Vivek Nallur and Ken Bell

John Sullins

Ken Bell

Vivek Nallur

  • HyperNudging, or A/IS (Autonomous Intelligent Systems) nudging, allows programmers to change users’ behavior, such as encouraging exercise when a user has been sedentary, rather than simply predicting the value of some variable.
  • Soon, we will see more robot systems designed to do similar things on a larger scale, such as promoting safe behavior in public spaces, helping officials monitor public health or enforce quarantines, or encouraging people to stay longer in shops, museums and malls.
  • In this poster, we describe two use case scenarios of A/IS Nudging and show how new standards are being designed to help engineers build systems that are attuned to producing more ethical outcomes.

Machine Learning Algorithms in the Administrative State: The New Frontier for Democratic Experimentalism

By Amit Haim

Amit Haim

  • Administrative agencies are utilizing machine learning (ML) algorithms to ameliorate inaccuracies, inconsistencies, and inefficiencies. Due to the leeway these agencies have, especially at the local level, there is significant variation in agencies’ procedures, which may lead to reduced transparency and accountability.
  • Nevertheless, prescriptive approaches fail to recognize that flexible schemes are important for enhancing the values the administrative state often lacks; rigid schemes are likely to stifle innovation and push agencies to stick to the status quo.
  • I argue that internal governance processes (e.g., partnerships, independent evaluations) can promote transparency while addressing problems in algorithms such as disparities and opacity.

Meg Leta Jones Will Lead Discussion on Regulating Driving Assistance Software

Meg Leta Jones

Meg Leta Jones will discuss Driving Into the Loop: Mapping Automation Bias & Liability Issues for Advanced Driver Assistance Systems at 5:15pm on Friday, September 24th at #werobot.

Meg Leta Jones is an Associate Professor in the Communication, Culture & Technology program at Georgetown University where she researches rules and technological change with a focus on privacy, memory, innovation, and automation in digital information and computing technologies. She is also a core faculty member of the Science, Technology, and International Affairs program in Georgetown’s School of Foreign Service, a faculty affiliate with the Institute for Technology Law & Policy at Georgetown Law Center, a faculty fellow at the Georgetown Ethics Lab, and visiting faculty at the Brussels Privacy Hub at Vrije Universiteit Brussel.

Meg Leta Jones’s research covers comparative information and communication technology law, critical information and data studies, governance of emerging technologies, and the legal history of technology. Ctrl+Z: The Right to be Forgotten, Meg’s first book, is about the social, legal, and technical issues surrounding digital oblivion. Her second book project, The Character of Consent: The History of Cookies and Future of Technology Policy, tells the transatlantic history of digital consent through the lens of a familiar technical object. She is also editing a volume with Amanda Levendowski called Feminist Cyberlaw that explores how gender, race, sexuality and disability shape cyberspace and the laws that govern it. More details about her work can be found at MegLeta.com and iSPYlab.net.


Veronica Ahumada-Newhart Will Lead Discussion on How Child-Robot Interactions Can Affect Social Development

Veronica Ahumada-Newhart (discussant)

Veronica Ahumada-Newhart will discuss Social Robots and Children’s Fundamental Rights: A Dynamic Four-Component Framework for Research, Development, and Deployment at 3:45pm on Friday, September 24th at #werobot.

Dr. Newhart received her M.A. and Ph.D. in Education from the University of California, Irvine. She completed her M.Ed. in Adult Education from the University of Georgia and her B.A. in English Language and Literature from Loma Linda University. Prior to beginning her doctoral work, Dr. Newhart was a public health leader in her role as Director of Oral Health programs for the state of Montana. Her work in oral health supported key measures of Montana’s Title V Maternal and Child Health block grant and developed strong collaborations with the Centers for Disease Control and Prevention (CDC) as well as the Health Resources and Services Administration (HRSA).

She is an NIH-funded postdoctoral fellow in UC Irvine’s Institute for Clinical Translational Science. Her research is focused on the use of interactive technologies (e.g., telepresence robots) to establish or augment social connectedness for improved health, academic, and social outcomes. Her research encompasses strong interdisciplinary efforts between UCI’s School of Medicine, School of Education, Department of Informatics, and Department of Cognitive Sciences. Her research interests include child health and human development, virtual inclusion, human-computer interaction, human-robot interaction, and emerging technologies that facilitate learning, human development, and social connectedness.
