What Makes a Great We Robot Proposal? (Sample Abstracts)

What makes a good We Robot proposal?  Although it’s difficult to give hard-and-fast rules, given the highly interdisciplinary nature of the conference and our willingness to take risks, there are a few things that you should keep in mind when you’re writing your abstract.  While this won’t guarantee that your paper proposal gets accepted, it will help the program committee more fairly evaluate it, and help ensure that it’s a good fit for the conference.

  1. Start with a great idea.  This might sound obvious, but We Robot prides itself on being a venue for timely, cutting-edge work.  While all scholarship stands on the shoulders of previous work, We Robot has historically selected paper proposals that break new ground, offer new insights, and have bold visions of how law, policy, and robotics should interact.
  2. Understand the conference.  We’re about robots and their policy and social consequences.  We define “robots” very broadly to include software-only bots, including AIs. This is not a purely technological conference: a paper on how to build a better robotic hand would be out of scope, while a paper on the possible ethical, employment, or regulatory consequences of robots with better grips would be very much in scope.
  3. Be timely (or timeless).  Robotics (and AI) is a fast-moving field, and the way it interacts with law and policy is constantly changing.  We Robot prides itself on being a venue where people tackle the most current and pressing problems.  Paper proposals that deal with issues that came to light a few years ago, and for which there is already an established body of scholarship, often do not review positively unless it is clear how your paper engages with and supplements that literature. (Also, please note that We Robot is for previously unpublished papers only.) All that said, some of our very best papers have addressed essentially timeless theoretical or philosophical topics.
  4. Be aware of previous work at We Robot, if such work exists.  We Robot has been going for a decade now, and we’ve covered a lot of work at the conference.  Before you write your proposal, you should take a look at previous editions of the conference to see if your topic has been talked about before.  Not every proposal needs to build on previous work at We Robot, but you should be aware of what has come before.  Looking at previous papers from We Robot will also give you a better idea of the types of papers that tend to get accepted.
  5. Make it interdisciplinary.  We Robot is highly interdisciplinary, with attendees who study the law, public policy, robotics, computer science, ethics, philosophy, and more.  Papers that reach across traditional disciplinary boundaries are generally reviewed more favorably than those that stay within a single discipline.  While there are certainly excellent single-discipline papers at We Robot, the intention of the conference is to bring people from different areas together.  Papers that appeal to only one academic discipline tend not to be as good a fit as those that speak to people across disciplines or clearly have more general applications.
  6. Have an interdisciplinary team.  One of the best ways to make sure that your proposal speaks to the different audiences at We Robot is to write it with experts from other areas.  We’ve had a number of excellent single-author papers at the conference over the years, but often the most interesting new work shows up in papers written by people from widely different fields.
  7. Be clear about the contribution of the work.  Good proposals describe clearly the expected contributions of the paper, and link these to prior and current work in the field.  While it’s fine to frame the problem area in general terms, it really helps the reviewers if you have a concrete statement about what’s going to be in the paper, even if you don’t have all of the results and scholarship done at the time you write the proposal.
  8. Be clear about the stage of the work.  We realize that these are proposals for papers, and not reports of finished work.  However, the reviewers need to be confident that you’re going to be able to finish up the work and write the paper by the deadline.  We expect papers to be finished (at least in the sense of having all the data (if any) collected and being a full draft) and submitted about a month before the conference, so that the discussants have time to read them and prepare their remarks.  If you can help the reviewers understand where you are in the proposed work, what needs to be done, and maybe even a rough timeline of the work remaining, that will help them better evaluate the proposal.

Not every paper proposal will follow these suggestions, and they’re certainly not a list of requirements.  However, as you put your proposal together you might want to keep some of these ideas in mind.  As a concrete example of paper proposals that were successful at previous We Robot conferences, consider the following:

Example Proposal 1

Trust in Automation

This project asks: What drives our trust in and willingness to comply with automated decision-making?

Decisions based on algorithms or so-called intelligent machines have footprints in myriad aspects of modern life, including credit ratings (Citron & Pasquale 2014), policing (Johnson 2013; Uchida 2009), employment (Barocas & Selbst 2016), and social and romantic life (Finkel et al. 2012), to name a few. This empirical project explores perception of the legitimacy of these decisions, asking: When do we trust automated decisions and when do we not? What interventions, if any, can alter our perceptions of trust in automated decision-making? The project brings together ongoing research on trust, technology, algorithmic decision-making, robotics and human-computer interaction, human behavior, and the law.

Trust and its associated values are essential for social solidarity (Putnam 2000) and compliance with the law, in general. Today, trust research is increasingly focusing on trust in technology, trust in others in situations mediated by technology (Martin and Nissenbaum 2016), and trust as a factor in sharing (Waldman 2016, 2018). Social science research in the growing field of algorithmic technology and machine learning is recognizing that the use of “big data” to make social, professional, and government decisions can reproduce existing patterns of discrimination and reflect, or cement, the implicit biases of designers or the data they use (Crawford 2016; Levendowski 2018).

This raises important research questions. Do we trust the automated decisions that machines make for us? What factors—including human interventions, extent of notice, and interface design, among others—contribute to trust or distrust in automated processes? Does the discriminatory capacity of algorithmic decision-making affect user trust or perceptions of legitimacy? We will explore these questions through a series of experiments, qualitative interviews, and surveys.

We propose a series of studies to (1) examine what factors contribute to trust or distrust in automated processes, (2) identify what interventions help legitimate algorithmic decision-making for users, and (3) explore what defensive techniques users engage in when algorithmic decision-making is not trusted.

We will use a series of factorial vignette surveys to vary decision factors such as (a) the degree of human intervention in the decision, (b) the context of the decision (medical, employment, advertising, etc.), (c) the degree and type of explanation or transparency given, and (d) whether the decision is about the respondent or a third party. Other factors can be included after pilot results are presented at conferences.
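A factorial vignette design of this kind amounts to a full crossing of the factor levels, with each combination yielding one vignette condition. The sketch below illustrates the idea; the concrete factor names and levels are hypothetical, not taken from the proposal:

```python
from itertools import product

# Hypothetical factor levels for a factorial vignette design.
# The levels below are illustrative, not taken from the proposal.
factors = {
    "human_intervention": ["none", "human-reviewed", "human-made"],
    "decision_context": ["medical", "employment", "advertising"],
    "explanation": ["none", "general", "detailed"],
    "decision_target": ["respondent", "third party"],
}

# Full crossing of all factor levels: each combination is one vignette condition.
vignettes = [dict(zip(factors, combo)) for combo in product(*factors.values())]

print(len(vignettes))  # 3 * 3 * 3 * 2 = 54 conditions
```

In practice each respondent would see only a sample of these conditions, but enumerating the full crossing makes the size and balance of the design explicit before piloting.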

We will follow up with experiments (Study 2) measuring the impact of interventions – such as the type of explanation – on the trust intentions and trusting behavior of the respondents. Trusting behavior can include a willingness to engage, to share, to disclose, or to accept the decision.

The results of these studies will drive the need for Study 3 above (what defensive techniques users engage in when algorithmic decision-making is not trusted). The results will have both theoretical and managerial implications. The research will contribute to our growing understanding of technology as a social mediator, as well as the connection between trust and disclosure. It will also help explain why we accept or reject the legitimacy of algorithmic decision-making and technology design. With respect to law, this research can provide the empirical basis for a consumer protection agenda that protects individuals from the predatory use of automation in violation of our rights, privacy, and social expectations. We will also identify the types of algorithmic decisions and the interventions correlated with high user trust. In addition, the results will help designers of algorithmic decision systems develop trustworthy systems.

Key References:

  • de Vries, P. W. and C. Midden. “Effect of Indirect Information on System Trust and Control Allocation.” Behaviour & Information Technology, vol. 27, no. 1, 17-29 (2008).
  • Dzindolet, Mary T., et al. “The Role of Trust in Automation Reliance.” International Journal of Human–Computer Studies, vol. 58, no. 6, 697-718 (2003).
  • Gong, Li. “How Social is Social Responses to Computers? The Function of the Degree of Anthropomorphism in Computer Representations.” Computers in Human Behavior, vol. 24, no. 4, 1494–1509 (2008).
  • Li, Dingjun, P.L. Patrick Rau, and Ye Li. “A Cross-Cultural Study: Effect of Robot Appearance and Task.” International Journal of Social Robotics, vol. 2, no. 2, 175-186 (2010).
  • Nomura, T., et al. “Prediction of Human Behavior in Human-Robot Interaction Using Psychological Scales for Anxiety and Negative Attitudes Toward Robots.” IEEE Transactions on Robotics, vol. 24, no. 2, 442-451 (2008).
  • Waldman, Ari Ezra. Privacy As Trust: Information Privacy for an Information Age. Cambridge, UK: Cambridge University Press, 2018.

Example Proposal 2

Taking Futures Seriously: Forecasting as Method in Robotics Law and Policy

“It’s tough to make predictions, particularly about the future.” – Yogi Berra

A central challenge in setting law and policy around emerging technology is predicting how technology will evolve. In failing to consider the future of technology, we are often left with laws and policies that fall short of our technological reality.

The 99th United States Congress had no experience with the commercial internet, leaving it ill-equipped to envision the future of communications technology or to understand how widespread access to citizen information would need to be regulated. Indeed the laws Congress passed in 1986, which still govern electronic communications to this day, made assumptions about the nature of remote computing that have not obtained for decades.

In the 1990s, the Department of Transportation (DOT) envisioned that driverless cars would ride upon “smart” roads, almost like a trolley. The DOT issued extensive guidance along these lines—proposing, for instance, heavy investment in infrastructure. Today, autonomous vehicles are on the roads in several states, but they do not run on tracks. Instead, they are self-contained robots capable of sensing and responding to ordinary environments.

The difficulty in predicting the trajectory of new technology can give rise to a number of unfortunate consequences. One is staleness—outdated rules, such as the 1986 law governing electronic communications privacy, that nevertheless persist through inertia or entrenched interest. A second is waste—the over-investment in a particular instantiation of a technology, such as the investment in trolley infrastructure by the DOT in the 1990s. Yet another is policy paralysis—a phenomenon celebrated by libertarians but bemoaned by many as abdicating governmental responsibility to channel technology in the public interest.

That’s the bad news. The good news is that methods exist to help address the thorny problem of prediction. Over the years, scholars and corporations have developed numerous qualitative and quantitative techniques by which to explore possible futures and plan for uncertainty. Known variously as “envisioning,” “forecasting,” and “future studies,” these methods are credited with assisting institutions from Shell Oil to the National Security Agency in navigating potential crises and otherwise making profitable or wise decisions. Despite their maturity and success, however, these techniques remain almost entirely unremarked within law and technology theory or practice.

The thesis of this paper, co-authored by an information scientist and a legal scholar, is that robotics law and policy as a field would benefit from exposure to rigorous methods of forecasting.

Our argument proceeds as follows: The first section introduces the reader to the field of future studies through an efficient review of the extensive literature. The academic study of forecasting emerged around the 1960s, and professional foresight has been applied in corporations, non-profit organizations, and governments ever since. Although typically practiced at the executive level to guide an organization’s long-term strategic planning, the techniques can inform individual policymaking as well.

The second section isolates three specific methods—scenario planning, the futures wheel, and design fiction—and applies them to the case study of robotic delivery. We selected these methods for their feasibility, concreteness, and widespread deployment. Scenario planning, pioneered by Herman Kahn at the RAND Corporation and further developed by Pierre Wack and Ted Newland at Shell Oil, is a planning technique by which managers can confront and assess the plausibility of various local and global developments. The futures wheel is a technique developed by social scientist and civic leader Jerome Glenn to explore the ramifications of emerging technology. Design fiction is an increasingly popular mode of envisioning the evolution of technology and its social impacts through narrative iteration.

We selected robotic delivery as a case study because of its many potential configurations (e.g., drone or land-based robots) and its still-unfolding legal context. In addition to guidance generated by the Federal Aviation Administration around the possible uses of drones to deliver packages, five states (Ohio, Florida, Wisconsin, Idaho, and Virginia) have already passed laws concerning sidewalk robots. Despite the application’s plausibility, there is a dearth of discussion of robotic delivery in the legal literature.

In the final section, we leverage the insights from sections one and two to critique existing robotic delivery policies (or their absence). This section develops a case for broader application of forecasting techniques in and beyond robotics law and policy. We are mindful, of course, that making predictions is difficult and fraught. Literal “future-proofing” is a fool’s errand. Nevertheless, a systematic approach to the exercise of envisioning has the potential to significantly improve policymaking across robotics and other domains.

Key References:

  • Calo, Ryan. “Robotics and the Lessons of Cyberlaw.” Cal. L. Rev., vol. 103, 2015.
  • Dunne, Anthony and Raby, Fiona. Speculative Everything: Design, Fiction, and Social Dreaming. 1st edition, The MIT Press, 2013.
  • Glenn, Jerome C. “The Futures Wheel.” AC/UNU Millennium Project. Futures Research Methodology 3.0. United Nations. New York, 1994.
  • Tribe, Laurence H. Channeling Technology Through Law. Bracton Press, 1973.
  • Wack, Pierre. “Scenarios: Uncharted Waters Ahead.” Harvard Business Review, vol. 63, no. 5, Oct. 1985.

Example Proposal 3

Through the Handoff Lens: Are Autonomous Vehicles No-Win for Driver-Passengers?

The concept of handoff is a lens through which to track societal values in socio-technical systems. It applies to transitions of functional control from one component-actor to another (or others), across progressive versions of a system. These transitions happen so often, in such small increments, and in so many mundane ways that they may barely attract notice, and when they do, there is a paucity of vocabulary for capturing their broad significance. When a company replaces human phone operators with automated answering systems, mechanical switches with motion-operated lighting, or mechanical door locks with software-operated entry systems, we routinely ask whether the transition from one controller to another improves or impairs the functionality in question. Likewise, when we transition from reading books on paper to books on screen, from watching video on DVDs to video streamed via the Web, or from communicating on paper to email, our focus hovers on the efficacy of the respective media, quality of service, availability of content, and so on. Through the lens of handoff, by contrast, an analysis considers not only functionality, narrowly conceived; it expands the view to include values that may have been perturbed as a result of control transitions, possibly through the addition or subtraction of key component-actors and a broader reconfiguration of responsibilities. For example, an email platform may be functionally similar to the postal service, but by exposing correspondence to the platform provider it introduces unanswered privacy issues.

In the case of digital systems, handoff may apply to transitions from human actors to mechanical ones, from mechanical to software (programmable) systems, from programs to so-called “smart” programs (powered by machine-learning-based AI), from human-operated mechanisms to embodied cyber-physical systems, and the multiple permutations that may follow. Garnering enormous public attention and concern are handoffs of responsibility from human to non-human agents – from human decision-makers to automated or algorithmic decision systems, and from human controllers and actors to AI-embodied physical systems – robots, drones, bombs, sensors, and vehicles.

Our paper uses the handoff lens to focus on the case of autonomous vehicles (AVs). Research on AVs often explores a future where fully automated (completely driverless) vehicles operate as a type of public transport infrastructure, shuttling passengers from place to place. Our research takes a different tack, focusing on the present and near future, in which driving functionality in (privately owned) vehicles transitions sequentially from human to (increasingly “smart”) machine controllers while humans retain an active driving role. This arrangement of collaborative driving includes periodic transitions of control between machine, human, and hybrids thereof. These transitions necessitate signaling and communication protocols to request transitions, indicate transfer, and display states of control. This arrangement challenges how we understand the role of a human driver in AVs, and produces a new type of co-pilot, first officer, or ‘back-seat driver.’ Collaborative driving arrangements – likely to persist even in higher-level automation vehicles, where human drivers assume control from remote geographic locations, similar to the piloting of unmanned aerial vehicles – may appear like simple functional transitions to automation. But the handoff lens exposes a larger scenario involving multiple reconfigurations of component-actors and, potentially, shifts in values.

Central to the reconfigurations wrought by a collaborative control paradigm is the nature of communication between vehicle and human. For human driver-passengers to remain “in sync” with the machine controllers comprising the AV, the AV must serve as an informational intermediary and an experiential platform for the human. Since the human may be called upon to decide whether to respond and what action to take, collaborative driving and control transitions require that the vehicle re-represent, in a human-interpretable way and in humanly accessible real time, its operational model of the world generated through sensor data and machine learning. AV representations based on interpretations of the world generated by a vehicle’s sensory and decision-making apparatus are only metaphorically equivalent to human sensing and decision-making processes. These representations must also be communicated in the sensory environment of a fast-moving vehicle in a way that enables a human operator to reconcile their own visual and situational understanding of the world with the vehicle’s model of the world. Collaborative control of AVs thus proceeds according to ‘mash-ups’ of otherwise incongruous human and computational models of the physical world, produced by different sensory apparatus and calculating systems.1

The handoff lens will illustrate how this collaborative paradigm does more than merely produce functionally equivalent driving. It also generates novel questions about responsibility and liability for vehicle safety among vehicle and sensor manufacturers, administrators of driving-software updates, public authorities building the infrastructures that AVs navigate, licensing authorities ensuring capable human drivers, and humans themselves.2 This question is complicated further by vehicle manufacturers ceding control over the informational experience of vehicles due to the influx of third-party applications for navigation, media, and other operations.3 What was once the heavily standardized dashboard has become a highly contingent and incredibly complex interface.4 This new responsibility calculus has already resulted in changing laws to produce new forms of human engagement in AVs, sometimes by compelling driver attention, and sometimes by requiring remotely situated drivers.5 Requiring driver-passenger attention may also necessitate new surveillance mechanisms like internal cabin cameras with head and eye tracking or steering-wheel sensors.6 Increased surveillance of driver-passengers implicates privacy and autonomy, while also creating data governance questions around the movement and storage of data, reporting obligations, and procedures for law enforcement access.

A growing community of Human-Machine Interaction researchers is actively working on the best approaches to creating and managing the communication channels and interfaces that help reconcile AV and human models of the world for collaborative driving. Our paper accordingly involves a collaboration between Human-Machine Interaction researchers, philosophers, and legal scholars, in order to track how different interface configurations, on the one hand, implicate performance, control, safety, and user experience, and, on the other hand, bring new actors into the control environment, re-situate existing actors, re-distribute responsibility and liability, affect driver privacy and autonomy, and change the nature of road-rules enforcement.

  1. David Miller and Wendy Ju, ‘Joint Cognition in Automated Driving: Combining Human and Machine Intelligence to Address Novel Problems’ (2015) AAAI Spring Symposium – Ambient Intelligence for Health and Cognitive Enhancement 37.
  2. Jason Millar and Ian Kerr, ‘Delegation, relinquishment, and responsibility: The prospect of expert robots’ in Ryan Calo, A. Michael Froomkin and Ian Kerr (eds) Robot Law (Edward Elgar 2016), 102.
  3. See e.g. Gianpaolo Macario et al ‘An In-Vehicle Infotainment Software Architecture Based on Google Android’ (2009) IEEE SIES 257.
  4. See e.g. Aaron Marcus, ‘The Next Revolution: Vehicle User Interfaces’ in Aaron Marcus (ed) HCI and User-Experience Design (Springer 2015).
  5. National Conference of State Legislatures, ‘Self-Driving Vehicles Enacted Legislation’ available <http://www.ncsl.org/research/transportation/autonomous-vehicles-self-driving-vehicles-enacted-legislation.aspx>.
  6. Luke Fletcher and Alexander Zelinsky, ‘Driver Inattention Detection based on Eye Gaze-Road Event Correlation’ (2009) 28(6) The International Journal of Robotics Research 774.