Reap What You Sow? Precision Agriculture and The Privacy of Farm Data
George Kantor will lead a discussion of Karen Levy, Solon Barocas, & Alexandra Mateescu's
Reap What You Sow? Precision Agriculture and The Privacy of Farm Data on Friday, April 12 at 11:30 a.m. at #werobot 2019.
Across rural America, the day-to-day lives of farmers are changing. Traditional forms of land management are rapidly shifting into data management, as global agriculture firms such as Monsanto and John Deere have begun to furnish farm equipment with a variety of sensors that detect and transmit fine-grained information about nearly every aspect of farm conditions and operations, including soil and weather conditions, seeding and fertilizer applications, and crop yield. While monitoring and mechanization have a long history in farming, recent developments in so-called “precision agriculture” aim to move beyond a one-size-fits-all approach to customizable, plot-specific strategies.
Precision agriculture arose from the fact that productivity can vary widely as a result of differences in terrain, soil, irrigation, and other conditions within and across fields. To mitigate the unevenness of these starting conditions, precision agriculture aims to measure the exact needs of square units of land, and this information is used to develop farming strategies (called “prescriptions”) tailored to these conditions on a unit-by-unit basis. Precision agriculture refers to a wide range of tools and practices, but generally comprises a combination of equipment-mounted sensors, farm data management software, and analytics services that often combine farm-level data with country-wide agronomic and weather information. Sensor-derived data is used to measure agricultural productivity more closely, to facilitate operational decision-making on the farm, and to meet the data collection standards for compliance with environmental and other regulations that require reporting to the government.
While adoption of such tools has been uneven, precision agriculture techniques are becoming the norm. In 2014, the American Farm Bureau Federation surveyed farmers on the issue of big data in farming and found that more than half of respondents planned to invest in additional data-driven technologies within the following year or two. For many farmers, precision agriculture has become necessary for maintaining production and minimizing costs. In particular, farmers have embraced precision agriculture as a way to improve environmental sustainability while also increasing profits. The more precise application of fertilizer and pesticide, for example, ensures that farms do not apply more than necessary, limiting environmental impact and reducing costs.
More broadly, the industry has begun to look to precision agriculture as a means to build “resilience” into food systems as climate change destabilizes centuries-old food production patterns and practices. Many of these innovations have introduced new forms of data collection and information flow, transforming the information ecology of farming and raising concerns about the privacy of agricultural data in the process.
The Institutional Life of Algorithmic Risk Assessment
Kristian Lum will lead a discussion of Alicia Solow-Niederman, YooJung Choi, and Guy Van den Broeck's The Institutional Life of Algorithmic Risk Assessment on Friday, April 12, at 10:15 a.m. at #werobot 2019.
On August 28, 2018, California passed the California Money Bail Reform Act, also known as Senate Bill 10 (SB 10), and eliminated the state’s system of money bail, replacing it in part with a risk assessment algorithm. Though SB 10 has been temporarily stayed, pending resolution of a 2020 ballot referendum, we cannot stay the bigger picture questions about risk assessment tools in the criminal justice system. Building from a long-standing critique of actuarial assessments in criminal justice, a rapidly-growing legal and technical literature recognizes that risk assessment algorithms are not automatically unbiased. Research to date tends to focus on fairness, accountability, and transparency within the tools, urging technologists and policymakers to contend with the normative implications of these technical interventions. While questions such as whether these instruments are fair or biased are normatively essential, this Essay contends that looking at these issues in isolation risks missing a critical broader point. Automated risk assessment systems do not operate in a vacuum; rather, they are deployed within complex webs of new and preexisting policy and legal structures.
This Essay's detailed analysis of SB 10 concretely illustrates how algorithmic risk assessment statutes and regulations involve tradeoffs and tensions between global and local authority. Specifically, using SB 10 as a not-so-theoretical hypothetical reveals a tension between, on one hand, a top-down, global understanding of fairness, accuracy, and lack of bias and, on the other, a tool that is well-tailored to local considerations. There is a general conceit in the law that a principle like fairness is universal. SB 10's text and legislative history support this globally-applicable perspective, calling for “validated risk assessment tools” that are “demonstrated by scientific research to be accurate and reliable.”
Concepts like accuracy, reliability, and non-discrimination are fixed principles from such a legal and policy standpoint. But there is a tension between such global principles and the validation and deployment of a tool in particular jurisdictions (typically at the county level). Anytime there is both a more centralized body that sets overarching guidelines about the tool and a risk assessment algorithm that must be tailored to reflect local jurisdictional conditions, this algorithmic federalism will give rise to a global-local tension. This tension results in a number of technical challenges, including questions about the treatment of proxies, Simpson’s paradox, and thresholding choices. Accordingly, we call for increased attention to the design of algorithmic risk assessment statutes and regulations, and specifically to their allocation of decision-making responsibility and discretion. Only then can we begin to assess the impact of these risk assessment algorithms when it comes to core criminal justice decisions about life and liberty.
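To make the Simpson's-paradox concern concrete, here is a minimal sketch in Python. The counties, groups, and counts below are entirely hypothetical; they are not drawn from SB 10, from any validated tool, or from any real jurisdiction. The point is only to show how the direction of a disparity in a risk tool's flag rates can reverse depending on whether the tool is evaluated county by county or statewide.

```python
# Hypothetical illustration of Simpson's paradox in risk-assessment validation.
# All counts are invented for demonstration purposes only.

# (group, county) -> (number flagged high-risk, number assessed)
counts = {
    ("Group 1", "County Alpha"): (81, 87),
    ("Group 1", "County Beta"):  (192, 263),
    ("Group 2", "County Alpha"): (234, 270),
    ("Group 2", "County Beta"):  (55, 80),
}

def rate(flagged, assessed):
    return flagged / assessed

# Local (per-county) view: compare flag rates between groups within each county.
for county in ("County Alpha", "County Beta"):
    r1 = rate(*counts[("Group 1", county)])
    r2 = rate(*counts[("Group 2", county)])
    print(f"{county}: Group 1 flagged {r1:.0%}, Group 2 flagged {r2:.0%}")

# Global (statewide) view: pool the counties and compare again.
for group in ("Group 1", "Group 2"):
    flagged = sum(counts[(group, c)][0] for c in ("County Alpha", "County Beta"))
    assessed = sum(counts[(group, c)][1] for c in ("County Alpha", "County Beta"))
    print(f"Statewide: {group} flagged {rate(flagged, assessed):.0%}")

# Within each county Group 1 is flagged more often than Group 2, yet statewide
# Group 2 is flagged more often: the direction of the disparity reverses
# depending on whether the tool is validated locally or globally.
```

In this invented example, Group 1 is flagged high-risk more often than Group 2 in every county, yet in the pooled statewide data Group 2 is flagged more often. This is exactly the kind of global-local divergence that a statute allocating validation authority must decide how to handle.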
The Reasonable Coder
Bryan Choi will lead a discussion of Petros Terzis's The Reasonable Coder on Friday, April 12 at 9:15 a.m. at #werobot 2019.
Algorithmic decision-making tools are no longer merely a matter of scientific exploration and academic debate. Today, data, code, and algorithms, blended together under the label of machine learning (ML), are used to identify potential aggressors and victims of violence, to assess a defendant's risk of committing new crimes, to decide whether you deserve a job opportunity, to (quite literally) monitor the financial markets, and to evaluate school teachers. In light of these ML applications, a rapidly growing part of the scientific community has focused its attention on the fairness, transparency, and accountability that algorithms should embody. Building on these interdisciplinary efforts to develop safeguarding techniques, this paper is an attempt to explore the boundaries of liability that surround the agents involved in the development of these tools.
The capacity of these systems to interact with and trigger changes in the physical world, as well as their potential for causing harm to a person, challenges the notions of duty of care, causality, and foreseeability that have historically dominated liability ascription. In the meantime, the absence of solid regulation leaves developers legally exposed to the rules of strict liability. My paper is a jurisprudential journey through the boundaries of contract and tort law in U.S. jurisdictions, aiming to extract common principles that could shape the proposed doctrine of the “Reasonable Coder.” This journey starts from the fundamental question of whether “smart software” should be regarded as a “good” or a “service,” and progresses to the normative connotations of the concepts of causality and foreseeability in the digital sphere. The notion of the “Reasonable Coder” is ultimately premised on the idea that, whether regarded as “services” or as “products,” “smart systems” will eventually be examined through the regime of negligence.
Full Text of We Robot Papers Now Available! (Updated)
The full text of the papers that will be presented at We Robot 2019 is now available on our Program page. If you are attending We Robot, we strongly advise you to read the papers before you get to Miami.
We Robot doesn’t work like ordinary conferences: other than on panels, most authors do not present their papers. Rather, we assume everyone has done their homework, and go straight to the response by our expert discussants. What’s more, the discussants only speak for a short time, and then we open it up to your questions and comments. This makes for much more interesting and engaging events, and takes advantage of the terrific people who come to We Robot – but it does mean that if you haven’t read the papers, you won’t be ready to take full part.
Update: Download all the We Robot 2019 Papers in one convenient zip file.
We Robot 2019 Poster Proposals Due Soon
A reminder that the (extended) deadline for poster proposals is March 21 — pretty soon. See the Call for Posters for details.
We want to hear about your late-breaking and cutting-edge projects! All accepted proposals get a free registration and entry for the $500 best poster prize.
It’s Time to Register for We Robot 2019
We Robot, now heading into its 8th year, takes place April 11 – 13, 2019, at the Newman Alumni Center. CLICK HERE for program details.
This year’s We Robot will also have a day of workshops on April 11th before the main conference. The poster session will be on April 12th during the main conference in order to showcase late-breaking research and developments. Do you have a late-breaking or cutting-edge project? #WeRobot
We Robot Preliminary Program, Update 1
We’ve posted a revised version of the We Robot 2019 Preliminary Program. Check it out!
We Robot 2019 Call for Posters
We invite poster submissions for the 8th annual robotics law and policy conference—We Robot 2019—to be held at the University of Miami in Coral Gables, Florida, USA, on April 11-13, 2019. Previously, the conference has been held at the University of Miami, the University of Washington, Stanford, and Yale. The conference web site is at http://robots.law.miami.edu/2019.
We Robot 2019 seeks contributions by American and international academics, practitioners, and others, in the form of scholarly papers, technological demonstrations, or posters. We Robot fosters conversations between the people designing, building, and deploying robots and the people who design or influence the legal and social structures in which robots will operate. We particularly encourage papers that reflect interdisciplinary collaborations between developers of robotics, AI, and related technology and experts in the humanities, social science, and law and policy.
This conference will build on a growing body of scholarship exploring how the increasing sophistication and autonomous decision-making capabilities of robots, and their widespread deployment everywhere from the home, to hospitals, to public spaces, to the battlefield, disrupt existing legal regimes or require rethinking policy issues.
How to Submit a Poster Proposal
We Robot’s poster session is designed to accommodate late-breaking and cutting-edge projects. This session is ideal for researchers to get feedback on a work in progress. At least one of the authors of each accepted poster should plan to be present at the poster during the entire poster session on the afternoon of April 12, 2019, and for a “lightning round” of one-minute presentations during the main session. If your poster is accepted, we will waive all conference fees. You can bring the poster, or in some cases, with sufficient lead time, we may be able to print it in Miami for you. If accepted, you will also need to provide a web-friendly summary of the work that we can post on the conference web site.
How to propose a poster. Please send a description of up to 500 words of what you have done or are doing, with links to any relevant photos or audio-visual information, as well as your C.V., via the conference submission portal. Please be sure to choose the “Posters” track for your upload. Submissions open January 15, 2019, are due by March 21, 2019, and we will send acceptances on a rolling basis.
We Robot 2019 Preliminary Workshop Schedule
We Robot starts with a day of optional Workshops on April 11. Because We Robot is so interdisciplinary, we’ve found it helpful to offer attendees introductions to basic concepts in a variety of fields so that we can all have a common vocabulary. The workshop day is optional: you can register for it when you reserve your place at We Robot 2019 (April 12-13).
Preliminary Schedule (subject to change):
| Time | Session | Presenters |
| --- | --- | --- |
| 9:00-10:00 | This is not Magic: Basic Technical Concepts for the Latest Developments in Robotics & AI | Bill Smart, Cindy Grimm |
| 10:00-11:00 | Alexa, What’s a Tort? It Sounds Delicious: Basic Legal Concepts for the Latest Developments in Robotics & AI | Ryan Calo (Woodrow Hartzog moderating) |
| 11:00-11:30 | Break | |
| 11:30-12:30 | Anything You Can Do I Can Do Better: Basic Economic Concepts for the Latest Developments in Robotics & AI | Rob Seamans (NYU Stern) |
| 12:30-1:30 | Lunch | |
| 1:30-2:30 | Don’t Look at Me Like That: The Latest Developments in Social Science/Philosophy for Robotics & AI | Madeleine Elish, Ari Waldman |
| 2:30-3:30 | We’re Not Gonna Take I.T.: Advocacy in Robotics and AI | Jay Stanley (ACLU), Kathrine Pratt |
| 3:30-4:00 | Break | |
| 4:00-5:00 | Get Ready for the Robot Olympics: Japan’s “Robotics 2020” Policy Initiative | Fumio Shimpo, Hideyuki Matsumi, Takayuki Kato (Woody Hartzog moderating) |
| 5:00-6:30 | Robots and Academics: The Nerdiest Trivia of All Time (plus light appetizers) | Rebecca Crootof, Howard Chizeck, Woody Hartzog |
Preliminary Program for We Robot 2019
Thank you to everyone who submitted papers for We Robot 2019! The Program Committee received a very large number of interesting paper proposals, which led to an acceptance rate of under 20%.
Below we list the accepted papers. A fuller program, including dates and times for panels and presentations, discussants, demos, and information about the poster session, will follow:
- Taking Futures Seriously: Forecasting as Method in Robotics Law and Policy by Stephanie Ballard (University of Washington) & Ryan Calo (University of Washington)
- The Robot Koseki – Using Japanese Family Law as a Model for Regulating Robots by Colin P. Jones (Doshisha Law School)
- Reap What You Sow? Precision Agriculture and The Privacy of Farm Data by Karen Levy (Cornell University), Solon Barocas (Cornell University), & Alexandra Mateescu (Data & Society Research Institute)
- Delivery Robots and the Influence of Warehouse Logic on Public Spaces by Mason Marks (Yale Law School & NYU Law School)
- Why the Moral Machine is a monster by Abby Everett Jaques (MIT)
- Emerging Legal and Policy Trends in Recent Robot Science Fiction by Robin R Murphy (Texas A&M University)
- The Institutional Life of Algorithms: Lessons from California’s Money Bail Reform Act by Alicia Solow-Niederman (UCLA School of Law), Guy Van den Broeck (UCLA), & YooJung Choi (UCLA)
- Administering Artificial Intelligence by Alicia Solow-Niederman (UCLA School of Law)
- The Reasonable Coder by Petros Terzis (University of Winchester)
- Toward a Comprehensive View of the Influence of Artificial Intelligence on International Affairs by Jesse Woo (Kyoto University)
- Panel: Robot/Human Handoffs
- The Human/Weapon Relationship in the Age of Autonomous Weapons and the Attribution of Criminal Responsibility for War Crimes by Marta Bo (Graduate Institute of International and Development Studies)
- Through the Handoff Lens: Are Autonomous Vehicles No-Win for Driver-Passengers by Jake Goldenfein (Cornell Tech); Wendy Ju (Cornell Tech); Deirdre Mulligan (UC Berkeley School of Information); Helen Nissenbaum (Cornell Tech)
- AI, professionals, and professional work: The practice of law with automated decision support technologies by Deirdre Mulligan (UC Berkeley School of Information) & Daniel N Kluttz (UC Berkeley School of Information)
- Panel: AI & Authorship
- Artificial Intelligence Patent Infringement by Tabrez Y Ebrahim (California Western School of Law)
- That Thou Art Mindful: Emergent creativity and the unromantic author by Ian Kerr (University of Ottawa); Carys Craig (York University)
- Jack of All Trades, Master of None: Is Copyright Protection Justified for Robotic Faux-riginality? by Sarit Mizrahi (University of Ottawa)
- We Are Not the Same: Consequences of AI Identity Disclosure on User Expectations and Behavior by Anastasia Usova (NYU Stern); Hallie Cho (INSEAD)
Again, thank you to everyone who submitted paper proposals. We are looking forward to April and could not be more excited about these papers.
Please note: Early Bird Registration closes January 11 — save money by registering now.