Posters

Do We Need to Establish Guidelines or Basic Principles for Promoting the Use and Research & Development of AI?

by Fumio Shimpo, Takayuki Kato, Kaori Ishii, Takashi Hatae, Hideyuki Matsumi.

This poster session reports on recent policy and research trends in AI and robot law, mainly in Japan, with reference to the need to pursue academic research on robot law, including AI and the Internet of Things (IoT), and its relationship with data protection and privacy.

As far as the fundamental principles of law and legal thinking are concerned, the debate over new problems involving the use of AI and autonomous robots may spark a paradigm shift in law. It is therefore necessary to begin discussions toward formulating international guidelines or principles for promoting the use and Research & Development of AI.

The possibility of an AI running out of control and posing a danger to humans has been noted as one of the potential threats posed by robots that are becoming more and more autonomous. AI and autonomous robots may well bring about a totally new, and as yet unknown, set of social and legal systems that will have to work successfully on a daily basis in the real world. In such a new AI society, where personal data is put to sophisticated use under the IoT, advances in machine-learning data analytics make it possible to identify particular persons or to infer sensitive information, including from unexpected sources. The misuse of AI can affect core values and principles, such as individual autonomy and privacy, and may have a broader impact on society as a whole. We therefore have to consider all of these potentially serious problems carefully in order to secure safe procedures for the use of AI in a potentially very different and new human environment.

Some recent Japanese governmental efforts should be highlighted in terms of the development of AI technologies. The Ministry of Economy, Trade and Industry issued its “Guidelines on Contracts concerning the Use of AI and Data” on June 15th, 2018. These guidelines aim to facilitate the conclusion of appropriate contract clauses among businesses by exploring legal issues in software development and use, including intellectual property, competition, and international disputes. Another effort comes from the Institute for Information and Communications Policy of the Ministry of Internal Affairs and Communications, which issued the “AI Network Society Promoting Committee Report in 2018: Toward Facilitating AI Usages and Sound Developments of AI Networks” in July 2018. The Committee proposed “The Draft Principles on the Use of AI,” composed of ten principles: Proper Use of AI, Appropriate Data Learning, Interoperability, Safety for People, Security for AI Systems and Services, Privacy Protection, Human Dignity, Fairness for Individuals, Transparency, and Accountability. They are based on the notion of a “human-centered society”.

A Reasonable Expectation of Neural Privacy

by Katherine Pratt

The video of a robot doing parkour (Atlas, Boston Dynamics) provokes an instinctive fear of Terminator-like androids coming to overthrow humanity. But one need only watch the blooper reel from DARPA’s Robotics Challenge to understand how far away we really are. Human control of robots and other physical and digital devices will continue to be much more mainstream (at least until Data becomes a reality). Novel modalities for controlling things as varied as a telepresence robot or a smartphone are therefore an area of increasing interest, as researchers and industry innovate methods to make such control as seamless and intuitive as possible.

One method may come straight from the brain, by recording neural signals and using that information to control a device separate from the body: a brain-computer interface (BCI). There is a novelty in playing a videogame by simply putting on a hat with embedded electrodes, or controlling a prosthetic arm like Luke Skywalker, but such innovation hides deep ethical and policy questions about commercial access to and use of neural signals.

Neural signals recorded under the auspices of medical care or research are covered in the US by HIPAA or by a consent form approved by an institutional review board, respectively. But what happens when it is a commercial entity that is recording the neural signals? Startups and tech billionaires alike are researching how to commercialize BCIs, creating a grey area as to who “owns” someone’s neural signals and the information derived from them. Researchers have already demonstrated the feasibility of using neural signals to elicit everything from month of birth to a 4-digit PIN [1][4][5][6]; the risks and potential responses (such as a BCI anonymizer) are also published [2][3]. One of the biggest challenges in the field is the quantitative uncertainty of the risk to consumers, both with current technology and with extrapolated future capabilities.

Concurrent with this quantitative work are questions of ethics and policy: if potential consumers learn that private information can be derived from their neural signals, grievances may grow. The objection, then, is not to the particular information that is compromised, but to a foreign entity interpreting an individual’s neural signals. The opinions of potential users may help to drive the minimum regulations or protections desired for neurally-controlled devices. This tie between ethics and policy represents a novel but necessary area of discussion that should become commonplace for all emerging technologies.
This poster will discuss the preliminary findings of dissertation work that addresses the questions of using neural signals to elicit private information, how potential consumers perceive this risk, and policy approaches to protect neural signals. Additionally, it will suggest a model framework for others to apply to multidisciplinary engineering research projects.

 

The VIROS Project: Vulnerability in the Robot Society

by Tobias Mahler & Lee Bygrave

The objective of this poster is to introduce the VIROS project at the University of Oslo, Norway. The project will run for five years, starting in 2019, and facilitates collaboration between researchers in robot engineering, law, and the social sciences.

The increasing deployment of robots and artificial intelligence (AI) systems is introducing new layers of critical infrastructures in various areas of our society. This, in turn, contributes to new digital vulnerabilities and poses novel legal and regulatory questions. The VIROS project investigates the challenges and solutions in regulating robotics – legally and technically – particularly with respect to addressing the safety, security and privacy concerns such systems raise. The impact of the project will be ensured by involving multiple relevant stakeholders in the Norwegian public sector, consumer advocates, three robotics companies (two Norwegian and one Japanese), and leading international roboticists.

The overarching issues that the project will tackle are: (i) How can we address the challenges robotics poses to human security, privacy and safety through technological and regulatory choices? (ii) How does the design of the physical aspect of robots affect security, privacy and safety concerns, and to what degree does the physical component of robotics justify its conception as a distinct regulatory field? (iii) In light of ongoing technical development of robots, to what degree are existing legal and ethical frameworks in the area of security, privacy and safety adequate to deal with developments in robotics and AI? (iv) Which technological choices and regulatory models could be revised and/or devised, and in which ways, to develop policies and technical solutions that further facilitate robotics and AI in publicly acceptable directions?

In tackling these issues, the project will examine the validity of at least three general hypotheses. One hypothesis is that the importation of robots from factories and other controlled environments to less controlled settings involving close interaction with humans (such as homes) necessitates rethinking of how security, privacy and safety issues are addressed in robot design and regulation. The key focus here is on an integrated assessment that takes into account technical, social, ethical and legal factors. Another hypothesis is that soft law is particularly suited to regulating the highly complex and rapidly evolving field of smart robotics, but that certain basic requirements and incentives may nevertheless need to be introduced through legislation. A third hypothesis is that making a careful selection of hardware and software technologies applied in robotics will substantially ameliorate the robot-induced threats to security, safety and privacy.

The project will address these issues along two main prongs of research:
– Prong 1 – smart robots, privacy, security and safety
– Prong 2 – healthcare robots

 

The Confrontation Clause and Artificial Intelligence 

by Brian Sites

Once, courts eschewed “the spector [sic] of trial by machine” and the possibility that “each man’s sworn testimony may be put to the electronic test.” Judges worried “jurors w[ould] abdicate their responsibility for determining credibility, and rely instead upon the assessment of a machine.” Forty years later, that fear has metamorphosed into welcoming machine evidence in place of human accusers. But these “machine accusers,” as creations of the imperfect, are fallible. And as tools operated by imperfect human agents, even an otherwise neutral machine can advance an ulterior agenda. Courts across the nation, however, seem unconcerned as case after case is handed down without permitting the defendant to peer behind the circuit-board curtain.

The number of potential machine accusers directly relevant to criminal proceedings is staggering, and today’s robotic offerings look increasingly like the science fiction of years past. But their ascendance has only just begun. A far-from-exhaustive list of potential machine accusers now includes: machines that map crime scenes; biometric-based recognition tools such as facial recognition and tattoo recognition programs; devices that locate cell phones (e.g., Stingrays); automated license plate readers; drug-, firearm-, and general crime-detecting devices; software that estimates a defendant’s “future dangerousness” in the context of sentencing and parole; and innumerable laboratory machines that produce increasingly automated results. Sometimes machines do what humans can do as well; in that situation, should a criminal defendant’s rights turn on whether the prosecution employs a man instead of a machine? But machines allegedly also do what even skilled humans generally cannot; against such an accuser, the right to test the evidence is essential. In short, “trial by machine” is now quite present, but that trial favors the machines over the human defendants.

In a world where machines increasingly assume the “accuser” roles previously filled primarily by human actors in criminal proceedings, how should courts and legislatures respond? The goal, as in most things, is finding the right balance. Machines are vital tools for investigating crimes. In this digital age, they make crime prevention possible in ways previously inconceivable. They offer the potential of a brighter, safer future, and courts and rulemakers must strike a balance between due scrutiny and an acknowledgment of the realities of how machines are used.

Achieving that balance is, however, easier said than done. What rights does a criminal defendant have as to robotic accusers? What does the Confrontation Clause demand from such machines—which we cannot simply place on the witness stand and cross-examine? How can a defendant “confront” the machine while still protecting any relevant intellectual property rights? Are the rules of evidence a better solution to managing machine accusers? And, more generally, are existing legal norms sufficient when applied to robotic accusers, or does the evolution in technology warrant a similar evolution in law?

This poster, building on the presenter’s articles in these areas, explores these questions.

Made by AI: Can AI-Generated Inventions Be Patentable?

by Elena Ponte

My work aims to answer the question: can AI-generated inventions be patentable? My research is a conceptual analysis of legal tradition in patent law. The concepts that underpin the US patent system are ill-suited for today’s deep learning programming paradigm. Fluid concepts need to be developed so that ‘AI-inventors’ will seek protection under the patent system rather than resort to other legal solutions such as trade secrets. A world of technology protected by trade secrets is a world of silos, killing the collaborative and cumulative nature of the open source environment.

I present two key obstacles to the patentability of AI-inventions in the US patent system: one, the nonobviousness threshold; and two, the qualifications for inventorship and, consequently, ownership. My work proposes a resolution to these obstacles by considering the requirements for patentability as they work under the patent systems in Europe (as under the European Patent Convention) and Canada:

As to the first obstacle, the European system qualifies the nonobviousness requirement differently: to be patentable, an invention must demonstrate an ‘inventive step.’ Where the US nonobviousness threshold is about the mental state of the ‘person skilled in the art’ (that is, a potential inventor), the ‘inventive step’ speaks more to the relation between the invention and the ‘prior art’ (that is, the universe of existing inventions). Further, the European ‘inventive step’ is about the process as much as it is about the product, whereas US patent law is biased toward products over processes. The European framing of this patentability requirement allows for the reality that an AI-inventor will have its own inventive process.

As to the second obstacle, in the Canadian system the inventor must show conception or discovery of the elements that give the invention patentable weight. This means that an invention must be the inventor’s own discovery, as opposed to her mere verification of previous predictions. In this context, Canada has developed a client/assistant exception: assistants will not be granted the status of inventor when they are merely acting under instructions. This categorization of inventor and assistant could be instrumental in developing a specific solution for AI-inventors. The US patent system narrowly requires a human inventor to conceive an idea and reduce it to practice. The conceptualization of inventorship in the Canadian system is better suited to the AI-inventor paradigm. Where in the US patent system the patent always vests in the ‘true inventor,’ the European and Canadian systems allow for diversity and flexibility in the inventive process and in the ownership of a patent.

We need to bridge the programming paradigm of deep learning with an effective patent law that incentivizes disclosure and so maintains a cooperative inventive environment. If the US patent system does not reform, it will kill the open source reality that is at the core of deep learning.

Religion and Robots: How Religious Ideas Shape Societal Attitudes Towards Robotic Technology

by Milenko Budimir

In examining the legal and social structures in which robots operate, it is helpful to also explore some religious ideas and concepts that often inform and underlie larger societal, cultural and legal attitudes towards robots and robotic technology. An examination of the world’s major religious traditions reveals the source of some broad differences in societal attitudes towards robots. These differences line up more or less along the common “East/West” religious divide, which will be the focus of this presentation.

For instance, in the monotheistic Abrahamic religions of Judaism, Christianity and Islam, there is an emphasis on the idea of idolatry. This concern has manifested itself throughout the bound-up histories of these faiths, from Judaism’s early emphasis on prohibitions of idol worship and the general absence of human images in worship, to Islam’s prohibitions against depictions of humans and other sentient beings. This focus on idolatry has influenced, in part, present-day thinking on the proper attitude toward the development of robotic technology. A related consideration is the potential challenge robots pose to a theological understanding of human beings as the pinnacle of the created world, resulting in a greater degree of mistrust and skepticism toward the direction of robotic development. This helps to explain, in part, why robots are largely understood as helpers in Asian societies, whereas in Western countries there exist familiar fears of robotic overlords wresting control of society from human beings.

In contrast, Eastern religions and philosophies (Buddhism, Shinto, Confucianism) lack the monotheistic tendencies found in Western religions, as well as the corresponding injunctions against idolatry. There is a greater prevalence and acceptance of forms of polytheism derived from folk religions, as well as of ideas of incarnations of various divine entities. Related to polytheism is a type of animism whereby natural objects, forces, and even some artifacts of human origin can contain a kind of elementary “soul” or “spirit.” This helps to explain an important feature of robotic culture in Japan: the idea that objects (both natural objects and human artifacts), including robots, may be said to be “ensouled.” This idea is not common in the West, largely as a result of the dominance of monotheistic ideas and theological arguments against what were seen as pagan or pre-Christian notions of animated or ensouled nature. Consequently, this type of thinking fell out of favor with the development of mainstream culture and society.
In summary, the hope is that an investigation of this sort will facilitate and engage thought on the topic of robotics in the context of religious belief systems, which can help to explain some of the present day attitudes and judgments about robots and their place in society.

Rage Against Machine Learning

by Aaron Mannes

The current turmoil and frenzy characterizing American politics is a reaction to current events, but it is also a manifestation of Creedal Passion, an effort by the American people to address the gap between the founding ideals of the United States and American reality. These eras (the most recent was the 1960s) bring major reforms as Americans seek to break up concentrations of power and rectify longstanding sources of inequality. Technology has played an important role in past eras of Creedal Passion, but advances in robotics and artificial intelligence have become increasingly central to the social issues facing the United States today.

In his 1981 book American Politics: The Promise of Disharmony, political scientist Samuel Huntington posited that every six or seven decades the United States goes through an era of Creedal Passion. Besides the 1960s and 1970s, eras of Creedal Passion include the Progressive Era of the early 20th century, the Jacksonian era, and the American Revolution.

Huntington stated that if the last era was during the 1960s and 1970s, the next era would be the first and second decades of the 21st century. The current focus on institutional racism and other forms of inequality, the questioning of traditional sources of authority, and the emergence of a new form of media that enables the exposure of injustices are all characteristics of an era of Creedal Passion.

Examining the reforms of past eras of Creedal Passion can provide insight into how social forces might shape the development of technology and technology policy. Past eras of Creedal Passion brought the first regulation of a new technology (steamboats during the Jacksonian Era) and the establishment of powerful new regulatory agencies (the FDA and FTC during the Progressive Era). Eras of Creedal Passion also saw the explosive growth of new social movements and norms (such as Abolitionism or Women’s Suffrage).

There are several ways in which AI, robotics, and related technological innovations could become targets of Creedal Passion. These rationales may overlap with one another.

  • AI could be seen as a tool that gives additional advantages to powerful organizations, increasing concentrations of power. This could include government agencies as well as businesses.
  • The creators of AI could be seen as a concentration of power in their own right.
  • AI could be seen as exacerbating inequality or discrimination.
  • AI could threaten individual freedom and privacy.
  • AI could be seen as reducing individual autonomy, placing people at the mercy of AI when they deal with the government or businesses.
  • Overshadowing all of these issues is the potential for AI to displace workers.

Looking back in order to look forward can help us understand how Creedal Passion will shape technology policy in the coming decade.

Serious Games: Simulations for Robot Risk Assessment and Communication

by Aaron Mannes

Along with new opportunities, new technologies bring new and often unpredictable risks. Simulations, war-games and tabletop exercises (TTX) can be useful mechanisms, not only for assessing and managing risks, but also for the equally vital task of risk communication. Failures to properly assess risks and engage in risk communication have stymied technological development in the past.

As AI, whether virtual or embodied in robots or the IoT, becomes ubiquitous, accidents and failures, both mundane and dramatic, will become increasingly likely. These failures can include tangible harms, such as an autonomous vehicle causing an accident or algorithmic bias denying an individual benefits. Failures may also be more subtle, but still harmful, such as incidents that undermine individual dignity.

Simulations can be used not only to identify risks but also to consider how best to manage them. Wargames, in which teams compete against one another, can be used to consider how criminals might manipulate AI. A TTX can be used to test a crisis management plan, so that an organization can prevent a minor failure from becoming a larger one. Simulations have been used for these purposes, as well as to teach general concepts, across a wide variety of domains, including disaster response, national security, and corporate crisis management.

Given the scale at which AI is being deployed and the unpredictable nature of both AI itself and of how it will interact with individuals, organizations, and society more broadly, some failures and accidents are inevitable. Properly conducted risk communication can help build trust between communities using and affected by AI so that when failures and accidents occur they can collaborate effectively to address the situation. When risk communication does not focus on building this relationship of trust, technological failures can lead to a strongly negative public response that can stymie technological development. (The Three Mile Island nuclear incident is the classic case of poor risk communication leading the general public to turn against a technology.)

Simulations in which representatives of stakeholder communities participate can help build this trust. On the one hand, stakeholders will have the opportunity to observe decision-making by those creating and deploying AI. On the other hand, the creators and implementers of AI may make assumptions about stakeholder attitudes and reactions. Bringing the stakeholders into the simulation can elicit their attitudes and values so that AI can be developed and implemented accordingly. By bringing communities together in an environment that can be stimulating and sometimes fun, trust can be developed so that when accidents and failures occur they do not derail promising AI applications.

From Seeds to Bytes: Data Transformations in the Agricultural Sector

by Rian Wanstreet

In 2015, agricultural economist Dr. Lowenberg-DeBoer wrote an essay in Foreign Affairs that contained the following statement: “Eventually, precision agriculture could take humans out of the loop entirely. Once that happens, the world won’t just see huge gains in productivity. It will see a fundamental shift in the history of agriculture: farming without farmers.” What was unstated, but is implicitly understood, is that the farmers would be replaced by algorithmically-mediated robots.

Precision Agriculture (PA) is a method of farming that uses technology and big data from historical records, satellites, and sensors to create what are called “prescriptions,” which can tell a farmer (or the farmer’s machines) where to plant, how much water or fertilizer to use, how much phosphorus to apply, and so on. It is marketed as being more efficient, cost-effective and sustainable, and there are reports that it is some of those things. For example, some studies indicate that less water and fertilizer are used when PA systems are applied, and, ostensibly, utilizing robots replaces the need for human labor. And there are an increasing number of robots.
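To make the idea of a “prescription” concrete, the following minimal sketch shows one way it might be represented as data. The field names, rates, and zone labels are illustrative assumptions only and are not drawn from any actual PA platform.

```python
# Hypothetical sketch of a precision-agriculture "prescription": per-zone
# application rates derived from historical, satellite, and sensor data.
prescription = {
    "field_id": "north-40",
    "zones": [
        {"zone": "A1", "seed_rate_per_acre": 32000, "nitrogen_lb_per_acre": 140, "water_in": 1.2},
        {"zone": "A2", "seed_rate_per_acre": 28000, "nitrogen_lb_per_acre": 110, "water_in": 0.9},
    ],
}

# A farmer's machine (or robot) would read each zone's rates and apply them
# automatically as it crosses that zone.
for zone in prescription["zones"]:
    print(f"Zone {zone['zone']}: plant {zone['seed_rate_per_acre']} seeds/acre, "
          f"apply {zone['nitrogen_lb_per_acre']} lb N/acre, {zone['water_in']} in of water")
```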

PA is being heralded as a panacea for the demands of an ever-growing population, but the remarkable speed with which IoT technologies are being adopted on farms should give anyone familiar with the challenges of data management, security, and upkeep pause. As drones and robots replace workers and data-driven algorithms make decisions about where to plant, generational human knowledge is being replaced.

While scholars in the social sciences and related disciplines have increasingly been looking critically at the repercussions of the shift towards datafication in various occupations, little attention has been paid to the impact of big data in the agricultural sector. Considering the important role agriculture plays in our society, this oversight is concerning. Additionally, PA technologies have the potential to exacerbate existing inequities between farmers and corporations. They are also ossifying sociotechnical assemblages that privilege certain worldviews about appropriate ways to farm, which may run counter to contemporary goals of sustainability. As such, identifying the potential ramifications of the adoption of these technologies is an important goal for scholars, particularly because the application of PA is still nascent, and intervention that encourages transparent, environmentally friendly, and equitable implementation is thus still possible.

This poster will outline the current state of these new communication technologies in the agricultural sector in the United States. I discuss the ways that societal discourse and knowledge-producers are encouraging PA uptake and highlight the top-level policy concerns from privacy, security, and regulatory perspectives. I surface several provocative issues and outline a critical research agenda that extends these questions globally.

Robotic Combat, Control, and Collaboration Through Virtual Twins

by Chris Edwards & Tristan Fogt

In August 2018, Chatila et al. stated in Frontiers in Robotics and AI: “We still lack a genuine theory of the underlying principles and methods that would enable robots to understand their environment, to be cognizant of what they do, to take appropriate and timely initiatives….” We propose, implement, test, and demonstrate a simple and effective framework that addresses this problem. The framework allows for safe human-robot interactions and can be used to provide haptic feedback in virtual reality. Further, it can be implemented with industrial robots and can improve resource management, data collection, and efficiency.

Underlying Principles:
The basis of this framework is a virtual environment containing a virtual model of the robot, driven by an inverse kinematics library. To control the robot, we instruct the virtual model to move in its virtual world. The system monitors the movements of the virtual model and its interactions with the virtual world, and it commands the physical robot to replicate those movements. In this system, the goal is not to make the robot directly aware of its physical environment. Instead, the system exploits its complete knowledge of the virtual environment and of the virtual model of the robot. This splits the original problem into two parts: the interaction between a virtual robot and a virtual environment, and the collection of information about the physical environment and its replication in the virtual environment.
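The mirroring loop described above can be summarized in a short sketch. The following Python-style illustration is a sketch only, under the assumption of a generic inverse kinematics solver and a robot driver that accepts joint targets; the names VirtualRobot, ik_solver.solve, and physical_robot.send_joint_targets are hypothetical stand-ins, and the authors’ actual implementation (described below) is built in the Unity game engine.

```python
# Minimal sketch of the twin-mirroring control loop (hypothetical names).
import time


class VirtualRobot:
    """Virtual model of the robot, driven by an inverse kinematics (IK) solver."""

    def __init__(self, ik_solver, num_joints=6):
        self.ik_solver = ik_solver
        self.joint_angles = [0.0] * num_joints  # e.g., a 6-axis arm

    def move_to(self, target_pose, virtual_world):
        # Solve IK entirely inside the virtual world, whose geometry (including
        # obstacles such as a protective barrier around the player) is fully known.
        self.joint_angles = self.ik_solver.solve(
            target_pose, obstacles=virtual_world.obstacles
        )


def mirror_loop(virtual_robot, virtual_world, physical_robot, get_target_pose, hz=100):
    """Continuously command the physical robot to replicate the virtual model."""
    period = 1.0 / hz
    while True:
        # 1. Decide where the virtual robot should be (e.g., the virtual knight's sword).
        target = get_target_pose(virtual_world)
        # 2. Move the virtual model in its virtual world.
        virtual_robot.move_to(target, virtual_world)
        # 3. Replicate the virtual model's joint state on the physical robot.
        physical_robot.send_joint_targets(virtual_robot.joint_angles)
        time.sleep(period)
```

The point of the sketch is that all reasoning about interaction happens in the virtual world; the physical robot is only ever told to mirror the virtual model’s resulting joint state.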

Implementation & Tests:
To test the system, we programmed several scenarios in the Unity game engine that demonstrate this framework’s unique abilities. The first scene is a swordfight simulation in which a player in VR fights a knight. For this scene the robot is a 6-axis robotic arm that wields a prop sword and matches the motions of the virtual knight’s sword. The models use realistic collision physics, so that the opponent’s sword, and the robot following it, clash and deflect against the player’s sword; this technique is also used to project a protective barrier around the player’s body. The second scene is a VR boxing simulator in which a virtual opponent matches the position of a boxing dummy mounted on a moving robotic base. The user fights the virtual opponent, and their hits impact the dummy, which provides force feedback. The final scene is a virtual model of a car assembly process with multiple robotic arms; all elements can be adjusted in real time, and information can be easily visualized.

Implications & Conclusion:
This system demonstrates an intuitive, advanced robotic simulator that can control robots and adapt to the environment in real time. Further, the framework allows data to be collected in the virtual environment, which simplifies the implementation of resource monitoring and planning. The implementation with VR allows haptic and force feedback for entertainment, and AR-assisted management and monitoring for industry.
Overall, this is a promising framework for robotic control systems and human-robot interaction.

 

Robots and the Curse of Being Important

by Robin Murphy

One way to view how government regulation of robotics comes about is through the lens of the “curse of being important.” The curse refers to science that is recognized as being important to public good at the time of its emergence and thus attracts immediate governmental policy and regulation. Nuclear power and vaccine development are two examples. The curse is that, while it is reasonable and necessary for society to regulate, and even mandate adoption of, important emerging technologies, governmental processes for developing regulations are generally slow. The supposed silver lining of the curse is that technologies that are not deemed important are allowed to progress without government influence, but this may jeopardize the public in the long run. An example is workplace automation, which began transforming industry in the 1970s but, despite worker deaths, has not been subject to anything more stringent than guidelines for worker safety.

Robots, especially autonomous cars and small unmanned aerial vehicles, seem to suffer from a variant: the curse of being somewhat important. The Brookings Institution noted that regulations for testing self-driving cars vary between states and that, despite two deaths in 2018, no state has modified its regulations in response to those deaths or has mechanisms for rapidly changing rules when data challenges underlying assumptions. It seems that self-driving vehicles are important enough for legislation, but not important enough for legislation that might interfere with economic development. A similar pattern has occurred with small unmanned aerial vehicles, where federal regulations go unenforced, both within the government and with the public. Thus, UAS are important enough for legislation but not important enough for that legislation to actually be implemented.

We posit that technologies that use governmentally controlled infrastructures, notably nuclear power, vaccines, self-driving cars, and UAS, will be cursed as important, while technologies that do not use public infrastructure, such as computers, the Internet, factory automation, and social media, will escape notice. Nuclear power stemmed from weapons research, and its development required national security considerations. Vaccines require the public health infrastructure to distribute. UAS make use of civilian airspace, but generally use sections previously ignored by the FAA and are thus less important. Testing of self-driving cars occurs on the transportation infrastructure, but these roads may be under local, county, state, or federal control, allowing developers to shop around for the most favorable regulations. Computers were related to weapons development and the space race, but the real technological disruption was in personal computing for individuals and small businesses, which did not need public infrastructure. The Internet avoided being treated as a public utility, possibly because it emerged from a network of academic institutions which provided the core infrastructure. Factory automation is purchased by private industry and does not engage any public infrastructure. Social media makes use of the Internet, so it also escaped the curse of being important.

This interpretation suggests (i) that agencies explicitly consider the potential impact of a robotic application in determining candidacy for regulation, rather than rely on an implicit criterion of whether the technology makes use of existing public infrastructure, and (ii) that, if a technology is regulated, regulations are applied swiftly and uniformly, and are based on, and rapidly revised in light of, evidence.

 

How Robots and Autonomous Weapon Systems are Changing the Norms and Laws of War

by William J. Barry

In the last nine months, the Executive Branch, the Department of Defense (DoD), and the Army have created a multiplicity of strategies, centers, and programs on Artificial Intelligence (AI). On 12 February 2019, the DoD unveiled its AI Strategy. This was the day after the White House released an executive order creating the American Artificial Intelligence Strategy, which is directly tied into the National Defense Strategy. Further, the Joint Artificial Intelligence Center (JAIC) was stood up at the Pentagon in June of last year to have oversight over almost all service and defense agency AI efforts. This coordination “is crucial to the emerging AI arms race with China and Russia.” Additionally, on the first of last month (February 2019), the Army activated the AI Task Force at the birthplace of AI itself: Carnegie Mellon University.

Essential Course Questions:
An interdisciplinary team of military and civilian professors of philosophy, electrical engineering, and computer science, together with robot engineers, in the Robotic Research Center at the United States Military Academy at West Point is currently piloting a cutting-edge interdisciplinary project designed for cadets to explore, in both classroom and lab environments, artificial intelligence (AI) powered robots, drones, self-driving cars, and emerging technologies such as robot swarms. Cadets learn how the increasing sophistication and autonomous decision-making capabilities of AI robots on the battlefield, and in conflict operations, are disrupting existing legal and moral norms and require rethinking the sacrosanct idea of traditional Just War Theory as the moral compass for justice in declaring and fighting wars. The unpredictable pace of change is revolutionizing the concepts of just war, agency, and human purpose.
The emerging international AI and automated weapon systems arms race raises essential questions that we explore in this project: Should we legally allow machines to make the decision to kill or injure? If AI-guided automated weapon systems are more precise and make fewer mistakes than human agents, would they be more discriminate and proportional than humans? Who bears the ultimate responsibility for deaths caused by robots and automated weapon systems? Are all three approaches to technology in war legal and ethical: human in, on, and out of the loop?

Course:
We present cadets with the argument that the lack of legal and ethical constraint by potential adversaries compels the U.S. to develop and employ autonomous weapons, both defensively and offensively. Cadets are challenged to consider the legality of the U.S. entering into an Autonomous Weapon Systems (AWS) arms race, both lethal and non-lethal, and to explore whether there should be international legal restrictions on their use with regard to humans in, on, and/or out of the loop. West Point’s recently-created Robotic Research Center aims to capture not just the engineering component of robotics, but also the crucial value and legal questions associated with these emerging technologies and their applications in war. In this way, the center contributes to West Point’s overall mission to create leaders prepared for the challenges of future battlefields.

Even though autonomous weapon systems must still adhere to our commitments to discrimination and proportionality, removing humans from the loop is legally and morally significant from the perspective of respecting the sanctity of human life. Human-out-of-the-loop weapons notably undermine our traditional notions of agency, responsibility, a warrior’s ethos, and human dignity, and all nations should seek to mitigate that damage where they can. We challenge cadets’ ethical reasoning and strategic thinking: they should be reluctant to make a choice for full autonomy and make it only in the direst of circumstances, but, ultimately, any rights-adhering nation that faces annihilation by a rights-violating, autocratic regime must be prepared to dirty its hands. If human beings face death or severe injury caused by algorithms and microprocessors, we challenge cadets, the future leaders of the United States Army, to consider removing humans from the loop.

Early Findings:
Our position as instructors of cadets is that AI, machine learning, and robotics are reshaping every aspect of public and private life, and national security and defense are no exception. At the same time, we are coming to understand the many ways in which these technologies raise deep ethical, legal, psychological, social, and existential issues (both positive and negative). The traditional conception of Just War Theory may require revision and the future of AI, AI Robots, and AWS development requires interdisciplinary cooperation and the inclusion of ethicists.

 

Authoring Identity: Copyright, Privacy, and Commodity Dissonance in the Digital Age 

by Bita Amani

In the age of artificial intelligence, smart machines are increasingly designed to embrace human identifiable characteristics. One of the biggest concerns emerging with the accelerating development of humanoid robots is whether they will threaten our sense of human identity. Anthropomorphic appearances may blur the identifiable distinction between humans and robots, instilling a growing apprehension about our inability to differentiate humans from machines. It would appear that one thing that makes us uniquely human, and capable of individualizing that uniqueness within the human family, is our ability to consciously and cognitively author our own identity by controlling its expressive representations; it is also our ability to “read” others.

We humans are not only “smart” and “adaptive” — we are diplomatic, discerning, enterprising, social and political. Regarded from this perspective, our identity is narrative, historically specific, expressively personal, and culturally situated; its social existence is itself a work for which we seek legal protection and remedy. AI, however, appears to be threatening that capacity to author ourselves and in doing so, it also threatens to blur the identifiable distinction between humans and their nominate proxies.

In the copyright context, the author strives for self-narration, seeking control over the representations of the self. There is much debate as to whether copyright ought to be extended to machine-generated works. Such recognition would advance an anomalous conception of authorship. It may also, paradoxically, create a barrier to correcting algorithmic decision-making errors in our efforts to author our own identities, bringing into the spotlight, and into sharp relief, the commodity dissonance of the digital age.

This is an age in which identity is forged in discursive and symbiotic relation to the machinery of the internet, and in which we no longer labour to create just alienable, material things. Rather, today, we are the very thing that we create. This is also an age in which identity can be appropriated, distorted, misrepresented, and destroyed as much by algorithmic decision-making as by human error, with a click of a button. While identity harms may be perpetrated by people and protected against by the law of defamation, it may not always be the case that defamation will apply. And what of non-human errors?

While much of the focus in research is dedicated to exploring the efficiencies of artificial intelligence and the utility of machine learning, this paper is interested in exploring the nature of the legal interests that may arise when AI makes identity-based mistakes (moving beyond bias to matching errors) and the ability of the law to respond. The analysis draws on my personal experience with a merged digital identity based on a shared name; a name is only an approximation of the self, as anyone who has suffered from mistaken identity may attest. While copyright prefers to protect fictions, potentially privileging the “work” in which the error resides, privacy law seeks to protect facts. In this context, how might the law appropriately grant us the authority to author the author, now confronted by the growing capacity of AI? Whatever may be the status of an emerging right to be forgotten, is there a right to be remembered accurately? As a corollary, how are we to author our own identities, our most important and carefully constructed work, in such uncertain terrain?

The Theater Method: Exploring Unethical Research Topics in Human-Robot Interaction

by Samarendra Hedaoo & Heather Knight

The Theater Method explores many research variables at once by repeating the same script structure many times with small variations based on those variables. Inspired by acting, the Theater Method allows research participants to have in-person experiences of a robot performing unethical actions, for example violating their character’s privacy, without actually being emotionally damaged themselves. As a complement to the in-person actor perspective, we can also collect data about the audience perspective via an online video study, which also helps identify research variables for in-person exploration.
The Theater Method has benefits similar to traditional user studies for new research topic areas, but provides greater psychological and informational safety to its participants [1][2] because the violations are simulated. Previous methods for exploring sensitive topics, such as people’s privacy expectations of a robot, are often at a distance from the privacy violation (surveys, video studies [3]), conservative (user studies), or risk putting the participant in harm’s way (live deployment).
In our first experiment utilizing the Theater Method, we explored people’s attitudes toward robot data use in two phases: (I) the participants watch a robot interaction from an audience perspective, and then (II) the participants experience interacting with the robot from an actor’s perspective, acting in the same scene as in (I) alongside the robot and the same professional actor from (I). At the end of each scene in (II), the participants complete a survey.
In varying the many scripts, we were able to explore which data the robot used, how it used that data, whether the comment it made was positive or negative, and to whom it addressed the information. The audience perspective is well suited to collecting large-scale data about what might matter in an interaction, while the actor perspective provoked strong emotional responses from our participants to a range of comments by the robot.
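As an illustration of how a fixed script structure can be crossed with research variables to produce the many scene variants, here is a minimal, hypothetical sketch; the variable names and levels are illustrative assumptions, not the study’s actual coding scheme.

```python
# Hypothetical sketch: crossing research variables into script variants.
from itertools import product

research_variables = {
    "data_source": ["calendar", "purchase_history"],   # what data the robot uses
    "data_use":    ["recommendation", "disclosure"],    # how it uses that data
    "valence":     ["positive", "negative"],            # tone of the robot's comment
    "addressee":   ["participant", "other_actor"],      # to whom the comment is addressed
}

# One script variant per combination; the shared scene structure stays fixed.
script_variants = [
    dict(zip(research_variables.keys(), combo))
    for combo in product(*research_variables.values())
]

for i, variant in enumerate(script_variants, start=1):
    print(f"Scene variant {i}: {variant}")
```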
In future work, we would like to apply the Theater Method to other sensitive questions in robotics, such as a robot’s moral decision-making (the trolley problem is not something that should be evaluated live), or a robot’s potential role in mediating workplace harassment and civil interactions. We believe that the Theater Method will be particularly helpful when designing new social functionality into machines, and in areas where user sensitivities are not yet known.