{"id":1666,"date":"2019-04-05T12:21:36","date_gmt":"2019-04-05T16:21:36","guid":{"rendered":"https:\/\/robots.law.miami.edu\/2019\/?page_id=1666"},"modified":"2019-04-05T12:23:01","modified_gmt":"2019-04-05T16:23:01","slug":"special-art-installation","status":"publish","type":"page","link":"https:\/\/robots.law.miami.edu\/2019\/special-art-installation\/","title":{"rendered":"Special Art Installation"},"content":{"rendered":"<h2><em><strong>Moral Labyrinth<\/strong><\/em><\/h2>\n<p>By Sarah Newman &amp; Jessica Fjeld<\/p>\n<blockquote><p><em>Would you trust a robot trained on your behaviors? How will we know when a machine becomes sentient? What does it mean to be moral?<\/em><\/p><\/blockquote>\n<p style=\"text-align: right;\">\u2013excerpts from Moral Labyrinth<\/p>\n<p>As machines get smarter, more complex, and able to operate autonomously in the world, we\u2019ll need to program them with certain \u201cvalues.\u201d Yet we do not agree on what we value: across cultures, across individuals, even within ourselves. We often do not act in accordance with what we say we value, so should these systems learn from what we say or what we do? What are the implications of how our current belief systems manifest in the swiftly approaching technological future? 
As we anticipate such change, can we use this technological moment to become more honest, humble, and compassionate?<br \/>\n<img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-1683 alignright\" src=\"https:\/\/robots.law.miami.edu\/2019\/wp-content\/uploads\/2019\/04\/ravensbourne1_1750-300x190.jpg\" alt=\"\" width=\"300\" height=\"190\" srcset=\"https:\/\/robots.law.miami.edu\/2019\/wp-content\/uploads\/2019\/04\/ravensbourne1_1750-300x190.jpg 300w, https:\/\/robots.law.miami.edu\/2019\/wp-content\/uploads\/2019\/04\/ravensbourne1_1750-768x486.jpg 768w, https:\/\/robots.law.miami.edu\/2019\/wp-content\/uploads\/2019\/04\/ravensbourne1_1750-1024x648.jpg 1024w, https:\/\/robots.law.miami.edu\/2019\/wp-content\/uploads\/2019\/04\/ravensbourne1_1750.jpg 1750w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/p>\n<p>Moral Labyrinth is a 5&#215;5 meter art installation that takes shape as a physical walking labyrinth composed of philosophical questions that deal, directly or obliquely, with our complex relationships to technology, and more specifically with the machines that we build to serve us. The work and its form are inspired by our increasingly complex and self-reflective relationships to emerging technologies, and draw as well on moral philosophy and Socratic dialogue. What can we learn about ourselves by how we engage and interact with technology? What new questions will arise for us after walking the labyrinth? 
The work is a meditation on perennial\u2014and now particularly pressing\u2014aspects of being human.<br \/>\n<img loading=\"lazy\" decoding=\"async\" class=\" wp-image-1684 alignleft\" src=\"https:\/\/robots.law.miami.edu\/2019\/wp-content\/uploads\/2019\/04\/18095D100031-2_1750-300x233.jpg\" alt=\"\" width=\"285\" height=\"221\" srcset=\"https:\/\/robots.law.miami.edu\/2019\/wp-content\/uploads\/2019\/04\/18095D100031-2_1750-300x233.jpg 300w, https:\/\/robots.law.miami.edu\/2019\/wp-content\/uploads\/2019\/04\/18095D100031-2_1750-768x596.jpg 768w, https:\/\/robots.law.miami.edu\/2019\/wp-content\/uploads\/2019\/04\/18095D100031-2_1750-1024x794.jpg 1024w, https:\/\/robots.law.miami.edu\/2019\/wp-content\/uploads\/2019\/04\/18095D100031-2_1750.jpg 1750w\" sizes=\"auto, (max-width: 285px) 100vw, 285px\" \/><br \/>\nFor the Labyrinth\u2019s installation at WeRobot, <a href=\"https:\/\/www.sarahwnewman.com\/art-research\">Sarah Newman<\/a> has collaborated with <a href=\"https:\/\/hls.harvard.edu\/faculty\/directory\/11766\/Fjeld\">Jessica Fjeld<\/a> and Jessica Yurkofsky to create a miniature labyrinth, broken into pieces, which conference attendees will receive in the form of an invitation: an invitation to engage with the work and with each other, and to explore their own relationships to our complex moral world.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Moral Labyrinth By Sarah Newman &amp; Jessica Fjeld Would you trust a robot trained on your behaviors? How will we know when a machine becomes sentient? What does it mean to be moral? 
\u2013excerpts from Moral Labyrinth As machines get smarter, more complex, and able to operate autonomously in the world, we\u2019ll need to program [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":70,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-1666","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/robots.law.miami.edu\/2019\/wp-json\/wp\/v2\/pages\/1666","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/robots.law.miami.edu\/2019\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/robots.law.miami.edu\/2019\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/robots.law.miami.edu\/2019\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/robots.law.miami.edu\/2019\/wp-json\/wp\/v2\/comments?post=1666"}],"version-history":[{"count":8,"href":"https:\/\/robots.law.miami.edu\/2019\/wp-json\/wp\/v2\/pages\/1666\/revisions"}],"predecessor-version":[{"id":1695,"href":"https:\/\/robots.law.miami.edu\/2019\/wp-json\/wp\/v2\/pages\/1666\/revisions\/1695"}],"wp:attachment":[{"href":"https:\/\/robots.law.miami.edu\/2019\/wp-json\/wp\/v2\/media?parent=1666"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}