Alice Xiang will present her paper, Being “Seen” vs. “Mis-seen”: Tensions Between Privacy and Fairness in Computer Vision, on Friday, September 24th at 11:30 a.m. at #werobot 2021. Daniel Susser will lead the discussion.
The rise of AI technologies has fueled growing anxiety that AI may create mass surveillance systems and entrench societal biases. Major facial recognition systems are less accurate for women and individuals with darker skin tones, largely due to a lack of diversity in their training datasets. Yet efforts to diversify those datasets can raise privacy issues of their own: plaintiffs can argue that they never consented to having their images used in facial recognition training datasets.
This highlights the tension AI technologies create between representation and surveillance: we want AI to “see” and “recognize” us, but we are uncomfortable with the idea of AI having access to personal data about us. The tension is further amplified by the need for sensitive attribute data to detect or mitigate bias. Existing privacy law addresses this area primarily by erring on the side of hiding people’s sensitive attributes unless there is explicit informed consent. Some have argued that not being “seen” by AI is preferable, since being under-represented in training data might allow one to evade mass surveillance. But incomplete datasets can produce detrimental false-positive identifications: not being “seen” by AI does not protect against being “mis-seen.”
The first contribution of this article is to characterize this tension between privacy and fairness in the context of algorithmic bias mitigation. In particular, the article argues that the irreducible paradox underlying current efforts to design less biased algorithms is the simultaneous desire to be “seen” yet “unseen” by AI. Second, the article reviews the viability of strategies that have been proposed for addressing the tension between privacy and fairness and evaluates whether they adequately address the associated technical, operational, legal, and ethical challenges. Finally, the article argues that resolving the tension between representation and surveillance requires considering the importance of not being “mis-seen” by AI, rather than simply being “unseen.” Untethering these concepts (being seen, being unseen, and being mis-seen) can bring greater clarity about which rights relevant laws and policies should seek to protect. Given that privacy and fairness are both critical objectives for ethical AI, it is vital to address this tension head-on; approaches that rely purely on visibility or invisibility will likely fail to achieve either objective.