AI Is Biased: Why it Matters and How to Fix Algorithmic Injustice


Policy Punchline



In a rather surprising turn of events, multiple tech companies (Amazon, IBM) have pledged to stop providing facial recognition technology to police departments in light of the #BlackLivesMatter movement. In this episode, Dr. Annette Zimmermann of Princeton University gives context to the public debate on algorithmic justice and bias in artificial intelligence that is rapidly unfolding right now. Dr. Zimmermann, Arjun, and Tiger touch on some of the longest-standing questions in AI research and moral philosophy:

- What is algorithmic bias? Is all bias bad? How do we understand algorithmic bias from a moral and philosophical perspective, as well as from a technical perspective?

- Where do we see algorithms exacerbating structural injustices in society, and in what precise ways are they doing so?

- Which questions are not worth asking, or amount to "AI alarmism" that does not help the discourse?

- AI fairness is an active research area. Is reducing bias in the algorithm a fundamental solution, or more of a patch over a deeper problem? If we remove certain features and modify the data and/or the algorithm, are we playing God to some extent? In other words, what gives AI researchers the right (let alone the responsibility) to change aspects of the dataset and algorithm?

- Is it possible to assign responsibility for the decisions of an AI system to the creators of that system, given that they have a sufficient degree of control (e.g., over the features and the predicted variable)? Is the creator of an AI system culpable for the decisions made by that AI?

- What are the dangers of private companies providing AI services to public institutions? How can we combine top-down and bottom-up approaches to reduce the prevalence of biased algorithms making societal decisions (e.g., San Francisco's ban on facial recognition)? Do we need a redesign or revamp of social and democratic institutions to deal with algorithmic decision-making? What fundamental societal changes do we need to make?

Dr. Zimmermann is a postdoctoral researcher at Princeton University, affiliated with the University Center for Human Values (UCHV) as well as the Center for Information Technology Policy (CITP). Her current work focuses on how the disproportionate distributions of risk and uncertainty associated with emerging technologies like algorithmic decision-making and machine learning (such as algorithmic bias and opacity) impact democratic values like equality and justice. She has a particular interest in algorithmic decision-making in criminal justice and policing.