Department of Philosophy

The Department of Philosophy and School of Computing Colloquium Series presents

Catherine Stinson
University of Bonn & University of Cambridge

Algorithms are Not Neutral: Bias in Recommender Systems

Thursday, February 6, 2020

3:00 pm

Watson Hall, Room 517

Efforts to shine a light on algorithmic bias tend to focus on examples where either the data or the people building the algorithms are biased. This gives the impression that clean data and good intentions could eliminate bias in machine learning. The apparent neutrality of the algorithms themselves is defended by high-profile AI researchers and by companies with an interest in business as usual, but algorithms are not neutral. In addition to biased data and biased algorithm makers, AI algorithms themselves can be biased. This is illustrated with the example of collaborative filtering (an algorithm commonly used in recommender systems), which is known to suffer from popularity and homogenizing biases. The larger class of iterative information filtering algorithms creates a selection bias in the course of learning from user responses to items the algorithm itself recommended. These are not merely biases in the statistical sense; these statistical biases cause bias of moral import. People on the margins in the sociocultural sense are literally on the margins of data distributions, as work in disability studies has shown. Popularity and homogenizing biases have the effect of further marginalizing the already marginal, which means that "Customers who bought this item also bought…" style recommendations do not meet everyone's needs. This source of bias warrants serious attention given the ubiquity of algorithmic decision-making.
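
To make the popularity-bias and feedback-loop claims concrete, here is a minimal Python sketch (illustrative only, not code from the talk): item-to-item collaborative filtering by raw co-occurrence counts over a made-up purchase dataset. The item letters and helper names (baskets, cooccurrence, recommend) are invented for illustration. It shows already-popular items dominating "also bought" recommendations, and how learning from the system's own recommendations amplifies that dominance.

```python
# Illustrative sketch (invented data): item-based collaborative
# filtering via co-occurrence counts, showing how "customers who
# bought this item also bought..." lists favor popular items.
from collections import Counter
from itertools import combinations

# Toy purchase histories: most users buy the popular items A and B;
# one user's marginal tastes (D, E) barely register in the data.
baskets = [
    {"A", "B"}, {"A", "B"}, {"A", "B", "C"},
    {"A", "C"}, {"B", "C"},
    {"D", "E"},          # the marginal user
]

def cooccurrence(baskets):
    """Count how often each ordered pair of items shares a basket."""
    counts = Counter()
    for basket in baskets:
        for x, y in combinations(sorted(basket), 2):
            counts[(x, y)] += 1
            counts[(y, x)] += 1
    return counts

def recommend(item, baskets, k=2):
    """Rank co-purchased items by raw co-occurrence (popularity-biased)."""
    counts = cooccurrence(baskets)
    scores = Counter({y: n for (x, y), n in counts.items() if x == item})
    return [y for y, _ in scores.most_common(k)]

print(recommend("C", baskets))  # ['A', 'B'] -- the popular items win
print(recommend("D", baskets))  # ['E'] -- marginal tastes get thin support

# Feedback loop (the selection bias): if users tend to accept what is
# recommended, the next round of training data over-represents what
# the system already pushed, entrenching the popular items further.
for _ in range(3):
    baskets.append(set(recommend("C", baskets)) | {"C"})
print(recommend("C", baskets))  # A and B only pull further ahead
```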

Accessibility requirements? Contact sheena.wilkinson@queensu.ca
