This time the general topic will be Digital Discrimination. This is certainly related to novel forms of discrimination caused by technological development in society, but we'll go even further and discuss discrimination that may lurk in our research methods.
We have a visiting speaker, Dr. Indrė Žliobaitė. She is a researcher in computer science at Aalto University, HIIT, and the University of Helsinki. Her research interests include predictive modeling with streaming/sensory data; fairness, transparency and accountability in machine learning; and computational data analysis applications in general.
She will give an overview of the current state and research trends in fairness-aware machine learning and data mining, an emerging discipline at the intersection of computer science, law and the social sciences that aims at understanding, diagnosing and preventing such discrimination. The specific topic of her talk will be:
How can decision making by algorithms discriminate people, and how to prevent that
- Big data driven algorithms are increasingly used in many areas of our lives: they can decide the prices we pay, select the ads we see, the news we read online, or the people we meet; match job descriptions with candidate CVs; and decide who gets a loan, who goes through an extra airport security check, or who is released on parole.
- Yet growing evidence suggests that decision making by inappropriately trained algorithms can discriminate against people. This may happen even if the computing process is fair and well-intentioned, for example when a model is trained on historically biased data (see the sketch after this list).
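To make this last point concrete, here is a minimal, self-contained sketch in Python using synthetic data and hypothetical feature names (group, postcode, merit are all invented for illustration). It shows one way this can happen: even when the protected attribute is excluded from the model's inputs, a correlated proxy feature can leak group membership into the predictions, so the model reproduces the historical bias in the training labels.

```python
# A minimal sketch with synthetic data: a model trained on historically
# biased labels can produce discriminatory decisions even though the
# protected attribute is never given to it as an input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)              # protected attribute (0 or 1)
postcode = group + rng.normal(0, 0.5, n)   # proxy feature correlated with group
merit = rng.normal(0, 1, n)                # legitimate qualification

# Historical labels are biased: at equal merit, group 1 was approved less often.
approved = (merit - 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(int)

# The protected attribute itself is NOT a model input...
X = np.column_stack([merit, postcode])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

# ...yet predicted approval rates still differ between groups (disparate
# impact), because the model picks up the bias through the proxy feature.
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
```

Running this prints clearly different approval rates for the two groups, illustrating why simply removing the protected attribute does not make a well-intentioned pipeline fair.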
After the talk we'll have a free discussion on the subject. Is this kind of discrimination something we should be worried about, and should it be studied more? Is it possible that even our more traditional methods are vulnerable to similar biases, and what can be done to circumvent them in our research?