Resources – on automated systems and bias

Last updated: 07/05/2019

If you are a data scientist, a software developer, or a researcher in the social and human sciences with an interest in digital humanities, then you’re no stranger to the ongoing discussions of how algorithms embed and perpetuate human biases. Ethical considerations and critical engagement are urgently needed.

I have been keenly following these discussions for a while, and this post is an attempt to gather the articles, books, book reviews, videos, interviews, Twitter threads, and so on that I’ve come across into one place, so they can be used as resources.

This list is by no means exhaustive, and as we become more and more aware of the catastrophic consequences of these technologies, new pieces, articles, and journal papers are being written on a daily basis. I plan to update this site regularly. Also, if you think there is relevant material that I have not included, please leave it as a comment and I will add it.

Link for books here

Link for a more scholarly read here

TED Talks, podcasts, and interviews 

The era of blind faith in big data must end TED Talk by Cathy O’Neil, April 2017

Machine intelligence makes human morals more important November 11, 2017. In this TED Talk, Zeynep Tufekci emphasizes the importance of human values and ethics in the age of machine intelligence and algorithmic decision making.

We’re building an artificial intelligence-powered dystopia, one click at a time, another thought-provoking TED Talk from techno-sociologist Zeynep Tufekci.

How I’m fighting bias in algorithms TED Talk by MIT researcher Joy Buolamwini, November 2016

AI, Ain’t I A Woman? by Joy Buolamwini

Data is the new gold, who are the new thieves? TED Talk by Tijmen Schep, 2016

O’Neil’s interview with Politics Weekly podcast (starts 30 minutes in) July 5, 2017. O’Neil calls for public awareness of how algorithms are used, often without our knowledge, in job interviews, for example, and explains why we should question and interrogate these algorithms, which are often presented to us as authoritative.

A short interview with Frank Pasquale on his book The Black Box Society May 12, 2016. Pasquale emphasizes the opaqueness of algorithms and argues for why we should demand transparency.

A two-minute video, a prototype example of algorithms being used in recruitment – a working example of the kind of dangerous AI used for recruiting that experts such as O’Neil constantly warn against. This post provides a critical analysis of why such endeavors are futile and dangerous. Here’s another related video on how facial recognition technology will go mainstream in 2018. In fact, such technology has already gone mainstream in China. Here is a short video in which a BBC reporter tested the world’s largest surveillance system.

Tom Chatfield on Critical Thinking October 2, 2017. In this philosophically themed podcast, Chatfield discusses issues such as “how new digital realities interact with old human biases” with Dave Edmonds.

When algorithms discriminate: Robotics, AI and ethics November 18, 2017. Stephen Roberts, professor of computer science at the University of Oxford, discusses the threats and promises of artificial intelligence and machine learning with Al Jazeera.

Here is a series of talks, from the ABC Boyer Lectures, hosted by Professor Genevieve Bell. The series is called Fast, Smart and Connected: What is it to be Human, and Australian, in a Digital World? The issues discussed include “How to build our digital future.”

You and AI – Just An Engineer: The Politics of AI (July 2018). Kate Crawford, Distinguished Research Professor at New York University, Principal Researcher at Microsoft Research New York, and co-founder and co-director of the AI Now Institute, discusses the biases built into machine learning and what they mean for the social implications of AI.

How will AI change your life? AI Now Institute founders Kate Crawford and Meredith Whittaker explain. (8 April 2019)

Facebook: Last Week Tonight with John Oliver (HBO) an extremely funny and super critical look at Facebook.

Humans are biased, and our machines are learning from us — ergo our artificial intelligence and computer programming algorithms are biased too. Joanna Bryson takes a closer look at how AI systems come to learn human biases.
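
To make the mechanism concrete, here is a minimal sketch (with made-up toy vectors, not real embeddings) of the kind of word-embedding association test used in Bryson’s co-authored research with Aylin Caliskan and Arvind Narayanan: a word’s bias is scored as its mean similarity to one attribute set minus its mean similarity to another.

```python
import numpy as np

# Toy 3-d vectors, made up for illustration only. A real test uses
# pretrained embeddings such as GloVe or word2vec with hundreds of
# dimensions and much larger word lists.
vectors = {
    "flower":     np.array([0.9, 0.1, 0.0]),
    "insect":     np.array([0.1, 0.9, 0.0]),
    "pleasant":   np.array([0.8, 0.2, 0.1]),
    "unpleasant": np.array([0.1, 0.8, 0.2]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word, attrs_a, attrs_b):
    """Mean similarity to attribute set A minus mean similarity to set B."""
    a = np.mean([cosine(vectors[word], vectors[x]) for x in attrs_a])
    b = np.mean([cosine(vectors[word], vectors[x]) for x in attrs_b])
    return a - b

# A positive score means the word sits closer to "pleasant" than to
# "unpleasant" in the embedding space -- an association the vectors
# absorbed from human-written text.
print(association("flower", ["pleasant"], ["unpleasant"]))
print(association("insect", ["pleasant"], ["unpleasant"]))
```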

Websites

Social Cooling is a term that refers to the gradual, long-term negative side effects of living in a digital society where our digital activities are tracked and recorded. Awareness of potentially being scored by algorithms leads to gradual behaviour change: self-censorship and self-surveillance. Here is a piece on what looks like social cooling in action. The website itself has plenty of resources that can aid critical thinking and touches on big philosophical, economic, and societal questions in relation to data and privacy.

www.socialcooling.com

For those interested in critical thinking, data, and models, Calling Bullshit offers various resources and tools for spotting and calling bullshit. This website, developed for a course entitled ‘Calling Bullshit’, is a great place to explore and learn about all things “data reasoning for the digital age”.

Another important website that is worth a mention here is the Algorithmic Justice League, where you can report algorithmic bias, participate in testing software for inclusive training sets, or simply donate and contribute to raising awareness about existing bias in coded systems. More on AI face misclassification and accountability by Joy Buolamwini here. With a somewhat similar aim is the Data Harm Record website – a running record of harms that have been caused by uses of big data.

fast.ai is a project that aims to increase diversity in the field of deep learning and to make deep learning accessible and inclusive to all. Critical Algorithm Studies: a Reading List is a great website with links to plenty of critical literature on algorithms as social concerns. Here is the Social Media Collective Reading List, where you’ll find further material on Digital Divide/Digital Inclusion and Metaphors of Data.

The AI Now Institute at New York University is an interdisciplinary research center dedicated to understanding the social implications of artificial intelligence. Data & Society is a research institute focused on the social and cultural issues arising from data-centric technological developments. FAT/ML is a website on Fairness, Accountability, and Transparency in Machine Learning, run by a community of researchers, with plenty of resources and events. Litigating Algorithms: Challenging Government Use of Algorithmic Decision Systems is an AI Now Institute report.

ConceptNet Numberbatch 17.04: better, less-stereotyped word vectors This is not a website but a blog post. I am putting it here with the other websites because the author offers some solutions for reducing bias when building algorithms for natural language understanding, beyond simply stating that such algorithms are biased.
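
For a flavour of the kind of fix discussed there, here is a generic “project out the bias direction” sketch in the style of Bolukbasi et al.’s debiasing work – not ConceptNet’s exact procedure – with hypothetical low-dimensional vectors standing in for real embeddings.

```python
import numpy as np

def remove_bias_component(vec, bias_direction):
    """Subtract the projection of `vec` onto a learned bias direction."""
    d = bias_direction / np.linalg.norm(bias_direction)
    return vec - (vec @ d) * d

# Hypothetical vectors for illustration; in practice the direction is
# estimated from many definitional pairs (he/she, man/woman, ...),
# not a single pair, and the vectors have hundreds of dimensions.
he       = np.array([ 0.5, 0.3, 0.1, 0.0])
she      = np.array([-0.5, 0.3, 0.1, 0.0])
engineer = np.array([ 0.3, 0.7, 0.2, 0.1])

gender_direction = he - she
debiased = remove_bias_component(engineer, gender_direction)

print(engineer @ gender_direction)  # 0.3: "engineer" leans towards "he"
print(debiased @ gender_direction)  # 0.0: no component along that axis
```

The Numberbatch post goes further – identifying bias along several dimensions and evaluating the adjusted vectors afterwards – but the projection idea above is the common core of this family of techniques.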

Auditing Algorithms – a useful website for those teaching or interested in accountability in automated systems. The site includes film festivals, videos, etc.

The Ethics and Governance of Artificial Intelligence – a cross-disciplinary course that investigates the implications of emerging technologies, with an emphasis on the development and deployment of Artificial Intelligence. Here’s an Introduction to Data Ethics by the Markkula Center for Applied Ethics; more recent details here.

Google launches a new course to teach people about fairness in machine learning.

Biology/genetics – (Digital phrenology?)

It is difficult to draw a line and place certain articles under the category of “social”, “biological”, “political”, or other, as the boundaries between these categories are blurred and most of the themes are somehow interlinked. Nonetheless, I think the following articles can loosely be described as dealing with biological, genetic, or personality material. Furthermore, towards the end of this post, I have also grouped some articles under the category of “political”.

In a recent preprint paper, “Deep Neural Networks Can Detect Sexual Orientation From Faces” (here are the Guardian and the Economist reports), Yilun Wang and Michal Kosinski claimed that their deep neural network can be trained to discern individuals’ sexual orientations from their photographs. The paper has attracted, and continues to attract, massive attention and has generated numerous responses, outrage, and discussion. Here is an in-depth analysis from Calling Bullshit, here a detailed technical assessment, and here a comprehensive and eloquent response from Greggor Mattson. Here is another response, another one here from a data scientist’s perspective, and another recent response from O’Neil here. If you only want to read just one response, I highly recommend reading Mattson’s. There have been plenty of discussions and threads on Twitter – here and here are a couple of examples. It is worth noting that Kosinski, one of the authors of the above paper, is listed as one of the advisers for a company called Faception, an Israeli security firm that promises clients it can deploy “facial personality profiling” to catch pedophiles and terrorists, among others.

Do algorithms reveal sexual orientation or just expose our stereotypes? by @blaiseaguera et al. is the latest (January 11, 2018) response to the above Wang and Kosinski “gaydar” paper. In this critical analysis, @blaiseaguera and colleagues argue that much of the ensuing scrutiny of Wang and Kosinski’s work has focused on ethics, implicitly assuming that the science is valid. On closer inspection, however, they find that the science doesn’t stand up to scrutiny either.

When advanced technologies in genetics and face recognition are applied under the assumption that “technology is neutral”, the consequences are often catastrophic and dangerous. These two pieces, Sci-fi crime drama with a strong black lead and Traces of Crime: How New York’s DNA Techniques Became Tainted, provide in-depth analyses of such cases.

Physiognomy’s New Clothes is a comprehensive and eloquent piece, well worth your time. Physiognomy, the practice of using people’s outer appearance to infer inner character, is now discredited and discarded, like phrenology. However, this piece illustrates how the practice is alive and well in the era of big data and machine learning. Here is more on the Wu and Zhang paper that the Physiognomy’s New Clothes authors cover in the above piece. Further examples of digital phrenology can be found here, here, and here.

General articles on various automated systems and bias, discrimination, unfairness, and ethical concerns, listed in order of publication date, starting from the latest.

Frank Pasquale testifies (video, written testimony) before the United States House of Representatives Committee on Energy and Commerce, Subcommittee on Digital Commerce and Consumer Protection, in relation to “Algorithms: How Companies’ Decisions About Data and Content Impact Consumers”. See here for more written testimony on Algorithmic Transparency from the Electronic Privacy Information Center – November 29, 2017.
Image courtesy of ProPublica

There’s software used across the country to predict future criminals. And it’s biased against blacks. May 23, 2016. The company that sells this program, Northpointe, has responded to the criticisms here, asserting that its software, which predicts the likelihood that a person will commit future crimes, is equally fair to black and white defendants. Following this response, Jeff Larson and Julia Angwin wrote a further piece (Technical Response to Northpointe) re-examining the data. They argue that they have considered the company’s criticisms and stand by their conclusions.
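
Part of what makes this exchange instructive is that the two sides measure “fairness” differently. Here is a minimal sketch, using made-up records rather than the actual COMPAS data, of the false-positive-rate comparison at the heart of ProPublica’s analysis.

```python
# Made-up example records, NOT the real COMPAS data:
# (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", 1, 0), ("A", 1, 1), ("A", 0, 0), ("A", 1, 0),
    ("B", 0, 0), ("B", 1, 1), ("B", 0, 1), ("B", 0, 0),
]

def false_positive_rate(group):
    """Share of people who did not reoffend but were labelled high risk."""
    flags = [pred for g, pred, actual in records
             if g == group and actual == 0]
    return sum(flags) / len(flags)

for group in ("A", "B"):
    print(group, false_positive_rate(group))  # A: 0.67, B: 0.0
```

A score can be equally well calibrated for both groups – roughly Northpointe’s defence – while still producing very different false positive rates, which is ProPublica’s charge; when base rates differ across groups, results in the fairness literature show the two criteria cannot in general be satisfied at once.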

 

Politics

Algorithmic processes and politics might seem far removed from each other. However, if anything, the recent political climate is indicative of how algorithms can serve as computational tools for political agendas. Here and here are example Twitter threads highlighting particular accounts used as such tools. The articles below are, in one way or another, related to algorithms in the political arena.

Forum Q&A: Philip Howard on Computational Propaganda’s Challenge to Democracy July 25, 2017. “Computational propaganda, or the use of algorithms and automated social media accounts to influence politics and the flow of information, is an emerging challenge to democracy in the digital age. Using automated social media accounts called bots (or, when networked, botnets), a wide array of actors including authoritarian governments and terrorist organizations are able to manipulate public opinion by amplifying or repressing different forms of political content, disinformation, and hate speech.”

 
