Last updated: 18/02/2018
If you are a data scientist, a software developer, or a researcher in the social and human sciences with an interest in digital humanities, then you’re no stranger to the ongoing discussions on how algorithms embed and perpetuate human biases. Ethical considerations and critical engagement are urgently needed.
I have been keenly following these discussions for a while, and this post is an attempt to gather the articles, books, book reviews, videos, interviews, Twitter threads, and so on that I’ve come across in one place, so they can be used as resources.
This list is by no means exhaustive, and as we become more aware of the catastrophic consequences of these technologies, new pieces, articles, and journal papers are being written about them daily. I plan to update this site regularly. Also, if you think there is relevant material that I have not included, please leave it as a comment and I will add it.
Weapons of math destruction: how big data increases inequality and threatens democracy by Cathy O’Neil. A great number of the articles on the list below are written by O’Neil. She is also active on Twitter, regularly posting links and interesting critical insights on everything to do with mathematical models and bias. Here is my own review of O’Neil’s book, which itself contains plenty of relevant links, and here is another excellent review of O’Neil’s book.
Algorithms of Oppression: How Search Engines Reinforce Racism by Safiya Umoja Noble – below is an excerpt from Noble’s book:
Run a Google search for “black girls”—what will you find? “Big Booty” and other sexually explicit terms are likely to come up as top search terms. But, if you type in “white girls,” the results are radically different. The suggested porn sites and un-moderated discussions about “why black women are so sassy” or “why black women are so angry” present a disturbing portrait of black womanhood in modern society. In Algorithms of Oppression, Safiya Umoja Noble challenges the idea that search engines like Google offer an equal playing field for all forms of ideas, identities, and activities. Data discrimination is a real social problem; Noble argues that the combination of private interests in promoting certain sites, along with the monopoly status of a relatively small number of Internet search engines, leads to a biased set of search algorithms that privilege whiteness and discriminate against people of color, specifically women of color.
Algorithms to Live By: The Computer Science of Human Decisions by Brian Christian and Tom Griffiths. This book is concerned with the workings of the human mind and how computer science can help human decision making. Here is a post by Artem Kaznatcheev on Computational Kindness which might give you a glimpse of some of the issues the book covers. Here is a long interview with Brian Christian and Tom Griffiths, and a TED Talk with Tom Griffiths on The Computer Science of Human Decision Making.
The Black Box Society: The Secret Algorithms That Control Money and Information by Frank Pasquale. You can read the introduction and conclusion chapters of his book here. And here is a good review of Pasquale’s book. You can follow his twitter stream here.
Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech by Sara Wachter-Boettcher. Here is a synopsis: A revealing look at how tech industry bias and blind spots get baked into digital products—and harm us all.
Buying groceries, tracking our health, finding a date: whatever we want to do, odds are that we can now do it online. But few of us ask why all these digital products are designed the way they are. It’s time we change that. Many of the services we rely on are full of oversights, biases, and downright ethical nightmares: Chatbots that harass women. Signup forms that fail anyone who’s not straight. Social media sites that send peppy messages about dead relatives. Algorithms that put more black people behind bars.
Sara Wachter-Boettcher takes an unflinching look at the values, processes, and assumptions that lead to these and other problems. Technically Wrong demystifies the tech industry, leaving those of us on the other side of the screen better prepared to make informed choices about the services we use—and demand more from the companies behind them.
Paula Boddington, Oxford academic and author of Towards a Code of Ethics for Artificial Intelligence, recommends the five best books on Ethics for Artificial Intelligence. Here is the full interview with Nigel Warburton, published on December 1, 2017.
“Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor” by Virginia Eubanks was published on January 23, 2018. Here is an excerpt from Danah Boyd’s blog:
“Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor” is a deeply researched accounting of how algorithmic tools are integrated into services for welfare, homelessness, and child protection. Eubanks goes deep with the people and families who are targets of these systems, telling their stories and experiences in rich detail. Further, drawing on interviews with social services clients and service providers alongside the information provided by technology vendors and government officials, Eubanks offers a clear portrait of just how algorithmic systems actually play out on the ground, despite all of the hope that goes into their implementation.
TED Talks, podcasts, and interviews
The era of blind faith in big data must end TED Talk by Cathy O’Neil, April 2017
Machine intelligence makes human morals more important November 11, 2017. In this TED Talk, Zeynep Tufekci emphasizes the importance of human values and ethics in the age of machine intelligence and algorithmic decision making.
We’re building an artificial intelligence-powered dystopia, one click at a time, another thought provoking TED Talk from techno-sociologist Zeynep Tufekci.
How I’m fighting bias in algorithms TED Talk – MIT Researcher Joy Buolamwini, November 2016
Data is the new gold, who are the new thieves? TED Talk – Tijmen Schep 2016
O’Neil’s interview with the Politics Weekly podcast (starts 30 minutes in) July 5, 2017. O’Neil calls for public awareness of how algorithms are used, often without our knowledge, in job interviews, for example, and explains why we should question and interrogate these algorithms, which are often presented to us as authoritative.
A short interview with Frank Pasquale on his book Black Box Society May 12, 2016. Pasquale emphasizes the opaqueness of algorithms and argues that we should demand transparency.
A two-minute video showing a prototype example of algorithms being used in recruitment – a working example of the kind of dangerous AI used for recruiting that experts such as O’Neil constantly warn against. This post provides a critical analysis of why such endeavors are futile and dangerous. Here’s another related video on how facial recognition technology will go mainstream in 2018. In fact, such technology has already gone mainstream in China. Here is a short video in which a BBC reporter tested the world’s largest surveillance system.
Tom Chatfield on Critical Thinking October 2, 2017 In this philosophically themed podcast, Chatfield discusses issues such as “how new digital realities interact with old human biases” with Dave Edmonds.
When algorithms discriminate: Robotics, AI and ethics November 18, 2017. Stephen Roberts, professor of computer science at the University of Oxford, discusses the threats and promises of artificial intelligence and machine learning with Al Jazeera.
Here is a series of talks, from the ABC Boyer Lectures, hosted by Professor Genevieve Bell. The series is called Fast, Smart and Connected: What is it to be Human, and Australian, in a Digital World? The issues discussed include “How to build our digital future.”
Social Cooling is a term for the gradual, long-term negative side effects of living in a digital society where our digital activities are tracked and recorded. Awareness of potentially being scored by algorithms leads to gradual behaviour change: self-censorship and self-surveillance. Here is a piece on what looks like social cooling in action. The website itself has plenty of resources that can aid critical thinking and touches on big philosophical, economic, and societal questions in relation to data and privacy.
For those interested in critical thinking, data, and models, Calling Bullshit offers various resources and tools for spotting and calling bullshit. This website, developed for a course entitled ‘Calling Bullshit’, is a great place to explore and learn about all things “data reasoning for the digital age”.
Another important website worth a mention here is the Algorithmic Justice League, where you can report algorithmic bias, participate in testing software for inclusive training sets, or simply donate and contribute to raising awareness about existing bias in coded systems. With a somewhat similar aim is the Data Harm Record website – a running record of harms that have been caused by uses of big data.
fast.ai is a project that aims to increase diversity in the field of deep learning and make deep learning accessible and inclusive to all. Critical Algorithm Studies: a Reading List is a great website with links to plenty of critical literature on algorithms as social concerns. Here is the Social Media Collective Reading List, where you’ll find further material on Digital Divide/Digital Inclusion and Metaphors of Data.
The AI Now Institute at New York University is an interdisciplinary research center dedicated to understanding the social implications of artificial intelligence. Data & Society is a research institute focused on the social and cultural issues arising from data-centric technological developments. FAT/ML is a website on Fairness, Accountability, and Transparency in Machine Learning with plenty of resources and events, run by a community of researchers.
ConceptNet Numberbatch 17.04: better, less-stereotyped word vectors. This is not a website but a blogpost; I am putting it here with the other websites because the author offers some solutions for reducing bias when building algorithms for natural language understanding, beyond simply stating that such algorithms are biased.
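The ConceptNet post’s starting point is that off-the-shelf word vectors encode human stereotypes. A minimal sketch of how such an association can be measured with cosine similarity (the tiny three-dimensional vectors below are invented purely for illustration; real systems would load trained embeddings such as word2vec, GloVe, or ConceptNet Numberbatch):

```python
import math

# Toy 3-dimensional "word vectors" -- invented for illustration only,
# not real embeddings.
vectors = {
    "doctor": [0.9, 0.1, 0.3],
    "nurse":  [0.3, 0.9, 0.2],
    "he":     [1.0, 0.0, 0.1],
    "she":    [0.0, 1.0, 0.1],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def gender_association(word):
    """Positive => closer to 'he'; negative => closer to 'she'."""
    return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

# With these toy vectors, "doctor" skews male and "nurse" skews female --
# the kind of learned stereotype that debiasing work tries to remove.
print(f"doctor: {gender_association('doctor'):+.3f}")
print(f"nurse:  {gender_association('nurse'):+.3f}")
```

Debiasing approaches like the one the post describes adjust the vectors so that occupation words score near zero on this axis, while genuinely gendered words keep their associations.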
Auditing Algorithms – a useful website for those teaching or interested in accountability in automated systems. The site includes film festivals, videos, etc.
Biology/genetics – (Digital phrenology?)
It is difficult to draw a line and put certain articles under the category of social, biological, political, or other, as they all seem somehow interlinked. Nonetheless, I think the following articles can loosely be described as dealing with biological/genetic material. Towards the end of this post, I have also thematized some articles under the category of ‘political’.
In a recent preprint paper, “Deep Neural Networks Can Detect Sexual Orientation From Faces” (here are the Guardian and the Economist reports), Yilun Wang and Michal Kosinski claimed that a deep neural network can be trained to discern individuals’ sexual orientations from their photographs. The paper has attracted, and continues to attract, massive attention and has generated numerous responses, outrage, and discussion. Here is an in-depth analysis from Calling Bullshit, here a detailed technical assessment, and here a comprehensive and eloquent response from Greggor Mattson. Here is another response, another one here from a data scientist’s perspective, and another recent response from O’Neil here. If you only want to read one response, I highly recommend Mattson’s. There have been plenty of discussions and threads on Twitter – here and here are a couple of examples. It is worth noting that Kosinski, one of the authors of the above paper, is listed as one of the advisers for a company called Faception, an Israeli security firm that promises clients it can deploy “facial personality profiling” to catch pedophiles and terrorists, among others.
Do algorithms reveal sexual orientation or just expose our stereotypes? by @blaiseaguera et al. is the latest (January 11, 2018) response to the above Wang and Kosinski “gaydar” paper. In this critical analysis, @blaiseaguera et al. argue that much of the ensuing scrutiny of Wang and Kosinski’s work has focused on ethics, implicitly assuming that the science is valid. On closer inspection, however, they find that the science doesn’t stand up to scrutiny either.
When advanced technologies in genetics and face recognition are applied with the assumption that “technology is neutral”, the consequences are often catastrophic and dangerous. These two pieces, Sci-fi crime drama with a strong black lead and Traces of Crime: How New York’s DNA Techniques Became Tainted, provide some in-depth analysis of such cases.
Physiognomy’s New Clothes is a comprehensive and eloquent piece, well worth your time. Physiognomy, the practice of using people’s outer appearance to infer inner character, is now discredited and discarded, like phrenology. However, this piece illustrates how the practice is alive and well in the era of big data and machine learning. Here is more on the Wu and Zhang paper that the Physiognomy’s New Clothes authors cover in the above piece.
General articles on various automated systems and bias, discrimination, unfairness, ethical concerns, etc., listed by publication date, newest first.
- Ideologies of Boring Things: The Internet and Infrastructures of Race – Los Angeles Review of Books: February 13, 2018
- Tech’s Ethical ‘Dark Side’: Harvard, Stanford and Others Want to Address It February 12, 2018
- Facial Recognition Is Accurate, if You’re a White Guy February 9, 2018
- To Make AI Smarter, Humans Perform Oddball Low-Paid Tasks February 9, 2018
- How big data is helping states kick poor people off welfare February 6, 2018
- When Criminalizing the Poor Goes High-Tech February 6, 2018
- Silicon Valley engineer Erica Joy Baker wishes people would stop telling women that they’re strong February 6, 2018
- Power, Polarization, and Tech February 5, 2018
- The dark side of the ‘data-driven’ February 5, 2018
- An incredibly important paper on whether data can ever be “anonymized” and how we should handle release of large data-sets February 1, 2018
- Policing Poverty Through Automation January 31, 2018
- The Irish Data Protection Bill published on January 30, 2018
- GDPR breach reporting tips (UK)
- The Latest Data Privacy Debacle January 30, 2018
- Strava’s data lets anyone see the names (and heart rates) of people exercising on military bases January 30, 2018
- How to make Artificial Intelligence fair, transparent and accountable: January 27, 2018
- Algorithms are making American inequality worse January 26, 2018
- Engineered for Dystopia January 24, 2018
- An Introduction to Data Ethics – Markkula Center for Applied Ethics January 23, 2018
- The Injustice of Algorithms January 23, 2018
- A Popular Algorithm Is No Better at Predicting Crimes Than Random People – The Atlantic January 17, 2018
- Software ‘no more accurate than untrained humans’ at judging reoffending risk January 17, 2018
- Mechanical Turkers may have out-predicted the most popular crime-predicting algorithm January 17, 2018
- It’s the (Democracy-Poisoning) Golden Age of Free Speech January 16, 2018
- Maybe Facebook Should Abandon the News Feed Altogether January 16, 2018
- Beyond the Rhetoric of Algorithmic Solutionism January 11, 2018
- Why AI Is Still Waiting For Its Ethics Transplant January 11, 2018
- Will Mark Zuckerberg, with his promise to ‘fix’ Facebook, give up revenue to do what’s right? January 7, 2018
- Amazon turns over record amount of customer data to US law enforcement January 5, 2018
- Don’t Be Evil January 3, 2018
- The Algorithms Aren’t Biased, We Are January 3, 2018
- Fair and Balanced? Thoughts on Bias in Probabilistic Modeling December 27, 2017
- What Amazon Echo and Google Home Do With Your Voice Data—And How to Delete It December 24, 2017
- Should AI decide who gets a kidney? December 21, 2017
- In 2017, society started taking AI bias seriously December 21, 2017
- Dozens of companies are using Facebook to exclude older workers from job ads December 20, 2017
- Could Facebook Be Tried for Human-Rights Abuses? – The Atlantic December 20, 2017
- Data Violations: Germany unfriends Facebook December 19, 2017
- Facebook Can Now Find Your Face, Even When It’s Not Tagged December 19, 2017
- Silicon Valley Is Turning Into Its Own Worst Fear December 18, 2017
- Spurred by a ProPublica report, the New York City Council passed the country’s first bill to address algorithmic discrimination in city government December 18, 2017
- ‘The Basic Grossness of Humans’ – The Atlantic December 15, 2017
- Engineers, philosophers and sociologists release ethical design guidelines for future technology December 14, 2017
- Artificial Intelligence Seeks An Ethical Conscience December 12, 2017
- Australian media watchdog to investigate Google and Facebook December 5, 2017
- Debugging data: Microsoft researchers look at ways to train AI systems to reflect the real world December 4, 2017
- Why Autocomplete Is Only Funny for Those Who Can Afford It by Safiya Umoja Noble: December 4, 2017
- Predictive algorithm under wraps December 3, 2017
- The Rhetorical “We” and the Ethics of Technology December 1, 2017
- Artificial intelligence doesn’t have to be evil. We just have to teach it to be good November 30, 2017
- U.S. House Hearing on Algorithms & Big Data: 5 Takeaways for Schools November 29, 2017
- Facebook to temporarily block advertisers from excluding audiences by race November 29, 2017
- Facebook ad targeting is about to get a whole lot creepier November 28, 2017
- Why We Had to Buy Racist, Sexist, Xenophobic, Ableist, and Otherwise Awful Facebook Ads November 27, 2017
- Facebook hasn’t done enough to tell customers they were duped by Russian propaganda November 25, 2017
- Facebook (still) letting housing advertisers exclude users by race November 21, 2017
- Tim Berners-Lee on the future of the web: ‘The system is failing’ November 16, 2017
- Ray Dalio has an unbelievable algorithm November 15, 2017
- How One Woman’s Digital Life Was Weaponized Against Her November 14, 2017
- Maybe Facebook Is Broken. How can you stop people from sharing biased and misleading stuff? November 7, 2017
- Bringing A.R.T. to A.I. November 6, 2017
- Computer says no: why making AIs fair, accountable and transparent is crucial November 5, 2017
- Why we need a 21st-century Martin Luther to challenge the church of tech October 29, 2017
- Facebook must face local data protection regulations, EU court opinion finds October 25, 2017
- Key GDPR Guidance on Behavioral Advertising, Profiling and Automated Decision-Making October 24, 2017
- It’s time for more transparency in A.I. October 24, 2017
- Federal judge unseals New York crime lab’s software for analyzing DNA evidence October 20, 2017
- AI Experts Want to End ‘Black Box’ Algorithms in Government October 18, 2017
- Estonia Proposes Bill of Rights and Responsibilities for Robots October 17, 2017
- Asking the Right Questions About AI October 12, 2017
- Google’s AI chief says forget Elon Musk’s killer robots, and worry about bias in AI systems instead October 3, 2017
- Researchers Are Upset That Twitter Is Dismissing Their Work On Election Interference October 3, 2017
- Facebook’s Ad Scandal Isn’t a ‘Fail,’ It’s a Feature September 23, 2017
- BBC News – Facebook can’t hide behind algorithms September 22, 2017
- Data power could make 1984 ‘look like a Teddy bear’s picnic’ September 21, 2017
- Machines Taught by Photos Learn a Sexist View of Women September 21, 2017
- AI Research Is in Desperate Need of an Ethical Watchdog September 18, 2017
- Getting serious about research ethics: AI and machine learning September 18, 2017
- Machines are getting schooled on fairness September 16, 2017
- Facebook and Google, show us your ad data Understanding how they influence us is crucial to the future of our democracy. September 13, 2017
- Understanding Bias in Algorithmic Design Human judgement lies behind every data-driven decision. Left unexamined, value-laden software can have unintended discriminatory effects. September 6, 2017
- Report: Britain’s Cops Have Big Data But Not Big Analysis September 6, 2017
- Turns out algorithms are racist August 31, 2017
- AI programmes are learning to exclude some African American voices August 16, 2017
- FaceApp Is Very Excited About Its New Line of Ultra-Racist Filters August 8, 2017
- Rise of the racist robots – how AI is learning all our worst impulses August 8, 2017
- Artificial intelligence ethics the same as other new technology July 29, 2017
- Technology is biased too. How do we fix it? July 20, 2017
- How can we stop algorithms telling lies? July 16, 2017
- Lack of ethics education for computer programmers shocks expert July 2, 2017
- Facebook’s secret censorship rules protect white men from hate speech but not black children June 28, 2017
- We need to shine more light on algorithms so they can help reduce bias, not perpetuate it June 12, 2017
- How to Call B.S. on Big Data: A Practical Guide June 3, 2017
- Pitfalls of artificial intelligence decision-making highlighted in Idaho ACLU case June 2, 2017
- The bigot in the machine: Tackling big data’s inherent biases June 1, 2017
- Secret algorithms threaten the rule of law June 1, 2017
- Algorithms aren’t racist. Your skin is just too dark. May 29, 2017
- ‘A white mask worked better’: why algorithms are not colour blind May 28, 2017
- On Facebook May 7, 2017
- AI & Machine Learning Black Boxes: The Need for Transparency and Accountability: April 25, 2017
- FaceApp sorry for ‘racist’ filter that lightens skin to make users ‘hot’ April 25, 2017
- Robots are racist and sexist. Just like the people who created them April 20, 2017
- How artificial intelligence learns to be racist April 17, 2017
- Courts are using AI to sentence criminals. That must stop now. April 17, 2017
- An AI stereotype catcher April 14, 2017
- AI picks up racial and gender biases when learning from what humans write April 13, 2017
- AI programs exhibit racial and gender biases, research reveals April 13, 2017
- AI learns gender and racial biases from language April 13, 2017
- Will the future be full of biased robots? March 31, 2017
- Algorithms can be pretty crude toward women March 24, 2017
- Algorithms learn from us, and we can be better teachers March 13, 2017
- Data-driven crime prediction fails to erase human bias March 8, 2017
- Big data, big problems – interview with Cathy O’Neil March 1, 2017
- How to Keep Your AI From Turning Into a Racist Monster February 13, 2017
- Code-Dependent: Pros and Cons of the Algorithm Age February 6, 2017
- We put too much trust in algorithms and it’s hurting our most vulnerable December 29, 2016
- Be Healthy or Else: How Corporations Became Obsessed with Fitness Tracking December 27, 2016
- Discrimination by algorithm: scientists devise test to detect AI bias December 19, 2016
- A simplified political history of Big Data December 16, 2016
- Hiring Algorithms Are Not Neutral December 9, 2016
- How Algorithms Can Bring Down Minorities’ Credit Scores December 2, 2016
- Put Away Your Machine Learning Hammer, Criminality Is Not A Nail November 29, 2016
- The Foundations of Algorithmic Bias November 7, 2016
- Unregulated Use of Facial Recognition Software Could Curb 1st Amendment Rights October 30, 2016
- Should we trust predictive policing software to cut crime? October 27, 2016
- Google researchers aim to prevent AIs from discriminating October 7, 2016
- To predict and serve? October 7, 2016
- How algorithms rule our working lives September 1, 2016
- White House plan to use data to shrink prison populations could be a racist dumpster fire July 1, 2016
- Is criminality predictable? Should it be? June 30, 2016
- Artificial Intelligence’s White Guy Problem June 25, 2016
- In Wisconsin, a Backlash Against Using Data to Foretell Defendants’ Futures June 22, 2016
- Algorithmic risk-assessment: hiding racism behind “empirical” black boxes May 24, 2016
- There’s software used across the country to predict future criminals. And it’s biased against blacks. May 23, 2016. The company that sells this program (Northpointe) has responded to the criticisms here. Northpointe asserts that the software, which predicts the likelihood that a person will commit future crimes, is equally fair to black and white defendants. Following that response, Jeff Larson and Julia Angwin wrote another piece (Technical Response to Northpointe) re-examining the data. They argue that they have considered the company’s criticisms and stand by their conclusions.
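Part of the ProPublica/Northpointe disagreement comes down to which fairness metric is computed: ProPublica compared error rates (such as false positive rates) across groups, while Northpointe pointed to calibration. A minimal sketch of the error-rate comparison on invented records (the groups, predictions, and outcomes below are entirely hypothetical, not real COMPAS data):

```python
# Each record: (group, predicted_high_risk, actually_reoffended).
# These rows are fabricated solely to show how the metric is computed.
records = [
    ("A", True,  True), ("A", True,  False), ("A", True,  False),
    ("A", False, False), ("A", False, False), ("A", True,  True),
    ("B", True,  True), ("B", False, False), ("B", False, False),
    ("B", False, True), ("B", True,  False), ("B", False, False),
]

def false_positive_rate(group):
    """Among people in `group` who did NOT reoffend, the share flagged high risk."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives)

for g in ("A", "B"):
    print(f"group {g}: FPR = {false_positive_rate(g):.2f}")
```

On these made-up records, the two groups end up with different false positive rates even though a score can simultaneously be well calibrated; Chouldechova (2017), listed in the scholarly section, shows the two criteria generally cannot hold at once when base rates differ.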
- The Real Bias Built In at Facebook May 19, 2016
- Python Meets Plato: Why Stanford Should Require Computer Science Students to Study Ethics May 16, 2016
- Twitter taught Microsoft’s friendly AI chatbot to be a racist asshole in less than a day March 24, 2016
- The Iron Cage in binary code: How Facebook shapes your life chances – Sociological Images: December 30, 2015
- As World Crowds In, Cities Become Digital Laboratories December 11, 2015
- Google Photos Tags Two African-Americans As Gorillas Through Facial Recognition Software July 1, 2015
- How big data is unfair September 26, 2014
- Facebook reveals news feed experiment to control emotions June 30, 2014
- The Hidden Biases in Big Data by Kate Crawford April 1, 2013
Algorithmic processes and politics might seem far removed from each other. If anything, however, the recent political climate shows how algorithms can serve as computational tools for political agendas. Here and here are example Twitter threads that highlight particular Twitter accounts used as tools for a political agenda. The articles below are, in some way or another, related to algorithms in the political arena.
Facebook could get a massive fine if it continues tracking people online February 17, 2018
Facebook admits social media sometimes harms democracy January 22, 2018
How Facebook’s Political Unit Enables the Dark Art of Digital Propaganda December 21, 2017
Inside the world of Brazil’s social media cyborgs December 13, 2017
More than a Million Pro-Repeal Net Neutrality Comments were Likely Faked November 23, 2017
Extreme Vetting by Algorithm November 20, 2017
How a half-educated tech elite delivered us into evil November 19, 2017
How to Fool Americans on Twitter November 6, 2017
Opinion | Silicon Valley Can’t Destroy Democracy Without Our Help November 2, 2017
When Data Science Destabilizes Democracy and Facilitates Genocide November 2, 2017
How People Inside Facebook Are Reacting To The Company’s Election Crisis October 20, 2017
Tech Giants, Once Seen as Saviors, Are Now Viewed as Threats October 12, 2017
Russian Facebook ads: 70 million people may have seen them October 4, 2017
Google and Facebook Have Failed Us – The Atlantic October 2, 2017
Zuckerberg’s Preposterous Defense of Facebook September 29, 2017
“Fake news” tweets targeted to swing states in election, researchers find September 28, 2017
As Google Fights Fake News, Voices on the Margins Raise Alarm September 26, 2017
Facebook blocked an ad for a march against white supremacy: September 25, 2017
Facebook enabled advertisers to reach “Jew haters” September 14, 2017
Facebook and Google, show us your ad data Understanding how they influence us is crucial to the future of our democracy. September 13, 2017
RT, Sputnik and Russia’s New Theory of War September 13, 2017
American politics needs new rules for the Facebook era September 12, 2017
Russia’s Facebook Fake News Could Have Reached 70 Million Americans September 8, 2017
Forum Q&A: Philip Howard on Computational Propaganda’s Challenge to Democracy July 25, 2017. “Computational propaganda, or the use of algorithms and automated social media accounts to influence politics and the flow of information, is an emerging challenge to democracy in the digital age. Using automated social media accounts called bots (or, when networked, botnets), a wide array of actors including authoritarian governments and terrorist organizations are able to manipulate public opinion by amplifying or repressing different forms of political content, disinformation, and hate speech.”
Voter profiling in the 2017 Kenyan election June 6, 2017
Confronting a Nightmare for Democracy May 4, 2017
Robert Mercer: the big data billionaire waging war on mainstream media February 26, 2017
Revealed: how US billionaire helped to back Brexit February 26, 2017
The Truth About The Trump Data Team That People Are Freaking Out About February 16, 2017
The Data That Turned the World Upside Down January 28, 2017
Inside the Trump Bunker, With Days to Go: Win or lose, the Republican candidate and his inner circle have built a direct marketing operation that could power a TV network—or finish off the GOP. October 27, 2016
For a more scholarly read
- Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact.
- Barabas, C., Dinakar, K., Virza, J. I., & Zittrain, J. (2017). Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment. arXiv preprint arXiv:1712.08238.
- Bolukbasi, T., Chang, K. W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems (pp. 4349-4357).
- Caliskan-Islam, A., Bryson, J. J., & Narayanan, A. (2016). Semantics derived automatically from language corpora necessarily contain human biases. arXiv preprint arXiv:1608.07187.
- Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. arXiv preprint arXiv:1703.00056. (PDF)
- Datta, A., Sen, S., & Zick, Y. (2016, May). Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. In Security and Privacy (SP), 2016 IEEE Symposium on (pp. 598-617). IEEE. (PDF)
- Datta, A., Tschantz, M. C., & Datta, A. (2015). Automated experiments on ad privacy settings. Proceedings on Privacy Enhancing Technologies, 2015(1), 92-112.
- Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems (TOIS), 14(3), 330-347.
- Jawaheri, H. A., Sabah, M. A., Boshmaf, Y., & Erbad, A. (2018). When A Small Leak Sinks A Great Ship: Deanonymizing Tor Hidden Service Users Through Bitcoin Transactions Analysis. arXiv preprint arXiv:1801.07501.
- Monahan, J., & Skeem, J. L. (2016). Risk assessment in criminal sentencing. Annual Review of Clinical Psychology, 12, 489-513.
- Narayanan, A., Huey, J., & Felten, E. W. (2016). A precautionary approach to big data privacy. In Data protection on the move (pp. 357-385). Springer, Dordrecht.
- Munoz, C., Smith, M., & Patil, D. (2016). Big data: A report on algorithmic systems, opportunity, and civil rights. Executive Office of the President. The White House.
- Yeung, K. (2017). Algorithmic Regulation: A Critical Interrogation.
- Zafar, M. B., Valera, I., Gomez Rodriguez, M., & Gummadi, K. P. (2017, April). Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. In Proceedings of the 26th International Conference on World Wide Web (pp. 1171-1180). International World Wide Web Conferences Steering Committee.
- Zhang, B. H., Lemoine, B., & Mitchell, M. (2018). Mitigating Unwanted Biases with Adversarial Learning. arXiv preprint arXiv:1801.07593.