- Ananny, M., & Crawford, K. (2016). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 1461444816676645.
- Ajunwa, I., Crawford, K., & Schultz, J. (2017). Limitless worker surveillance. Cal. L. Rev., 105, 735.
- Barabas, C., Dinakar, K., Virza, J. I., & Zittrain, J. (2017). Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment. arXiv preprint arXiv:1712.08238.
- Barocas, S., & Selbst, A. D. (2016). Big data's disparate impact. Cal. L. Rev., 104, 671.
- Bartlett, A., Lewis, J., Reyes-Galindo, L., & Stephens, N. (2018). The locus of legitimate interpretation in Big Data sciences: Lessons for computational social science from-omic biology and high-energy physics. Big Data & Society, 5(1), 2053951718768831.
- Bolukbasi, T., Chang, K. W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems (pp. 4349-4357).
- Caliskan-Islam, A., Bryson, J. J., & Narayanan, A. (2016). Semantics derived automatically from language corpora necessarily contain human biases. arXiv preprint arXiv:1608.07187.
- Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. arXiv preprint arXiv:1703.00056.
- Creemers, R. (2018). China's Social Credit System: An Evolving Practice of Control.
- Datta, A., Sen, S., & Zick, Y. (2016, May). Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. In Security and Privacy (SP), 2016 IEEE Symposium on (pp. 598-617). IEEE.
- Datta, A., Tschantz, M. C., & Datta, A. (2015). Automated experiments on ad privacy settings. Proceedings on Privacy Enhancing Technologies, 2015(1), 92-112.
- Edwards, L., Martin, L., & Henderson, T. (2018). Employee Surveillance: The Road to Surveillance is Paved with Good Intentions.
- Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems (TOIS), 14(3), 330-347.
- Jawaheri, H. A., Sabah, M. A., Boshmaf, Y., & Erbad, A. (2018). When A Small Leak Sinks A Great Ship: Deanonymizing Tor Hidden Service Users Through Bitcoin Transactions Analysis. arXiv preprint arXiv:1801.07501.
- Keyes, O. (2018). The misgendering machines: Trans/HCI implications of automatic gender recognition. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 88.
- Kleinberg, J., & Mullainathan, S. (2018). Simplicity Creates Inequity: Implications for Fairness, Stereotypes, and Interpretability. arXiv preprint arXiv:1809.04578.
- Kuehlkamp, A., Becker, B., & Bowyer, K. (2017, March). Gender-From-Iris or Gender-From-Mascara?. In Applications of Computer Vision (WACV), 2017 IEEE Winter Conference on (pp. 1151-1159). IEEE.
- Lau, J., Zimmerman, B., & Schaub, F. (2018). Alexa, Are You Listening?: Privacy Perceptions, Concerns and Privacy-seeking Behaviors with Smart Speakers. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 102.
- Lipton, Z. C., & Steinhardt, J. (2018). Troubling Trends in Machine Learning Scholarship. arXiv preprint arXiv:1807.03341.
- Miller, T. (2017). Explanation in artificial intelligence: insights from the social sciences. arXiv preprint arXiv:1706.07269.
- Monahan, J., & Skeem, J. L. (2016). Risk assessment in criminal sentencing. Annual Review of Clinical Psychology, 12, 489-513.
- Munoz, C., Smith, M., & Patil, D. (2016). Big data: A report on algorithmic systems, opportunity, and civil rights. Executive Office of the President, The White House.
- Narayanan, A., Huey, J., & Felten, E. W. (2016). A precautionary approach to big data privacy. In Data protection on the move (pp. 357-385). Springer, Dordrecht.
- Raji, I. D., & Buolamwini, J. (2019). Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products. In AAAI/ACM Conference on AI, Ethics, and Society.
- Richardson, R., Schultz, J., & Crawford, K. (2019). Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice. New York University Law Review Online, Forthcoming.
- Sadowski, J. (2019). When data is capital: Datafication, accumulation, and extraction. Big Data & Society, 6(1), 2053951718820549.
- Silver, E., & Miller, L. L. (2002). A cautionary note on the use of actuarial risk assessment tools for social control. Crime & Delinquency, 48(1), 138-161.
- Stark, L. (2019). Facial recognition is the plutonium of AI. XRDS: Crossroads, The ACM Magazine for Students, 25(3), 50-55.
- Stark, L., & Hoffmann, A. L. (2018). Data Is The New What?: Popular Metaphors & Professional Ethics in Emerging Data Cultures. Journal of Cultural Analytics.
- Suresh, H., & Guttag, J. V. (2019). A Framework for Understanding Unintended Consequences of Machine Learning. arXiv preprint arXiv:1901.10002.
- van Wynsberghe, A., & Robbins, S. (2018). Critiquing the reasons for making artificial moral agents. Science and Engineering Ethics, 1-17.
- Weinberg, J. (2018). “Know Everything that Can Be Known About Everybody”: The Birth of the Credit Report.
- Wilson, B., Hoffman, J., & Morgenstern, J. (2019). Predictive Inequity in Object Detection. arXiv preprint arXiv:1902.11097.
- Yeung, K. (2017). Algorithmic Regulation: A Critical Interrogation.
- Zafar, M. B., Valera, I., Gomez Rodriguez, M., & Gummadi, K. P. (2017, April). Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. In Proceedings of the 26th International Conference on World Wide Web (pp. 1171-1180). International World Wide Web Conferences Steering Committee.
- Zhang, B. H., Lemoine, B., & Mitchell, M. (2018). Mitigating Unwanted Biases with Adversarial Learning. arXiv preprint arXiv:1801.07593.
This list is by no means exhaustive; it collects work relevant to my own research, and I revise and revisit it regularly.