Month: September 2017

Resources – on automated systems and bias

Last updated: 21/12/2018

If you are a data scientist, a software developer, or work in the social and human sciences with an interest in digital humanities, then you’re no stranger to the ongoing discussions on how algorithms embed and perpetuate human biases. Ethical considerations and critical engagement are urgently needed.

I have been keenly following these discussions for a while, and this post is an attempt to put together the articles, books, book reviews, videos, interviews, Twitter threads and so on that I’ve come across, in one place, so they can be used as resources.

This list is by no means exhaustive, and as we become more and more aware of the catastrophic consequences of these technologies, more pieces, articles and journal papers are being written about them on a daily basis. I plan to update this site regularly. Also, if you think there is relevant material that I have not included, please leave it as a comment and I will add it.

Books

Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil. A great number of the articles on the list below are written by O’Neil. She is also active on Twitter, regularly posting links and interesting critical insights on everything to do with mathematical models and bias. Here is my own review of O’Neil’s book, itself with plenty of relevant links, and here is another excellent review of O’Neil’s book.

The Age of Surveillance Capitalism by Shoshana Zuboff. From the synopsis: the challenges to humanity posed by the digital future, the first detailed examination of the unprecedented form of power called “surveillance capitalism,” and the quest by powerful corporations to predict and control our behavior.

Shoshana Zuboff’s interdisciplinary breadth and depth enable her to come to grips with the social, political, business, and technological meaning of the changes taking place in our time. We are at a critical juncture in the confrontation between the vast power of giant high-tech companies and government, the hidden economic logic of surveillance capitalism, and the propaganda of machine supremacy that threaten to shape and control human life. Will the brazen new methods of social engineering and behavior modification threaten individual autonomy and democratic rights and introduce extreme new forms of social inequality? Or will the promise of the digital age be one of individual empowerment and democratization?

The Age of Surveillance Capitalism is neither a hand-wringing narrative of danger and decline nor a digital fairy tale. Rather, it offers a deeply reasoned and evocative examination of the contests over the next chapter of capitalism that will decide the meaning of information civilization in the twenty-first century. The stark issue at hand is whether we will be the masters of information and machines or its slaves.

We Are Data: Algorithms and the Making of Our Digital Selves (2018) by John Cheney-Lippold. Below are the first few paragraphs from a review by Daniel Zwi, a lawyer with an interest in human rights and technology. Here is also a link to my Twitter thread where you can read excerpts from the book that I tweeted as I read it.

In 2013, a 41-year-old man named Mark Hemmings dialled 999 from his home in Stoke-on-Trent. He pleaded with the operator for an ambulance, telling them that ‘my stomach is in agony’, that ‘I’ve got lumps in my stomach’, that he was vomiting and sweating and felt light-headed. The operator asked a series of questions — ‘have you any diarrhoea or vomiting?’; ‘have you passed a bowel motion that looks black or tarry or red or maroon?’ — before informing him that he did not require an ambulance. Two days later Mr Hemmings was found unconscious on the floor of his flat. He died of gallstones shortly after reaching hospital.

This episode serves as the affective fulcrum of We Are Data: Algorithms and the Making of Our Digital Selves, John Cheney-Lippold’s inquiry into the manner in which algorithms interpret and influence our behaviour. It represents the moment at which the gravity of algorithmic regulation is brought home to the reader. And while it may seem odd to anchor a book about online power dynamics in a home telephone call (that most quaint of communication technologies), the exchange betokens the algorithmic relation par excellence. Mr Hemmings’s answers were used as data inputs, fed into a sausage machine of opaque logical steps (namely, the triaging rules that the operator was bound to apply), on the basis of which he was categorised as undeserving of immediate assistance.

The dispassionate, automated classification of individuals into categories is ubiquitous online. We either divulge our information voluntarily — when we fill out our age and gender on Facebook, for example — or it is hoovered up surreptitiously via cookies (small text files which sit on our computer and transmit information about our browsing activity to advertising networks). Our media preferences, purchases and interlocutors are noted down and used as inputs according to which we are ‘profiled’ — sorted into what Cheney-Lippold calls ‘measureable types’ such as ‘gay conservative’ or ‘white hippy’ — and served with targeted advertisements accordingly.
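To make that mechanism concrete, here is a minimal, entirely hypothetical sketch of the kind of “measurable type” profiling described above: behavioural signals go in, a category and an ad segment come out. The signal names, thresholds and categories below are invented for illustration only and are not taken from Cheney-Lippold’s book or from any real ad network.

```python
# A deliberately simplified, hypothetical illustration of algorithmic profiling:
# behavioural signals go in, a "measurable type" and an ad segment come out.
# None of the rules, thresholds or categories are real; they only show the shape
# of the process (data points fed through opaque rules to produce a classification).

from typing import Dict, List

def measurable_type(pages_visited: Dict[str, int], purchases: List[str]) -> str:
    """Assign a crude, made-up 'measurable type' from behavioural signals."""
    politics = pages_visited.get("politics", 0)
    wellness = pages_visited.get("fitness", 0) + pages_visited.get("yoga", 0)
    if politics > 10 and "books" in purchases:
        return "engaged political reader"
    if wellness > 5:
        return "wellness enthusiast"
    return "general audience"

def ad_segment(mtype: str) -> str:
    """Map the inferred type to a (fictional) advertising segment."""
    return {
        "engaged political reader": "news subscriptions",
        "wellness enthusiast": "fitness wearables",
    }.get(mtype, "broad retail")

profile_type = measurable_type({"politics": 14, "fitness": 1}, ["books"])
print(profile_type, "->", ad_segment(profile_type))
# engaged political reader -> news subscriptions
```

The point of the sketch is not the particular rules but the one-way nature of the process: the person being classified never sees the rules, only their consequences.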

Algorithms of Oppression: How Search Engines Reinforce Racism by Safiya Umoja Noble – below is an excerpt from Noble’s book:

Run a Google search for “black girls”—what will you find? “Big Booty” and other sexually explicit terms are likely to come up as top search terms. But, if you type in “white girls,” the results are radically different. The suggested porn sites and un-moderated discussions about “why black women are so sassy” or “why black women are so angry” present a disturbing portrait of black womanhood in modern society.
In Algorithms of Oppression, Safiya Umoja Noble challenges the idea that search engines like Google offer an equal playing field for all forms of ideas, identities, and activities. Data discrimination is a real social problem; Noble argues that the combination of private interests in promoting certain sites, along with the monopoly status of a relatively small number of Internet search engines, leads to a biased set of search algorithms that privilege whiteness and discriminate against people of color, specifically women of color.


Algorithms to Live By: The Computer Science of Human Decisions by Brian Christian and Tom Griffiths. This book is concerned with the workings of the human mind and how computer science can help human decision making. Here is a post by Artem Kaznatcheev on Computational Kindness, which might give you a glimpse of some of the issues the book covers. Here is a long interview with Brian Christian and Tom Griffiths, and a TED Talk with Tom Griffiths on The Computer Science of Human Decision Making.

The Black Box Society: The Secret Algorithms That Control Money and Information by Frank Pasquale. You can read the introduction and conclusion chapters of his book here, and here is a good review of Pasquale’s book. You can follow his Twitter stream here.

Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech by Sara Wachter-Boettcher

Here is a synopsis: A revealing look at how tech industry bias and blind spots get baked into digital products—and harm us all.

Buying groceries, tracking our health, finding a date: whatever we want to do, odds are that we can now do it online. But few of us ask why all these digital products are designed the way they are. It’s time we change that. Many of the services we rely on are full of oversights, biases, and downright ethical nightmares: Chatbots that harass women. Signup forms that fail anyone who’s not straight. Social media sites that send peppy messages about dead relatives. Algorithms that put more black people behind bars.

Sara Wachter-Boettcher takes an unflinching look at the values, processes, and assumptions that lead to these and other problems. Technically Wrong demystifies the tech industry, leaving those of us on the other side of the screen better prepared to make informed choices about the services we use—and demand more from the companies behind them.

Paula Boddington, Oxford academic and author of Towards a Code of Ethics for Artificial Intelligence, recommends the five best books on Ethics for Artificial Intelligence. Here is the full interview with Nigel Warburton, published on December 1, 2017.

“Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor” by Virginia Eubanks will be published on January 23, 2018. Here is an excerpt from Danah Boyd’s blog:

“Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor” is a deeply researched accounting of how algorithmic tools are integrated into services for welfare, homelessness, and child protection. Eubanks goes deep with the people and families who are targets of these systems, telling their stories and experiences in rich detail. Further, drawing on interviews with social services clients and service providers alongside the information provided by technology vendors and government officials, Eubanks offers a clear portrait of just how algorithmic systems actually play out on the ground, despite all of the hope that goes into their implementation. Additionally, Berkman Klein discusses “Algorithms and their unintended consequences for the poor” with Eubanks here.

The Big Data Agenda: Data Ethics and Critical Data Studies by Annika Richterich. A PDF is available through the link here.

“This book highlights that the capacity for gathering, analysing, and utilising vast amounts of digital (user) data raises significant ethical issues. Annika Richterich provides a systematic contemporary overview of the field of critical data studies that reflects on practices of digital data collection and analysis. The book assesses in detail one big data research area: biomedical studies, focused on epidemiological surveillance. Specific case studies explore how big data have been used in academic work.

The Big Data Agenda concludes that the use of big data in research urgently needs to be considered from the vantage point of ethics and social justice. Drawing upon discourse ethics and critical data studies, Richterich argues that entanglements between big data research and technology/ internet corporations have emerged. In consequence, more opportunities for discussing and negotiating emerging research practices and their implications for societal values are needed.”

 

Re-Engineering Humanity by Brett Frischmann and Evan Selinger.

Every day, new warnings emerge about artificial intelligence rebelling against us. All the while, a more immediate dilemma flies under the radar. Have forces been unleashed that are thrusting humanity down an ill-advised path, one that’s increasingly making us behave like simple machines? In this wide-reaching, interdisciplinary book, Brett Frischmann and Evan Selinger examine what’s happening to our lives as society embraces big data, predictive analytics, and smart environments.

Outnumbered: From Facebook and Google to Fake News and Filter-bubbles – The Algorithms That Control Our Lives (featuring Cambridge Analytica) by David Sumpter.

A review from the Financial Times is here.

 

TED Talks, podcasts, and interviews 

The era of blind faith in big data must end – TED Talk by Cathy O’Neil, April 2017.

Machine intelligence makes human morals more important – November 11, 2017. In this TED Talk, Zeynep Tufekci emphasizes the importance of human values and ethics in the age of machine intelligence and algorithmic decision making.

We’re building an artificial intelligence-powered dystopia, one click at a time – another thought-provoking TED Talk from techno-sociologist Zeynep Tufekci.

How I’m fighting bias in algorithms – TED Talk by MIT researcher Joy Buolamwini, November 2016.

AI, Ain’t I A Woman? Joy Buolamwini

Data is the new gold, who are the new thieves? – TED Talk by Tijmen Schep, 2016.

O’Neil’s interview with the Politics Weekly podcast (starts 30 minutes in), July 5, 2017. O’Neil calls for public awareness of how algorithms are used, often without our knowledge, in job interviews, for example, and explains why we should question and interrogate these algorithms, which are often presented to us as authoritative.

A short interview with Frank Pasquale on his book The Black Box Society, May 12, 2016. Pasquale emphasizes the opaqueness of algorithms and argues for why we should demand transparency.

A two-minute video, a prototype example of algorithms being used in recruitment. A working example of the kind of dangerous AI used for recruiting that experts such as O’Neil constantly warn against. This post provides a critical analysis of why such endeavors are futile and dangerous. Here’s another related video on how facial recognition technology will go mainstream in 2018. In fact, such technology has already gone mainstream in China. Here is a short video where a BBC reporter experimented with the world’s largest surveillance system.

Tom Chatfield on Critical Thinking, October 2, 2017. In this philosophically themed podcast, Chatfield discusses issues such as “how new digital realities interact with old human biases” with Dave Edmonds.

When algorithms discriminate: Robotics, AI and ethics – November 18, 2017. Stephen Roberts, professor of computer science at the University of Oxford, discusses the threats and promises of artificial intelligence and machine learning with Al Jazeera.

Here is a series of talks, from the ABC Boyer Lectures, hosted by Professor Genevieve Bell. The series is called Fast, Smart and Connected: What is it to be Human, and Australian, in a Digital World? The issues discussed include “How to build our digital future.”

You and AI – Just An Engineer: The Politics of AI (July 2018). Kate Crawford, Distinguished Research Professor at New York University, a Principal Researcher at Microsoft Research New York, and co-founder and co-director of the AI Now Institute, discusses the biases built into machine learning and what that means for the social implications of AI.

Facebook: Last Week Tonight with John Oliver (HBO) an extremely funny and super critical look at Facebook.

Humans are biased, and our machines are learning from us – ergo our artificial intelligence and computer programming algorithms are biased too. Joanna Bryson explains how machines pick up human biases by taking a closer look at how AI bias is learned.
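Bryson’s point (developed in her research with Aylin Caliskan and Arvind Narayanan on word embeddings) can be made concrete with a toy example: when word vectors are learned from human-written text, the geometry of those vectors ends up reflecting human associations. The sketch below uses hand-made vectors purely for illustration, not a real trained model, and a crude association score loosely in the spirit of their word-embedding association test.

```python
# Toy illustration of how bias can be read off word vectors: words that co-occur
# in similar contexts in human-written text end up close together, so stereotyped
# associations in the text become geometric associations in the embedding.
# The vectors below are invented for illustration; a real test (e.g. the
# word-embedding association test of Caliskan, Bryson & Narayanan, 2017)
# would use embeddings trained on a large corpus.

import numpy as np

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 3-d "embeddings".
vec = {
    "engineer": np.array([0.9, 0.1, 0.3]),
    "nurse":    np.array([0.1, 0.9, 0.3]),
    "he":       np.array([0.8, 0.2, 0.2]),
    "she":      np.array([0.2, 0.8, 0.2]),
}

def gender_association(word):
    """Positive = closer to 'he', negative = closer to 'she'."""
    return cos(vec[word], vec["he"]) - cos(vec[word], vec["she"])

for w in ("engineer", "nurse"):
    print(f"{w:9s} association score: {gender_association(w):+.2f}")
# In embeddings trained on web text, occupation words show exactly this kind
# of skew, which is how the machine "learns" the human bias.
```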

Websites

Social Cooling is a term that refers to the gradual, long-term negative side effects of living in a digital society where our digital activities are tracked and recorded. Awareness of potentially being scored by algorithms leads to a gradual behaviour change: self-censorship and self-surveillance. Here is a piece on what looks like social cooling in action. The website itself has plenty of resources that can aid critical thinking and touches on big philosophical, economic and societal questions in relation to data and privacy.


www.socialcooling.com

For those interested in critical thinking, data and models, Calling Bullshit offers various resources and tools for spotting and calling bullshit. This website, developed for a course entitled ‘Calling Bullshit’, is a great place to explore and learn about all things “data reasoning for the digital age”.

Another important website that is worth a mention here is the Algorithmic Justice League, where you can report algorithmic bias, participate in testing software for inclusive training sets, or simply donate and contribute to raising awareness about existing bias in coded systems. More on AI face misclassification and accountability by Joy Buolamwini here. With a somewhat similar aim is the Data Harm Record website – a running record of harms that have been caused by uses of big data.

fast.ai is a project that aims to increase diversity in the field of deep learning and make deep learning accessible and inclusive to all. Critical Algorithm Studies: a Reading List – a great website with links to plenty of critical literature on algorithms as social concerns. Here is the Social Media Collective Reading List, where you’ll find further material on Digital Divide/Digital Inclusion and Metaphors of Data.

The AI Now Institute at New York University is an interdisciplinary research center dedicated to understanding the social implications of artificial intelligence. Data & Society is a research institute focused on the social and cultural issues arising from data-centric technological developments. FAT/ML is a website on Fairness, Accountability, and Transparency in Machine Learning with plenty of resources and events, run by a community of researchers. Litigating Algorithms: Challenging Government Use of Algorithmic Decision Systems is an AI Now Institute report.

ConceptNet Numberbatch 17.04: better, less-stereotyped word vectors – this is not a website but a blog post. I am putting it here with the other websites because the author offers some solutions for reducing bias when building algorithms for natural language understanding, beyond simply stating that such algorithms are biased.
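For readers who want to see what “reducing bias in word vectors” can look like mechanically, here is a generic sketch of one widely discussed technique: estimating a bias direction from a definitional word pair and projecting it out of words that should be neutral (in the spirit of Bolukbasi et al.’s “hard debiasing”). This is not necessarily the method the ConceptNet Numberbatch post describes, and the vectors are toy values for illustration only.

```python
# Generic sketch of removing a bias direction from word vectors.
# Toy vectors; a real pipeline would load trained embeddings and use many
# definitional pairs to estimate the bias subspace.

import numpy as np

vec = {
    "he":         np.array([0.8, 0.2, 0.1]),
    "she":        np.array([0.2, 0.8, 0.1]),
    "programmer": np.array([0.7, 0.3, 0.5]),   # skewed toward "he" in this toy data
}

# 1. Estimate a bias direction from a definitional pair (here just he - she).
bias_dir = vec["he"] - vec["she"]
bias_dir /= np.linalg.norm(bias_dir)

# 2. For a word that should be gender-neutral, remove its component along
#    that direction.
def debias(v, direction):
    return v - np.dot(v, direction) * direction

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

before = cos(vec["programmer"], bias_dir)
after = cos(debias(vec["programmer"], bias_dir), bias_dir)
print(f"projection on bias direction before: {before:+.2f}, after: {after:+.2f}")
# after is ~0: the debiased vector no longer leans toward either pole.
```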

Auditing Algorithms – a useful website for those teaching or interested in accountability in automated systems. The site includes film festivals, videos, etc.

The Ethics and Governance of Artificial Intelligence – a cross-disciplinary course that investigates the implications of emerging technologies, with an emphasis on the development and deployment of artificial intelligence. Here’s an Introduction to Data Ethics by the Markkula Center for Applied Ethics.

Google launches a new course to teach people about fairness in machine learning.

Biology/genetics – (Digital phrenology?)

It is difficult to draw a line and put certain articles under the category of “social”, “biological”, “political”, or other, as the boundaries between these categories are blurred and most of the themes are somehow interlinked. Nonetheless, I think the following articles can loosely be described as dealing with biological/genetic/personality material. Furthermore, towards the end of this post, I have also thematized some articles under the category of “political”.

In a recent preprint paper, “Deep Neural Networks Can Detect Sexual Orientation From Faces” (here are the Guardian and the Economist reports), Yilun Wang and Michal Kosinski claimed that their deep neural network can be trained to discern individuals’ sexual orientations from their photographs. The paper has attracted, and continues to attract, massive attention and has generated numerous responses, outrage and discussion. Here is an in-depth analysis from Calling Bullshit, here a detailed technical assessment, and here a comprehensive and eloquent response from Greggor Mattson. Here is another response, another one here from a data scientist’s perspective, and another recent response from O’Neil here. If you only want to read one response, I highly recommend reading Mattson’s. There have been plenty of discussions and threads on Twitter – here and here are a couple of examples. It is worth noting that Kosinski, one of the authors of the above paper, is listed as one of the advisers for a company called Faception, an Israeli security firm that promises clients it can deploy “facial personality profiling” to catch pedophiles and terrorists, among others.

Do algorithms reveal sexual orientation or just expose our stereotypes? by @blaiseaguera et al. is the latest (January 11, 2018) response to the above Wang and Kosinski “gaydar” paper. In this critical analysis, @blaiseaguera and colleagues argue that much of the ensuing scrutiny of Wang and Kosinski’s work has focused on ethics, implicitly assuming that the science is valid. On closer inspection, however, they find that the science doesn’t stand up to scrutiny either.
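One recurring point in these critiques, made at length in the Calling Bullshit analysis linked above, is the base rate problem: a classifier that looks accurate on a balanced test set produces mostly false positives when applied to a population in which the target group is a small minority. The numbers below are purely illustrative assumptions, not figures from the paper.

```python
# Illustrative base rate arithmetic (assumed numbers, not the paper's).
# Even a seemingly accurate classifier is wrong about most of the people it
# flags when the trait it predicts is rare in the population.

population = 1_000_000
base_rate = 0.05            # assume 5% of the population belongs to the target group
sensitivity = 0.80          # true positive rate (assumed)
specificity = 0.80          # true negative rate (assumed)

positives = population * base_rate
negatives = population - positives

true_positives = sensitivity * positives
false_positives = (1 - specificity) * negatives

precision = true_positives / (true_positives + false_positives)
print(f"flagged: {true_positives + false_positives:,.0f}")
print(f"of those, actually in the target group: {precision:.1%}")
# ~17%: roughly five out of six people flagged would be misclassified.
```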

When advanced technologies in genetics and face recognition are applied with the assumption that “technology is neutral”, the consequences are often catastrophic and dangerous. These two pieces, Sci-fi crime drama with a strong black lead and Traces of Crime: How New York’s DNA Techniques Became Tainted, provide some in-depth analysis of such cases.

Physiognomy’s New Clothes – this is a comprehensive and eloquent piece and well worth your time. Physiognomy, the practice of using people’s outer appearance to infer inner character, is now discredited and discarded, much like phrenology. However, this piece illustrates how such practice is alive and well in the era of big data and machine learning. Here is more on the Wu and Zhang paper that the Physiognomy’s New Clothes authors cover in the above piece. Further examples of digital phrenology can be found here, here and here.

General articles on various automated systems and bias, discrimination, unfairness, ethical concerns, etc., listed in order of publication date, starting from the latest.

Frank Pasquale testifies (video, written testimony) before the United States House of Representatives Committee on Energy and Commerce, Subcommittee on Digital Commerce and Consumer Protection, in relation to “Algorithms: How Companies’ Decisions About Data and Content Impact Consumers”. Here is further written testimony on algorithmic transparency from the Electronic Privacy Information Center – November 29, 2017.
Image Courtesy of ProPublica

There’s software used across the country to predict future criminals. And it’s biased against blacks – May 23, 2016. The company that sells this program, Northpointe, has responded to the criticisms here. Northpointe asserts that the software it sells, which predicts the likelihood that a person will commit future crimes, is equally fair to black and white defendants. Following that response, Jeff Larson and Julia Angwin wrote a further piece (Technical Response to Northpointe) re-examining the data. They argue that they have considered the company’s criticisms and stand by their conclusions.
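The technical heart of the disagreement is that the two sides measure fairness differently: ProPublica compares error rates across groups (black defendants who did not reoffend were flagged as high risk more often), while Northpointe points to calibration (a given risk label corresponds to roughly the same reoffence rate in both groups). The sketch below uses made-up records purely to show how both kinds of metric are computed; it is not the COMPAS data.

```python
# Two fairness checks on a toy risk-score dataset (invented records, not COMPAS).
# Each record: (group, predicted_high_risk, actually_reoffended)

records = [
    # group A
    ("A", True,  True), ("A", True,  True), ("A", True,  False), ("A", True,  False),
    ("A", False, True), ("A", False, False), ("A", False, False), ("A", False, False),
    # group B
    ("B", True,  True), ("B", True,  False), ("B", False, True), ("B", False, False),
    ("B", False, False), ("B", False, False), ("B", False, False), ("B", False, False),
]

def rates(group):
    rows = [r for r in records if r[0] == group]
    flagged = [r for r in rows if r[1]]
    not_reoffended = [r for r in rows if not r[2]]
    # False positive rate: among people who did NOT reoffend, how many were flagged?
    fpr = sum(1 for r in not_reoffended if r[1]) / len(not_reoffended)
    # Calibration (precision): among people flagged high risk, how many reoffended?
    ppv = sum(1 for r in flagged if r[2]) / len(flagged)
    return fpr, ppv

for g in ("A", "B"):
    fpr, ppv = rates(g)
    print(f"group {g}: false positive rate {fpr:.0%}, precision of 'high risk' {ppv:.0%}")
# Here precision is the same for both groups (50%), yet group A's false positive
# rate is more than twice group B's. A score can satisfy one fairness criterion
# while failing another, which is essentially what the COMPAS dispute is about.
```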

Politics

Algorithmic processes and politics might seem far removed from each other. However, if anything, the recent political climate is indicative of how algorithms can be computational tools for political agendas. Here and here are exemplar Twitter threads that highlight particular Twitter accounts used as tools for political agendas. The articles below are, in some way or another, related to algorithms in the political arena.

Forum Q&A: Philip Howard on Computational Propaganda’s Challenge to Democracy July 25, 2017. “Computational propaganda, or the use of algorithms and automated social media accounts to influence politics and the flow of information, is an emerging challenge to democracy in the digital age. Using automated social media accounts called bots (or, when networked, botnets), a wide array of actors including authoritarian governments and terrorist organizations are able to manipulate public opinion by amplifying or repressing different forms of political content, disinformation, and hate speech.”

For a more scholarly read


Afrofeminist epistemology and dialogism: a synthesis (work in progress)

Embodied, enactive and dialogical approaches to cognitive science radically depart from traditional Western thought in the manner in which they deal with life, mind and the person. The former can be characterised as emphasising interdependence, relationships, and connectedness, with attempts to understand organisms in their milieu. Acknowledgements of the complexities and ambiguities of reality form the starting points for epistemological claims. The latter, on the other hand, tends to strive for certainty and logical coherence in an attempt to establish stable and relatively fixed epistemological generalisations. Individuals, who are often perceived as independent, discrete entities, are taken as the primary subjects of knowledge and the units of analysis.

Collins’s proposed black feminist epistemology, hereafter “Afrofeminist epistemology”, opposes the traditional Western approach to epistemology as well as the largely positivist scientific view inherited from it. As such, it is worth drawing attention to the similarities between Black feminist thought and dialogical approaches to the cognitive sciences. In what follows I seek to reveal a striking convergence of themes between these two schools of thought. In so doing, I intend to illustrate that the two traditions – the dialogical approach to epistemology within the cognitive sciences, and Afrofeminist epistemology, particularly the type proposed by Patricia Hill Collins (2002) – can inform one another through dialogue.

General characterization of the classic Western approach to epistemology and the Cartesian inheritance

The classic Western approach to epistemology tends to be monological, meaning it tends to focus on individuals and their cognition and behaviour. When relationships and interactions enter the equation, individuals and their relations are often portrayed as distinct entities that can be neatly separated. Dichotomous thinking – subject versus object, emotion versus reason – persists within this tradition. Ethical and moral values and questions are often treated as clearly separable from “objective scientific work” and as something that the scientist need not contaminate her “objective” work with. In its desire for absolute rationality, Western thought wishes to cleave thought from emotion, cultural influence and ethical dimensions. Cognition, evaluation and emotion are treated as separate entities that should not contaminate one another. Abstract and intellectual thinking are regarded as the most trustworthy forms of understanding, and rationality is fetishized.

In the classic Western epistemological tradition, abstract reasoning is taken to be the highest cognitive goal, and certainty as a necessary component of knowledge. Since the ultimate goal is to arrive at timeless, universally applicable laws, establishing certainty is pivotal for laying the foundations. Although there are historical antecedents leading up to and contributing towards what is generally regarded as the Western tradition – in particular, Plato in his dialogues Meno and Phaedo – Descartes represents the pinnacle of Western thought (Gardiner 1998, Toulmin 1992). The subject as an autonomous and self-sustaining entity, a Cartesian cogito, which we have inherited from Cartesian thinking, remains prevalent in most current Western philosophy as well as in the background assumptions of the human sciences. The way the individual self is taken as the unquestioned origin of knowledge of the world and others is a legacy of this tradition (Linell 2009).

Black feminist criticism of the dominant approach and the proposed alternative

Contrary to the classic Western epistemological tradition, in Afrofeminist epistemology ethical and moral values and questions are inseparable from our enquiries into knowledge. Similarly, knowledge claims and knowledge validation processes are not independent of the interests and values of those who define what knowledge is, what is important and worthy of study, and what the criteria for epistemological justification are (Collins 2002). Such definitions and criteria are guarded fiercely by the institutions and individuals who act as the ‘gatekeepers’ of the classic Western epistemological tradition. This traditional Western epistemology, Collins points out, predominantly represents Western, elite, white, male interests and values. In fact, a brief review of the history of the Western philosophical canon reveals that knowledge production processes and the criteria for knowledge claims have predominantly been set by elite, white, Western men.

Scholars like Karen Warren (2009) have cogently argued that the history of classical Western philosophy has, for centuries, almost exclusively consisted of elite, white, Western European men, giving the illusion that Western white men are the epitome of intellectual achievement. Women’s voices and perspectives were diminished, ignored, and systematically excluded from the canon. In her ‘recovery project’, Warren finds that women philosophers have nonetheless made important contributions throughout the history of philosophy, and that you find them when you go looking for them. This, to a great extent, remains the case not only in philosophy but also in much of the rest of the academic tradition. A brief look at most philosophy curricula reveals that white European male philosophers and their views remain dominant and definitive.

Taking traditional approaches as the “normal” and “acceptable” ways to theorise and generalise about people’s lived experiences means that any other approaches to theorising about groups of people that are not aligned with canonical intellectual currents (often white, European and male) are dismissed as “anomalies”. For Collins, it is indisputable that different people experience reality differently and that all social thought somewhat reflects the realities and interests of its creators. Political criteria influence knowledge production and validation processes in one way or another. Collins asserts that, in studying Black women’s realities, the typical perspectives on offer have either identified Black women with the oppressor, in which case Black women lack an independent interpretation of their own realities, or have characterised Black women as less human than the oppressor, in which case Black women lack the capacity to articulate their own standpoint. While in the first perspective Black women’s realities are seen as not their own, in the second they are seen as inferior. For that reason, the traditional epistemology is inadequate to capture and account for the lived experiences of Black women – hence Collins’s proposal for an Afrocentric feminist epistemology grounded in Black women’s values and lived experiences.

Black women’s lived experiences are different in important ways. The kinds of relationships Black women have, and the kind of work they engage in, are notable examples that demonstrate these differing realities and lived experiences. Intuitive knowledge, what Collins calls wisdom, is crucial to the everyday lives and survival of Black women. While wisdom and intuition, as opposed to abstract intellectualizing, might be excluded as irrelevant or, at best, treated as less credible as far as traditional epistemologies are concerned, they are highly valued within Black communities:

“The distinction between knowledge and wisdom, and the use of experience as the cutting edge dividing them, has been key to Black women’s survival. … knowledge without wisdom is adequate for the powerful, but wisdom is essential to the survival of the subordinate.” (Collins 1989, p. 759)

The desire for complete objectivity and universally generalisable theories in the dominant Western tradition has led to a focus on abstract analysis of the nature of concepts like ‘knowledge’ and ‘justification’, with little to no grounding in complex lived experience. Its portrayal of reason and rationality in direct contrast with emotion – the former needed to arrive at pure, objective knowledge – has led to dichotomous thinking, thus blinding us to continuities and complementarities. Consequently, “reason” has been privileged over emotion. This in turn has relegated emotional and bodily knowledge, what Foucault (1980) calls ‘subjugated knowledge’, often expressed through music, drama and the like, to the status of something less important. However, ‘subjugated knowledge’ is crucial and is part of a way of life and survival for Black communities. Such knowledge, grounded in concrete experiences and recognised through connectedness, dialogue and relationships, is what is of real value for Black women.

That knowledge claims should be grounded in concrete, lived experience rather than abstract intellectualising is crucial to Collins’s Afrocentric feminist epistemology. Collins’s Afrocentric epistemology prioritizes wisdom over knowledge and has, at its core, Black women’s experiences of race and gender oppression. Black women have a shared experience of oppression, imperialism, colonialism, slavery, and apartheid, as well as roots in the core African value system prior to colonization. The roots of Afrocentric epistemology can be traced back to African-based oral traditions. As such, dialogues occupy an important place: so far as Afrocentric epistemology is concerned, they are an essential method for assessing knowledge claims.

This Afrocentric epistemology, grounded in the lived experience of Black women and employing dialogue as a way of validating knowledge claims, stands in stark contrast with Eurocentric epistemology. Connectedness rather than separation is an essential component of the knowledge validation process. Individuals are not detached observers of stories or folktales but active participants, listeners and speakers, part of the story. Dialogues explore and capture the fundamentally interactive, connected nature of people and relationships.

Ethical claims lie at the heart of an Afrocentric feminist epistemology, in contrast to the classical Western epistemology that considers ethical issues as separate from and independent of ‘objective scientific investigations’. Afrocentric feminist epistemology is about employing emotions, wisdom, ethics and reason as interconnected and equally essential components in assessing knowledge claims with reference to a particular set of historical conditions.

Dialogical criticism of the dominant approach and its alternative

The dialogical approach to cognitive science – inspired by Mikhail Mikhailovich Bakhtin’s (1895–1975) thinking and further developed by dialogists such as Per Linell (2009) – objects to the dominant Western epistemological approach. Dialogical theories, which have roots in the Bakhtin Circle, a 20th-century school of Russian thought, have had a massive influence on social theory, philosophy and psychology. At the centre of dialogical theories lies the view that linguistic production, the notion of selfhood, and knowledge are essentially dialogic. Dialogical approaches are concerned with conceptualizing and theorizing human sense-making, and they do so based on a set of assumptions, some of which stand in stark opposition to traditional Western philosophy and science. These assumptions include: individual selves cannot be assumed to exist as agents and thinkers before they begin to interact with others and the world; and our sense-making is not separable from our historical antecedents and current cultural and societal norms and value systems. The interrelation between self, others and the environment is there from the start of the infant’s life, and the awareness of self and others co-develops over time; they are two sides of the same process. Classical Western philosophy and science, by contrast, have tried to reduce the world to rational individual subjects in an attempt to establish stable universals: the origin of knowledge of the world and of others is the discrete individual person. So far as dialogical approaches go, most traditional Western epistemological approaches are rooted in Cartesian individualism and are monological – meaning that they only encompass individuals and their cognition and environments. Groups and societies are nothing but ensembles of individuals:

“Individuals alone think, speak, carry responsibilities, and other individuals at most have a casual impact on their activities and stances.” (Linell 2009, p. 44)

Dialogism[1], in contrast, insists that interdependencies, co-dependencies, and relationships between the individual and the world are the most fundamental components in understanding the nature of selves and, furthermore, of knowledge. The term intersubjectivity captures this concept well:

“The term “intersubjectivity”—or what Hannah Arendt calls “the subjective in-between”—shifts our emphasis away from notions of the person, the self, or the subject as having a stable character and abiding essence, and invites us to explore the subtle negotiations and alterations of subjective experience as we interact with one another, intervocally or dialogically (in conversation or confrontation), intercorporeally (in dancing, moving, fighting, or competing), and introceptively (in getting what we call a sense of the other’s intentions, frame of mind, or worldview).”  (Jackson 2002, p. 5)

Cultures and societies are typically conceived as objective, stable structures so far as Western epistemologies go. Dialogism, by contrast, conceives of cultures and societies as dynamic, living and partly open, with tensions, internal struggles and conflicts between majorities and minorities and between different value systems. “Knowledge is necessarily constructed and continually negotiated (a) in situ and in sociocultural traditions, and (b) in dialogue with others; individuals are never completely autonomous as sense-makers.” (Linell 2009, p. 46) The individual is not a separate, discrete, fixed and stable entity that stands independent of others, but rather one that is always in dynamic interaction with, and interdependent with, others. Knowledge claims and knowledge validation processes therefore need to reflect these continual tensions and dynamic interactions.

Concluding remarks: drawing similarities between dialogical approaches and Afrofeminist epistemology

So, what are the implications, if any, of drawing these commonalities between Afrofeminist epistemology and dialogical approaches to epistemology, and their common refutation of traditional Western epistemology? Collins has described Afrofeminist and Western epistemological grounds as competing and at times irreconcilable:

“Those Black feminists who develop knowledge claims that both epistemologies can accommodate may have found a route to the elusive goal of generating so called objective generalizations that can stand as universal truths.” (Collins 1989, p. 773)

The synthesis and incorporation of dialogism with Afrofeminist epistemology is, in a sense, not the discovery of that elusive route to “objective generalizations” or “universal truths” that satisfy both epistemologies. Rather, such a synthesis, I argue, is a means towards epistemological approaches that aspire to embed Afrofeminist values and dialogical underpinnings in our understandings of personhood and knowledge. Such epistemological approaches acknowledge that knowledge claims, knowledge validation processes and scientific endeavours in general are value-laden and cannot be considered independent of underlying values and interests. It is a move towards epistemological approaches that acknowledge the role of the scientist/theorist, which Barad (2007) captures concisely:

“A performative understanding of scientific practices, for example, takes account of the fact that knowing does not come from standing at a distance and representing but rather from a direct material engagement with the world.”   (Barad 2007, p. 49)

Connectedness and relationships, rather than disinterested, disembodied, and detached Cartesian individuals, form a central component of analysis. Great emphasis is placed on extensive dialogue rather than on becoming a detached observer of stories. Individual expressiveness, emotions, the capacity for empathy, and the fact that ideas cannot be divorced from those who create and share them need to be key factors in this epistemology. Such is an epistemological approach that aspires to embed Afrofeminist values and dialogical underpinnings.

Knowledge is specific to time and place and is not rooted in the individual person but in relationships between people. Individuals exist in a web of relations, co-dependent on one another, negotiating meanings and values through dialogue. As Bakhtin, the pioneer of dialogism, emphasized, we are essentially dialogical beings, and it is only through dialogue with others that we come to realise and sustain a coherent – albeit continually changing – sense of self. Reality is messy, ambiguous, and complex. Any epistemological approach that takes the person as a fully autonomous, fixed, and self-sufficient agent whose actions are guided by pure rationality fails to recognise the complexities and ambiguities of reality and the time- and context-bound nature of knowledge. At the core of this proposed Afrofeminist/dialogical approach to epistemology is an attempt to bring in values as an important constituent of the dialogical, intersubjective, embodied, constantly-in-flux person, and of the epistemologies that derive from it.

[1] It is important to note that individuals do not disappear in dialogism; rather, the individual is a social being who is interdependent with others, “not an autonomous subject or a Cartesian cogito” (Linell 2009).

Bibliography

Barad, K. (2007). Meeting the universe halfway: Quantum physics and the entanglement of matter and meaning. Duke University Press.

Collins, P. H. (1989). The social construction of black feminist thought. Signs: Journal of Women in Culture and Society, 14(4), 745-773.

Collins, P. H. (2002). Black feminist thought: Knowledge, consciousness, and the politics of empowerment. Routledge.

Foucault, M. (1980). Language, counter-memory, practice: Selected essays and interviews. Cornell University Press.

Gardiner, M. (1998). The incomparable monster of solipsism: Bakhtin and Merleau-Ponty. Bakhtin and the human sciences. Sage, London, 128-144.

Jackson, M. (2012). Lifeworlds: Essays in existential anthropology. University of Chicago Press.

Linell, P. (2009). Rethinking language, mind, and world dialogically. IAP.

Toulmin, S. (1992). Cosmopolis: The hidden agenda of modernity. University of Chicago Press.

Warren, K. (Ed.). (2009). An unconventional history of Western philosophy: conversations between men and women philosophers. Rowman & Littlefield.