
Humility is a luxury the privileged can afford

I had the privilege of participating in a science communication conference last week (12 December 2018). Some of the speakers beautifully and convincingly articulated why it is important for academics to communicate their work to non-academics, as well as to academics in other disciplines, and how to do it. Alan Alda’s talk, in particular, was deep, insightful and thought-provoking.

Alda’s “Communication is not something you add to science; it is the essence of science” captures his key message: communication is an essential part of doing science, not something separate and extra. There is little dispute about the importance of sharing one’s work with the general public, with scientists, and with academics outside one’s field. However, there is very little guidance on how one ought to go about it. Alda’s talk during the SCI:COM conference in Dublin provided some of the most insightful advice I have come across.

Alda suggests we talk TO and not AT people. This seemingly obvious but powerful statement shifts the mindset from “giving a talk” or “delivering a lecture”, which treats knowledge as something that can simply be dispensed, to communication as a two-way, shared activity.

Science communication is a reciprocal process that involves both the speaker and the audience. It is vital that the communicator pays attention to the person they are communicating with. “It is up to you, the communicator, to ensure that the person is following and to bring them on board.” And this requires understanding your audience. As Alda puts it: “the speaker needs to listen harder than the listener”.

Communication, Alda argues, is not about me figuring out the best message and spraying it at you; it is about building a reciprocal, dynamic relationship that changes both the speaker and the audience. Effective communication means understanding your audience and knowing how to connect with them. To do so, we don’t start with crafting the best message; we start with awareness of the audience.

Good science communication, Alda emphasises, requires reputation, which is intrinsically connected to trust. Speaking from a position of authority is different from speaking as an equal, a fellow human being. Your audience is more likely to trust you when you speak as a fellow human, and this requires humility, which brings me to the central point of my blog.

I wholeheartedly agree with Alda’s approach to communication and also think that humility is a virtue that needs to be highly valued. However, whether humility is viewed as a virtue depends on societal stereotypes, hence my conflict with it. Humility doesn’t yield trust and reputation for everyone, and I speak from the perspective of a black woman in academia.

In academia, we often have an ideal representation, an image, of what an ‘intellectual’ looks like. This is typically a white, middle-class, cis male. Society’s stereotypes mean this group of people is automatically perceived as authoritative, and academia’s structure means that people who fit the stereotypical ‘intellectual’ are seen as unquestionable experts. For the privileged who fit society’s ‘intellectual’, for whom coming across as authoritative is the default, humility and speaking to their audience as fellow humans gain them trust. On the other hand, academics who don’t fit society’s stereotypical ‘intellectual’ often have to work hard simply to prove that they are as capable as their white male counterparts. In an academic environment where looks, gender and race are part of ‘fitting in’ and being acknowledged as an intellectual, humility, an admirable trait in the white male, can, for a black woman, be read as an admission that she is not capable. When the default assumption is that you might lack the capacities due to your race or gender, humility can seem like confirming people’s assumptions. For the black woman who already struggles to establish herself as an intellectual, humility, downplaying one’s skills and achievements, can be a self-imposed punishment that undersells her intellectual capacity. Humility, then, seems a luxury that the privileged can afford.

Having said that, I must emphasize that the problem is not humility itself but societal stereotypes and rigid academic structures. I still think humility is a trait we need to treasure, both in academia and outside it. I just hope that we gradually challenge these stereotypes of what an expert intellectual looks like, which will then afford minorities the luxury of humility instead of punishing them for it.



Why model?

I came across this little paper on the Introduction to Dynamical Systems and Chaos online course from the Santa Fe Institute. It was provided as supplementary reading in the ‘Modelling’ section. The paper lays out some of the most enduring misconceptions about building models.

“The modeling enterprise extends as far back as Archimedes; and so does its misunderstanding.” Epstein (2008)

So, why model? What are models? And who are modellers?

Prior to reading this paper, my short answers to these questions would have been in accordance with the widely held misconceptions that:

We model to explain and/or predict. Models are formal (often mathematical) representations of phenomena or processes. And a modeller is someone who builds these explicit formal mathematical models. However, Epstein explains:

“Anyone who ventures a projection, or imagines how a social dynamic—an epidemic, war, or migration—would unfold is running some model.”

I like the idea that we all run implicit models all the time. In the social and political sphere, where it is extremely difficult to operationalize and specify variables, this perspective gives implicit modelling, such as drawing dynamical analogies, its due importance.

The paper lays out 16 reasons other than prediction for building models. The idea that prediction and explanation aren’t the only modelling goals was a revelation to me, given that I’ve had a love-hate relationship with modelling in the past. I am attracted to models, especially those with a dynamical systems inclination, but the overall tendency towards prediction as a goal often frustrates me. Just to clarify, prediction is a fine goal, but my objections arise when 1) we’re deluded into thinking that models give us the tools to predict specific individual behaviours, and 2) we believe we can model a phenomenon, especially human behaviour, without first understanding it.


xkcd: Machine Learning

Let me elaborate further in the context of the automated predictive systems that are currently trending (at least within my academic circle) and often preoccupy my thinking. Claims to predict “criminal” and “risky” behaviour are examples from last week’s headlines: UK police want Artificial Intelligence (AI) to predict criminal behaviour before it happens, and Predictim, a commercial data analytics firm, claims its AI can flag “risky” babysitters. Unfortunately, these are not outrageous exceptions but the general direction in which things in the digital surveillance sphere seem to be heading.

Behaviours such as “criminal” or “risky” are complex adaptive behaviours that result from innumerable ongoing factors, which we can never fully specify in the first place. This makes it impossible to predict criminal behaviour with certainty. Juarrero reminds us why it is impossible to predict human behaviour with precision:

“When we are dealing with complex adaptive systems, surprises are unavoidable. Because of their sensitivity to initial conditions – due, in turn, to their contextual and temporal embeddedness – complex adaptive systems are characterized by unusual twists and novel turns. Since we will never be able to specify any dynamical system’s initial conditions to the requisite (infinite) degree, a fortiori we will never be able to capture all the details and circumstances of anyone’s life and background. Given this limitation, we must always keep in mind that reconstructing specific instances of behavior will always be, at best, an interpretation and not a deduction – a much more fallible type of explanation than we had previously hoped was available. Interpretations of human action are always tentative. Absolute certainty about either what the agent just did, or what he or she will do – specifically – a year from now, is therefore impossible.” (Juarrero 1999, p. 225)
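Juarrero’s point about sensitivity to initial conditions can be demonstrated with even the simplest dynamical system. Below is a minimal sketch (my own toy illustration, not from Juarrero or Epstein) using the logistic map with illustrative parameters: two trajectories that begin a hair’s breadth apart soon bear no resemblance to each other, which is why specifying initial conditions “to the requisite (infinite) degree” matters.

```python
# The logistic map x -> r*x*(1-x) with r = 4.0, a standard chaotic
# regime (parameters are my own choice for illustration).
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two initial conditions differing by one part in ten billion.
a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)

# Early on the trajectories are indistinguishable; a few dozen
# iterations later they have completely diverged.
early_gap = abs(a[5] - b[5])
late_gap = max(abs(x - y) for x, y in zip(a, b))
```

No measurement of a real social system is accurate to ten decimal places, so the same arithmetic that makes this toy unpredictable makes long-range prediction of anyone’s behaviour hopeless.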

These claims to predict “criminal” or “risky” behaviour are more than a mere misunderstanding of human nature or simple illusions about what AI tools can do. As these tools are implemented in the social world, they have grave consequences for people’s lives. When claiming to predict someone’s potential criminality, errors are inevitable, and the stakes are high when we get things wrong. Unsurprisingly, it is often society’s most vulnerable, those who are disenfranchised, who pay the highest price. Indeed, such models are used to further punish and disenfranchise those who fall prey to them.

A slightly different but interrelated issue with modelling to predict is that the drive to predict and explain often ignores the value of describing and observing to gain deep understanding. Sure, describing to understand, on the one hand, and explaining and predicting, on the other, aren’t mutually exclusive. However, in practice we seem to have blindly adopted prediction and generalization as the primary goals of science, and studying to describe and understand is undervalued as a result. What is the point of describing? you might ask. I think it is fundamental to understand any phenomenon or process as deeply and comprehensively as possible before we attempt to explain or predict it, and description is key to gaining such understanding.

I’ll leave you with an insightful Geertz (1973) passage from The Interpretation of Cultures:

“… I have never been impressed with claims that structural linguistics, computer engineering or some other advanced form of thought is going to enable us to understand men without knowing them.”

The AI side of cognitive science is concerned with first world problems

I recently had the opportunity to attend a multidisciplinary conference where cognitive scientists, philosophers, psychologists, artificial intelligence (AI) researchers, neuroscientists and physicists came together to discuss the self. The conference was, generally speaking, well organized, and most of the talks were interesting. The theme of the conference was the openness of the self: contrary to the traditional essentialist view of the self as fixed, fully autonomous and self-contained, the consensus among the attendees was that the self is not a static, discrete entity that exists independently of others but dynamic, changing, co-dependent, and intertwined with others. This intertwinement furthermore extends to the social and political forces that play crucial roles in constituting who we are. In this vein, any discussion of self and technology needs to acknowledge the entanglement of social and political factors and the necessity for diverse input and perspectives.

AI is a very broad field of enquiry which includes, to mention but a few, facial recognition technologies, search engines (such as Google), online assistants (such as Siri), and the algorithms used in almost every sphere of society (medical, financial, judicial, and so on). Unfortunately, the view of AI that dominates public as well as academic discourse is a narrow, one-dimensional one where the concern revolves around the question of artificially intelligent “autonomous” entities. This view is, unsurprisingly, often promoted by a one-dimensional group of people: white, middle-class and male. Questions beyond the creation of an artificial self rarely enter the equation. Social, political, and economic factors rarely feature in cognitive science and interdisciplinary formulations of selfhood and technology, as if any technological development emerges in a social, political and economic vacuum. And the conference I attended was no different.

This was apparent during the theme-based group discussions at the conference, where one group discussed issues regarding self and technology. The discussion was led by researchers in embodied AI and robotics. The questions revolved around the possibility of creating artificial selves and robots, whether AI can be sentient, and, if so, how we might know it. As usual, the preoccupation with abstract concerns and theoretical construction took centre stage, to the detriment of political and social issues. Attempts to direct some attention towards social and political issues were dismissed as irrelevant.

It is easy to see the appeal of these abstract philosophical questions. After all, we immediately think of “I, Robot”-type robots when we think of AI, and we think of “self-driving” cars when we think of ethical questions in AI.


A 1980s Turk reconstruction

The fascination and preoccupation with autonomous and discrete machines is not unique to current pop culture. The French philosopher René Descartes had a walking, talking clockwork automaton named after his daughter Francine, who had died of scarlet fever at the age of five; the machine apparently simulated her. The 18th-century Hungarian author and inventor Wolfgang von Kempelen created the Mechanical Turk, a (fake) chess-playing and speaking machine, to impress the Empress Maria Theresa of Austria.

It is not surprising that our perception of AI is dominated by such issues, given the influential role our sci-fi pop culture plays in shaping that perception. The same culture feeds on overhype and exaggeration of the state of AI. Researchers themselves are often just as responsible for miscommunication and misunderstanding about the state of the art of the field. And the more hyped a piece of work is, the more attention it is given – look no further than the narrative surrounding Sophia, an excessively anthropomorphized and overhyped machine.

Having said that, the problem goes further than misleading coverage and overhype. The overhype, and the narrow, one-dimensional view of AI as concerned with questions of the artificial self and “self-driving” cars, detract from more nuanced and more pressing issues in AI that impact the poor, the disenfranchised, and the socially and economically disadvantaged. For example, in the current data economy, insurance systems reward and offer discounts to those who are willing to be tracked and to provide as much information as possible about their activities and behaviours. Consumers who want to withhold all but the essential information from their insurers will pay a premium. Privacy, increasingly, will come at a cost only the privileged can afford.

Underlying this narrow, one-dimensional view of AI and the preoccupation with the creation of an artificial self is an implicit assumption that AI is some sort of autonomous, discrete entity separate from humans, and not a disruptive force for society and the economy. Sure, if your idea of AI revolves around sentient robots, that might bear some truth. This implicit assumption seems, to me, a hangover from Cartesian dichotomous thinking that persists even among scholars within the embodied and enactive tradition who believe their perspectives account for complex reality. This AI-versus-humans thinking is misleading and unhelpful, to say the least.

AI systems are ubiquitous, a fact that becomes apparent once you abandon the narrow, one-dimensional view of AI. AI algorithms are inextricably intertwined with our social, legal, health and educational systems, not separate, independent entities as we like to envision when we think of AI. The apps that power your smartphone and the automated systems that help decide whether you get a loan, whether you are hired, and how much your car insurance premium will cost are all AI. AI that has real impact, especially on society’s most vulnerable.

Yet most people working on AI (both in academia and in Silicon Valley) are unwilling to get their hands dirty with any aspect of the social, economic or political impact of AI. The field seems, to a great extent, to be constituted of those who are socially, economically and racially privileged, for whom these issues bear no personal consequences. The AI side of cognitive science is no different, with its concern for first-world problems. Any discussion of a person, or even of society, is devoid of gender, class, race, ability and so on. When scholars in these fields speak of “we”, they are barely inclusive of those outside the status quo, which is mostly the white, male, Western, middle-class, educated person. If your model of self is such, how could you, and why would you, be concerned about the class, economic, race and gender issues that emerge from the unethical application of AI? After all, you are unlikely to be affected. Not only is this model of self unrepresentative of society, there is barely awareness of the issue as a problem in the first place. The problem is invisible due to privilege, which renders diversity and inclusivity of perspectives irrelevant.

This is by no means a generalization about everyone within AI scholarship. There are, of course, plenty of people who acknowledge political and social forces as part of the issues to be concerned about within the discussion of AI. Unsurprisingly, much of the important work in this regard is done by people of colour and women, who unfortunately remain a minority. The field as a whole would do well to make sure that it is inclusive of such voices, and to value their input instead of dismissing it.

Situating China’s Social Credit System in history and context

If you have been following developments in the digital humanities, it is very likely that you’ve come across the news that China is implementing a Social Credit System (SCS). Although the SCS is portrayed as a single integrated system that quantifies all behaviour into credit scores, it is in fact an ecology of fragmented initiatives with many different stakeholders. Broadly speaking, it consists of scoring systems developed by the private sector and by governmental bodies. As far as the government is concerned, the SCS is an attempt to promote “trustworthiness” and transparency in the economy, which is expected to combat a perceived lack of trust in the marketplace and, more generally, to harmonize social conduct.

Citizens’ “trustworthiness” is rated based on an individual’s social behaviour: their crime records, what they say on social media, what they buy, the scores of their friends, and so on. This has possible positive or negative implications for an individual’s job, visa and loan applications. As a commitment to radical transparency is a central driving force behind the SCS, information on subjects’ trustworthiness is made publicly available, and in some circumstances even actively broadcast. Individual citizens and businesses alike are publicly ranked, with the records open for anyone to see.


Rongcheng’s “civilized families” are displayed on public noticeboards like these. (Simina Mistreanu)

The SCS is to become mandatory by 2020 and is currently being implemented in some form or another across parts of China. Socioeconomically deprived areas seem to be priority targets. Rongcheng, in the eastern province of Shandong, where the SCS has been rolled out for some time now, is, according to government officials, one of the best examples of the system working as intended.

From a general systems science perspective, the SCS is a self-organizing system that operates through incentive and punishment mechanisms. People with low ratings will, for example, have slower internet speeds, restricted access to restaurants, and their right to travel revoked.

“Higher scores have already become a status symbol, with almost 100,000 people bragging about their scores on Weibo (the Chinese equivalent of Twitter) within months of launch. A citizen’s score can even affect their odds of getting a date, or a marriage partner, because the higher their Sesame rating, the more prominent their dating profile is on Baihe.” (Creemers, 2018)

The SCS has been described as an insidious digital panopticon and a dystopian nightmare in which individuals’ every move is monitored and ranked through data generated from all sorts of activities and interactions, online or otherwise, through digital technologies (facial recognition tools and biometric information). Many draw parallels between the SCS and “Nosedive”, the episode of the dystopian science fiction series Black Mirror in which people rate each other based on their interactions.


Many ethical and human rights concerns, as well as the complete eradication of any idea of privacy, have been raised, and the negative consequences of such a dystopian nightmare system are indisputable.

With the realization that ‘digital reputations’ could limit opportunities come the tendencies to self-censor and to be risk-averse. We are unlikely to hit “like” on a Facebook post that protests some government policy knowing that it could affect our ‘digital reputations’. Consequently, people gradually change their behaviour to align with what the system requires in order to get better scores, and in the process the behaviours and norms the government defines as “acceptable” are reinforced.

Nonetheless, among the misconceptions surrounding the SCS, there seems to be some consensus that using individuals’ digital traces to directly or indirectly influence their behaviour is something that only happens in non-Western totalitarian states. In fact, credit scoring practices are not unfamiliar in Western societies. Facebook, for instance, seems to be developing its own system for rating users’ trustworthiness.

It is also worth mentioning Facebook’s emotion-tracking patent (which aims to monitor individuals’ typing speed in order to predict emotions and adapt messages in response), granted in May 2017, and its currently filed socioeconomic classifier (which might enable Facebook to rank its users by social class), among its series of patents. These developments, in combination with others such as Facebook’s ability to flag individuals through its facial recognition technology without the user’s consent, in some sense constitute a surveillance society. Facebook’s ability to rank and categorize people into a variety of socioeconomic categories has possible impacts on individuals’ opportunities depending on their class, gender, race and sexual orientation. Whether it’s the type of job ads one is excluded from viewing (due to one’s gender, class or age) or exclusion from certain housing ads, Facebook’s ranking and categorizing systems often impact the under-privileged and those who fail to conform to the status quo.

Health insurance

Marshall Allen, July 2018, ProPublica

Alongside social media platforms, health insurers and schools can also be mentioned as examples that share features of the SCS. Like the SCS, these Western industries and institutions track and surveil people through digital technologies, including facial recognition tools and biometric information.

We are rated, ranked and categorized using data extracted from us. As with the SCS, such ranking and rating often has “real-life” consequences, whether in the form of how much we pay for our insurance, which ads are pushed at us, or how we behave in schoolyards. The difference between the Chinese SCS and the Western tech industry is that, while the former is clear and upfront about it, the latter is much more invisible. In fact, the tech giants go out of their way to hide what they are doing.

Rating systems, whether those of the SCS or those deployed by the Western tech industry, create unwanted incentives and increase the pressure on individuals to conform to the status quo. This creates and contributes to a society that is risk-averse.

“When doctors in New York were given scores this had unexpected results. Doctors that tried to help advanced cancer patients had a higher mortality rate, which translated into a lower score. Doctors that didn’t try to help were rewarded with high scores, even though their patients died prematurely.” Tijmen Schep

Situating the SCS in history and context

The history and context crucial to the development of the current SCS are often missing from how the SCS is framed, at least within Western media.

“[social systems] must be viewed whole cloth as open dynamical systems embedded in a physical, historical, and social fabric” (Juarrero, 1999, p. 201)

As far as China’s political tradition goes, morality and authority are inextricably linked. Enforcing moral standards and monitoring and disciplining the conduct of local officials and individual citizens are seen as the role of the state. “Governing the country by virtue” equates to “governing the country by the law”. Unlike in the Western legal system, where the rights, responsibilities and entitlements of private actors and public sectors are relatively easily categorized, such categories are much more blurred within the Chinese legal system. Individual citizens, government officials, communities and businesses are all expected to contribute to overall social and economic harmony and development.

“Chinese political tradition has, for centuries, conceived of society as an organic whole, where harmony can be achieved if all its members conduct themselves as appropriate to their position in public and civil structures. … Critical in this process were ideas about systems theory, derived from natural science and applied in the social context. Influenced by Western scholarship on cybernetics and systems theory, scholars such as Qian Xuesen and Song Jian worked closely with government to develop a conceptual framework for the adoption of systems engineering techniques in governance. Particular regard was given to the role of information flows, not just towards and within government, but also as part of cybernetic feedback loops to create self-correcting responses in society.” (Creemers, 2018, p. 7)

Historically, the Chinese government has experimented with various forms of social control, and self-policing mechanisms for maintaining social order go all the way back to the Song Dynasty.

“An 11th-century emperor instituted a grid system where groups of five to 25 households kept tabs on each other and were empowered to arrest delinquents” (Mistreanu, 2018). The current SCS, then, is an extension of such historical traditions; the difference now is the addition of digital technologies.

From the Chinese authorities’ perspective, the SCS epitomizes a self-correcting feedback loop in which “trustworthiness” and social morality are fostered through incentives and punishments.
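The self-correcting feedback loop can be made concrete with a toy simulation. This is entirely my own sketch with hypothetical parameters, not a model of the actual SCS: if every round of scoring nudges each agent some fraction of the way toward the rewarded norm, initially diverse behaviours converge, which is precisely the risk-averse conformity discussed above.

```python
# Toy model of score-driven conformity. Each agent holds a "behaviour"
# value in [0, 1]; every round, incentives and punishments pull every
# agent a fixed fraction of the way toward the rewarded norm.
# All numbers here are hypothetical, chosen only for illustration.
def simulate_conformity(behaviours, norm=0.5, rate=0.1, rounds=50):
    """Each round, every agent moves `rate` of the way toward `norm`."""
    for _ in range(rounds):
        behaviours = [b + rate * (norm - b) for b in behaviours]
    return behaviours

start = [0.0, 0.3, 0.9]            # initially diverse behaviours
end = simulate_conformity(start)

spread_before = max(start) - min(start)
spread_after = max(end) - min(end)
# Behavioural diversity collapses toward the rewarded norm.
```

The design choice worth noting is that nothing central needs to observe everyone at once: each agent correcting itself against the published norm is enough for the whole population to converge, which is what makes such cybernetic feedback attractive to a state and corrosive to diversity of behaviour.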

This is by no means to argue that the SCS is any less of a digital panopticon. However, by highlighting the history and context often missing from the SCS narrative, we can paint a more complex and nuanced image of the system (as opposed to the often alarming pieces stripped of context and history). Furthermore, while we are preoccupied by stories of how China is becoming one giant surveillance prison, we miss the indirect and evasive practices happening within our own “civilized” Western systems.

 

Bibliography

Creemers, R. (2018). China’s Social Credit System: An Evolving Practice of Control.
Epstein, J. M. (2008). Why Model? Journal of Artificial Societies and Social Simulation, 11(4), 12.
Geertz, C. (1973). The Interpretation of Cultures. New York: Basic Books.
Juarrero, A. (1999). Dynamics in Action: Intentional Behavior as a Complex System. Cambridge, MA: MIT Press.
Mistreanu, S. (2018). Life Inside China’s Social Credit Laboratory. Foreign Policy.

 

 

The in(human) gaze and robotic carers

Google “robot carers” and you’ll find extremely hyped-up articles and think pieces on either how robot carers are just another way of dying even more miserably or how robot carers are saving the elderly from lives of loneliness, with not much in between. Not much nuance. Neither is right, of course. Robot carers shouldn’t be dismissed out of hand as the end of human connection, and neither should they be overhyped as flawless substitutes for human care.

I think they can be useful, practical and even preferable to a human carer in some cases, while in other respects they cannot (and most likely never will) substitute for human care and connection. The human gaze is my reason for thinking so.

But first let me say a little about the Inhuman Gaze conference, which provoked me to think about robot caregivers. The conference took place last week (6th–9th June) in Paris. It was a diverse and multidisciplinary conference that brought together philosophers, neuroscientists and psychiatrists (scholars and practitioners alike) around the common theme of the inhuman gaze. Over the four days, speakers presented philosophical arguments, empirical studies and clinical case studies on, each from their own perspective, what the human/inhuman gaze is and its implications for the sense of self. I, myself, presented my argument for why the other’s gaze (human or otherwise) is a crucial constituent of the “self”, taking solitary confinement as an example. In solitary confinement (complete isolation or significantly reduced intersubjective contact), prisoners suffer negative physical and psychological effects, including confusion, hallucination and a gradual loss of the sense of self. The longer (and more intense) the solitary confinement, the more pronounced these negative effects.

The reason for the gradual loss of self in the absence of contact with others, Bakhtin would insist, is that the self depends on others for its existence. The self is never a self-contained and self-sustaining entity. It simply cannot exist outside the web of relations with others. Self-narrative requires not only having something to narrate but also having someone to narrate it to. To be able to conceptualize my self as a meaningful whole, which is fundamental to self-individuation and self-understanding, I need an additional, external perspective: an other. The coherent self is put under threat in solitary confinement because it is deprived of the “other” that is imperative for its existence. The gaze of another, even when uncaring, is an affirmation of my existence.

So, what is an inhuman gaze? A gaze from non-human objects, like the gaze of a wall in solitary confinement? The gaze of a CCTV camera (although there often is a human at the other end of one)? Or a gaze from a human that is objectifying and dehumanizing, such as the gaze of a physician performing illegal organ harvesting, who treats the body she is operating on like an inanimate object? For now, let’s assume an inhuman gaze is the gaze of non-human objects, because the distinctiveness of the human gaze (sympathizing, caring, objectifying or humanizing) is important to the point I am trying to make. The human gaze, unlike the inhuman gaze, is crucial to self-affirmation.


From Channel 4’s sci-fi robot series Humans

Robot caregivers and the human gaze…

Neither the extreme alarmist nor the uncritical enthusiast helps elucidate the pitfalls and potential benefits of robot caregiving. Whether robotic caregiving is a revelation or a disaster depends on the type of care one needs. Roughly speaking, we can divide the care that robots can provide into two general categories. The first is physical or mechanical care – for example, fetching medicine or changing elderly patients into incontinence wear. The second is companionship (for elderly people or children), where the aim might be to provide emotional support.

Now, robotic care might be well suited to the physical or mechanical type of care. In fact, some people might prefer a robot for such physical tasks as incontinence care, or any similar task that they are no longer able to perform themselves. Such care, when provided by a human, can be embarrassing and humiliating. The human gaze is capable not only of deep understanding and sympathy but also of humiliation and intimidation. The robotic gaze, on the other hand, having no intrinsic values, is not judgemental. So, in the case of physical and mechanical care, the absence of the human gaze does not necessarily have a significant negative effect. In fact, it might be desirable when we are in a vulnerable position where we feel we might be humiliated.

On the contrary, if companionship and emotional support are the types of care we are looking for, the value- and judgement-free robotic gaze simply will not do. We are profoundly social, dynamic and embodied beings who continually strive to attribute meaning and value to the world. If we are to ascribe an ‘essence of the human condition’, it is that our being in the world is thoroughly interdependent with the existence of others and with context, where we continually move and negotiate between different positions. True companionship and emotional connection require intrinsic recognition of emotions, suffering, happiness, and the like. A proper emotional and ethical relation to the other (and the acceptance of genuine responsibility) requires the presence of a loving and value-positing consciousness, not a value-free, objectifying gaze.

True human companionship and emotional support cannot be programmed into a robot, no matter how advanced our technologies become, for companionship and emotional connection require sense-making and a value-positing consciousness. Sense-making is an active, dynamic and partly open undertaking – and therefore a never-ending process – not a matter of producing and perceiving mappings of reality that can then be codified into software. The human gaze affords mutual understanding of what being a human is like. Recognition of emotions, suffering, and the like requires recognition of otherness based on mutual understanding. The human gaze recognizes an ‘other’ human gaze. As Hans Jonas put it succinctly in ‘The Phenomenon of Life’, “only life can know life … only humans can know happiness and unhappiness.”

Resources – on automated systems and bias

Last updated: 21/12/2018

If you are a data scientist, a software developer, or a researcher in the social and human sciences with an interest in digital humanities, then you’re no stranger to the ongoing discussions of how algorithms embed and perpetuate human biases. Ethical considerations and critical engagement are urgently needed.

I have been keenly following these discussions for a while, and this post is an attempt to put the articles, books, book reviews, videos, interviews, Twitter threads and so on that I’ve come across in one place, so they can be used as resources.

This list is by no means exhaustive, and as we become more and more aware of the catastrophic consequences of these technologies, more pieces, articles and journal papers are being written on a daily basis. I plan to update this page regularly. Also, if you think there is relevant material that I have not included, please leave it as a comment and I will add it.

Books

Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil. A great number of the articles on the list below are written by O’Neil. She is also active on Twitter, regularly posting links and interesting critical insights on everything to do with mathematical models and bias. Here is my own review of O’Neil’s book, itself with plenty of relevant links, and here is another excellent review of O’Neil’s book.

The Age of Surveillance Capitalism by Shoshana Zuboff examines the challenges to humanity posed by the digital future. It is the first detailed examination of the unprecedented form of power called “surveillance capitalism,” and of the quest by powerful corporations to predict and control our behavior.

Shoshana Zuboff’s interdisciplinary breadth and depth enable her to come to grips with the social, political, business, and technological meaning of the changes taking place in our time. We are at a critical juncture in the confrontation between the vast power of giant high-tech companies and government, the hidden economic logic of surveillance capitalism, and the propaganda of machine supremacy that threaten to shape and control human life. Will the brazen new methods of social engineering and behavior modification threaten individual autonomy and democratic rights and introduce extreme new forms of social inequality? Or will the promise of the digital age be one of individual empowerment and democratization?

The Age of Surveillance Capitalism is neither a hand-wringing narrative of danger and decline nor a digital fairy tale. Rather, it offers a deeply reasoned and evocative examination of the contests over the next chapter of capitalism that will decide the meaning of information civilization in the twenty-first century. The stark issue at hand is whether we will be the masters of information and machines or its slaves.

We Are Data: Algorithms and the Making of Our Digital Selves (2018) by John Cheney-Lippold. Below are the first few paragraphs from a review by Daniel Zwi, a lawyer with an interest in human rights and technology. Here is also a link to my Twitter thread, where you can read excerpts I tweeted as I read the book.

In 2013, a 41-year-old man named Mark Hemmings dialled 999 from his home in Stoke-on-Trent. He pleaded with the operator for an ambulance, telling them that ‘my stomach is in agony’, that ‘I’ve got lumps in my stomach’, that he was vomiting and sweating and felt light-headed. The operator asked a series of questions — ‘have you any diarrhoea or vomiting?’; ‘have you passed a bowel motion that looks black or tarry or red or maroon?’ — before informing him that he did not require an ambulance. Two days later Mr Hemmings was found unconscious on the floor of his flat. He died of gallstones shortly after reaching hospital.

This episode serves as the affective fulcrum of We Are Data: Algorithms and the Making of Our Digital Selves, John Cheney-Lippold’s inquiry into the manner in which algorithms interpret and influence our behaviour. It represents the moment at which the gravity of algorithmic regulation is brought home to the reader. And while it may seem odd to anchor a book about online power dynamics in a home telephone call (that most quaint of communication technologies), the exchange betokens the algorithmic relation par excellence. Mr Hemmings’s answers were used as data inputs, fed into a sausage machine of opaque logical steps (namely, the triaging rules that the operator was bound to apply), on the basis of which he was categorised as undeserving of immediate assistance.

The dispassionate, automated classification of individuals into categories is ubiquitous online. We either divulge our information voluntarily — when we fill out our age and gender on Facebook, for example — or it is hoovered up surreptitiously via cookies (small text files which sit on our computer and transmit information about our browsing activity to advertising networks). Our media preferences, purchases and interlocutors are noted down and used as inputs according to which we are ‘profiled’ — sorted into what Cheney-Lippold calls ‘measureable types’ such as ‘gay conservative’ or ‘white hippy’ — and served with targeted advertisements accordingly.

Algorithms of Oppression: How Search Engines Reinforce Racism by Safiya Umoja Noble – below is an excerpt from Noble’s book:

Run a Google search for “black girls”—what will you find? “Big Booty” and other sexually explicit terms are likely to come up as top search terms. But, if you type in “white girls,” the results are radically different. The suggested porn sites and un-moderated discussions about “why black women are so sassy” or “why black women are so angry” present a disturbing portrait of black womanhood in modern society.
In Algorithms of Oppression, Safiya Umoja Noble challenges the idea that search engines like Google offer an equal playing field for all forms of ideas, identities, and activities. Data discrimination is a real social problem; Noble argues that the combination of private interests in promoting certain sites, along with the monopoly status of a relatively small number of Internet search engines, leads to a biased set of search algorithms that privilege whiteness and discriminate against people of color, specifically women of color.


Algorithms to Live By: The Computer Science of Human Decisions by Brian Christian and Tom Griffiths. This book is concerned with the workings of the human mind and how computer science can help human decision making. Here is a post by Artem Kaznatcheev on Computational Kindness, which might give you a glimpse of some of the issues the book covers. Here is a long interview with Brian Christian and Tom Griffiths, and a TED Talk by Tom Griffiths on The Computer Science of Human Decision Making.

The Black Box Society: The Secret Algorithms That Control Money and Information by Frank Pasquale. You can read the introduction and conclusion chapters of his book here, and here is a good review of Pasquale’s book. You can follow his Twitter stream here.

Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech by Sara Wachter-Boettcher

Here is a synopsis: A revealing look at how tech industry bias and blind spots get baked into digital products—and harm us all.

Buying groceries, tracking our health, finding a date: whatever we want to do, odds are that we can now do it online. But few of us ask why all these digital products are designed the way they are. It’s time we change that. Many of the services we rely on are full of oversights, biases, and downright ethical nightmares: Chatbots that harass women. Signup forms that fail anyone who’s not straight. Social media sites that send peppy messages about dead relatives. Algorithms that put more black people behind bars.

Sara Wachter-Boettcher takes an unflinching look at the values, processes, and assumptions that lead to these and other problems. Technically Wrong demystifies the tech industry, leaving those of us on the other side of the screen better prepared to make informed choices about the services we use—and demand more from the companies behind them.

Paula Boddington, Oxford academic and author of Towards a Code of Ethics for Artificial Intelligence, recommends the five best books on Ethics for Artificial Intelligence. Here is the full interview with Nigel Warburton, published on December 1, 2017.

Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor by Virginia Eubanks will be released on January 23, 2018. Here is an excerpt from Danah Boyd’s blog:

“Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor” is a deeply researched accounting of how algorithmic tools are integrated into services for welfare, homelessness, and child protection. Eubanks goes deep with the people and families who are targets of these systems, telling their stories and experiences in rich detail. Further, drawing on interviews with social services clients and service providers alongside the information provided by technology vendors and government officials, Eubanks offers a clear portrait of just how algorithmic systems actually play out on the ground, despite all of the hope that goes into their implementation. Additionally, Berkman Klein discusses “Algorithms and their unintended consequences for the poor” with Eubanks here.

The Big Data Agenda: Data Ethics and Critical Data Studies by Annika Richterich. A PDF is available through the link here.

“This book highlights that the capacity for gathering, analysing, and utilising vast amounts of digital (user) data raises significant ethical issues. Annika Richterich provides a systematic contemporary overview of the field of critical data studies that reflects on practices of digital data collection and analysis. The book assesses in detail one big data research area: biomedical studies, focused on epidemiological surveillance. Specific case studies explore how big data have been used in academic work.

The Big Data Agenda concludes that the use of big data in research urgently needs to be considered from the vantage point of ethics and social justice. Drawing upon discourse ethics and critical data studies, Richterich argues that entanglements between big data research and technology/ internet corporations have emerged. In consequence, more opportunities for discussing and negotiating emerging research practices and their implications for societal values are needed.”

Re-Engineering Humanity by Brett Frischmann and Evan Selinger

Every day, new warnings emerge about artificial intelligence rebelling against us. All the while, a more immediate dilemma flies under the radar. Have forces been unleashed that are thrusting humanity down an ill-advised path, one that’s increasingly making us behave like simple machines? In this wide-reaching, interdisciplinary book, Brett Frischmann and Evan Selinger examine what’s happening to our lives as society embraces big data, predictive analytics, and smart environments.

Outnumbered: From Facebook and Google to Fake News and Filter-bubbles – The Algorithms That Control Our Lives (featuring Cambridge Analytica) by David Sumpter.

A review from the Financial Times, here.

TED Talks, podcasts, and interviews 

The era of blind faith in big data must end, TED Talk by Cathy O’Neil, April 2017

Machine intelligence makes human morals more important November 11, 2017. In this TED Talk, Zeynep Tufekci emphasizes the importance of human values and ethics in the age of machine intelligence and algorithmic decision making.

We’re building an artificial intelligence-powered dystopia, one click at a time, another thought-provoking TED Talk from techno-sociologist Zeynep Tufekci.

How I’m fighting bias in algorithms, TED Talk by MIT researcher Joy Buolamwini, November 2016

AI, Ain’t I A Woman? Joy Buolamwini

Data is the new gold, who are the new thieves? TED Talk – Tijmen Schep 2016

O’Neil’s interview with the Politics Weekly podcast (starts 30 mins in), July 5, 2017. O’Neil calls for public awareness of how algorithms are used, often without our knowledge, for example in job interviews, and explains why we should question and interrogate these algorithms, which are often presented to us as authoritative.

A short interview with Frank Pasquale on his book The Black Box Society, May 12, 2016. Pasquale emphasizes the opaqueness of algorithms and explains why we should demand transparency.

A two-minute video, a prototype example, of algorithms being used in recruitment – a working example of the kind of dangerous AI used for recruiting that experts such as O’Neil constantly warn against. This post provides a critical analysis of why such endeavors are futile and dangerous. Here’s another related video on how facial recognition technology will go mainstream in 2018. In fact, such technology has already gone mainstream in China; here is a short video in which a BBC reporter experimented with the world’s largest surveillance system.

Tom Chatfield on Critical Thinking, October 2, 2017. In this philosophically themed podcast, Chatfield discusses issues such as “how new digital realities interact with old human biases” with Dave Edmonds.

When algorithms discriminate: Robotics, AI and ethics November 18, 2017. Stephen Roberts, professor of computer science at the University of Oxford, discusses the threats and promises of artificial intelligence and machine learning with Al Jazeera.

Here is a series of talks, from the ABC Boyer Lectures, hosted by Professor Genevieve Bell. The series is called Fast, Smart and Connected: What is it to be Human, and Australian, in a Digital World? The issues discussed include “How to build our digital future.”

You and AI – Just An Engineer: The Politics of AI (July, 2018). Kate Crawford, Distinguished Research Professor at New York University, a Principal Researcher at Microsoft Research New York, and the co-founder and co-director the AI Now Institute, discusses the biases built into machine learning, and what that means for the social implications of AI.

Facebook: Last Week Tonight with John Oliver (HBO) an extremely funny and super critical look at Facebook.

Humans are biased, and our machines are learning from us — ergo our artificial intelligence and computer programming algorithms are biased too. Joanna Bryson explains how AI bias is learned by taking a closer look at how human bias is learned.

Websites

Social Cooling is a term that refers to the gradual, long-term negative side effects of living in a digital society where our digital activities are tracked and recorded. Awareness of potentially being scored by algorithms leads to gradual behaviour change: self-censorship and self-surveillance. Here is a piece on what looks like social cooling in action. The website itself has plenty of resources that can aid critical thinking, and it touches on big philosophical, economic and societal questions in relation to data and privacy.


www.socialcooling.com

For those interested in critical thinking, data and models Calling Bullshit offers various resources and tools for spotting and calling bullshit. This website, developed for a course entitled ‘Calling Bullshit’, is a great place to explore and learn about all things “data reasoning for the digital age”.

Another important website worth a mention here is the Algorithmic Justice League, where you can report algorithmic bias, participate in testing software for inclusive training sets, or simply donate and contribute to raising awareness about existing bias in coded systems. More on AI face misclassification and accountability by Joy Buolamwini here. With a somewhat similar aim is the Data Harm Record website – a running record of harms that have been caused by uses of big data.

fast.ai is a project that aims to increase diversity in the field of deep learning and to make deep learning accessible and inclusive to all. Critical Algorithm Studies: a Reading List is a great website with links to plenty of critical literature on algorithms as social concerns. Here is the Social Media Collective Reading List, where you’ll find further material on Digital Divide/Digital Inclusion and Metaphors of Data.

The AI Now Institute at New York University is an interdisciplinary research center dedicated to understanding the social implications of artificial intelligence. Data & Society is a research institute focused on the social and cultural issues arising from data-centric technological developments.  FAT/ML is a website on Fairness, Accountability, and Transparency in Machine Learning with plenty of resources and events, run by a community of researchers. Litigating Algorithms: Challenging Government Use of Algorithmic Decision Systems. An AI Now Institute Report.

ConceptNet Numberbatch 17.04: better, less-stereotyped word vectors. This is not a website but a blog post; I am putting it here with the other websites as the author offers some solutions for reducing bias when building algorithms for natural language understanding, beyond simply stating that such algorithms are biased.

Auditing Algorithms – a useful website for those teaching or interested in accountability in automated systems. The site includes film festivals, videos, etc.

The Ethics and Governance of Artificial Intelligence – a cross-disciplinary course that investigates the implications of emerging technologies, with an emphasis on the development and deployment of Artificial Intelligence. Here’s an Introduction to Data Ethics by Markkula Center for Applied Ethics.

Google launches a new course to teach people about fairness in machine learning.

Biology/genetics – (Digital phrenology?)

It is difficult to draw a line and put certain articles under the category of “social”, “biological”, “political”, or other as the boundaries between these categories are blurred and most of the themes are somehow all interlinked. Nonetheless, I think the following articles can loosely be described as dealing with biological/genetics/personality material. Furthermore, towards the end of this post, I have also thematized some articles under the category of “political”.

In a recent preprint paper, “Deep Neural Networks Can Detect Sexual Orientation From Faces” (here are the Guardian and the Economist reports), Yilun Wang and Michal Kosinski claimed that their deep neural network can be trained to discern individuals’ sexual orientations from their photographs. The paper has attracted, and continues to attract, massive attention and has generated numerous responses, outrage and discussion. Here is an in-depth analysis from Calling Bullshit, here a detailed technical assessment, and here a comprehensive and eloquent response from Greggor Mattson. Here is another response, another one here from a data scientist’s perspective, and a recent response from O’Neil here. If you only want to read one response, I highly recommend Mattson’s. There have also been plenty of discussions and threads on Twitter – here and here are a couple of examples. It is worth noting that Kosinski, one of the authors of the paper, is listed as an adviser for a company called Faception, an Israeli security firm that promises clients to deploy “facial personality profiling” to catch pedophiles and terrorists, among others.

Do algorithms reveal sexual orientation or just expose our stereotypes? by @blaiseaguera et al. is the latest (January 11, 2018) response to the above Wang and Kosinski “gaydar” paper. In this critical analysis, the authors argue that much of the ensuing scrutiny of Wang and Kosinski’s work has focused on ethics, implicitly assuming that the science is valid. On closer inspection, however, they find that the science doesn’t stand up to scrutiny either.

When advanced technologies in genetics and face recognition are applied with the assumption that “technology is neutral”, the consequences are often catastrophic and dangerous. These two pieces, Sci-fi crime drama with a strong black lead and Traces of Crime: How New York’s DNA Techniques Became Tainted, provide some in-depth analysis of such consequences.

Physiognomy’s New Clothes – a comprehensive and eloquent piece, well worth your time. Physiognomy, the practice of using people’s outer appearance to infer inner character, is now discredited and discarded, like phrenology. However, this piece illustrates how such practices are alive and well in the era of big data and machine learning. Here is more on the Wu and Zhang paper that the Physiognomy’s New Clothes authors cover in the piece. Further examples of digital phrenology can be found here, here and here.

General articles on various automated systems and bias, discrimination, unfairness, ethical concerns, etc., listed in order of publication date, starting from the latest.

Frank Pasquale testifies (video, written testimony) before the United States House of Representatives Committee on Energy and Commerce, Subcommittee on Digital Commerce and Consumer Protection, in relation to “Algorithms: How Companies’ Decisions About Data and Content Impact Consumers”. Here is more written testimony on Algorithmic Transparency from the Electronic Privacy Information Center – November 29, 2017.
Image Courtesy of ProPublica

There’s software used across the country to predict future criminals. And it’s biased against blacks. May 23, 2016. The company that sells this program, Northpointe, has responded to the criticisms here, asserting that its software, which predicts the likelihood that a person will commit future crimes, is equally fair to black and white defendants. Following that response, Jeff Larson and Julia Angwin wrote another response (Technical Response to Northpointe) re-examining the data. They argue that they have considered the company’s criticisms and stand by their conclusions.

Politics

Algorithmic processes and politics might seem far removed from each other. However, if anything, the recent political climate is indicative of how algorithms can serve as computational tools for political agendas. Here and here are exemplar Twitter threads that highlight particular Twitter accounts used as tools for political agendas. The articles below are, in some way or another, related to algorithms in the political arena.

Forum Q&A: Philip Howard on Computational Propaganda’s Challenge to Democracy July 25, 2017. “Computational propaganda, or the use of algorithms and automated social media accounts to influence politics and the flow of information, is an emerging challenge to democracy in the digital age. Using automated social media accounts called bots (or, when networked, botnets), a wide array of actors including authoritarian governments and terrorist organizations are able to manipulate public opinion by amplifying or repressing different forms of political content, disinformation, and hate speech.”

For a more scholarly read

Afrofeminist epistemology and dialogism: a synthesis (work in progress)

Embodied, enactive and dialogical approaches to cognitive science radically depart from traditional Western thought in the manner in which they deal with life, mind and the person. The former can be characterised as emphasising interdependence, relationships, and connectedness, with attempts to understand organisms in their milieu; acknowledgements of the complexities and ambiguities of reality form the starting points for epistemological claims. The latter, on the other hand, tends to strive for certainty and logical coherence in an attempt to establish stable and relatively fixed epistemological generalisations; individuals, often perceived as independent, discrete entities, are taken as the primary subjects of knowledge and the units of analysis.

Collins’s proposed black feminist epistemology, hereafter “Afrofeminist epistemology”, opposes the traditional Western approach to epistemology as well as the largely Positivist scientific view inherited from it.  As such, it is worth drawing attention to the similarities between Black feminist thought and dialogical approaches to the cognitive sciences. In what follows I seek to reveal a striking convergence of themes between these two schools of thought. In so doing, I intend to illustrate that the two traditions – cognitive sciences, especially the dialogical approach to epistemology, and Afrofeminist epistemology, particularly, the type proposed by Patricia Hill Collins (2002) – can inform one another through dialogue.

General characterization of classic Western approach to epistemology and the Cartesian inheritance

The classic Western approach to epistemology tends to be monological, meaning that it focuses on individuals and their cognition and behaviour. When relationships and interactions enter the equation, individuals and their relations are often portrayed as distinct entities that can be neatly separated. Dichotomous thinking – subject versus object, emotion versus reason – persists within this tradition. Ethical and moral values and questions are often treated as clearly separable from “objective scientific work”, as something with which the scientist need not contaminate her “objective” work. In its desire for absolute rationality, Western thought wishes to cleave thought from emotion, cultural influence and ethical dimensions. Cognition, evaluation and emotion are treated as entities that must not contaminate one another. Abstract and intellectual thinking are regarded as the most trustworthy forms of understanding, and rationality is fetishized.

In the classic Western epistemological tradition, abstract reasoning is taken to be the highest cognitive goal, and certainty as a necessary component of knowledge. Since the ultimate goal is to arrive at timeless, universally applicable laws, establishing certainty is pivotal for laying the foundations. Although there are historical antecedents leading up to and contributing towards what is generally regarded as the Western tradition – in particular, Plato in his dialogues Meno and Phaedo – Descartes represents the pinnacle of Western thought (Gardiner 1998, Toulmin 1992). The subject as an autonomous and self-sustaining entity – the Cartesian cogito – remains prevalent in most current Western philosophy, as well as in the background assumptions of the human sciences. The way the individual self is taken as the unquestioned origin of knowledge of the world and others is a legacy of this tradition (Linell 2009).

Black feminist criticism of dominant approach and the proposed alternative

Contrary to the classic Western epistemological tradition, in Afrofeminist epistemology ethical and moral values and questions are inseparable from our enquiries into knowledge. Similarly, knowledge claims and knowledge validation processes are not independent of the interests and values of those who define what knowledge is, what is important and worthy of study, and what the criteria for epistemological justification are (Collins 2002). Such definitions and criteria are guarded fiercely by the institutions and individuals who act as the ‘gatekeepers’ of the classic Western epistemological tradition. This traditional Western epistemology, Collins points out, predominantly represents Western, elite, white, male interests and values. In fact, a brief review of the history of the Western philosophical canon reveals that knowledge production processes and the criteria for knowledge claims have predominantly been set by elite, white, Western men.

Scholars like Karen Warren (2009) have cogently argued that the history of classical Western philosophy has, for centuries, almost exclusively consisted of elite, white, Western European men giving the illusion that Western white men are the epitome of intellectual achievement. Women’s voices and perspectives were diminished, ignored, and systematically excluded from the canon. In her ‘recovery project’, Warren finds that women philosophers nonetheless have made important contributions throughout the history of philosophy and that you find them when you go looking for them. This, to a great extent, remains the case not only in philosophy, but also in much of the rest of the academic tradition. A brief look at any philosophy curricula would reveal that white European male philosophers and their views remain dominant and definitive.

When traditional approaches are taken as the “normal” and “acceptable” ways to theorise and generalise about people’s lived experiences, any approaches to theorising about groups of people that are not aligned with canonical intellectual currents (often white, European and male) are dismissed as “anomalies”. For Collins, it is indisputable that different people experience reality differently and that all social thought somewhat reflects the realities and interests of its creators. Political criteria influence knowledge production and validation processes in one way or another. Collins asserts that, in studying Black women’s realities, the typical perspectives on offer have either identified Black women with the oppressor, in which case Black women lack an independent interpretation of their own realities, or have characterised Black women as less human than the oppressor, in which case Black women lack the capacity to articulate their own standpoint. While in the first perspective independent Black women’s realities are seen as not their own, in the latter they are seen as inferior. For that reason, traditional epistemology is inadequate to capture and account for the lived experiences of Black women – hence Collins’s proposal for an Afrocentric feminist epistemology grounded in Black women’s values and lived experiences.

Black women’s lived experiences differ in important ways. The kinds of relationships Black women have, and the kinds of work they engage in, are notable examples of these differing realities and lived experiences. Intuitive knowledge, what Collins calls wisdom, is crucial to the everyday lives and survival of Black women. While wisdom and intuition, as opposed to abstract intellectualising, might be excluded as irrelevant, or at best treated as less credible, by traditional epistemologies, they are highly valued within Black communities:

“The distinction between knowledge and wisdom, and the use of experience as the cutting edge dividing them, has been key to Black women’s survival. … knowledge without wisdom is adequate for the powerful, but wisdom is essential to the survival of the subordinate.” (Collins 1989, p. 759)

The desire for complete objectivity and universally generalisable theories in the dominant Western tradition has led to a focus on abstract analysis of the nature of concepts like ‘knowledge’ and ‘justification’, with little to no grounding in complex lived experience. Its portrayal of reason and rationality as directly opposed to emotion – with the former as the route to pure, objective knowledge – has produced dichotomous thinking, blinding us to continuities and complementarities. Consequently, “reason” has been privileged over emotion. This in turn has relegated emotional and bodily knowledge – what Foucault (1980) calls ‘subjugated knowledge’, often expressed through music, drama, and the like – to lesser importance. However, ‘subjugated knowledge’ is crucial and is part of a way of life and survival for Black communities. Such knowledge, grounded in concrete experiences and recognised through connectedness, dialogue, and relationships, is what is of real value for Black women.

That knowledge claims should be grounded in concrete, lived experience rather than abstract intellectualising is crucial to Collins’s Afrocentric feminist epistemology. It prioritises wisdom over knowledge and has, at its core, Black women’s experiences of race and gender oppression. Black women have a shared experience of oppression, imperialism, colonialism, slavery, and apartheid, as well as roots in core African value systems that predate colonisation. The roots of Afrocentric epistemology can be traced back to African-based oral traditions. As such, dialogue occupies an important place: so far as Afrocentric epistemology is concerned, dialogue is an essential method for assessing knowledge claims.

This Afrocentric epistemology, grounded in the lived experience of Black women and employing dialogue as a way of validating knowledge claims, stands in stark contrast to Eurocentric epistemology. Connectedness rather than separation is an essential component of its knowledge validation process. Individuals are not detached observers of stories or folktales, but active participants – listeners, speakers, and part of the story. Dialogue explores and captures the fundamentally interactive, connected nature of people and relationships.

Ethical claims lie at the heart of an Afrocentric feminist epistemology, in contrast to classical Western epistemology, which treats ethical issues as separate from and independent of ‘objective scientific investigation’. Afrocentric feminist epistemology employs emotions, wisdom, ethics, and reason as interconnected and equally essential components in assessing knowledge claims with reference to a particular set of historical conditions.

The dialogical critique of the dominant approach, and its alternative

The dialogical approach to cognitive science – inspired by Mikhail Mikhailovich Bakhtin’s (1895–1975) thinking and further developed by dialogists such as Per Linell (2009) – objects to the dominant Western epistemological approach. Dialogical theories, which have roots in the Bakhtin Circle, a twentieth-century school of Russian thought, have had a massive influence on social theory, philosophy, and psychology. At the centre of dialogical theories lies the view that linguistic production, the notion of selfhood, and knowledge are essentially dialogic. Dialogical approaches are concerned with conceptualising and theorising human sense-making, and they do so on a set of assumptions, some of which stand in stark opposition to traditional Western philosophy and science. These assumptions include: individual selves cannot be assumed to exist as agents and thinkers before they begin to interact with others and the world; and our sense-making is not separable from our historical antecedents or from current cultural and societal norms and value systems. The interrelation between self, others, and the environment is there from the start of the infant’s life, and awareness of self and of others co-develop over time; they are two sides of the same process. Classical Western philosophy and science, by contrast, has tried to reduce the world to rational individual subjects in an attempt to establish stable universals: the origin of knowledge of the world and of others is the discrete individual person. So far as dialogical approaches go, most traditional Western epistemological approaches are rooted in Cartesian individualism and are monological – meaning that they encompass only individuals, their cognition, and their environments. Groups and societies are nothing but ensembles of individuals:

“Individuals alone think, speak, carry responsibilities, and other individuals at most have a causal impact on their activities and stances.” (Linell 2009, p. 44)

Dialogism[1], in contrast, insists that interdependencies, co-dependencies, and relationships between the individual and the world are the most fundamental components in understanding the nature of selves and, furthermore, of knowledge. The term intersubjectivity captures this concept well:

“The term “intersubjectivity”—or what Hannah Arendt calls “the subjective in-between”—shifts our emphasis away from notions of the person, the self, or the subject as having a stable character and abiding essence, and invites us to explore the subtle negotiations and alterations of subjective experience as we interact with one another, intervocally or dialogically (in conversation or confrontation), intercorporeally (in dancing, moving, fighting, or competing), and introceptively (in getting what we call a sense of the other’s intentions, frame of mind, or worldview).”  (Jackson 2002, p. 5)

So far as Western epistemologies go, cultures and societies are typically conceived as objective, stable structures. Dialogism, by contrast, conceives of cultures and societies as dynamic, living, and partly open, with tensions, internal struggles, and conflicts between majorities and minorities and between different value systems. “Knowledge is necessarily constructed and continually negotiated (a) in situ and in sociocultural traditions, and (b) in dialogue with others; individuals are never completely autonomous as sense-makers.” (Linell 2009, p. 46) The individual is not a separate, discrete, fixed, and stable entity standing independent of others, but one that is always in dynamic interaction with, and interdependent on, others. Knowledge claims and knowledge validation processes therefore need to reflect these continual tensions and dynamic interactions.

Concluding remarks: similarities between dialogical approaches and Afrofeminist epistemology

So what are the implications, if any, of drawing out these commonalities between Afrofeminist epistemology and dialogical approaches to epistemology, and their common rejection of traditional Western epistemology? Collins has described the Afrofeminist and Western epistemological grounds as competing and at times irreconcilable:

“Those Black feminists who develop knowledge claims that both epistemologies can accommodate may have found a route to the elusive goal of generating so called objective generalizations that can stand as universal truths.” (Collins 1989, p. 773)

The synthesis of dialogism with Afrofeminist epistemology is not, in a sense, the discovery of that elusive route to “objective generalizations” or “universal truths” that satisfy both epistemologies. Rather, such a synthesis, I argue, is a means towards epistemological approaches that aspire to embed Afrofeminist values and dialogical underpinnings in our understanding of personhood and knowledge. Such approaches acknowledge that knowledge claims, knowledge validation processes, and scientific endeavours in general are value-laden and cannot be considered independently of underlying values and interests. It is a move towards epistemological approaches that acknowledge the role of the scientist/theorist, which Barad (2007) captures concisely:

“A performative understanding of scientific practices, for example, takes account of the fact that knowing does not come from standing at a distance and representing but rather from a direct material engagement with the world.”   (Barad 2007, p. 49)

Connectedness and relationships, rather than disinterested, disembodied, and detached Cartesian individuals, form the central component of analysis. Great emphasis is placed on extensive dialogue rather than on being a detached observer of stories. Accordingly, individual expressiveness, emotions, the capacity for empathy, and the fact that ideas cannot be divorced from those who create and share them are key factors for this epistemology. Such is an epistemological approach that aspires to embed Afrofeminist values and dialogical underpinnings.

Knowledge is specific to time and place and is rooted not in the individual person but in relationships between people. Individuals exist in a web of relations, interdependent with one another, negotiating meanings and values through dialogue. As Bakhtin, the pioneer of dialogism, emphasised, we are essentially dialogical beings, and it is only through dialogue with others that we come to realise and sustain a coherent – albeit continually changing – sense of self. Reality is messy, ambiguous, and complex. Any epistemological approach that takes the person as a fully autonomous, fixed, and self-sufficient agent whose actions are guided by pure rationality fails to recognise the complexities and ambiguities of reality and the time- and context-bound nature of knowledge. At the core of this proposed Afrofeminist/dialogical approach to epistemology is an attempt to treat values as important constituents of the dialogical, intersubjective, embodied person in constant flux, and of the epistemologies that derive from it.

[1] It is important to note that individuals do not disappear in dialogism; rather, the individual is a social being who is interdependent with others, “not an autonomous subject or a Cartesian cogito.” (Linell 2009)

Bibliography

Barad, K. (2007). Meeting the universe halfway: Quantum physics and the entanglement of matter and meaning. Duke University Press.

Collins, P. H. (1989). The social construction of black feminist thought. Signs: Journal of Women in Culture and Society, 14(4), 745–773.

Collins, P. H. (2002). Black feminist thought: Knowledge, consciousness, and the politics of empowerment. Routledge.

Foucault, M. (1980). Language, counter-memory, practice: Selected essays and interviews. Cornell University Press.

Gardiner, M. (1998). The incomparable monster of solipsism: Bakhtin and Merleau-Ponty. In Bakhtin and the human sciences (pp. 128–144). Sage.

Jackson, M. (2012). Lifeworlds: Essays in existential anthropology. University of Chicago Press.

Linell, P. (2009). Rethinking language, mind, and world dialogically. IAP.

Toulmin, S. E., & Toulmin, S. (1992). Cosmopolis: The hidden agenda of modernity. University of Chicago Press.

Warren, K. (Ed.). (2009). An unconventional history of Western philosophy: conversations between men and women philosophers. Rowman & Littlefield.


The Scope of Existential Anthropology – Jackson

A beautifully written passage that compels you to reflect, wonder, and think …

Like other human sciences, anthropology has drawn inspiration from many disciplines and sought to build its identity through association with them. But the positivism that anthropology hoped to derive from the natural sciences proved to be as elusive as the authenticity it sought from the humanities. Moreover, though lip service was paid to the models and methods of biology, ecology, psychology, fluid mechanics, structural linguistics, topology, quantum mechanics, mathematics, economics, and general systems theory, anthropologists seldom deployed these analytically or systematically. Rather, they were adopted as images and metaphors. Thus, society was said to function like a living organism, regulate energy like a machine, to be structured like language, organized like a corporation, comparable to a person, or open to interpretation like a text.

Jackson, M. (2013). Lifeworlds: Essays in Existential Anthropology. (Chapter 1, “The Scope of Existential Anthropology”, p. 3)