The fruit salad maker: an interdisciplinary tale

Multidisciplinary = the fruit bowl (single disciplines brought together)
Interdisciplinary = the fruit salad (disciplines combined for one output)
Transdisciplinary = the smoothie (disciplines transformed into something new)
EU EnRRICH project

A young fruit enthusiast wanted to make a fruit salad. Seeing that so many different fruit suppliers brought all sorts of fruit to her fruit bar, and that many customers in return bought individual fruits, she thought she’d combine their supplies into something no single supplier produces – a fruit salad. Besides, there seemed to be a great deal of excitement about this new mixing of various fruits, and everybody seemed to want and encourage it.

Having sampled many different fruits over the years, the fruit salad maker decided it was a good use of her time and expertise to get into the fruit salad making business. She settled on mango, kiwi and pineapple as the fruits that would make her signature fruit salad. They blend very well, they are grown locally, and they complement one another. When mixed, they not only produce an excellent taste but are also very appetizing to look at. Most mango, kiwi and pineapple lovers should be able to appreciate and enjoy them, the fruit salad maker thought, and she started the process of combining her fruits.

“Not so fast,” said the fruit gatekeepers. “We first need to see that your taste for fruits, your ability to make fruit salad, and your knowledge of each fruit are sufficient before we allow you to open this fruit bar”. Well, it is legally required that a fruit bar be certified, after all. And on the positive side, this certificate would signify a much-needed validation and boost from the fruit community.

Unable to open her fruit bar without the required recognition and seal of approval, the fruit salad maker embarked on fulfilling the requirements needed to pass the necessary tests. She compiled a convincing argument for the need for fruit salads, for her knowledge of the three fruits, and most importantly for her personal skill in and passion for mixing fruits. She demonstrated how her fruits of choice go well together, why they should be made into fruit salad, and how much her customers would benefit from such a combination.

She then produced the first plate of fruit salad and put it in front of the fruit gatekeepers. “I love the idea of fruit salads. We are all stuck in our special fruit echo chambers. We should all try fruit salads and appreciate those that actually make colourful fruit salads”, said the mango gatekeeper. He then tasted a big mouthful of the fruit salad before him. “It needs more mango”, he said. “I also recommend you study the history of mango production and the fine-grained detail of the biochemistry of mango to make your fruit salad better. I am afraid I can’t let you past my gate until then”, he added.

The kiwi gatekeeper, who also confessed how much he loves fruit salads, followed and had a mouthful of the fruit salad in front of him. Like his colleague the mango gatekeeper, the kiwi gatekeeper seemed solely concerned with the kiwi part of the fruit salad – not the whole combination. “Salt would really complement the kiwis; add a pinch to bring out the flavour more. In order for me to recognize that you have used kiwi in your fruit salad, you need a lot more kiwi in your fruit salad,” he commented. “Plus, I don’t recognize the variety of kiwi that you’re using. I will give you a list of good kiwis you need to use. Until the kiwi is right, I am afraid it is my duty to not let you pass my gate. Better luck next time,” he added.

Lastly, the pineapple gatekeeper scooped a spoonful of the fruit salad and tried it. “I also love the idea of fruit salads but I have to tell you that this is not how we slice pineapples over at the pineapple empire. We also marinate them in our special sauce. Your pineapples lack both. You really need to know your pineapple inside out if you are to call yourself a fruit salad maker at all. Plus, I see very little pineapple on this plate. So, get the special sauce from our empire and cut your pineapples our way. Only then can we give you our approval,” she exclaimed.

The fruit salad maker, unestablished and with much less power than the gatekeepers, felt disheartened. She tried to point out that each gatekeeper needed to look at the dish as a whole instead of focusing on one specific fruit. And, surely, the single fruit bars don’t go through as much scrutiny. Unfortunately, questioning the individual fruit experts didn’t do her any favours – they had been in their respective fruit businesses for much longer than she had and must surely know what they are doing. Who’s she to question their domain expertise?!

It felt as though what they were demanding was too self-fulfilling and, at times, incommensurable. But then again, this being her first big attempt at making a fruit salad, she suffered from too much self-doubt to argue with their demands. Either way, if she was to get that business going, she needed each gatekeeper’s seal of approval. So she went ahead and attempted to make the type of fruit salad that would satisfy each gatekeeper: with plenty of mango, huge helpings of ripe kiwi and custom-sliced pineapples.

At the next round of testing, the fruit salad maker presented a plate revised to reflect the advice previously provided by the gatekeepers. Unfortunately, they unanimously agreed that the plate was overloaded with fruit, unhealthy, and unattractive to look at. “All the excess fruit must be trimmed away,” they declared. “This is a health hazard and we cannot approve of such a dish. Think about how to make it neater, healthier and more attractive, and come back to us with your improved fruit salad. We will then discuss the matter and perhaps let you through our gate,” they said.

After many attempts to satisfy each gatekeeper’s version of a perfect fruit salad, the fruit salad maker is back to square one. She’s caught in a recursive loop. Each fruit connoisseur, expert on their own fruit, seems to underappreciate the taste and benefit of the fruit mix before them. Putting individual fruit experts together doesn’t necessarily make a fruit salad judge, after all.

Having gone through the time-consuming practice of making fruit salads and the bureaucratic paperwork associated with it, the fruit salad maker wonders if the fruit salad making business is worthwhile at all. Single-fruit dealing, the dominant mode of doing business, would have been simpler – not as rewarding for sure, but certainly simpler. But the thing is, once you develop the palate for the unique taste of fruit salads, nothing else will do.

 

 

For a more scholarly read

This list is not exhaustive by any means; it is work relevant to my own, and a list I revise and revisit regularly.

Link for the main resources page here

Books

Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil. A great number of the articles on the list below were written by O’Neil. She is also active on Twitter, regularly posting links and interesting critical insights on everything to do with mathematical models and bias. Here is my own review of O’Neil’s book, itself with plenty of relevant links, and here is another excellent review of O’Neil’s book.

We Are Data: Algorithms and the Making of Our Digital Selves (2018) by John Cheney-Lippold.

Below are the first few paragraphs from a review by Daniel Zwi, a lawyer with an interest in human rights and technology. Here is also a link to my Twitter thread where you can read excerpts from the book that I tweeted as I read.

In 2013, a 41-year-old man named Mark Hemmings dialled 999 from his home in Stoke-on-Trent. He pleaded with the operator for an ambulance, telling them that ‘my stomach is in agony’, that ‘I’ve got lumps in my stomach’, that he was vomiting and sweating and felt light-headed. The operator asked a series of questions — ‘have you any diarrhoea or vomiting?’; ‘have you passed a bowel motion that looks black or tarry or red or maroon?’ — before informing him that he did not require an ambulance. Two days later Mr Hemmings was found unconscious on the floor of his flat. He died of gallstones shortly after reaching hospital.

This episode serves as the affective fulcrum of We Are Data: Algorithms and the Making of Our Digital Selves, John Cheney-Lippold’s inquiry into the manner in which algorithms interpret and influence our behaviour. It represents the moment at which the gravity of algorithmic regulation is brought home to the reader. And while it may seem odd to anchor a book about online power dynamics in a home telephone call (that most quaint of communication technologies), the exchange betokens the algorithmic relation par excellence. Mr Hemmings’s answers were used as data inputs, fed into a sausage machine of opaque logical steps (namely, the triaging rules that the operator was bound to apply), on the basis of which he was categorised as undeserving of immediate assistance.

The dispassionate, automated classification of individuals into categories is ubiquitous online. We either divulge our information voluntarily — when we fill out our age and gender on Facebook, for example — or it is hoovered up surreptitiously via cookies (small text files which sit on our computer and transmit information about our browsing activity to advertising networks). Our media preferences, purchases and interlocutors are noted down and used as inputs according to which we are ‘profiled’ — sorted into what Cheney-Lippold calls ‘measureable types’ such as ‘gay conservative’ or ‘white hippy’ — and served with targeted advertisements accordingly.

The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (2019) by Shoshana Zuboff

Here is a synopsis: The challenges to humanity posed by the digital future, the first detailed examination of the unprecedented form of power called “surveillance capitalism,” and the quest by powerful corporations to predict and control our behavior. Shoshana Zuboff’s interdisciplinary breadth and depth enable her to come to grips with the social, political, business, and technological meaning of the changes taking place in our time. We are at a critical juncture in the confrontation between the vast power of giant high-tech companies and government, the hidden economic logic of surveillance capitalism, and the propaganda of machine supremacy that threaten to shape and control human life. Will the brazen new methods of social engineering and behavior modification threaten individual autonomy and democratic rights and introduce extreme new forms of social inequality? Or will the promise of the digital age be one of individual empowerment and democratization?

The Age of Surveillance Capitalism is neither a hand-wringing narrative of danger and decline nor a digital fairy tale. Rather, it offers a deeply reasoned and evocative examination of the contests over the next chapter of capitalism that will decide the meaning of information civilization in the twenty-first century. The stark issue at hand is whether we will be the masters of information and machines or its slaves.

Algorithms of Oppression: How Search Engines Reinforce Racism by Safiya Umoja Noble. You can also find another review of Algorithms of Oppression here. Below is an excerpt from the book’s description: Run a Google search for “black girls”—what will you find? “Big Booty” and other sexually explicit terms are likely to come up as top search terms. But, if you type in “white girls,” the results are radically different. The suggested porn sites and un-moderated discussions about “why black women are so sassy” or “why black women are so angry” present a disturbing portrait of black womanhood in modern society. In Algorithms of Oppression, Safiya Umoja Noble challenges the idea that search engines like Google offer an equal playing field for all forms of ideas, identities, and activities. Data discrimination is a real social problem; Noble argues that the combination of private interests in promoting certain sites, along with the monopoly status of a relatively small number of Internet search engines, leads to a biased set of search algorithms that privilege whiteness and discriminate against people of color, specifically women of color.

Algorithms to Live By: The Computer Science of Human Decisions by Brian Christian and Tom Griffiths. This book is concerned with the workings of the human mind and how computer science can help human decision making. Here is a post by Artem Kaznatcheev on Computational Kindness which might give you a glimpse of some of the issues the book covers. Here is a long interview with Brian Christian and Tom Griffiths, and a TED Talk with Tom Griffiths on The Computer Science of Human Decision Making.

The Black Box Society: The Secret Algorithms That Control Money and Information by Frank Pasquale. You can read the introduction and conclusion chapters of his book here.  And here is a good review of Pasquale’s book. You can follow his twitter stream here.

Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech by Sara Wachter-Boettcher

Here is a synopsis:  A revealing look at how tech industry bias and blind spots get baked into digital products—and harm us all.

Buying groceries, tracking our health, finding a date: whatever we want to do, odds are that we can now do it online. But few of us ask why all these digital products are designed the way they are. It’s time we change that. Many of the services we rely on are full of oversights, biases, and downright ethical nightmares: Chatbots that harass women. Signup forms that fail anyone who’s not straight. Social media sites that send peppy messages about dead relatives. Algorithms that put more black people behind bars.

Sara Wachter-Boettcher takes an unflinching look at the values, processes, and assumptions that lead to these and other problems. Technically Wrong demystifies the tech industry, leaving those of us on the other side of the screen better prepared to make informed choices about the services we use—and demand more from the companies behind them.

Paula Boddington, Oxford academic and author of Towards a Code of Ethics for Artificial Intelligence, recommends the five best books on Ethics for Artificial Intelligence. Here is the full interview with Nigel Warburton, published on December 1, 2017.

“Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor” by Virginia Eubanks will be released on January 23, 2018. Here is an excerpt from Danah Boyd’s blog:

“Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor” is a deeply researched accounting of how algorithmic tools are integrated into services for welfare, homelessness, and child protection. Eubanks goes deep with the people and families who are targets of these systems, telling their stories and experiences in rich detail. Further, drawing on interviews with social services clients and service providers alongside the information provided by technology vendors and government officials, Eubanks offers a clear portrait of just how algorithmic systems actually play out on the ground, despite all of the hope that goes into their implementation. Additionally, Berkman Klein discusses “Algorithms and their unintended consequences for the poor” with Eubanks here.

The Big Data Agenda: Data Ethics and Critical Data Studies by Annika Richterich. A PDF is available through the link here.

“This book highlights that the capacity for gathering, analysing, and utilising vast amounts of digital (user) data raises significant ethical issues. Annika Richterich provides a systematic contemporary overview of the field of critical data studies that reflects on practices of digital data collection and analysis. The book assesses in detail one big data research area: biomedical studies, focused on epidemiological surveillance. Specific case studies explore how big data have been used in academic work.

The Big Data Agenda concludes that the use of big data in research urgently needs to be considered from the vantage point of ethics and social justice. Drawing upon discourse ethics and critical data studies, Richterich argues that entanglements between big data research and technology/ internet corporations have emerged. In consequence, more opportunities for discussing and negotiating emerging research practices and their implications for societal values are needed.”

Re-Engineering Humanity by Professor Evan Selinger and Brett Frischmann

Every day, new warnings emerge about artificial intelligence rebelling against us. All the while, a more immediate dilemma flies under the radar. Have forces been unleashed that are thrusting humanity down an ill-advised path, one that’s increasingly making us behave like simple machines? In this wide-reaching, interdisciplinary book, Brett Frischmann and Evan Selinger examine what’s happening to our lives as society embraces big data, predictive analytics, and smart environments.

Outnumbered: From Facebook and Google to Fake News and Filter-bubbles – The Algorithms That Control Our Lives (featuring Cambridge Analytica) by David Sumpter. A review from the Financial Times here.

 

Back to the main resources page

Humility is a luxury the privileged can afford

I had the privilege of participating in a science communication conference last week (12 December 2018). Some of the speakers beautifully and convincingly articulated why it is important for academics to communicate their work to non-academics as well as to academics from different disciplines, and how to do it. Alan Alda’s talk, in particular, was deep, insightful and thought-provoking.

Alda’s “Communication is not something you add to science; it is the essence of science” captures his key message: communication is an essential part of doing science, not something separate and extra. There is very little dispute regarding the importance of sharing one’s work with the general public, with scientists, and with academics outside one’s field. However, there is very little guidance as to how one ought to go about it. Alda’s talk at the SCI:COM conference in Dublin provided some of the most insightful advice I have come across.

Alda suggests we talk TO and not AT people. This seemingly obvious but powerful statement shifts the mindset from “giving a talk” or “delivering a lecture”, which treats knowledge as something that can simply be dispensed, to communication as a two-way, shared activity.

Science communication is a reciprocal process that involves both the speaker and the audience. It is vital that the communicator pays attention to the person they are communicating with. “It is up to you, the communicator, to ensure that the person is following and to bring them on board.” And this requires understanding your audience. As Alda puts it: “the speaker needs to listen harder than the listener”.

Communication, Alda argues, is not about me figuring out the best message and spraying it at you; it is about building a reciprocal, dynamic relationship that changes both the speaker and the audience. Effective communication means understanding your audience and knowing how to connect with them. In order to do so, we don’t start with crafting the best message; we start with awareness of the audience.

Good science communication, Alda emphasises, requires reputation, which is intrinsically connected to trust. Speaking from a position of authority is different from speaking as an equal, fellow human being. Your audience is more likely to trust you when you speak as a fellow human, and this requires humility, which brings me to the central point of my blog.

I wholeheartedly agree with Alda’s approach to communication and also think that humility is a virtue that needs to be highly valued. However, whether humility is viewed as a virtue depends on societal stereotypes, hence my conflict with it. Humility doesn’t yield trust and reputation for everyone, and I speak from the perspective of a black woman in academia.

In academia, we often have an ideal representation or image of what an ‘intellectual’ looks like. This is typically a white, middle-class, cis male. Society’s stereotypes mean this group of people is automatically perceived as authoritative. Academia’s structure means that people who fit the stereotypical ‘intellectual’ are seen as unquestionable experts. For the privileged who fit society’s image of the ‘intellectual’, where coming across as authoritative is the default, humility and speaking to their audience as fellow humans gains them trust. On the other hand, academics who don’t fit society’s stereotypical ‘intellectual’ often have to work hard simply to prove that they are as capable as their white male counterparts. In an academic environment where looks, gender and race are part of ‘fitting in’ and being acknowledged as an intellectual, humility, an admirable trait in the white male, can read as an admission of incapability in a black woman. When the default assumption is that you might lack the capacities due to your race or gender, humility can seem like confirming people’s assumptions. Humility – downplaying one’s skills and achievements – for the black woman who already struggles to establish herself as an intellectual, can be a self-imposed punishment that underestimates her intellectual capacity. Humility, then, seems a luxury that the privileged can afford.

Having said that, I must emphasize that the problem is not humility itself but societal stereotypes and rigid academic structures. I still think humility is a trait we need to treasure, both in academia and outside it. I just hope that we gradually challenge these stereotypes of what an expert intellectual looks like, which will then afford minorities the luxury of humility rather than punishing them for it.


Why model?

I came across this little paper on the Introduction to Dynamical Systems and Chaos online course from the Santa Fe Institute. It was provided as a supplementary reading in the ‘Modelling’ section. The paper lays out some of the most enduring misconceptions about building models.

“The modeling enterprise extends as far back as Archimedes; and so does its misunderstanding.” Epstein (2008)

So, why model? What are models? And who are modellers?

Prior to reading this paper, my short answers to these questions would have been in accordance with the widely held misconceptions that:

We model to explain and/or predict. Models are formal representations (often mathematical) of phenomena or processes. And a modeller is someone who builds these explicit, formal, mathematical models. However, Epstein explains:

“Anyone who ventures a projection, or imagines how a social dynamic—an epidemic, war, or migration—would unfold is running some model.”

I like the idea that we all run some implicit models all the time. In the social and political sphere, where it is extremely difficult to operationalize and specify variables, this perspective gives implicit modelling, such as drawing dynamical analogies, its due importance.

The paper lays out 16 reasons other than prediction for building models. The idea that prediction and explanation aren’t the only modelling goals was a revelation to me, given that I’ve had a love-hate relationship with modelling in the past. I am attracted to models, especially those with a dynamical systems inclination, but the overall tendency towards prediction as a goal often frustrates me. Just to clarify, prediction is a fine goal; my objections arise when 1) we’re deluded into thinking that models give us the tools to predict specific individual behaviours, and 2) we assume we can model a phenomenon, especially human behaviour, without first understanding it.

xkcd: Machine Learning

Let me elaborate further in the context of the automated predictive systems that are currently trending (at least within my academic circle) and that often preoccupy my thinking. Claims to predict “criminal” and “risky” behaviour are examples from last week’s headlines: UK police want Artificial Intelligence (AI) to predict criminal behaviour before it happens, and Predictim, a commercial data analytics firm, claims its AI can flag “risky” babysitters. Unfortunately, these are not outrageous exceptions but the general direction in which things in the digital surveillance sphere seem to be heading.

Behaviours such as “criminal” or “risky” are complex adaptive behaviours that result from countless ongoing factors, which we can never fully specify in the first place. This makes it impossible to predict criminal behaviour with certainty. Juarrero reminds us why it is impossible to predict human behaviour with precision:

“When we are dealing with complex adaptive systems, surprises are unavoidable. Because of their sensitivity to initial conditions – due, in turn, to their contextual and temporal embeddedness – complex adaptive systems are characterized by unusual twists and novel turns. Since we will never be able to specify any dynamical system’s initial conditions to the requisite (infinite) degree, a fortiori we will never be able to capture all the details and circumstances of anyone’s life and background. Given this limitation, we must always keep in mind that reconstructing specific instances of behavior will always be, at best, an interpretation and not a deduction – a much more fallible type of explanation than we had previously hoped was available. Interpretations of human action are always tentative. Absolute certainty about either what the agent just did, or what he or she will do – specifically – a year from now, is therefore impossible.” (Juarrero 1999, p. 225)
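
To make “sensitivity to initial conditions” concrete, here is a minimal sketch in Python using the logistic map, a textbook chaotic system. This is my own illustration, not Epstein’s or Juarrero’s: two trajectories that start almost identically end up bearing no resemblance to one another within a few dozen steps, which is why long-horizon point prediction of such systems fails.

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r * x * (1 - x) in its chaotic regime (r = 4).
# A toy system, not a model of any real behaviour.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from initial condition x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000000)   # the "measured" initial condition
b = logistic_trajectory(0.300000001)   # the same condition, off by 1e-9

for t in range(0, 51, 10):
    print(f"step {t:2d}: {a[t]:.6f} vs {b[t]:.6f}   diff = {abs(a[t] - b[t]):.6f}")

# By roughly step 30-40 the two trajectories are completely uncorrelated:
# an error in the ninth decimal place has grown to order 1.
```

Since no dynamical system’s initial conditions can be specified to infinite precision, as Juarrero notes, this divergence is unavoidable – and that is in a one-variable toy system, never mind the countless ongoing factors behind a human life.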

These claims to predict “criminal” or “risky” behaviour are more than a mere misunderstanding of human nature or simple illusions about what AI tools are capable of doing. As these tools are implemented in the social world, they have grave consequences for people’s lives. When claiming to predict someone’s potential criminality, errors are inevitable. The stakes are high when we get things wrong. Unsurprisingly, it is often society’s most vulnerable, those who are disenfranchised, that pay the highest price. Indeed, such models are used to further punish and disenfranchise those who fall prey to them.

A slightly different but interrelated issue with modelling to predict is that the drive to predict and explain often ignores the value of describing and observing to gain deep understanding. Sure, describing to understand and explaining and predicting aren’t mutually exclusive. However, in reality, we seem to have blindly adopted prediction and generalization as the primary goals of science. Studying to describe and understand is, as a result, undervalued. What is the point of describing? you might ask. I think it is fundamental to understand any phenomenon or process as deeply and comprehensively as possible before we attempt to explain or predict it, and description is key to gaining such understanding.

I’ll leave you with an insightful Geertz (1973) passage from The Interpretation of Cultures:

“… I have never been impressed with claims that structural linguistics, computer engineering or some other advanced form of thought is going to enable us to understand men without knowing them.”

How to prepare a talk on AI

How would you give a talk on Artificial Intelligence (AI) to 120 students between the ages of 16 and 18, not all of whom are necessarily interested in or have a background in science? How would you define AI? What would you include (and exclude)? What is the best way to structure it? Well, surely, there are many valid answers to these questions. It was the first time that Elayne Ruane, a colleague and fellow PhD researcher, and I attempted to give an 80-minute talk to such a big crowd of students. We didn’t find much in terms of guidance or advice on how to interact with the students or how to frame the AI discourse in a manner suitable for students who are about to embark on their college journey. We wanted to convey the excitement, hope and potential the field holds while also portraying a realistic image of its current state. Hopefully sharing our general approach might be helpful to anybody who finds themselves in a similar situation.

Mind, what worked for us might not work in different contexts, mindsets, situations, or for a different topic. AI is one of the most over-hyped and misunderstood areas of research in the minds of the general public. Furthermore, AI has been somewhat associated with a certain stereotypical archetype in the media – a white male genius computer geek. How one introduces the field and the kind of work and influential figures one includes plays a subtle but important role towards challenging these misconceptions and stereotypes. Specifically, when addressing a crowd of young people in the midst of deciding what areas of study they will pursue at university, how you present the field of AI can send implicit signals about who is welcome. For us, this is everyone.

Initial Discussion

We began our talk with a brief discussion of what a computer science degree, as one of the routes to AI research, entails (within the context of our own department at University College Dublin) and the kinds of careers it can lead to, while raising the point that there isn’t one path to follow. We then briefly talked about example AI projects taking place within our own School. We kept this part of the talk very interactive by frequently polling the group with a show of hands. This was important in keeping the students engaged.

What is AI?

We discussed the general definition of AI – the common view that artificial intelligence refers to a machine that simulates human intelligence. What it means to ‘simulate’, or what counts as ‘human intelligence’, is contested and of course far from settled. However, we felt it was important to keep it simple for the purpose of this talk. ‘Machines that simulate human intelligence and exhibit human behaviour’ often comes down to abilities such as learning, problem solving, reasoning, language processing and the like.

Unlike other disciplines such as physics or biology, Artificial Intelligence is not a clearly defined and well-contained discipline but rather a very broad and cross-disciplinary endeavour. It draws from mathematics, engineering, biology, neuroscience, linguistics, philosophy, and many more. Although the most direct route to studying AI is through computer science (certainly within the context of UCD), one can also get to AI through other routes. Besides, AI can be synthesized with any field of enquiry, including neuroscience, music and art. Christie’s recent AI-generated art is a good example.

AI is a wide umbrella term with sub-fields including robotics, natural language processing, computer vision, machine learning and deep learning, speech recognition, machine translation and more. We tried to use examples of these relevant to the students, including Google Translate, Amazon’s Alexa, PS4 games, Minecraft, facial recognition tools, and robots. We showed them the famous video of Boston Dynamics’ robot, Spot, dancing to Uptown Funk, which was a huge hit.

The History of AI

AI is often thought of as a recent development, or worse, as futuristic, something that will happen in the far future. We tend to forget that dreams, aspirations and fascinations with AI go all the way back to antiquity. In this regard, René Descartes’s simulacrum and the Mechanical Turk are good examples. Descartes was fond of automata and had a walking and talking clockwork named after his daughter Francine. The machine apparently simulated his daughter, who died of scarlet fever at the age of 5. Similarly, the 18th-century Hungarian author and inventor Wolfgang von Kempelen created the Mechanical Turk, a (fake) chess-playing and speaking machine, to impress the Empress Maria Theresa of Austria.

We can list an endless number of scholars who contributed to the development of AI as it is conceived today. The main towering figures we included were:

  • The ninth-century Persian mathematician Muḥammad ibn Mūsā al-Khwārizmī, who gave us one of the earliest mathematical algorithms. The word “algorithm” derives from the Latinization of his name.
  • The English mathematician Ada Lovelace, who is often regarded as the first computer programmer.
  • Alan Turing who is regarded as the father of theoretical computer science and whom most students seemed to be already aware of.
  • And, more recently, perhaps the scholars most influential in shaping the way we currently understand AI: Marvin Minsky, John McCarthy, and Margaret Masterman.

Fun Game

We tried to make our talk as interactive as possible. We had questions and discussion points throughout. Towards the end, we had a game where students had to guess whether the AI being described on each slide was ‘sci-fi’ or ‘real’. Here are the main examples. Have a go yourself. 🙂

Sci-fi or real

  • Self-aware robots

  • Face recognition AI which rates people’s ‘trustworthiness’

  • A bedside light that notifies you of your retweets

  • Automated confession systems

Common Misconceptions

If there is anything the AI narrative is not short of, it’s hype and misconception. Subtle clarifications both illustrate the actual current state of the field and highlight the challenges that arise with it. As such, the final concluding remarks highlighted the misconceptions surrounding AI and the ethical concerns that necessarily arise with any technological advancement. The major misconceptions we mentioned are:

  1. AI is a distant reality. In fact, far from it: AI is deeply embedded in the infrastructure of everyday life. It is invisible and ubiquitous.
  2. AI equals robots or ‘self-driving’ cars. As should be obvious by now, robotics is simply one dimension.
  3. AI is neutral and can’t be biased. This again is far from reality. As AI integrates deeper into the educational, medical, legal, and other social spheres, ethical questions inevitably arise. Questions of ethics, fairness, and responsibility are inherently questions of AI.

That concludes the content of the talk.

General advice:

  1. Keep it open and flexible. Create opportunities to hear from them. This allows you to get an idea of their awareness and knowledge (which can then help you calibrate on the fly in terms of technical detail) while also keeping them engaged.
  2. Pictures, more pictures, and videos are a great way to open up discussion. We showed a video of Google Assistant making a phone call, which really captured their attention and got them talking. This also brought forth some ethical discussion.
  3. Prepare for plenty of questions around “Is AI going to take over?” and “How scared and worried should we be?”. It’s important to highlight how AI advancements can be misused, but the trick is to show how much of what is reported on AI is overblown hype, which contributes to these unnecessary and unrealistic fears when in fact much of the development in AI remains premature. On the other hand, remember, we were talking to young science students about to start college. We still want to encourage them and want them to feel the dreams, excitement and hope that have been the driving force of AI, at least in the 50s and 60s, and the promising potential that AI presents in medicine, robotics and more.

 

Further reading

 

 

The AI side of cognitive science is concerned with first world problems

I recently had the opportunity to attend a multidisciplinary conference where cognitive scientists, philosophers, psychologists, artificial intelligence (AI) researchers, neuroscientists and physicists came together to discuss the self. The conference was, generally speaking, well organized, and most of the talks were interesting. The theme of the conference was the openness of the self: contrary to the traditional essentialist view of the self as fixed, fully autonomous and self-contained, the consensus among the attendees was that the self is not a static, discrete entity that exists independently of others but dynamic, changing, co-dependent, and intertwined with others. This intertwinement furthermore extends to the social and political forces that play crucial roles in constituting who we are. In this vein, any discussion of self and technology needs to acknowledge the entanglement of social and political factors and the necessity for diverse input and perspectives.

AI is a very broad field of enquiry which includes, to mention but a few, facial recognition technologies, search engines (such as Google), online assistants (such as Siri), and algorithms used in almost every sphere of society (medical, financial, judicial, and so on). Unfortunately, the view of AI that seems to dominate public as well as academic discourse is a narrow, one-dimensional one where the concern revolves around the question of artificially intelligent “autonomous” entities. This view is unsurprisingly often promoted by a one-dimensional group of people: white, middle-class and male. Questions outside “the creation of AI” rarely enter the equation. Social, political, and economic factors rarely feature in cognitive science and interdisciplinary formulations of selfhood and technology – as if any technological development emerges in a social, political and economic vacuum. And the conference I attended was no different.

This was apparent during the theme-based group discussions, where one group discussed issues regarding self and technology. The discussion was led by researchers in embodied AI and robotics. The questions revolved around the possibility of creating an artificial self, robots, whether AI can be sentient and, if so, how we might know it. As usual, the preoccupation with abstract concerns and theoretical construction took centre stage, to the detriment of political and social issues. Attempts to direct some attention towards the social and political issues were dismissed as irrelevant.

It is easy to see the appeal of getting preoccupied with these abstract philosophical questions. After all, we immediately think of “I, Robot” type robots when we think of AI, and of “self-driving” cars when we think of ethical questions in AI.

A 1980s Turk reconstruction

The fascination and preoccupation with autonomous and discrete machines is not new to current pop culture. The French philosopher René Descartes had a walking and talking clockwork named after his daughter Francine. The machine apparently simulated his daughter, who died of scarlet fever at the age of five. The 18th-century Hungarian author and inventor Wolfgang von Kempelen created the Mechanical Turk, a (fake) chess-playing and speaking machine, to impress the Empress Maria Theresa of Austria.

It is not surprising that our perception of AI is dominated by such issues, given that sci-fi pop culture plays an influential role in shaping that perception. The same culture feeds on overhype and exaggeration of the state of AI. Researchers themselves are often just as responsible for miscommunication and misunderstanding about the state of the art of the field. And the more hyped a piece of work is, the more attention it is given – look no further than the narrative surrounding Sophia, an excessively anthropomorphized and overhyped machine.

Having said that, the problem goes further than misleading coverage and overhype. The overhype and the narrow, one-dimensional view of AI as concerned with questions of artificial selves and “self-driving” cars detract from more nuanced and more pressing issues in AI that impact the very poor, the disenfranchised, and the socially and economically disadvantaged. For example, in the current data economy, insurance systems reward and offer discounts to those who are willing to be tracked and to provide as much information as possible about their activities and behaviours. Consumers who want to withhold all but the essential information from their insurers will pay a premium. Privacy, increasingly, will come at a premium only the privileged can afford.

An implicit assumption that AI is some sort of autonomous, discrete entity separate from humans, and not a disruptive force for society or the economy, underlies this narrow, one-dimensional view of AI and the preoccupation with the creation of an artificial self. Sure, if your idea of AI revolves around sentient robots, that might bear some truth. This implicit assumption seems, to me, a hangover from Cartesian dichotomous thinking that remains persistent even among scholars within the embodied and enactive tradition who think that their perspectives account for complex reality. This AI-versus-humans thinking is misleading and unhelpful, to say the least.

AI systems are ubiquitous, and this fact is apparent once you abandon the narrow, one-dimensional view of AI. AI algorithms are inextricably intertwined with our social, legal, health and educational systems, not separate, independent entities as we like to envision when we think of AI. The apps that power your smartphone and the automated systems that contribute to decisions about whether you get a loan, whether you are hired, or how much your car insurance premium will cost are all AI – AI that has real impact, especially on society’s most vulnerable.

Yet most people working on AI (both in academia and in Silicon Valley) are unwilling to get their hands dirty with any social, economic or political aspect or impact of AI. The field seems, to a great extent, to be constituted of those who are socially, economically and racially privileged, for whom these issues bear no personal consequences. The AI side of cognitive science is no different, with its concern for first-world problems. Any discussion of a person or even society is devoid of gender, class, race, ability and so on. When scholars in these fields speak of “we”, they are barely inclusive of those outside the status quo, which is mostly the white, male, Western, middle-class, educated person. If your model of self is such, how could you, and why would you, be concerned about the class, economic, race and gender issues that emerge due to unethical applications of AI, right? After all, you are unlikely to be affected. Not only is this model of self unrepresentative of society, there is barely awareness of the issue as a problem in the first place. The problem is invisible due to privilege, which renders diversity and inclusivity of perspectives irrelevant.

This is by no means a generalization about everyone within AI scholarship. There are, of course, plenty of people who acknowledge political and social forces as part of the issues to be concerned about within the discussion of AI. Unsurprisingly, much of the important work in this regard is done by people of colour and women, who unfortunately remain a minority. And the field as a whole would do well to make sure that it is inclusive of such voices, and to value their input instead of dismissing it.

Situating China’s Social Credit System in history and context

If you have been following developments in the digital humanities, it is very likely that you’ve come across the news that China is implementing a Social Credit System (SCS). Although the SCS is portrayed as a single integrated system that quantifies all behaviour into credit scores, it is in fact an ecology of fragmented initiatives with many different stakeholders. Broadly speaking, it consists of scoring systems developed by the private sector and by governmental bodies. As far as the governmental perspective is concerned, the SCS is an attempt to promote “trustworthiness” and transparency in the economy, which is expected to combat the perceived lack of trust in the marketplace and, more generally, to harmonize social conduct.

Citizens’ “trustworthiness” is rated based on an individual’s social behaviour, such as their crime records, what they say on social media, what they buy, the scores of their friends, and so on. This has possible positive or negative implications for an individual’s job, visa and loan applications. As a commitment to radical transparency is a central driving force behind the SCS, information on subjects’ trustworthiness is made publicly available, and in some circumstances is even actively broadcast. Individual citizens and businesses alike are ranked, and the records are open to the public.

Roncheng’s “civilized families” are displayed on public noticeboards like these. (Simina Mistreanu)

The SCS is to become mandatory by 2020 and is currently being implemented in some form or another across parts of China. Socioeconomically deprived areas seem to be priority targets. Rongcheng in the eastern province of Shandong, where the SCS has been rolled out for some time now, is, according to government officials, one of the best examples of the system working as intended.

From a general systems science perspective, the SCS is a self-organizing system that operates through incentive and punishment mechanisms. People with low ratings will, for example, have slower internet speeds, restricted access to restaurants, and the right to travel revoked.

“Higher scores have already become a status symbol, with almost 100,000 people bragging about their scores on Weibo (the Chinese equivalent of Twitter) within months of launch. A citizen’s score can even affect their odds of getting a date, or a marriage partner, because the higher their Sesame rating, the more prominent their dating profile is on Baihe.” (Creemers, 2018)

The SCS has been described as an insidious digital panopticon and a dystopian nightmare in which individuals’ every move is monitored and ranked through data generated from all sorts of activities and interactions, online or otherwise, through digital technologies (facial recognition tools and biometric information). Many draw parallels between the SCS and “Nosedive”, the dystopian science fiction Black Mirror episode where people rate each other based on their interactions.

Many ethical and human rights issues, as well as the complete eradication of the idea of privacy, have been raised, and the negative consequences of such a dystopian nightmare system are indisputable.

With the realization that ‘digital reputations’ could limit opportunities comes the tendency to self-censor and the tendency to be risk-averse. We are unlikely to hit “like” on a Facebook post that protests some government policy knowing that it could impact our ‘digital reputations’. Consequently, people gradually change their behaviour to align with what the system requires, to get better scores. In the process those behaviours and norms defined as “acceptable” by the government are reinforced.
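
To see why this is a feedback loop rather than a one-off measurement, here is a deliberately crude toy simulation. The update rule, threshold and numbers below are my own invented illustration and bear no relation to any real scoring formula: agents whose scores fall below a punishment threshold adapt their behaviour toward whatever the system rewards, and risk-taking drains out of the population.

```python
import random

# Toy model of a score-based incentive/punishment feedback loop.
# Every rule and parameter here is hypothetical, for illustration only.

random.seed(1)

N, ROUNDS = 1000, 50
ADAPTATION = 0.1      # how strongly a punished agent shifts its behaviour

# Each agent's "behaviour" is a number in [0, 1], where 1.0 means full
# conformity with whatever the scoring system happens to reward.
behaviour = [random.random() for _ in range(N)]

def score(b):
    """Hypothetical score: a linear function of conformity."""
    return 100 * b

PUNISH_BELOW = 60     # scores under this threshold trigger penalties

for _ in range(ROUNDS):
    for i in range(N):
        if score(behaviour[i]) < PUNISH_BELOW:
            # Punished agents adjust toward the rewarded norm; rewarded
            # agents have no incentive to change at all.
            behaviour[i] += ADAPTATION * (1.0 - behaviour[i])

below = sum(1 for b in behaviour if score(b) < PUNISH_BELOW)
print(f"agents still below the punishment threshold: {below} / {N}")
print(f"mean conformity: {sum(behaviour) / N:.2f}")

# Starting from a uniform spread, everyone ends up above the threshold:
# low scorers adapt until they are no longer punished, and the norms the
# system defines as "acceptable" are reinforced round after round.
```

The point of the sketch is only the dynamic: once a score gates access to goods and services, the population self-sorts toward whatever the scorer rewards, which is exactly the risk-averse drift described above.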

Nonetheless, among the misconceptions surrounding the SCS, there seems to be some consensus that using individuals’ digital traces to directly or indirectly influence their behaviour is something that only happens in non-Western totalitarian states. In fact, credit scoring practices are not unfamiliar in Western societies. Facebook, for instance, seems to be developing its own system for rating users’ trustworthiness.

It is also worth mentioning Facebook’s emotion tracking patent (where the aim is to monitor individuals’ typing speed in order to predict emotions and adapt messages in response), which was granted in May 2017, and the currently filed socioeconomic classifier (which might enable Facebook to rank its users according to different social classes), among its series of patents. These developments, in combination with others, such as Facebook’s ability to flag individuals through its facial recognition technology without the consent of the user, in some sense constitute a surveillance society. Facebook’s ability to rank and categorize people into a variety of socioeconomic categories has possible impacts on individuals’ opportunities depending on their class, gender, race and sexual orientation. Whether it’s the type of job ads one is excluded from viewing (due to one’s gender, class or age) or the exclusion from certain housing ads, Facebook’s ranking and categorizing systems often impact the under-privileged and those who fail to conform to the status quo.

Marshall Allen, July 2018, ProPublica

Alongside social media platforms, health insurers and schools can also be mentioned as examples that share features of the SCS. Like the SCS, these Western industries and institutions track and surveil people through digital technologies, including facial recognition tools and biometric information.

We are rated, ranked and categorized using data extracted from us. As with the SCS, such ranking and rating often has possible real-life consequences, whether in the form of how much we pay for our insurance, what ads are pushed on us, or how we behave in school yards. The difference between the Chinese SCS and the Western tech industry is that, while the former is clear and upfront about it, the latter is much more invisible. In fact, such tech giants go out of their way to hide what they are doing.

Rating systems, whether deployed by the SCS or through the Western tech industry, create unwanted incentives and increase pressure on individuals to conform to the status quo. This creates and contributes to a society that is risk-averse.

“When doctors in New York were given scores this had unexpected results. Doctors that tried to help advanced cancer patients had a higher mortality rate, which translated into a lower score. Doctors that didn’t try to help were rewarded with high scores, even though their patients died prematurely.” Tijmen Schep

Situating the SCS in history and context

The history and context crucial to the development of the current SCS are often missing from how the SCS is framed, at least within Western media.

“[social systems] must be viewed whole cloth as open dynamical systems embedded in a physical, historical, and social fabric” (Juarrero, 1999, p. 201)

As far as China’s political tradition goes, morality and authority are inextricably linked. Enforcing moral standards and monitoring and disciplining the conduct of local officials and individual citizens is seen as the role of the state. “Governing the country by virtue” equates to “governing the country by the law”. Unlike the Western legal system, where the rights, responsibilities and entitlements of private actors and public sectors are relatively easily categorized, such categories are much more blurred within the Chinese legal system. Individual citizens, government officials, communities and businesses are all expected to contribute to overall social and economic harmony and development.

“Chinese political tradition has, for centuries, conceived of society as an organic whole, where harmony can be achieved if all its members conduct themselves as appropriate to their position in public and civil structures. … Critical in this process were ideas about systems theory, derived from natural science and applied in the social context. Influenced by Western scholarship on cybernetics and systems theory, scholars such as Qian Xuesen and Song Jian worked closely with government to develop a conceptual framework for the adoption of systems engineering techniques in governance. Particular regard was given to the role of information flows, not just towards and within government, but also as part of cybernetic feedback loops to create self-correcting responses in society.” (Creemers, 2018, p. 7)

Historically, the Chinese government has experimented with various forms of social control; self-policing and social-control mechanisms go all the way back to the Song Dynasty.

“An 11th-century emperor instituted a grid system where groups of five to 25 households kept tabs on each other and were empowered to arrest delinquents” (Mistreanu, 2018). The current SCS, then, is an extension of such historical traditions. The difference now is the addition of digital technologies.

From the Chinese authorities’ perspective, the SCS epitomizes a self-correcting feedback loop in which “trustworthiness” and social morality are fostered through incentives and punishments.

This is by no means to argue that the SCS is any less of a digital panopticon. However, by highlighting the history and context often missing from the SCS narrative, we can paint a more complex and nuanced image of the system (as opposed to the often alarming pieces stripped of context and history). Furthermore, while we are preoccupied with stories of how China is becoming one giant surveillance prison, we miss the indirect and evasive practices happening within our own “civilized” Western systems.

 

Bibliography

Creemers, R. (2018). China’s Social Credit System: An Evolving Practice of Control.
Juarrero, A. (1999). Dynamics in Action: Intentional Behavior as a Complex System. Cambridge, MA: MIT Press.

 

 

The in(human) gaze and robotic carers

Google “robot carers” and you’ll find extremely hyped-up articles and think pieces on either how robot carers are just another way of dying even more miserably or how robot carers are saving the elderly from lives of loneliness, with not much in between. Not much nuance. Neither is right, of course. Robot carers shouldn’t be dismissed out of hand as the end of human connection, nor should they be overhyped as flawless substitutes for human care.

I think they can be useful, practical and even preferable to a human carer in some cases, while they cannot (and most likely never will) substitute for human care and connection in others. The human gaze is my reason for thinking that.

But first let me say a little about the Inhuman Gaze conference, which provoked me to think about robot caregivers. The conference took place last week (6th – 9th June) in Paris. It was a diverse and multidisciplinary conference that brought together philosophers, neuroscientists and psychiatrists (scholars and practitioners alike) around the common theme of the inhuman gaze. Over the four days, speakers presented philosophical arguments, empirical studies and clinical case studies on, each from their own perspective, what the human/inhuman gaze is and its implications for the sense of self. I myself presented my argument for why the other’s gaze (human or otherwise) is a crucial constituent of the “self”. I looked at solitary confinement as an example. In solitary confinement (complete isolation or significantly reduced intersubjective contact), prisoners suffer negative physical and psychological effects, including confusion, hallucination and a gradual loss of the sense of self. The longer (and more intense) the solitary confinement, the more pronounced these negative effects become.

The reason for gradual loss of self in the absence of contact with others, Bakhtin would insist, is that the self is dependent on others for its existence. The self is never a self-contained and self-sustaining entity. It simply cannot exist outside the web of relations with others. Self-narrative requires not only having something to narrate but also having someone to narrate it to. To be able to conceptualize my self as a meaningful whole, which is fundamental to self-individuation and self-understanding, I need an additional, external perspective – an other.  The coherent self is put under threat in solitary confinement as it is deprived of the “other”, which is imperative for its existence. The gaze of another, even when uncaring, is an affirmation of my existence.

So, what is an inhuman gaze? A gaze from non-human objects, like the gaze of a wall in solitary confinement? The gaze of a CCTV camera (although there often is a human at the other end of a CCTV camera)? Or a gaze from a human, but one that is objectifying and dehumanizing – for example, the gaze of a physician performing illegal organ harvesting, who treats the body she’s operating on like an inanimate object? Let’s assume, for now, that an inhuman gaze is the gaze of non-human objects, because the distinctiveness of the human gaze (sympathizing, caring, objectifying or humanizing) is important to the point that I am trying to make. The human gaze, unlike the inhuman gaze, is crucial to self-affirmation.

From Channel 4’s sci-fi robot series Humans

Robot caregivers and the human gaze…

Neither the extreme alarmist nor the uncritical enthusiast helps elucidate the pitfalls and potential benefits of robot caregiving. Whether robotic caregiving is a blessing or a disaster depends on the type of care one needs. Roughly speaking, the care robots can provide falls into two general categories. The first is physical or mechanical care – for example, fetching medicine or changing elderly patients into incontinence wear. The second is companionship (for elderly people or children), where the aim might be to provide emotional support.

Now, robotic care might be well suited to the physical or mechanical type of care. In fact, some people might prefer a robot for such physical tasks as incontinence care, or any similar task that they are no longer able to perform themselves. Such care, when provided by a human, might be embarrassing and humiliating for some people. The human gaze is capable not only of deep understanding and sympathy but also of humiliation and intimidation. The robotic gaze, on the other hand, having no intrinsic values, is not judgemental. So, in the case of physical and mechanical care, the absence of the human gaze does not necessarily result in a significant negative effect. In fact, it might be desirable when we are in a vulnerable position where we feel we might be humiliated.

By contrast, if companionship and emotional support are the types of care we are looking for, the value- and judgement-free robotic gaze will simply not do. We are profoundly social, dynamic and embodied beings who continually strive to attribute meaning and value to the world. If we are to ascribe an ‘essence of the human condition’, it is that our being in the world is thoroughly interdependent with the existence of others and with context, where we continually move and negotiate between different positions. True companionship and emotional connection require intrinsic recognition of emotions, suffering, happiness, and the like. A proper emotional and ethical relation to the other (and the acceptance of genuine responsibility) requires the presence of a loving and value-positing consciousness, not a value-free, objectifying gaze.

True human companionship and emotional support cannot be programmed into a robot, no matter how advanced our technologies become, for companionship and emotional connection require sense-making and a value-positing consciousness. Sense-making is an active, dynamic and partly open undertaking – and therefore a never-ending process – not a matter of producing and perceiving mappings of reality that can then be codified into software. The human gaze affords mutual understanding of what being a human is like. Recognition of emotions, suffering, and the like requires recognition of otherness based on that mutual understanding. The human gaze recognizes an ‘other’ human gaze. As Hans Jonas put it succinctly in ‘The Phenomenon of Life’, “only life can know life … only humans can know happiness and unhappiness.”

A foetus is not a person

As the referendum on the Eighth Amendment of the Constitution of Ireland fast approaches, misinformation and misunderstanding (both deliberate and unintentional) continue to circulate on a massive scale, both on social media platforms and on the forest of posters that line every road and street. The rhetorical weapons used by the Vote No campaign are subtle and powerful. Consequently, it is becoming increasingly difficult to distinguish fact from mere propaganda, and sound argument from mere rhetoric.

Unsurprisingly, the stakes are high. For those seeking to remove the Eighth Amendment, the basic right to bodily autonomy for women and girls is at stake. For those seeking to maintain the status quo, the power to control, limit and punish women and girls – the basic aims of misogyny – seems to be slipping away. Emotions run high.

One clearly fallacious argumentative strategy used by the Vote No campaign is the use of various “slippery slope” arguments. For example, they argue that if abortion is legalised, then it will lead to the termination of all pregnancies with life-limiting conditions; or that if abortion up to 12 weeks is legalised, then there is no guarantee that it won’t be extended to 20 weeks, or to 9 months, or indeed lead to the legalisation of infanticide.

There are a number of problems with these arguments. First, they are empirical, causal arguments: they tell us that if such-and-such a state of affairs comes about, then a certain effect will follow. But we should only believe such arguments on the basis of empirical evidence: in particular, only if the relevant kind of state of affairs really has led to the relevant sort of effect in the past. And this evidence simply does not exist: there is no evidence that once the termination of pregnancy under certain circumstances is legalised in a jurisdiction, the effects claimed in the various slippery slope arguments come about.

Moreover, the first example above – that if abortion is legalised, then it will lead to the termination of all pregnancies with life-limiting conditions – treats abortion as if its only purpose were to terminate pregnancies with life-limiting conditions. But a foetus’s life-limiting condition is not the sole reason for abortion, and prohibiting abortion in general in order to prevent those particular terminations makes no sense, since not all terminations are due to a life-limiting condition of the foetus.

Furthermore – and this cannot be emphasised enough – abortion happens whether it is legal or not. So criminalising abortion does not solve any problems – it simply creates more misery and suffering. Criminalising abortion deprives women and girls of access to safe and legal abortion, and forces them to seek unsafe means of terminating unwanted pregnancies instead.

This is an important point: the effect of legalising abortion is not to allow access to abortion where there was none, but rather to make abortion safe.

Finally, voting against repealing the Eighth Amendment is actively deciding to take away a woman’s right to autonomy over her body. Since legalising abortion gives women the right to decide for themselves, neutrality is an expression of satisfaction with the status quo – which currently forces women either to travel to seek a termination, to go through the procedure illegally and unsafely, or (using the power of the State) to carry a pregnancy to full term.

Personhood: the Western Christian view vs dialogical views

Arguments about how Irish society’s most vulnerable (working-class women, women of colour) are most affected by the lack of safe and legal abortion, or how the criminalisation of abortion in Ireland deprives women of their reproductive rights, are ongoing and familiar within the abortion debate.

What we want to address here is the (mis)conception (which is the basis of some anti-abortion arguments) that a foetus is a person with a right to life equivalent to that of a fully-fledged adult. This follows from the argument that life begins at conception, and that life guarantees personhood. It is a consequence of this line of thinking that as soon as conception occurs, there exist two independent lives (the foetus and the pregnant woman) with equal rights.

Even if we grant that life begins at conception, the idea of equating life with personhood is a wholly misguided one, which has its roots in the Western Christian notion of a soul. A person, according to this doctrine, is a totally autonomous, self-contained entity that exists independently of others. This view is generally known as “individualistic” and is to a large extent attributed to the 17th-century French philosopher René Descartes. This conception of selfhood is not only problematic but, on closer inspection, fails to provide logical support for the argument that a foetus is a person: a foetus is clearly not a totally autonomous and self-sufficient entity, and therefore cannot be granted personhood on this view’s own terms.

Alternatively, dialogical perspectives on selfhood provide a radically different view of personhood – one that treats other people as imperative pillars of the self. According to dialogical perspectives, which stand in sharp contrast to the individualistic notion of personhood, we need and rely on others in order to construct and sustain our sense of self. We are inseparable from those around us, and our self-knowledge comes from others and continually develops through our daily intersubjective interactions with them and with our environment. As the Russian intellectual Mikhail Bakhtin put it:

“Within my own consciousness, my “I” has no beginning and no end. The only way I know of my birth is through accounts I have of it from others; and I shall never know my death, because my “self” will be alive only so long as I have consciousness – what is called my death will not be known by me, but once again by others. While the birth and death of others appear to be irreversibly real.”

Without others, the very core of our existence is threatened; solitary confinement is a grim and harrowing example of this. On the view of a person as a process (rather than an entity) – one that continually develops and changes, interdependent with others – we might then grant a foetus some status of personhood, but not one on a par with that of a fully-fledged adult, and, furthermore, one that is entirely dependent on others (the pregnant woman or girl, to be specific) for its existence and identity.

The idea of a continually changing self (which is dependent on others for its existence) can be troubling, since it seems to remove the apparent total autonomy that we typically take for granted, especially within individualistic cultures.

In non-individualistic cultures – where others are seen as pivotal constituents of the self, communal values are prior to individual ones, and we are before I am (or, as the Kenyan-born philosopher John Mbiti put it, ‘I am because we are, and since we are, therefore I am’) – the collective comes before the individual. Responsibility, for example, is distributed among every member of the community, and not something that is left up to individual parents.

In Ethiopia, for example, where I grew up, the sentiment ‘the collective before the individual’ manifests itself in the values placed around abortion and child-rearing. While less emphasis is put on the status of the foetus (and the decision is mainly left up to the pregnant woman or girl, who is not equated with the foetus), more emphasis is put on the responsibility of each member of the community in raising a child. If, for example, an adult sees a child skipping school, the responsibility to advise that child and send them back to school is assumed. Members of the community are active participants in the upbringing of all children. This is in stark contrast to more individualistic cultures, where people make it their business to monitor pregnant bodies during pregnancy and (in the Irish case) force women or girls to carry a pregnancy to full term, but take little or no responsibility for a baby once it has been born.

Coming back to our point about abortion: the implication of the idea of personhood as a process – one that begins in complete dependence on others and gradually becomes less dependent – is that the foetus, being the earliest stage of that process, is not a person in the way that a fully-fledged adult is.

The response often raised by anti-abortionists to this is to ask: “When does a foetus become a person?” “At what point does the transformation from foetus to person occur?”

This, unfortunately, is an ill-framed and irrelevant question, philosophically speaking. Personhood is a process that develops over time through intersubjective, dialogical relationships with others – not an all-or-nothing property that either exists or doesn’t, not something you simply have or lack. Looking for a specific moment when a foetus becomes a person is therefore misguided.

The anti-abortion argument that a foetus has a right to life in the same sense that a pregnant woman or girl has thus rests on philosophically erroneous conceptions of personhood. Given that personhood is a process of continual development that is sustained in interaction with others, it is a mistake to think that the foetus (which is considerably less developed and immensely dependent on the pregnant woman) is a person in the same way – and to the same degree – that the pregnant woman or girl is.

This blogpost was co-written by Abeba Birhane, Cognitive Science PhD candidate at University College Dublin, and Dr. Daniel Deasy, Lecturer in Philosophy at University College Dublin.