Cognitive Science

The dark side of Big Data – how mathematical models increase inequality. My review of O’Neil’s book ‘WMD’

We live in the age of algorithms. Where the internet is, algorithms are. The apps on our phones are the results of algorithms. The GPS system can bring us from point A to point B thanks to algorithms. More and more decisions affecting our daily lives are handed over to automation. Whether we are applying for college, seeking jobs, or taking out loans, mathematical models are increasingly involved in the decision-making. They pervade schools, the courts, the workplace, and even the voting process. We are continually ranked, categorised, and scored by hundreds of models, on the basis of our revealed preferences and patterns – as shoppers and couch potatoes, as patients and loan applicants – and very little of this do we see, even in applications that we happily sign up for.

More and more decisions are handled by algorithms, which, in theory, should mean that human biases and prejudices are eliminated. Algorithms are, after all, “neutral” and “objective”. They apply the same rules to everybody regardless of race, gender, ethnicity or ability. However, this couldn’t be further from the truth. In fact, mathematical models can be, and in some cases have been, tools that further inequality and unfairness. O’Neil calls these kinds of models Weapons of Math Destruction (WMDs). These models are biased and unfair because they encode poisonous prejudices, learning from past records just how to be unfair. They punish racial and ethnic minorities, low-wage workers, and women – as if they were designed expressly to keep them down. As the world of data continues to expand, with each of us producing ever-growing streams of updates about our lives, so do prejudice and unfairness.

Mathematical models have revolutionised the world, and efficiency is their hallmark. To be sure, they are not merely tools that create and distribute bias, unfairness and inequality. Models, by their nature, are neither good nor bad, neither fair nor unfair, neither moral nor immoral – they are simply tools. Sport is one domain where mathematical models are a force for good. For some of the world’s most competitive baseball teams today, competitive advantage and wins depend on mathematical models. Managers make decisions, sometimes involving moving players across the field, based on analyses of historical data and the current situation, calculating the positioning associated with the highest probability of success.
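The baseball case can be caricatured in a few lines of code. The alignment names and success counts below are invented purely for illustration; the point is only that such a model is simple, transparent, and directly driven by historical outcomes.

```python
# Hypothetical sketch of positioning by historical success rate.
# The alignments and counts are invented; a real analytics engine would
# condition on the batter, the pitcher, the count, and much more.

def best_alignment(history):
    """history maps alignment name -> (successes, attempts)."""
    rates = {name: s / n for name, (s, n) in history.items() if n > 0}
    return max(rates, key=rates.get)

history = {
    "standard":    (52, 100),  # outs recorded / balls in play
    "shift_left":  (61, 90),
    "shift_right": (40, 80),
}

print(best_alignment(history))  # -> shift_left (61/90 is the best rate)
```

Because every play adds to the counts, the model is continually corrected by feedback – exactly the property that O’Neil argues WMDs lack.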

There are crucial differences, however, between models such as those used by baseball managers and WMDs. While the former are transparent and constantly updated with feedback, the latter are opaque, inscrutable black boxes. Furthermore, while baseball analytics engines manage individuals, each one potentially worth millions of dollars, companies hiring minimum-wage workers are, by contrast, managing herds. Their objective is to optimise profit, so they slash expenses by replacing human-resources professionals with machines that filter large populations into manageable groups. Unlike the baseball models, these companies have little incentive – short of plummeting productivity – to tweak their filtering model. O’Neil’s primary focus in the book is on those models that are opaque and inscrutable, that are used within powerful institutions and industries, and that create and widen inequalities – WMDs, “the dark side of Big Data”!

The book contains crucial insights (or haunting warnings, depending on how you choose to approach it) into the catastrophic direction in which mathematical models used in the social sphere are heading. And it couldn’t come from a more credible and experienced source: a Harvard mathematician who went on to work as a quant for D. E. Shaw, a leading hedge fund, and then as a data scientist, among other things.

One of the most persistent themes of O’Neil’s book is that the central objectives of a given model are crucial. In fact, the objectives determine whether a model becomes a tool that helps the vulnerable or one that is used to punish them. WMDs’ objectives are often to optimise efficiency and profit, not justice. This, of course, is the nature of capitalism. And WMDs’ efficiency comes at the cost of fairness – they become biased, unfair, and dangerous. The destructive loop goes around and around, and in the process the models become more and more unfair.

Legal traditions lean strongly towards fairness … WMDs, by contrast, tend to favour efficiency. By their very nature, they feed on data that can be measured and counted. But fairness is squishy and hard to quantify. It is a concept. And computers, for all their advances in language and logic, still struggle mightily with concepts. They “understand” beauty only as a word associated with the Grand Canyon, ocean sunsets, and grooming tips in Vogue magazine. They try in vain to measure “friendship” by counting likes and connections on Facebook. And the concept of fairness utterly escapes them. Programmers don’t know how to code for it, and few of their bosses ask them to. So fairness isn’t calculated into WMDs and the result is massive, industrial production of unfairness. If you think of a WMD as a factory, unfairness is the black stuff belching out of the smokestacks. It’s an emission, a toxic one. [94-5]

The prison system is a startling example of how WMDs are increasingly used to reinforce structural inequalities and prejudices. In the US, for example, those imprisoned are disproportionately poor and of colour. Being a black male in the US makes you nearly seven times more likely to be imprisoned than if you were a white male. Are such convictions fair? Many different lines of evidence suggest otherwise. Black people are arrested more often, judged guilty more often, treated more harshly by correctional officers, and serve longer sentences than white people who have committed the same crime. The Black imprisonment rate for drug offences, for example, is 5.8 times higher than the white rate, despite a roughly comparable prevalence of drug use.

Prison systems, awash in data, hardly ever carry out important research, such as why non-white prisoners from poor neighbourhoods are more likely to commit crimes, or what alternative ways of looking at the same data might show. Instead, they use data to justify the workings of the system and to further punish those who are already at a disadvantage. Questioning how the system works, or enquiring into how it could be improved, is almost never considered. If, for example, building trust were the objective, an arrest might well become the last resort, not the first. Trust, like fairness, O’Neil explains, is hard to quantify and presents a great challenge to modellers, even when the intention to include such a concept in the objective is there.

Sadly, it’s far simpler to keep counting arrests, to build models that assume we’re birds of a feather and treat us as such… Innocent people surrounded by criminals get treated badly. And criminals surrounded by a law-abiding public get a pass. And because of the strong correlation between poverty and reported crime, the poor continue to get caught up in these digital dragnets. The rest of us barely have to think about them. [104]

Insofar as these models rely on barely tested insights, they are in a sense not that different from phrenology – digital phrenology. The practice of using outer appearance to infer inner character, which in the past justified slavery and genocide, has been outlawed and is considered pseudoscience today. But scientific racism has entered a new era, cloaked in an air of “objectivity” by machine-learned models that embed human biases. “Scientific” criminological approaches now claim to “produce evidence for the validity of automated face-induced inference on criminality”. However, what these machine-learned “criminal judgements” pick up on, more than anything, is systematic unfairness.

A model that profiles us by our circumstances helps create the environment that justifies its assumptions. The streams of data we produce serve as insights into our lives and behaviours. Instead of being tested against scientific scrutiny, these insights are used to justify the modellers’ assumptions and to reinforce pre-existing prejudice. And the feedback loop goes on.

When I consider the sloppy and self-serving ways that companies use data, I am often reminded of phrenology… Phrenology was a model that relied on pseudoscientific nonsense to make authoritative pronouncements, and for decades it went untested. Big Data can fall into the same trap. [121-2]
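The self-reinforcing loop O’Neil describes can be caricatured in a toy simulation (all numbers here are invented): a neighbourhood’s “risk score” determines how much policing it receives, recorded arrests scale with patrols rather than with underlying crime, and those arrests feed back into the score.

```python
# Toy feedback loop (all numbers invented for illustration):
# score -> patrols -> recorded arrests -> score.

def update_score(score, crime_rate=0.1, rounds=5):
    for _ in range(rounds):
        patrols = 10 * score                 # policing is allocated by the score
        arrests = crime_rate * patrols       # arrests scale with patrols, not crime
        score = 0.5 * score + 0.5 * arrests  # the score is partly driven by arrests
    return score

# Two neighbourhoods with the SAME underlying crime rate never converge:
print(update_score(1.0))  # stays around 1.0
print(update_score(5.0))  # stays around 5.0
```

The model never discovers that the two neighbourhoods are alike, because the only “evidence” it sees is the arrest data that its own decisions generated.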

In 1896, Hoffman published a 330-page report in which he used exhaustive statistics to support a claim as pseudoscientific and dangerous as phrenology: that the lives of black Americans were so precarious that the entire race was uninsurable. Not only were Hoffman’s statistics deeply flawed; like the creators of many of the WMDs O’Neil discusses throughout the book, he also confused correlation with causation. The voluminous data he gathered served only to confirm his thesis that race is a powerful predictor of life expectancy. Furthermore, Hoffman failed to separate the “Black” population into different geographical, social or economic cohorts, blindly assuming it to be a homogeneous group.

This cruel industry has since been outlawed. Nonetheless, the unfair and discriminatory practices remain, now practised in a far subtler form – coded into the latest generations of WMDs and obfuscated under complex mathematics. Like Hoffman, the creators of these new models confuse correlation with causation, and they punish the struggling classes and racial and ethnic minorities. And they back up their analyses with reams of statistics, which give them the studied air of “objective science”.

What is even more frightening is that, as oceans of behavioural data continue to feed straight into artificial intelligence systems, this will, for the most part, remain a black box to the human eye. We will rarely learn which classes we have been categorised into, or why we were put there, and these opaque models are often as much a black box to those who design them. In any case, many companies go out of their way to hide the results of their models, and even their existence.

In the era of machine intelligence, most of the variables will remain a mystery... automatic programs will increasingly determine how we are treated by other machines, the ones that choose the ads we see, set prices for us, line us up for a dermatologist appointment, or map our routes. They will be highly efficient, seemingly arbitrary, and utterly unaccountable. No one will understand their logic or be able to explain it. If we don’t wrest back a measure of control, these future WMDs will feel mysterious and powerful. They’ll have their way with us, and we’ll barely know it is happening. [173]

In the current US insurance system, the auto insurers’ tracking devices, which supply insurers with the information to build more powerful predictions, are opt-in. Only those willing to be tracked have to turn on their black boxes. Those who do get rewarded with discounts, while the rest subsidise those discounts with higher rates. Insurers who squeeze the most intelligence out of this information, turning it into profits, will come out on top. This, unfortunately, undermines the whole idea of the collectivisation of risk on which insurance is based. The more insurers benefit from such data, the more of it they demand, gradually making trackers the norm. Consumers who want to withhold all but the essential information from their insurers will pay a premium. Privacy, increasingly, comes at a cost. A recently approved US bill illustrates just that: it would expand the reach of “wellness programs” to include genetic screening of employees and their dependents, and increase the financial penalties for those who choose not to participate.

Being poor in a world of WMDs is getting more and more dangerous and expensive. Even privacy is increasingly becoming a luxury that only the wealthy can afford. In what O’Neil calls the ‘data economy’, where artificial intelligence systems are hungry for our data, we are left with few options but to produce and share as much data about our lives as possible. In the process we are, implicitly or explicitly, coerced into self-monitoring and self-discipline, as we continually attempt to conform to the ideal bodies and “normal” health statuses dictated by the organisations and institutions that handle and manage, say, our health insurance. Raley (2013) calls this dataveillance: a form of continuous surveillance through the use of (meta)data. The ever-growing flow of data, including data pouring in from the Internet of Things – the Fitbits, Apple Watches, and other sensors that relay updates on how our bodies are functioning – continues to feed this dataveillance.

One might argue that helping people deal with their weight and health issues isn’t such a bad thing, and that would be a reasonable argument. The key question here, as O’Neil points out, is whether this is an offer or a command. Using flawed statistics like the BMI, which O’Neil calls “mathematical snake oil”, corporations dictate what the ideal health and body look like. They infringe on our freedom as they mould our health and body ideals. They punish those they don’t like to look at and reward those who fit their ideals. Such exploitation is disguised as scientific and legitimised through the use of seemingly scientific numerical scores such as the BMI. The BMI – a person’s weight (kg) divided by the square of their height (m) – is only a crude numerical approximation of physical fitness. And since the “average” man underpins its statistical scores, it is more likely to conclude that women are “overweight” – after all, we are not “average” men. Even worse, black women, who often have higher BMIs, pay the heaviest penalties.
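For concreteness, here is the formula with the conventional cut-offs (heights in metres); the example figures are invented, and the sketch only shows how little information the single number carries.

```python
def bmi(weight_kg, height_m):
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def category(b):
    # Conventional cut-offs; note they encode nothing about muscle,
    # bone density, sex, or ethnicity.
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal"
    if b < 30:
        return "overweight"
    return "obese"

b = bmi(80, 1.80)  # 80 / 1.8**2, roughly 24.7
print(round(b, 1), category(b))  # -> 24.7 normal
```

A muscular 90 kg athlete of the same height scores about 27.8 and is labelled “overweight” – the score cannot tell muscle from fat.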

The control of great amounts of data and the race to build powerful algorithms is a fight for political power. O’Neil’s breathtakingly critical look at corporations like Facebook, Apple, Google, and Amazon illustrates this. Although these powerful corporations are usually focused on making money, their profits are tightly linked to government policies which makes the issue essentially a political one.

These corporations hold significant power and a great amount of information on humanity, and with that, the means to steer us in any way they choose. The activity of a single Facebook algorithm on Election Day could not only change the balance of Congress but potentially decide the presidency. When you scroll through your Facebook updates, what appears on your screen is anything but neutral: your newsfeed is censored. Facebook’s algorithms decide whether you see bombed Palestinians or mourning Israelis, a policeman rescuing a baby or battling a protester. One might argue that television news has always done the same and that this is nothing new. CNN, for example, chooses to cover a certain story from a certain perspective, in a certain way. The crucial difference is that with CNN the editorial decision is clear and on the record. People can debate whether that decision is the right one. Facebook, on the other hand, as O’Neil puts it, is more like the “Wizard of Oz”: we do not see the human beings involved. With its enormous power, Facebook can affect what we learn, how we feel, and whether we vote – and we are barely aware of any of it. What we know about Facebook, as about the other internet giants, comes mostly from the tiny proportion of their research that they choose to publish.

In a society where money buys influence, the victims of WMDs are nearly voiceless. Most are disenfranchised politically. The poor are hit the hardest and all too often blamed for their poverty, their bad schools, and the crime that afflicts their neighbourhoods. They largely lack economic power, access to lawyers, and well-funded political organisations to fight their battles. WMDs serve as tools for everything from bringing down minorities’ credit scores to entrenching sexism in the workplace. The result is widespread damage that all too often passes for inevitability.

Again, it is easy to point out that injustice, whether based on bias or greed, has been with us forever, and that WMDs are no worse than the human nastiness of the recent past. As with the examples above, the difference is transparency and accountability. Human decision-making has one chief virtue: it can evolve. As we learn and adapt, we change. Automated systems, especially those O’Neil classifies as WMDs, by contrast, stay stuck in time until engineers dive in to change them.

If a Big Data college application model had established itself in the early 1960s, we still wouldn’t have many women going to college, because it would have been trained largely on successful men. [204]

Rest assured, the book is not all doom and gloom, nor does it claim that all mathematical models are biased and unfair. In fact, O’Neil provides plenty of examples of models used for good, and of models with the potential to be great.

Whether a model becomes a tool to help the vulnerable or a weapon to inflict injustice, as O’Neil emphasises time and time again, comes down to its central objectives. Mathematical models can sift through data to locate people who are likely to face challenges, whether from crime, poverty, or education. The objectives adopted dictate whether such intelligence is used to reject and punish those who are already vulnerable, or to reach out to them with the resources they need. So long as the objectives remain maximising profit, excluding as many applicants as possible, or locking up as many offenders as possible, these models serve as weapons that further inequality and unfairness. Change the objective from leeching off people to reaching out to them, and a WMD is disarmed – and can even become a force for good. The process begins with the modellers themselves. Like doctors, data scientists should pledge a Hippocratic Oath, one that focuses on the possible misuse and misinterpretation of their models. Additionally, organisations such as the Algorithmic Justice League, which aims to increase awareness of algorithmic bias, provide a space for individuals to report such biases.

Opaqueness is a common feature of WMDs. People have been dismissed from work, sent to prison, or denied loans because of their algorithmic credit scores, with no explanation as to how or why. The more aware we are of this opaqueness, the better chance we have of demanding transparency and accountability, and that begins by acquainting ourselves with the work of experts like O’Neil. This is not a book only for those working in data science, machine learning or related fields; it is one that everyone needs to read. If you are a modeller, it should encourage you to zoom out, to remember that there are individuals behind the figures your algorithms manipulate, and to think about the big questions, such as the objectives behind your code. Almost everyone, to a greater or lesser extent, is part of the growing ‘data economy’. The more awareness there is of the dark side of these machines, the better equipped we are to ask questions and to demand answers from those behind the machines that decide our fate.

Solitary confinement deprives us of dialogicity and therefore of a coherent sense of self

I recently came across this extremely powerful and disturbing three-minute video on solitary confinement, and given my dialogically informed perspective, it made me reflect (as well as initiate conversations with others ;-)) on the concepts of self, other and world. Solitary confinement, which can be seen as the absence of dialogicity, seems to have a devastating effect on the sense of self, and this video, I think, is the material affirmation.

I don’t think there can be much disagreement about the disturbed state of most of the prisoners in the video. Solitary confinement disrupts our self-narratives, for self-narratives depend on having something to narrate as well as an ‘other’ to narrate it to. Alexis de Tocqueville and Charles Dickens described prisoners in isolation cells as “buried alive” and subjected to an “immense amount of torture and agony” through a “slow and daily tampering with the mysteries of the brain”. Looking at solitary confinement from a phenomenological perspective, Gallagher (2004) identified a long list of experiences associated with it:

“anxiety, fatigue, confusion, paranoia, depression, hallucinations, headaches, insomnia, trembling, apathy, stomach and muscle pains, oversensitivity to stimuli, feelings of inadequacy, inferiority, withdrawal, isolation, rage, anger, and aggression, difficulty in concentrating, dizziness, distortion of the sense of time, severe boredom, and impaired memory.”

There is little disagreement, if any at all, that solitary confinement is cruel and damaging. However, as Foucault reminds us in Discipline and Punish, the original purpose of solitary confinement was to allow the prisoner to reflect on his crimes and return to his inner ‘true’ self. Given time to introspect in solitary confinement, the prisoner was expected to turn his thoughts inward, repent his crimes, and eventually return to society as a morally cleansed citizen.

“Thrown into solitude, the convict reflects. Placed alone in the presence of his crime, he learns to hate it, and, if his soul is not yet blunted by evil, it is in isolation that remorse will come to assail him”

(Tocqueville in Foucault 1979: 237)

At the centre of this viewpoint lies an underlying assumption that views the individual as something that exists, reasons and functions in isolation from others – a notion of the individual as self-sufficient and self-contained, in which the necessary interrelatedness of self, other, and world is overlooked. For philosophers such as Gardiner, this individualistic notion of the self is something we have adopted from the Western Christian notion of the soul via Cartesian-inspired philosophies. Contrary to this notion of solitude (built upon individualistic assumptions) as a means of returning to the inner self, when we are deprived of contact and interaction with others, the very core of our existence is threatened.

‘‘Just as the body is formed initially in the mother’s womb (body), a person’s consciousness awakens wrapped in another consciousness … Individuality is created by and through others and the Other is part of the self.”

(Bakhtin, 1990)

Coming back to my brief musing: the fact that our sense of self seems to erode when we are deprived of interaction with others reinforces the Bakhtinian dialogical viewpoint that self and other co-develop and are two sides of the same coin. It is through our dialogical and embodied interactions with others that we are able to form and sustain a coherent sense of self. Others are essentially involved in all social and individual lived experience. Through our encounters with others, we are able to evaluate and assess our own existence. Depriving a person of ‘others’ by subjecting them to solitary confinement denies them that essential additional, external perspective – the means by which a coherent self-image is maintained – and the person risks losing the ‘self’ and disappearing into non-existence.

Bakhtin, Merleau-Ponty, and the Cartesian subject

 

To what extent are the modernist conceptions of the subject Cartesian? What of our sciences, especially the human sciences, and the knowledge that emerges from them? And how can we overcome these lingering Cartesian residues? Gardiner, in ‘“The incomparable monster of solipsism”: Bakhtin and Merleau-Ponty’, explores these questions (and more). This post is an attempt to provide a brief review of that paper.

The subject, at least in Western metaphysics, is, according to Gardiner, narcissistic, for it is shadowed by the Cartesian view, which yields a total self-determinism and self-grounding. Our capacities for abstract thinking are privileged at the expense of embodied dialogism. The production of knowledge, according to Western metaphysics, is rooted in the solitary subject contemplating an external world in a purely cognitive manner, as a disembodied observer. The locus of classical modernity, Gardiner argues, is captured by the overwhelming desire for epistemological certitude and logical coherence in its drive to establish absolute certainty. In attempting to establish this lucidity and certainty, a complex, multivalent and ambiguous reality is substituted with crystalline logic and conceptual rigour. Our obsession with transcribing the world into pure algorithmic language, as if the external world presented itself as a collection of inert facts, is, according to Gardiner, the epitome of Cartesianism. Merleau-Ponty describes this as “a nightmare to which there is no awakening”.

It is, however, important to note that the status of the human sciences has evolved considerably since the Enlightenment, and since Popper, falsifiability – not certitude and coherence – has been the hallmark of science. Descartes, with his emphasis on doubt, stands at the beginning of this tradition, and as Ian Shapiro would argue, the locus classicus of modernity is in fact doubt, scepticism, and falsifiability.

According to Gardiner, the Cartesian self poses a threat to dialogical values and what they espouse. By seeing the world as a projection of its cognitive capacities, it leaves no room for recognising otherness. Not only is the body alien to this subject; other selves are equally mysterious, and it can have no authentically dialogical relationship with them. It is by adopting a dialogical worldview that we are able to capture the interactive nature of bodies and selves as they co-exist within a shared lifeworld. Gardiner asserts that Bakhtin and Merleau-Ponty agree that modern Western thought, Platonism being the archetypal example, is dominated by perspectives that reject the validity of the body and its lived experience in favour of theoretical constructions. The utilitarian character of modern science and technology, and of abstract idealist philosophy, reflects this. The tradition in which arguments are framed and debated in the philosophy of mind, where philosophical zombies and Martian c-fibres often take centre stage, illustrates this further.

Gardiner argues that the privileging of purely cognitive abilities results in a tendency to equate the self with subjective mental processes. This comes at the price of an abstract subject that dispassionately contemplates from afar. Bakhtin insists that relation to the other requires the presence of a value-positing consciousness, not a disinterested, objectifying gaze. Without the interactive context connecting self, other, and world, the subject slips into solipsism, loses the ground of its Being and becomes empty. For Merleau-Ponty, the world is always in Heraclitean flux, constantly transforming and becoming, never static and self-contained. Nor is our relation with others a purely cognitive affair. World and body exist in a relation of overlapping. My senses reach out to the world and respond to it, actively engaging with it. They shape and configure it, just as the world at the same time reaches deep into my sensory Being. The perceptual system is not a mere mechanical apparatus serving representational thinking to produce refined concepts and ideas; it is radically intertwined with the world itself. Self-perception, according to Merleau-Ponty, is not merely cognitive but also corporeal. As I experience the world around me, I am simultaneously an entity in the world. I can hear myself speaking.

The world is presented to me in a deformed manner. My perspective is skewed by the precise situation I occupy at a particular point in time and space, by the idiosyncrasies of the psychosocial and historical context of my existence. Since I am thrown into a world lacking intrinsic significance, and I have to make that world meaningful, I am condemned to make continual value judgements and generate meanings. I can never possess the totality of the world through an intellectual grasp of my environment; thus my knowledge of the experiential world is always constrained and one-sided. Because the meaning of the world is, for each of us, constructed from the vantage point of a uniquely embodied viewpoint, no two individuals experience the world in precisely the same way. An encounter with other selves is necessary to gain a more complete perspective on the world. I am never my own light to myself. It is through the encounter with another self that I gain access to an external viewpoint through which I am able to visualise myself as a meaningful whole, a gestalt.

Gardiner argues that this is how we can escape solipsism: through an apprehension of oneself in the mirror of the other, a vantage point that enables one to evaluate and assess one’s own existence and construct a coherent self-image. To be able to conceptualise myself as a meaningful whole, which is fundamental to self-individuation and self-understanding, I need an additional, external perspective. By looking through the other’s soul, I vivify my exterior and make it part of the plastic, pictorial world.

We need a philosophy that understands nature as a dynamic, living organism that is ‘pregnant with potentials’. As embodied subjects, we are intertwined with the world, bound up with its dynamic cycles and processes of growth and change. Insofar as our minds are incarnate and our bodies necessarily partake in physical and biological natural processes, there is an overlap of spirit and matter, subject and object, nature and culture. There is no break in the circuit; it is impossible to say where nature ends and the subject begins. The self is a dynamic, embodied, and creative entity that strives to attribute meaning and value to the world. By Being-in-the-World we are forced to make choices and value judgements that transform the world as it is given into a world-for-me. In making the world a meaningful place, the subject actively engages with and alters its lived environment. I and other co-mingle in the ongoing event of Being. The self, as Bakhtin points out, is ‘unfinalizable’ – continually re-authored as circumstances change.

Gardiner concludes that both Merleau-Ponty and Bakhtin object to the ‘primacy of intellectual objectivism’ taken as the model of intelligibility that shapes Western philosophy and the sciences that emerge from it. Such objectification of the world in modernist paradigms represents a retreat from lived experience. Genuinely participative thinking and active engagement require an engaged, embodied relation to the other and to the world at large. Our capacity for abstract cognition and representational thinking is incapable of grasping the linkage between myself and the other within the fabric of everyday social life – hence the solipsistic consequences of subjectivistic idealism. As Bakhtin’s ‘carnal hermeneutics’ – the dialogical character of human embodiment – emphasises, the incarnated self can only be affirmed through its relation with the other. The body is not self-sufficient: it needs the other’s recognition and form-giving activity.

Science! It works, bitches!


Science is constantly pushing the boundaries of what can be known, and it is the best available tool we have for producing reliable knowledge. Scientifically produced knowledge is often taken as legitimate, objective, unbiased and value-free. A scroll through some scientists’ Twitter feeds shows how many of them present the knowledge that science produces as ultimate fact. Arguably, knowledge grounded in science is perceived as the most authoritative kind, the kind all others should aspire to – qualified to legitimately dictate correct from incorrect, right from wrong, and treated as the standard against which all other forms of knowledge should be measured.

This form of knowledge is often presented in sharp contrast with knowledge that is dogmatic and ideological, as if the two were neatly separable. Those who are reasonable and educated are seen as free from ideologies and dogmas, while those who attempt to dispute this so-called fact are often portrayed as anti-science – typically snarled at with “Don’t take it personally, it’s science; you can’t argue with the facts”.

Don’t get me wrong! I love science. Science is wonderful and yes, as far as consensual ways of producing knowledge go, science may be the best tool we have. What I object to is the idea of scientific knowledge as completely objective, free from any values, ideologies and biases. There is no such thing as ‘a view from nowhere’, and science and scientists are not immune to this. The notion of science as completely separable from ideologies, biases and currently available discourses – a tool by which we objectively discover what is out there – is simply naive. Nor is science free from theoretical commitments, or from the epistemological and ontological assumptions on which experiments are founded.

The methods we choose to investigate with (and, by implication, those we choose to ignore) are central to the kind of knowledge we produce. Such methods are tied to underlying theoretical commitments, which are in turn embedded in certain ontological and epistemological assumptions. How something is defined has great influence on what conclusion one arrives at, and how scientists analyse and interpret data can be greatly influenced by their preconceived notions. These two studies on sex differences in the brain, arriving at almost opposite conclusions despite comparable data, show just that.

Science as a way of establishing facts gets fuzzier and messier as we move away from the natural sciences and towards studies of human cognition and behaviour. The more socially constructed a concept seems, the more problematic it becomes to present any claim of knowledge about it as the truth or an established fact. This is evident in the multiple, equally plausible theories and research findings explaining concepts such as emotions or happiness – not to mention the difficulty of defining these concepts in a manner that scientists agree upon. The very act of defining the concept or phenomenon that scientists are trying to get hold of brings with it cultural, historical, and ideological baggage.

We operate within a certain cultural context and are situated in a certain geographical location at a certain time in history, where certain ways of practicing science are more acceptable than others. How we frame the way we think about certain things, as well as the methods we develop to explore these questions, is inseparably tangled with these factors. Beyond our historical and cultural past, our own perspective is coloured by our immediate interaction with those around us. The very language we use to formulate our hypotheses predetermines, to some extent, the direction our research follows. For example, despite their underlying similarity, the questions “Are you pro-choice?” and “Do you object to the idea of murdering unborn children?” will elicit different responses.

This messy picture of science, in which the objective and the subjective are not neatly separable, makes attempts to develop so-called objective approaches to socially constructed behaviours such as criminality questionable. What kind of behaviour is criminal? In which society? At what time? There is no simple, universally agreed definition of crime. A brief look at homosexuality, which has gone from being treated as a criminal act to being recognised (in most Western societies, anyway) as a right, shows how slippery and context-dependent the very idea of what counts as criminal behaviour is. Any attempt to understand drug-related crimes, for example, shows how unclear the idea of crime can be – both snorting cocaine and smoking cannabis being, legally speaking, criminal acts in certain parts of the world.

I am not arguing that all science is biased or that the work scientists have been doing is of no use. The point I want to make is that how we come to conceptualise certain phenomena in one way and not another does not spring out of nowhere but is inextricably linked to our language, the current dominant theories, the discourses available to us, and our history, among other things. We therefore need to be aware that our science is (implicitly or explicitly) influenced, and to some extent determined, by these factors – some fields more than others. And since the scientist is not a robot devoid of passion, interests, errors, and biases (nor should she be – some passion and interest in our research is important), the least we can do is acknowledge this, actively question whether such influences have clouded our views, and be mindful of presenting any generalizable claims we make as objective facts.

What makes me, me?

What makes me like coffee over tea? Why do some people engage in criminal activities? What is it that makes some people rapists? What are the sources of bullying behaviour?

Wouldn’t it be wonderful if we could find simple explanations for such complex questions? Psychology is constantly trying to explain complex behaviour. It’s not uncommon to hear explanations from psychologists, neuroscientists, social scientists, criminologists, and the like – usually each from their own perspective – asserting why we prefer one thing over another, why we are repulsed by certain things, or why we behave the way we do. These explanations often invoke factors such as parenting style, genes, environment, history, culture and so on, depending on the perspective from which the subject has been approached.

Arguably, explanations that focus closely on certain factors and not others serve a purpose in narrowly defined investigations. The problem is that, in attempting to explain complex behaviour, we often fall prey (knowingly or unknowingly) to false dichotomies. Despite constant warnings against them, it is common to read scientific papers attempting, for example, to ascribe influence to genes as opposed to environment when seeking to understand the effect of parenting on the kind of person we grow up to be.

The “person” is an extremely slippery and difficult concept to pin down. What makes me ‘me’ is extremely fuzzy (and constantly changing), to the extent that it cannot be separated from those around me, my historical background, the culture and time I am situated in, and the dynamical interactions at play. We are constantly and dynamically interacting with and influencing those around us and the physical environment, as well as being influenced by them. My view of what constitutes criminal behaviour, for example, does not spring into being from nowhere. Rather, it is an interplay of many factors: the currently available discourse; my political, social, economic, and geographical position in a certain society; the kind of shared language available for use; and my family, cultural and historical background.

Given that we are constantly in a process of becoming, mediated by the dynamical interplay of inextricably linked factors such as culture, genes, physical environment, history, currently available discourses, local societal norms, diet and so on, attempting to separate these factors and claiming to have determined the contribution that genes and/or environment make towards complex behaviour such as criminality would be like claiming to have successfully separated the inside and outside of a Möbius strip.

Interdisciplinary research through the lens of emotion studies

Interdisciplinarity has become a buzzword, especially within the arts and human sciences. The number of journal articles in philosophy with ‘interdisciplinary’ in the title rose from 14 in 1987 to 1,570 in 2014. However, a lot of confusion remains as to what exactly interdisciplinarity entails, how it is best pursued, and what kind of research best represents it. Interdisciplinarity often refers to a field of enquiry that goes beyond distinct disciplinary boundaries, combining two or more academic disciplines to provide an understanding or explanation of certain subjects or phenomena – an approach that fluidly crosses disciplinary boundaries and relates to more than one branch of knowledge.

The definition of emotion is immensely varied and controversial. In fact, most of the controversies in the study of emotions can be said to arise partly from the definition one adopts. This is evident in the literature, where studies that address quite different phenomena all claim to address emotion. This is not to say that these approaches lack merit: each makes a valid argument within its own framework and contributes in its own way. However, the lack of consensus and the immense disagreement surrounding the study of emotions make an interdisciplinary approach seem perplexing. How should we synthesize and find a middle ground when faced with a variety of competing and at times incommensurable perspectives? Such questions are often left to the philosopher.

Interdisciplinarity, within the human sciences at least, is heavily tied to philosophy. Fuller (2013), for example, places great emphasis on the philosopher as an architect who designs the blueprint for the individual disciplines to fill in with empirical evidence, or as an alchemist who makes something remarkable out of ordinary elements taken from the distinct disciplines. The philosopher, it seems, is often cast as the synthesizer. Such an assessment is valid to some extent, given philosophy’s role as a provider of conceptual analysis; it is nonetheless not without question.

Synthesising disciplinary research requires not only the tools and skills to philosophically analyse ontological and methodological assumptions and find a middle ground, but also the disciplinary expertise to read, interpret, and critically analyse what each discipline presents: reading and interpreting empirical evidence, and critically engaging with the theoretical and methodological issues in that discipline’s approach to the subject.

Take emotions, for example, which profoundly colour our everyday experience. Emotions are investigated across a variety of distinct disciplines, ranging from neuroscience, psychology, philosophy and biology to the social sciences, and even within the arena of cognitive modeling. It would appear that to get the fullest picture, one would want an interdisciplinary approach incorporating philosophical analysis of what emotions are and aren’t, empirical evidence from neuroscientific and biological studies, and recognition of the social and cultural subjectivity of emotions, supported by a computational model – although this would without doubt require a great deal of work.

However, although these distinct disciplines seem focused in their investigations of emotions (and are mostly fruitful from their own perspectives), things get messy and difficult when attempts are made to synthesize such a variety of approaches – not least because each discipline’s definition of emotion varies, but also because of methodological differences in the way emotions are investigated and the varied ontological assumptions each discipline adopts. These differences can be so great as to become incompatible. The biological approach, which typically focuses on bodily and facial markers of emotion, aspires to a universal theory, whereas the socially and culturally centred approach sees emotions as inextricably linked to a given society’s culture and language. These two approaches frame emotions in fundamentally different ways, and this influences everything that follows: the kind of questions considered worth asking, the research methods developed, and the way data are interpreted.

The point I want to highlight through the example of emotion studies is that an all-encompassing interdisciplinarity is more difficult and more problematic to practice than it first appears – at least if one wants to do interdisciplinary research in a manner that considers and scrutinises all the deep issues while attempting to synthesize disparate and sometimes incompatible methodological and ontological assumptions. Depending on the viewpoint, certain aspects may be seen as crucial by one discipline and ignored by another, even when dealing with a similar subject matter. Subtle issues – how a certain research question is framed to suit a given methodology, for example – affect the kind of empirical evidence obtained and, if they go unnoticed, may contribute to a confused interdisciplinarity.