
The Algorithmic Colonization of Africa

 

The second annual CyFyAfrica 2019, The Conference on Technology, Innovation, and Society [1], took place in Tangier, Morocco, 7 – 9 June 2019. It was a vibrant, diverse and dynamic gathering attended by policy makers, UN delegates, ministers, governments, diplomats, media, tech corporations, and academics from over 65 nations, mostly African and Asian countries. The conference unapologetically stated that its central aim was to bring the continent’s voices to the table in the global discourse. The president’s opening message emphasised that Africa’s youth need to be put front and centre of African concerns as the continent increasingly relies on technology for its social, educational, health, economic and financial issues. At the heart of the conference was the need to provide a platform for the voices of young people across the continent. And this was rightly so. It needs no argument that Africans across the continent need to play a central role in determining crucial technological questions and answers, not only for their continent but also far beyond.

In the race to make the continent teched-up, there are numerous cautionary tales that the continent needs to learn from; otherwise, we run the risk of repeating them, and the cost of doing so is too high. To that effect, this piece outlines three major lessons that those involved in designing, implementing, importing, regulating, and communicating technology need to be aware of.

The continent stands to benefit from various technological and Artificial Intelligence (AI) developments. Ethiopian farmers, for example, can benefit from crowdsourced data to forecast and improve crop yields. The use of data can help improve services within the health care and education sectors. The country’s huge gender inequalities, which plague every social, political and economic sphere, can be brought to the fore through data. Data that exposes gender disparities in key positions, for example, makes clear the need to change societal structures so that women can serve in those positions. Such data also brings general awareness of inequalities, which is central for progressive change.

Having said that, this is not what I want to discuss here. There already exist countless die-hard technology worshippers, some only too happy to blindly adopt anything “data-driven” and “AI” without a second thought about possible unintended consequences, both within and outside the continent. Wherever the topic of technological innovation comes up, what we constantly find is advocates of technology and attempts to digitise every aspect of life, often at any cost.

In fact, if most of the views put forward by various ministers, tech developers, policy makers and academics at the CyFyAfrica 2019 conference are anything to go by, we have plenty of such tech evangelists – blindly accepting ethically suspect and dangerous practices and applications under the banner of “innovative”, “disruptive” and “game changing”, with little, if any, criticism or scepticism. Given that we have enough tech worshippers holding the technological future of the continent in their hands, it is important to point out the cautions that need to be taken and the lessons that need to be learned from other parts of the world as the continent races forward technologically.

Just like their Silicon Valley counterparts, African tech start-ups and “innovations” can be found in every possible sphere of life in every corner of the continent, from Addis Ababa to Nairobi to Abuja to Cape Town. These innovations span areas such as banking, finance, health care, education, and even “AI for good” initiatives, from companies and individuals both within and outside the continent. Understandably, companies, individuals and initiatives want to solve society’s problems, and data and AI seem to provide quick solutions. As a result, the attempt to fix complex social problems with technology is rife. And this is exactly where problems arise.

In the race over which start-up will build the next smart home system or state-of-the-art mobile banking application, we lose sight of the people behind each data point. The emphasis is on “data” as something that is up for grabs, something that uncontestedly belongs to tech companies, governments, and the industry sector, completely erasing the individual people behind each data point. This erasure of the person behind each data point makes it easy to “manipulate behaviour” or “nudge” users, often towards outcomes profitable for the companies. The rights of the individual, the long-term social impacts of AI systems and the unintended consequences on the most vulnerable are pushed aside, if they ever enter the discussion at all. Be it small start-ups or more established companies that design and implement AI tools, at the top of their agenda is the collection of more data and more efficient AI systems, not the welfare of individual people or communities. Rather, whether explicitly laid out or not, the central aim is to analyse, infer, and deduce users’ weaknesses and deficiencies and how these can be used to the benefit of commercial firms. Products, ads, and other commodities can then be pushed to individual “users” as if they existed as objects to be manipulated and nudged towards certain behaviours deemed “correct” or “good” by these companies and developers.

The result is AI systems that alter the social fabric, reinforce societal stereotypes and further disadvantage those already at the bottom of the social hierarchy, while being insisted upon as politically neutral under the guise of “AI” and “data-driven”. UN delegates addressing online terrorism and counterterrorism measures while exclusively discussing Islamic terrorist groups, despite white supremacist terrorist groups carrying out more attacks than any other group in recent years [2], illustrate one example of how socially held stereotypes are reinforced and wielded in the AI tools being developed.

Although it is hardly ever made explicit, many of the ethical principles underlying AI rest firmly within utilitarian thinking. Even when the unfairness and discrimination that certain groups and individuals suffer as a result of algorithmic decision-making are brought to the fore, solutions that benefit the majority are sought. For instance, women have been systematically excluded from entering the tech industry [3], minorities forced into inhumane treatment [4], and systematic biases have been embedded in predictive policing systems [5], to mention but a few. However, although society’s most vulnerable are disproportionately impacted by the digitization of various services, proposed solutions to mitigate unfairness hardly consider such groups as a crucial part of the solution.

Machine bias and unfairness are issues that the rest of the tech world is grappling with. As technological solutions are increasingly devised and applied to social, economic and political issues, so grow the problems that arise with the digitisation and automation of everyday life. The current attempts to develop “ethical AI” and “ethical guidelines”, both within the Western tech industry and the academic sphere, illustrate an awareness of, and attempts to mitigate, these problems. Key global players in technology – Microsoft [6] and Google’s DeepMind [7] from the industry sector, and Harvard and MIT [8] from the academic sphere – are primary examples of this recognition of the possibly catastrophic consequences of AI on society. As a result, ethics boards and curricula on ethics and AI are being developed.

These approaches to developing, implementing and teaching responsible and ethical AI take multiple forms, perspectives and directions, and emphasise different aspects. This multiplicity of views and perspectives is not a weakness but rather a desirable strength, necessary for accommodating healthy, context-dependent remedies. Insisting on a single framework for the various ethical, social and economic issues that arise across contexts and cultures with the integration of AI is not only unattainable; it amounts to advocating a one-size-fits-all dictatorship rather than a guideline.

Nonetheless, given the countless technology-related disasters and cautionary tales that the global tech community is waking up to, there are numerous crucial lessons that African developers, start-ups and policy makers can learn from. The African continent need not go through its own disastrous cautionary tales to discover the dark side of the digitisation and technologization of every aspect of life.

AI is not magic, and anything that makes it come across as magic needs to be disposed of

AI is a buzzword that gets thrown around so carelessly that it has become increasingly vacuous. What AI refers to is notoriously contested and the term is impossible to define conclusively – and it will remain that way due to the various ways different disciplines define and use it. Artificial intelligence can refer to anything from highly overhyped and deceitful robots [9], to Facebook’s machine learning algorithms that dictate what you see on your News Feed, to your “smart” fridge and everything in between. “Smart”, like AI, has increasingly come to mean devices that are connected to other devices and servers, with little to no attention paid to how such hyperconnectivity at the same time creates surveillance systems that deprive individuals of their privacy.

Over-hyped and exaggerated representations of the current state of the field pose a major challenge. Both researchers within the field and the media contribute to this over-hype. The public is often made to believe that we have reached AGI (Artificial General Intelligence), or that we are at risk of killer robots [10] taking over the world, or that Facebook’s algorithms have created their own language forcing Facebook to shut down its project [11], when none of this is in fact correct. The robot known as Sophia is another example of AI over-hype and misrepresentation, one that shows the disastrous consequences of the lack of critical appraisal. This robot, best described as a machine with some face recognition capabilities and a rudimentary chatbot engine, is falsely described as semi-sentient by its maker. In a nation where women are treated as second-class citizens, the UAE granted this machine citizenship, treating the female-gendered machine better than its own female citizens. Similarly, neither the Ethiopian government nor the media attempted to pause and reflect on how the robot’s stay in Addis Ababa [12] should be covered. Instead, the over-hype and deception were amplified as the robot was treated as some God-like entity.

Leading scholars of the field such as Mitchell [13] emphasise that we are far from “superintelligence”. The current state of AI is marked by crucial limitations, such as the lack of common-sense understanding, a central element of human intelligence. Similarly, Bigham [14] emphasises that in most of the discussion regarding “autonomous” systems (be it robots or speech recognition algorithms), a heavy load of the work is done by humans, often cheap labour – a fact that is set aside because it doesn’t sit well with the AI over-hype narrative.

Over-hype is not only a problem that portrays an unrealistic image [15] of the field, but also one that distracts attention from the real dangers of AI, which are much more invisible, nuanced and gradual than “killer robots”. The simplification and extraction of human experience for capitalist [16] ends, which is then presented as behaviour-based “personalisation”, is a banal-seeming practice on the surface but one that needs more attention and scrutiny. Similarly, algorithmic predictive models of behaviour that infer habits, behaviours and emotions need to be of concern, as most of their inferences reflect strongly held biases and unfairness rather than getting at any in-depth causes or explanations.

The continent would do well to adopt a dose of critical appraisal when presenting, developing and reporting on AI. This requires challenging the mindset that portrays AI as having God-like power, and seeing AI instead as a tool that we create, control and are responsible for – not as something that exists and develops independently of those who create it. Like any other tool, AI embeds and reflects our inconsistencies, limitations, biases, and political and emotional desires, just like a mirror that reflects how society operates: unjust and prejudiced against some individuals and groups.

Technology is never neutral or objective – it is like a mirror that reflects societal bias, unfairness and injustice

AI tools deployed in various spheres are often presented as objective and value-free. In fact, some automated systems put forward in domains such as hiring [17] and policing [18] come with the explicit claim that these tools eliminate human bias. Automated systems, after all, apply the same rules to everybody. Such a claim is in fact one of the single most erroneous and harmful misconceptions as far as automated systems are concerned. As the Harvard-trained mathematician Cathy O’Neil [19] explains, “algorithms are opinions embedded in code”. This widespread misconception further prevents individuals from asking questions and demanding explanations. How we see the world and how we choose to represent the world is reflected in the algorithmic models of the world that we build. The tools we build necessarily embed, reflect and perpetuate socially and culturally held stereotypes and unquestioned assumptions. Any classification, clustering or discrimination of human behaviours and characteristics that our AI systems produce reflects socially and culturally held stereotypes, not an objective truth.

UN delegates working on online counterterrorism measures while explicitly focusing on Islamic groups, despite over 60 percent [20] of mass shootings in the USA in 2019 being carried out by white nationalist extremists, are a worrying example of how stereotypically held views drive what we perceive as a problem and, furthermore, the type of technology we develop.

A robust body of research, as well as countless reports [21] of individual personal experience, illustrates that various applications of algorithmic decision-making result in biased and discriminatory outcomes. These discriminatory outcomes affect individuals and groups that are already on society’s margins – those viewed as deviants and outliers, people who refuse to conform to the status quo. Given that the most vulnerable are the most affected by technology, it is important that their voices are central in the design and implementation of any technology that is used on or around them. Their voices need to be prioritised at every step of the way, including the design, development and implementation of any technology, as well as in policy making.

As Africa grapples with catching up with the latest technological developments while guarding against the consequential harms that technology causes, policy makers, governments and firms that develop and apply various technologies to the social sphere need to think long and hard about what kind of society we want and what kind of society technology drives. Protecting and respecting the rights, freedoms and privacy of the very youth that the leaders want to put at the front and centre should be prioritised. This can only happen with guidelines and safeguards for individual rights and freedoms in place.

Invasion of privacy and the erosion of humane treatment of the human

AI technologies are gradually being integrated into decision-making processes in every sphere of life, including insurance, banking, health and education services. Various start-ups are emerging from all corners of the continent at an exponential rate to develop the next “cutting edge” app, tool or system; to collect as much data as possible and then infer and deduce users’ various behaviours and habits. However, there seems to be little, if any, attention paid to the fact that the digitisation and automation of such spheres necessarily brings its own, often not immediately visible, problems. In the race to come up with the next new “nudge” [22] mechanism that could be used in insurance or banking, the competition to mine the most data seems to be the central agenda. These firms take it for granted that such “data”, which is out there up for grabs, automatically belongs to them. The discourse around “data mining” and a “data-rich continent” shows the extent to which the individual behind each data point remains non-existent. This removal of the individual (an individual with fears, emotions, dreams and hopes) behind each data point is symptomatic of how little attention is given to privacy concerns. This discourse of “mining” people for data is reminiscent of the coloniser attitude that declares humans as raw material free for the taking.

Data is necessarily always about something and never about an abstract entity. The collection, analysis and manipulation of data possibly entails monitoring, tracking and surveilling people. This necessarily impacts them, directly or indirectly, whether through changes to their insurance premiums or refusal of services.

AI technologies that aid decision-making in the social sphere are, for the most part, developed and implemented by the private sector and various start-ups, whose primary aim is to maximise profit. Protecting individual privacy rights and cultivating a fair society is therefore the least of their agenda, especially if such practices get in the way of “mining” data, freely manipulating behaviour and pushing products onto customers. This means that, as we hand over decision-making regarding social issues to automated systems developed by profit-driven corporations, not only are we allowing our social concerns to be dictated by corporate incentives (profit), but we are also handing over moral questions to the corporate world. “Digital nudges”, behaviour modifications developed to suit commercial interests, are a prime example. As “nudging” mechanisms become the norm for “correcting” individuals’ behaviour, eating habits or exercise routines, the corporations, private-sector actors and engineers developing automated systems are bestowed with the power to decide what the “correct” behaviour, eating habit or exercise routine is. Questions such as who decides what the “correct” behaviour is, and for what purpose, are often completely ignored. In the process, individuals who do not fit our stereotypical image of a “fit body”, “good health” and “good eating habits” end up being punished, ostracised and pushed further to the margins.

The use of technology within the social sphere often, intentionally or accidentally, focuses on punitive practices, whether it is to predict who will commit the next crime or who will fail to pay their mortgage. Constructive and rehabilitative questions, such as why people commit crimes in the first place or what can be done to rehabilitate and support those who have come out of prison, are almost never asked. Technological developments built and applied with the aim of bringing security and order necessarily bring cruel, discriminatory and inhumane practices to some. The cruel treatment of the Uighurs in China [23] and the unfair disadvantaging of the poor [24] are examples in this regard.

The question of the technologization and digitalisation of the continent is also a question of what kind of society we want to live in. African youth solving their own problems means deciding what we want to amplify and show the rest of the world. It also means not importing the latest state-of-the-art machine learning systems or any other AI tools without questioning their underlying purpose, who benefits, and who might be disadvantaged by their application. Moreover, African youth playing in the AI field means creating programs and databases that serve various local communities, not blindly importing Western AI systems founded upon individualistic and capitalist drives. In a continent where much of the narrative is dominated by negative images such as migration, drought, and poverty, using AI to solve our problems ourselves means using AI in a way that reflects how we want to understand who we are and how we want to be understood and perceived: a continent where community values triumph and nobody is left behind.


In Defence of Uncertainty – Bibliography

 

I gave a talk with the above title at the NewCrafts Paris 2019 conference and was asked for the bibliography underlying the content of my talk, so here it is. I have included the abstract below to provide some context. I might also write a blog post sometime in the future, so watch this space. 🙂

Abstract: Imagine a world where we are able to predict people’s behaviour with precision. A world, for example, where we can tell whether someone is going to commit a crime before they do. A lot of our problems would just disappear. The quest for absolute certainty has been at the top of Western science’s agenda. In a similar fashion, current technological developments tend to strive for generalizability and predictability. We value certainty, stability and uniformity, whereas most of reality, instead of being orderly and stable, is seething with change, disorder and process. People, far from being predictable and predetermined, are complex, social and dynamical beings that inherently exist in a web of relations. This talk discusses how absolute certainty is not only an unattainable goal so far as understanding people and the social world is concerned, but also a dangerous state to aspire to.

 

In Defence of Uncertainty – Bibliography

  • Amazon scraps secret AI recruiting tool that showed bias against women
  • Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias. ProPublica.
  • Ajunwa, I., Friedler, S., Scheidegger, C. E., & Venkatasubramanian, S. (2016). Hiring by algorithm: Predicting and preventing disparate impact. Available at SSRN.
  • Bakhtin, M. M. (2010). The Dialogic Imagination: Four Essays (Vol. 1). University of Texas Press.
  • Barad, K. (2007). Meeting the universe halfway: Quantum physics and the entanglement of matter and meaning. Duke University Press.
  • Baumer, E. P., & Silberman, M. (2011, May). When the implication is not to design (technology). In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 2271-2274). ACM.
  • Birhane, A. (2017). Descartes was wrong: “a person is a person through other persons”. Aeon Ideas.
  • Descartes, R. (2013). René Descartes: Meditations on first philosophy: With selections from the objections and replies. Cambridge University Press.
  • Ferryman, K., & Pitcan, M. (2018). Fairness in Precision Medicine. Data & Society.
  • Foucault, M. (2002). The order of things: An archaeology of the human sciences. Psychology Press.
  • Gonen, H., & Goldberg, Y. (2019). Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them. arXiv preprint arXiv:1903.03862.
  • Google ‘genuinely sorry’ after app labels dark-skinned people as ‘gorillas’ | CBC News
  • Holquist, M. (2003). Dialogism: Bakhtin and his world. Routledge.
  • Introna, L. D., & Nissenbaum, H. (2000). Shaping the Web: Why the politics of search engines matters. The information society, 16(3), 169-185.
  • Juarrero, A. (1999). Dynamics in action: Intentional behavior as a complex system. Cambridge, MA: MIT Press.
  • Marková, I. (2016). The dialogical mind: Common sense and ethics. Cambridge University Press.
  • Maturana, H. R., & Poerksen, B. (2004). From being to doing. The origins of the biology of cognition.
  • Mbiti, J. S. (1990). African religions & philosophy. Heinemann.
  • Morson, G. S., & Emerson, C. (Eds.). (1989). Rethinking Bakhtin: extensions and challenges. Northwestern University Press.
  • New Study Uses Machine Learning to Predict Sexual Orientation
  • O’Neil, C. (2016) Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.
  • Online Etymology Dictionary – Root word of science
  • Prigogine, I., & Stengers, I. (1985). Order out of Chaos.
  • Richardson, R., Schultz, J., & Crawford, K. (2019). Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice. New York University Law Review Online, Forthcoming.
  • They’re watching, and they know a crime is about to take place before it happens
  • Von Foerster, H. (2007). Understanding understanding: Essays on cybernetics and cognition. Springer Science & Business Media.
  • Von Foerster, H., & Poerksen, B. (2002). Understanding systems: Conversations on epistemology and ethics (Vol. 17). Springer.
  • Wang, Y., & Kosinski, M. (2018). Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of personality and social psychology, 114(2), 246.
  • Wilson, B., Hoffman, J., & Morgenstern, J. (2019). Predictive Inequity in Object Detection. arXiv preprint arXiv:1902.11097.

Integrating (not “adding”) ethics and critical thinking into data science

There is no area untouched by data science and computer science. From medicine, to the criminal justice system, to banking & insurance, to social welfare, data-driven solutions and automated systems are proposed and developed for various social problems. The fact that computer science is intersecting with various social, cultural and political spheres means leaving the realm of the “purely technical” and dealing with human culture, values, meaning, and questions of morality; questions that need more than technical “solutions”, if they can be solved at all. Critical engagement and ethics are, therefore, imperative to the growing field of computer science.

And the need for ethical and critical engagement is becoming more apparent as not a day goes by without a headline about some catastrophic consequence of careless practices, be it a discriminatory automated hiring system or the implementation of a facial recognition system that undermines privacy. With the realization that the subjects of enquiry of data science delve deep into the social, cultural and political has come the attempt to integrate (well, to at least include) ethics as part of computer science modules (e.g., at Harvard and MIT). The central players in the tech world (e.g., DeepMind) also seem to be moving in the direction of taking ethics seriously.

However, even when the need to integrate ethics and critical thinking into computer science is acknowledged, there are no established frameworks, standards or consensus on how it ought to be done, which is not surprising given that the practice is at its early stages. Oftentimes, the idea of ethics is seen as something that can be added to existing systems (as opposed to a critical and analytical approach that goes beyond and questions current underlying assumptions), or as some sort of checklist (as opposed to an aspiration that students need to adopt as part of normal professional data science practice beyond satisfying module requirements) with a list of dos and don’ts that can be consulted… and ta da! You have ethics!

In this blog, we share our approach to integrating critical thinking and ethics into data science as part of the Data Science in Practice module in UCD Computer Science. The central aspiration of this class is to stimulate students to think critically, to question taken-for-granted assumptions and to open various questions up for discussion. Central to this is the idea of viewing critical thinking and ethics as an important aspect of data science practice rather than a list of dos and don’ts that can be taught in class – and of seeing irresponsible and unethical outcomes and practices as things that affect us as individual citizens and shape society for the worse.

The class does not teach some set of ethical foundations that need to be followed, or ethical and unethical ways of doing data science. Rather, we present various ethical issues as open questions for discussion, and the class is given current tools and automated systems – PredPol, for example – and asked to point out possible issues. The class, therefore, is extremely interactive throughout.

The structure of the module (Data Science in Practice) is that students work in pairs on data science projects of their choice. Depending on the type of question the students choose to tackle, some projects require extensive critical and ethical reflection, while others less so. Nonetheless, all the projects are required to include an “Ethical Considerations” section in their final report. This section ideally reflects possible ethical issues that they came across while working on their chosen project and the ways they mitigated such issues, as well as issues that could be anticipated as emerging from the work but lie outside their control.

At the start of the module we have a general three-hour Critical Thinking and Data Ethics lecture. The content is described below for those interested. Given that it is a data science module, the first half of the session thematically raises data-related ethical questions and critical reflection, while during the second half the theme is ethics and AI, specifically automated systems.

There are countless ways to approach this, a vast amount of material to include and many ways to design, frame and direct the conversation. Our specific way is simply one of them. It fits the module, the department, and the students’ backgrounds, and aligns with the module aims and expected outcomes. These factors are likely to differ in other institutes and modules. If you find this helpful, that’s great. If not, we hope that this blogpost provided you with some food for thought.

The central content is thematized in two parts as follows. Read along if you are interested in the details. You can also email Abeba.birhane@ucdconnect.ie if you would like the slides.

Part I

  • Looking back: hidden history of data

  • Unquestioned assumptions and (mis)understandings of the nature of data: a critical look

    • Data reflect objective truth:
    • Data exist in a historical, societal, and cultural vacuum:
    • The data scientist is often invisible in data science:
  • Correlation vs causation

  • GDPR

Part II

  • Bias: automated systems

  • Data for Good

 

The first challenge is to establish why critical thinking and data ethics are important for data scientists. This is the second year that this module is running, and one of the lessons learned from last year is that not everybody is on board from the get-go with the need for critical thinking and data ethics in data science. Therefore, although it might seem obvious, it is important to try to get everyone on board before jumping in. This is essential for a productive discussion. The students are likely to engage and have an interest if they first and foremost see why it is important. Examples of previous and current data science related disasters (Cambridge Analytica and Facebook, for example), the fact that other major computer science departments are doing it, and the fact that an “Ethical Considerations” section needs to be included in the students’ final reports serve to get them on board.

Looking back: hidden history of data

With the convincing out of the way, a brief revisit of the dark history of data and the sciences provides a vivid and gruesome example of the use and abuse of the most vulnerable members of society in the name of data for medical advancement. Nazi-era medical experiments serve as primary examples. Between 1933 and 1945, German anatomists across 31 departments accepted the bodies of thousands of people killed by the Hitler regime. These bodies were dissected in order to study anatomy. The (in)famous Hermann Stieve (1886 – 1952) got his “material” for research, i.e. people the Nazis sentenced to death for minor crimes such as looting, from Plötzensee Prison. Stieve’s medical practices are among the most ethically harrowing. However, he is also seen as “a great anatomist who revolutionized gynaecology through his clinical-anatomical research.”

The question remains: how should we view research that is scientifically valuable but morally disturbing? This question elicits a great deal of discussion in class.

Ethical red flags and horrifying consequences are much more visible and relatively immediate in medical anatomy research. With data and data-driven automated decision-making, by contrast, the effects and consequences are much more nuanced and invisible.

At this point, another open question is posed to the class: what makes identifying and mitigating ethical red flags in data science much more difficult and nuanced than in the medical sciences?

Unquestioned assumptions and (mis)understandings of the nature of data: a critical look

Data reflect objective truth:

The default thinking within data and computer science tends to assume that data are automatically objective. This persistent misconception that data are an objective form of information that we simply find “out there”, observe and record, obscures the fact that data can be as subjective as the humans finding and recording them. Far from reflecting objective truth, data are political, messy, often incomplete, sometimes fake, and full of complex human meanings. Gitelman’s (2013) book “Raw Data” Is an Oxymoron is an excellent resource in this regard: “Data is anything but ‘raw’… we shouldn’t think of data as a natural resource but as a cultural one that needs to be generated, protected, and interpreted.”

Crawford also concisely summarizes the problem with the view of data as reflection of objective truth:

“Data and data sets are not objective; they are creations of human design. We give numbers their voice, draw inferences from them, and define their meaning through our interpretations. Hidden biases in both the collection and analysis stages present considerable risks, and are as important to the big-data equation as the numbers themselves.”

(Crawford, 2013)

At almost every step of the data science process, the data scientist makes assumptions and decisions based on those assumptions. Consequently, any results that emerge are fundamentally biased by these assumptions. The assumptions might be reasonable or they might not. This means that data scientists must be transparent about them. The problem is that, oftentimes, data scientists neither make their assumptions clear nor think about them at all.

Data exist in a historical, societal, and cultural vacuum:

Far from reflecting objective truth, data often reflect historical inequalities, norms and cultural practices. Our code then picks up these inequalities and norms, which are taken as the “ground truth”, and amplifies them. As a result, women getting paid less than men, for example, comes to be taken as the norm and gets amplified by algorithmic systems. Amazon’s recently discontinued hiring AI is a primary example. The training data for Amazon’s hiring algorithm was historical data – CVs submitted to the company over the previous 10 years. In this case, previous success is taken as an indication of future success. And in the process, CVs that didn’t fit the criteria for past success (those of women) were eliminated from consideration. This type of predictive system works under the assumption that the future looks like the past. However, this is problematic, as people and societies change over time. Algorithmic decision-making like Amazon’s creates algorithm-driven determinism, where people are deprived of being the exception to the rule.
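To make the mechanism concrete, here is a minimal, hypothetical sketch in Python (entirely synthetic data and a generic classifier – this is not Amazon’s system) of how a model trained on biased historical hiring decisions reproduces that bias, even when skill is distributed identically across groups:

    # Hypothetical sketch: training on biased historical labels reproduces the bias.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    skill = rng.normal(size=n)              # what *should* determine hiring
    gender = rng.integers(0, 2, size=n)     # 1 = woman; should be irrelevant

    # Historical decisions penalised women regardless of skill.
    hired = (skill - 1.5 * gender + rng.normal(scale=0.5, size=n)) > 0

    # Train on the biased labels (gender, or any proxy for it, sits in the features).
    model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)

    # Two candidates with identical skill, different gender:
    candidates = np.array([[1.0, 0.0], [1.0, 1.0]])
    print(model.predict_proba(candidates)[:, 1])  # the woman receives a much lower "hire" score

Because the discrimination is encoded in the labels themselves, even a perfectly “neutral” learner faithfully reproduces it, and simply dropping the gender column rarely helps, since correlated proxy features carry much of the same signal.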

The data scientist is often invisible in data science:

Due to the “view from nowhere” approach that most of data science operates under, the data scientist is almost always absent from data science. With the spotlight on the data scientist, we can examine how there are always people making important choices, including:

which data to include and, by default, which to exclude; how data are weighed, analysed, manipulated and reported; and how good “good enough” is for an algorithm’s performance. And people are biased. We form stereotypes and rely on them as shortcuts in our day-to-day lives. For example, a CV with a white-sounding name will receive a different (more positive) response than the same CV with a black-sounding name. Women’s chances of being hired by symphony orchestras increase by between 30% and 55% in blind auditions (Goldin & Rouse, 1997). We can only see things from our own point of view, which is entangled in our history and our cultural and social norms. Putting the spotlight on the data scientist allows us to acknowledge the personal motivations, beliefs, values, and biases that directly or indirectly shape our scientific practices.

Correlation vs causation

The statement that “correlation is not causation” might seem obvious, but it is one that most of us need to be regularly reminded of. In this regard, Bradford Hill’s criteria of causation are a helpful framework to look at. Hill’s nine principles – the minimal conditions needed to establish a causal relationship – were originally developed as a research tool in the medical sciences. However, they are equally applicable and relevant to data scientists. Hill’s nine principles are: strength, consistency, specificity, temporality, dose effect, plausibility, coherence, experiment, and analogy. The more criteria that are met, the more likely the relationship is to be causal. xkcd.com provides witty and entertaining comics for each of Hill’s criteria for causation.
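As a quick illustration of why the reminder matters, here is a small, made-up simulation (the variables and numbers are purely illustrative): two quantities that do not influence each other at all can still be strongly correlated when a hidden common cause drives both.

    # Illustrative only: a confounder (temperature) produces a strong correlation
    # between two variables that have no causal link to each other.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 10_000

    temperature = rng.normal(25, 5, size=n)                          # hidden common cause
    ice_cream_sales = 2.0 * temperature + rng.normal(0, 3, size=n)
    drownings = 1.0 * temperature + rng.normal(0, 3, size=n)

    r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
    print(f"correlation = {r:.2f}")  # roughly 0.8: strongly correlated, yet neither causes the other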

GDPR

Ethical questions inevitably arise with all innovation. Unfortunately, they are often an afterthought rather than anticipated and mitigated. As data scientists, the questions that we are trying to answer implicitly or explicitly intersect with the social, medical, political, psychological, and legal spheres. Ethical and responsible practices are not only personally expected but also legally required. To this end, awareness of and compliance with GDPR is crucial when collecting, storing, and processing personal data. Students working with personal data are directed to the university’s GDPR Road Map.

Having said that, GDPR can only serve as a framework; it is not a final answer that provides a clear black-and-white solution. We cannot fully comprehend what our data will reveal in conjunction with other data. Furthermore, privacy is not something we can always negotiate person by person, but rather something that we need to look at as a whole network. Look no further than the Strava debacle.

This is a murky and complex area and the idea is not to equip the students with the fine grained details of privacy or GDPR but rather to raise awareness.

Part II

Bias: automated systems

Can we solve problems stemming from human bias by turning decisions over to machines? In theory, more and more decisions being handled by algorithms should mean that human biases and prejudices are eliminated. Algorithms are, after all, “neutral” and “objective”. They apply the same rules to everybody regardless of race, gender, ethnicity or ability. The reality, however, is far from this. As O’Neil points out, automated systems only give us the illusion of neutrality. Case after case has demonstrated that automated systems can, in fact, become tools that perpetuate and amplify inequalities and unfairness. Examples include recidivism and hiring algorithms.

Decisions delivered by automated systems may not be grave or immediate if these systems are recommending which books we might like to buy next based on our previous purchases. However, the stakes are much higher when automated systems are diagnosing illness or holding sway over a person’s job application or prison sentence.

O’Neil makes a powerful argument that the objectives of a mathematical model determine whether the model becomes a force for good or a tool that wields and perpetuates existing and historical bias. Automated systems, which are often developed by commercial firms, tend to optimize for efficiency and profit, which comes at the cost of fairness. Take the (U.S.) prison system, for example. Questions such as how the prison system can be improved are almost never considered. Instead, the goal seems to be to lock as many people away as possible. Consequently, algorithmic systems within the prison system strive to flag and lock away people who are deemed likely to reoffend.

The stream of data we produce serves as insight into our lives and behaviours. Instead of testing whether these insights and intuitions stand up to scientific scrutiny, the data we produce are used to justify the modellers’ intuitions and to reinforce pre-existing assumptions and prejudice. And the feedback loop goes on. Once again, associations are taken as evidence to justify pre-existing assumptions.
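A toy sketch of how such a feedback loop can run away on its own (loosely inspired by analyses of feedback loops in predictive policing; the districts, numbers and allocation rule here are invented purely for illustration):

    # Hypothetical illustration of a data feedback loop: patrols are sent wherever
    # the most arrests were previously recorded, so the district that happens to
    # start with one extra record attracts all future patrols and all future
    # records, even though the true crime rates are identical.
    import numpy as np

    rng = np.random.default_rng(7)
    true_crime_rate = np.array([0.5, 0.5])   # both districts are actually the same
    recorded = np.array([11.0, 10.0])        # district 0 starts with one extra recorded arrest

    for day in range(365):
        target = int(np.argmax(recorded))    # "the data says" this district is the hotspot
        # Arrests can only be recorded where the patrol goes; crime itself is unchanged.
        recorded[target] += rng.poisson(10 * true_crime_rate[target])

    print(recorded)  # district 0 ends the year with roughly 1,800 records; district 1 is stuck at 10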

Algorithmic systems increasingly present new ways to sort, profile, exclude and discriminate within the complex social world. The opaque nature of these systems means that we don’t know things have gone wrong until a large number of people, often society’s most vulnerable, are affected. We therefore need to anticipate the possible consequences of our work before things go wrong. As we’ve seen in previous examples, algorithmic decision-making is increasingly intersecting with the social sphere, blurring the boundaries between technology and society, public and private. As data scientists working to solve society’s problems, understanding these complex and fuzzy boundaries, and the cultural, historical and social context of our data and algorithmic tools, is crucial.

In domains such as medicine and psychology, where work has a direct or indirect impact on individual people or society, there are often ethical frameworks in place. Ethics is an integral part of medical training, for example. Physicians are held to specific ethical standards through the practice of swearing the Hippocratic Oath and through various medical ethics boards.

At this stage another question is put forward to the class: Given that data scientists, like physicians, work to solve society’s problems, influencing it in the process, should data scientists then be held to the same standard as physicians?

Data for Good

Most of the content of this lecture consists of either cautionary tales or warnings, which at times might dishearten students. This year we have added a section on “Data for Good” towards the end. This helps conclude the course on a somewhat positive note by illustrating how data science is being used for social good.

The Geena Davis Institute, in collaboration with Google.org, is using data to identify gender bias within the film industry. They analysed the 100 highest-grossing (US domestic) live-action films from 2014, 2015, and 2016. The findings show that men are seen and heard nearly twice as often as women. Such work is crucial for raising awareness of the blind spots in media and encourages storytellers to be inclusive.

“Data helps us understand what it is we need to encourage creators to do. Small things can have a huge impact.” Geena Davis, Academy Award-winning actor, founder and chair of the Geena Davis Institute on Gender in Media

Similarly, the Troll Patrol project by Amnesty International and Element AI studied online abuse against women. They surveyed millions of tweets received by 778 journalists and politicians in the UK and US throughout 2017 and commissioned online polling of women in 8 countries about their experiences of abuse on social media. Over 6,500 volunteers from around the world took part, analysing 288,000 tweets to create a labelled dataset of abusive or problematic content. The findings show that 7.1% of tweets sent to the women in the study were “problematic” or “abusive”. This amounts to 1.1 million tweets mentioning 778 women across the year, or one every 30 seconds. Furthermore, women of colour (black, Asian, Latinx and mixed-race women) were 34% more likely to be mentioned in abusive or problematic tweets than white women. Black women were disproportionately targeted, being 84% more likely than white women to be mentioned in abusive or problematic tweets.
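The “one every 30 seconds” figure follows directly from the yearly total, as a quick back-of-the-envelope check shows:

    # Back-of-the-envelope check: 1.1 million abusive/problematic tweets in a year
    # works out to roughly one every 30 seconds.
    seconds_per_year = 365 * 24 * 60 * 60          # 31,536,000
    abusive_tweets = 1_100_000
    print(seconds_per_year / abusive_tweets)       # ~28.7 seconds between tweets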

This sums up how we currently organize the content and the kind of tone we are aiming for. The ambition is a fundamental rethinking of taken-for-granted assumptions and to think of ethical data science in a broader sense, as work that potentially affects society, rather than simply as “not using personal information for research”. Whether we succeed or not is a different matter that remains to be seen. Furthermore, this is a fast-moving field where new impacts of technology as well as new ways of thinking about ethics are continually emerging. Taking this, and the fact that we continually incorporate student feedback and what has (and has not) worked, into account, next year’s content could look slightly different.

The fruit salad maker; it’s an interdisciplinary tale

Multidisciplinary = the fruit bowl (single disciplines brought together); Interdisciplinary = the fruit salad (disciplines combined for one output); Transdisciplinary = the smoothie (disciplines transformed into something new). – EU EnRRICH project

A young fruit enthusiast wanted to make a fruit salad. Seeing that so many different fruit suppliers bring all sorts of fruit to her fruit bar, and many customers in return buy individual fruits, she thought she’d make something that each fruit supplier doesn’t produce by combining their supplies – a fruit salad. Besides, there seems to be a great deal of excitement over this new mixing of various fruits and everybody seems to want and encourage it.

Having sampled many different fruits over the years, the fruit salad maker decided it is a good use of her time and expertise to get into the fruit salad making business. She decided on mango, kiwi and pineapple as her fruits of choice that would make her signature fruit salad. They blend very well, they are grown locally, and they complement one another. When mixed, they not only produce an excellent taste, but they are also very appetizing to look at. Most mango, kiwi and pineapple lovers should be able to appreciate and enjoy them, the fruit salad maker thought and she started the process of combining her fruits.

“Not so fast”, came along the fruit gatekeepers. “We need to first see that your taste for fruit, your ability to make fruit salad, and your knowledge of each fruit are sufficient before we allow you to open this fruit bar”. Well, it’s legally required that a fruit bar is certified, after all. And on the positive side, this certificate would signify a much-needed validation and boost from the fruit community.

Not being able to open her fruit bar without the required recognition and seal of approval, the fruit salad maker embarked on the process of fulfilling the requirements to pass the necessary tests. She compiled a convincing argument for the need for fruit salads, for her knowledge of the three fruits, and most importantly for her personal skills and passion for mixing fruits. She demonstrated how her fruits of choice go well together, why they should be made into a fruit salad and how much her customers would benefit from such a combination.

She then produced the first plate of fruit salad and put it in front of the fruit gatekeepers. “I love the idea of fruit salads. We are all stuck in our special fruit echo chambers. We should all try fruit salads and appreciate those that actually make colourful fruit salads”, said the mango gatekeeper. He then tasted a big mouthful of the fruit salad before him. “It needs more mango”, he said. “I also recommend you study the history of mango production and the fine-grained detail of the biochemistry of mango to make your fruit salad better. I am afraid I can’t let you past my gate until then”, he added.

The kiwi gatekeeper, who also confessed how much he loves fruit salads, followed and had a mouthful of the fruit salad in front of him. Like his colleague the mango gatekeeper, the kiwi gatekeeper seemed to be solely concerned with the kiwi part of the fruit salad – not the whole combination. “Salt would really complement the kiwis; add a pinch to bring out the flavour more. In order for me to recognize that you have used kiwi in your fruit salad, you need a lot more kiwi in your fruit salad,” he commented. “Plus, I don’t recognize the breed of kiwi that you’re using. I will give you a list of good kiwis you need to use. Until the kiwi is right, I am afraid it is my duty not to let you pass my gate. Better luck next time,” he added.

Lastly, the pineapple gatekeeper scooped a spoonful of the fruit salad and tried it. “I also love the idea of fruit salads but I have to tell you that this is not how we slice pineapples over at the pineapple empire. We also marinate them in our special sauce. Your pineapples lack both. You really need to know your pineapple inside out if you are to call yourself a fruit salad maker at all. Plus, I see very little pineapple on this plate. So, get the special sauce from our empire and cut your pineapples our way. Only then can we give you our approval,” she exclaimed.

The fruit salad maker, unestablished and with much less power than the gatekeepers, felt disheartened. She tried to point out that each gatekeeper needs to look at the dish as a whole instead of focusing on each specific fruit. And, surely, the single fruit bars don’t go through as much scrutiny. Unfortunately, questioning the individual fruit experts didn’t do her any favours – they have been in their respective fruit business for much longer than she has and must surely know what they are doing. Who’s she to question their domain expertise?!

At times it felt as though what they were demanding was too self-fulfilling and incommensurable. But then again, she suffered from too much self-doubt, given that this was her first big attempt at making a fruit salad, to argue with their demands. Either way, if she was to get the business going, she needed each gatekeeper’s seal of approval. She went ahead and attempted to make the type of fruit salad that would satisfy each gatekeeper: with plenty of mango, huge helpings of ripe kiwi and custom-sliced pineapples.

At the next round of testing, the fruit salad maker presented a revised plate that reflected the advice previously provided by the gatekeepers. Unfortunately, they unanimously agreed that the plate was overflowing with too much fruit, was unhealthy and was unattractive to look at. “All the excess fruit must be trimmed away,” they declared. “This is a health hazard and we cannot approve of such a dish. Think about how to make it neater, healthier and more attractive and come back to us with your improved fruit salad. We will then discuss the matter and perhaps let you through our gate,” they said.

After many attempts to satisfy each gatekeeper’s version of a perfect fruit salad, the fruit salad maker is back to square one. She’s caught in a recursive loop. Each fruit connoisseur, an expert on their own fruit, seems to underappreciate the taste and benefit of the fruit mix before them. Putting individual fruit experts together doesn’t necessarily make a fruit salad judge, after all.

Having gone through a number of time-consuming rounds of fruit salad making and the bureaucratic paperwork associated with them, the fruit salad maker wonders whether the fruit salad making business is worthwhile at all. Single-fruit dealing, the dominant mode of doing business, would have been simpler – not as rewarding for sure, but certainly simpler. But the thing is, once you develop a palate for the unique taste of fruit salads, nothing else will do.

 

 

For a more scholarly read

This list is not exhaustive by any means; it is work that is relevant to my own and a list I revise and revisit regularly.

Link for the main resources page here

Books

Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil. A great number of the articles on the list below are written by O’Neil. She is also active on Twitter, regularly posting links and interesting critical insights on everything to do with mathematical models and bias. Here is my own review of O’Neil’s book, itself with plenty of relevant links, and here is another excellent review of O’Neil’s book.


We Are Data: Algorithms and the Making of Our Digital Selves (2018) by John Cheney-Lippold.

Below are the first few paragraphs from a review by Daniel Zwi, a lawyer with an interest in human rights and technology. Here is also a link to my Twitter thread where you can read excerpts from the book that I tweeted as I read it.

In 2013, a 41-year-old man named Mark Hemmings dialled 999 from his home in Stoke-on-Trent. He pleaded with the operator for an ambulance, telling them that ‘my stomach is in agony’, that ‘I’ve got lumps in my stomach’, that he was vomiting and sweating and felt light-headed. The operator asked a series of questions — ‘have you any diarrhoea or vomiting?’; ‘have you passed a bowel motion that looks black or tarry or red or maroon?’ — before informing him that he did not require an ambulance. Two days later Mr Hemmings was found unconscious on the floor of his flat. He died of gallstones shortly after reaching hospital.

This episode serves as the affective fulcrum of We Are Data: Algorithms and the Making of Our Digital Selves, John Cheney-Lippold’s inquiry into the manner in which algorithms interpret and influence our behaviour. It represents the moment at which the gravity of algorithmic regulation is brought home to the reader. And while it may seem odd to anchor a book about online power dynamics in a home telephone call (that most quaint of communication technologies), the exchange betokens the algorithmic relation par excellence. Mr Hemmings’s answers were used as data inputs, fed into a sausage machine of opaque logical steps (namely, the triaging rules that the operator was bound to apply), on the basis of which he was categorised as undeserving of immediate assistance.

The dispassionate, automated classification of individuals into categories is ubiquitous online. We either divulge our information voluntarily — when we fill out our age and gender on Facebook, for example — or it is hoovered up surreptitiously via cookies (small text files which sit on our computer and transmit information about our browsing activity to advertising networks). Our media preferences, purchases and interlocutors are noted down and used as inputs according to which we are ‘profiled’ — sorted into what Cheney-Lippold calls ‘measureable types’ such as ‘gay conservative’ or ‘white hippy’ — and served with targeted advertisements accordingly.


The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (2019) by Shoshana Zuboff

The challenges to humanity posed by the digital future, the first detailed examination of the unprecedented form of power called “surveillance capitalism,” and the quest by powerful corporations to predict and control our behavior. Shoshana Zuboff’s interdisciplinary breadth and depth enable her to come to grips with the social, political, business, and technological meaning of the changes taking place in our time. We are at a critical juncture in the confrontation between the vast power of giant high-tech companies and government, the hidden economic logic of surveillance capitalism, and the propaganda of machine supremacy that threaten to shape and control human life. Will the brazen new methods of social engineering and behavior modification threaten individual autonomy and democratic rights and introduce extreme new forms of social inequality? Or will the promise of the digital age be one of individual empowerment and democratization?

The Age of Surveillance Capitalism is neither a hand-wringing narrative of danger and decline nor a digital fairy tale. Rather, it offers a deeply reasoned and evocative examination of the contests over the next chapter of capitalism that will decide the meaning of information civilization in the twenty-first century. The stark issue at hand is whether we will be the masters of information and machines or its slaves.

Algorithms of Oppression: How Search Engines Reinforce Racism by Safiya Umoja Noble – below is an excerpt from Noble’s book. You can also find another review of Algorithms of Oppression here. Run a Google search for “black girls”—what will you find? “Big Booty” and other sexually explicit terms are likely to come up as top search terms. But, if you type in “white girls,” the results are radically different. The suggested porn sites and un-moderated discussions about “why black women are so sassy” or “why black women are so angry” present a disturbing portrait of black womanhood in modern society. In Algorithms of Oppression, Safiya Umoja Noble challenges the idea that search engines like Google offer an equal playing field for all forms of ideas, identities, and activities. Data discrimination is a real social problem; Noble argues that the combination of private interests in promoting certain sites, along with the monopoly status of a relatively small number of Internet search engines, leads to a biased set of search algorithms that privilege whiteness and discriminate against people of color, specifically women of color.


Algorithms to Live By: The Computer Science of Human Decisions by Brian Christian and Tom Griffiths. This book is concerned with the workings of the human mind and how computer science can help human decision making. Here is a post by Artem Kaznatcheev on Computational Kindness which might give you a glimpse of some of the issues the book covers. Here is a long interview with Brian Christian and Tom Griffiths, and a TED Talk with Tom Griffiths on The Computer Science of Human Decision Making.

The Black Box Society: The Secret Algorithms That Control Money and Information by Frank Pasquale. You can read the introduction and conclusion chapters of his book here. And here is a good review of Pasquale’s book. You can follow his Twitter stream here.


Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech by Sara Wachter-Boettcher

Here is a synopsis:  A revealing look at how tech industry bias and blind spots get baked into digital products—and harm us all.

Buying groceries, tracking our health, finding a date: whatever we want to do, odds are that we can now do it online. But few of us ask why all these digital products are designed the way they are. It’s time we change that. Many of the services we rely on are full of oversights, biases, and downright ethical nightmares: Chatbots that harass women. Signup forms that fail anyone who’s not straight. Social media sites that send peppy messages about dead relatives. Algorithms that put more black people behind bars.

Sara Wachter-Boettcher takes an unflinching look at the values, processes, and assumptions that lead to these and other problems. Technically Wrong demystifies the tech industry, leaving those of us on the other side of the screen better prepared to make informed choices about the services we use—and demand more from the companies behind them.

Paula Boddington, Oxford academic and author of Towards a Code of Ethics for Artificial Intelligence, recommends the five best books on Ethics for Artificial Intelligence. Here is the full interview with Nigel Warburton, published on December 1, 2017.


“Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor” by Virginia Eubanks will be released on January 23, 2018. Here is an excerpt from Danah Boyd’s blog:

“Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor” is a deeply researched accounting of how algorithmic tools are integrated into services for welfare, homelessness, and child protection. Eubanks goes deep with the people and families who are targets of these systems, telling their stories and experiences in rich detail. Further, drawing on interviews with social services clients and service providers alongside the information provided by technology vendors and government officials, Eubanks offers a clear portrait of just how algorithmic systems actually play out on the ground, despite all of the hope that goes into their implementation. Additionally, Berkman Klein discusses “Algorithms and their unintended consequences for the poor” with Eubanks here.


The Big Data Agenda: Data Ethics and Critical Data Studies by Annika Richterich. The PDF is available through the link here.

“This book highlights that the capacity for gathering, analysing, and utilising vast amounts of digital (user) data raises significant ethical issues. Annika Richterich provides a systematic contemporary overview of the field of critical data studies that reflects on practices of digital data collection and analysis. The book assesses in detail one big data research area: biomedical studies, focused on epidemiological surveillance. Specific case studies explore how big data have been used in academic work.

The Big Data Agenda concludes that the use of big data in research urgently needs to be considered from the vantage point of ethics and social justice. Drawing upon discourse ethics and critical data studies, Richterich argues that entanglements between big data research and technology/internet corporations have emerged. In consequence, more opportunities for discussing and negotiating emerging research practices and their implications for societal values are needed.”


Re-Engineering Humanity by professors Evan Selinger and Brett Frischmann

Every day, new warnings emerge about artificial intelligence rebelling against us. All the while, a more immediate dilemma flies under the radar. Have forces been unleashed that are thrusting humanity down an ill-advised path, one that’s increasingly making us behave like simple machines? In this wide-reaching, interdisciplinary book, Brett Frischmann and Evan Selinger examine what’s happening to our lives as society embraces big data, predictive analytics, and smart environments.

Outnumbered: From Facebook and Google to Fake News and Filter-bubbles – The Algorithms That Control Our Lives (featuring Cambridge Analytica) by David Sumpter. A review from the Financial Times is here.

Artificial Unintelligence: How Computers Misunderstand the World By Meredith Broussard

A guide to understanding the inner workings and outer limits of technology and why we should never assume that computers always get it right.

“In Artificial Unintelligence, Meredith Broussard argues that our collective enthusiasm for applying computer technology to every aspect of life has resulted in a tremendous amount of poorly designed systems. We are so eager to do everything digitally—hiring, driving, paying bills, even choosing romantic partners—that we have stopped demanding that our technology actually work. Broussard, a software developer and journalist, reminds us that there are fundamental limits to what we can (and should) do with technology. With this book, she offers a guide to understanding the inner workings and outer limits of technology—and issues a warning that we should never assume that computers always get things right.”

 


Situating China’s Social Credit System in history and context

If you have been following developments in digital technologies, it is very likely that you’ve come across the news that China is implementing a Social Credit System (SCS). Although the SCS is portrayed as a single integrated system that quantifies all behaviour into credit scores, it is in fact an ecology of fragmented initiatives with many different stakeholders. Broadly speaking, it consists of scoring systems developed by the private sector and by governmental bodies. From the governmental perspective, the SCS is an attempt to promote “trustworthiness” and transparency in the economy, which is expected to combat a perceived lack of trust in the marketplace and, more generally, to harmonize social conduct.

Citizens’ “trustworthiness” is rated based on an individual’s social behaviour, such as their crime record, what they say on social media, what they buy, the scores of their friends, and so on. This has potential positive or negative implications for an individual’s job, visa, and loan applications. As a commitment to radical transparency is a central driving force behind the SCS, information on subjects’ trustworthiness is made publicly available, and in some circumstances is even actively broadcast. Individual citizens and businesses alike are publicly ranked, and the records are openly accessible.


Rongcheng’s “civilized families” are displayed on public noticeboards like these. (Simina Mistreanu)

The SCS is to become mandatory by 2020 and is currently being implemented in some form or another across parts of China. Socioeconomically deprived areas appear to be priority targets. Rongcheng, in the eastern province of Shandong, where the SCS has been rolled out for some time now, is, according to government officials, one of the best examples of the system working as intended.

From a general systems science perspective, the SCS is a self-organizing system that operates through incentive and punishment mechanisms. People with low ratings will, for example, have slower internet speeds, restricted access to restaurants, and their right to travel revoked. A minimal sketch of such a loop is given below.
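To make the mechanism concrete, here is a toy sketch of a score-based incentive and punishment loop. All of the behaviours, score deltas and thresholds are invented for illustration; the actual SCS is a fragmented ecology of systems with no single public algorithm.

```python
# A toy sketch of an incentive/punishment feedback loop. Every behaviour,
# score delta and threshold below is invented purely for illustration.

RESTRICTIONS = {600: "slower internet", 500: "no restaurant access", 400: "travel ban"}

def update_score(score: int, behaviours: list[str]) -> int:
    """Reward or punish a score based on logged behaviours."""
    deltas = {"volunteering": +30, "paid_bills_on_time": +10,
              "criticised_policy_online": -50, "late_loan_payment": -40}
    for b in behaviours:
        score += deltas.get(b, 0)
    return max(0, min(1000, score))  # clamp to a fixed range

def restrictions_for(score: int) -> list[str]:
    """Return the restrictions triggered below each threshold."""
    return [label for threshold, label in RESTRICTIONS.items() if score < threshold]

score = 620
score = update_score(score, ["criticised_policy_online", "late_loan_payment"])
print(score, restrictions_for(score))  # 530 -> ['slower internet']
```

Even in this toy version the loop is self-reinforcing: behaviour changes the score, and the score changes what behaviour is possible.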

“Higher scores have already become a status symbol, with almost 100,000 people bragging about their scores on Weibo (the Chinese equivalent of Twitter) within months of launch. A citizen’s score can even affect their odds of getting a date, or a marriage partner, because the higher their Sesame rating, the more prominent their dating profile is on Baihe.” (Creemers, 2018)

The SCS has been described as an insidious digital panopticon and a dystopian nightmare in which individuals’ every move is monitored and ranked through data generated from all sorts of activities and interactions, online or otherwise, via digital technologies such as facial recognition tools and biometric information. Many draw parallels between the SCS and the dystopian science-fiction Black Mirror episode “Nosedive”, in which people rate each other based on their interactions.


Many ethical and human rights concerns, including the near-complete erosion of privacy, have been raised, and the negative consequences of such a dystopian system are indisputable.

With the realization that ‘digital reputations’ could limit opportunities comes the tendency to self-censor and to be risk-averse. We are unlikely to hit “like” on a Facebook post protesting some government policy, knowing that it could affect our ‘digital reputations’. Consequently, people gradually change their behaviour to align with what the system requires in order to get better scores. In the process, the behaviours and norms defined as “acceptable” by the government are reinforced.

Nonetheless, among the misconceptions surrounding the SCS is the apparent consensus that using individuals’ digital traces to directly or indirectly influence their behaviour is something that only happens in non-Western totalitarian states. In fact, credit scoring practices are not unfamiliar in Western societies. Facebook, for instance, appears to be developing its own system for rating users’ trustworthiness.

It is also worth mentioning Facebook’s emotion-tracking patent (which aims to monitor individuals’ typing speed in order to predict emotions and adapt messages in response), granted in May 2017, and the socioeconomic classifier it has filed for (which might enable Facebook to rank its users according to different social classes), among its series of patents. These developments, in combination with others such as Facebook’s ability to flag individuals through its facial recognition technology without the user’s consent, in some sense constitute a surveillance society. Facebook’s ability to rank and categorize people into a variety of socioeconomic categories has potential impacts on individuals’ opportunities depending on their class, gender, race, and sexual orientation. Whether it’s the type of job ads one is excluded from viewing (due to one’s gender, class, or age) or exclusion from certain housing ads, Facebook’s ranking and categorizing systems often impact the under-privileged and those who fail to conform to the status quo.

Health insurance: Marshall Allen, ProPublica, July 2018.

Alongside social media platforms, health insurers and schools can also be mentioned as examples that share features of the SCS. Like the SCS, these Western industries and institutions track and surveil people through digital technologies, including facial recognition tools and biometric information.

We are rated, ranked and categorized using data extracted from us. As with the SCS, such ranking and rating often has “real”-life consequences, whether in how much we pay for insurance, what ads are pushed on us, or how we behave in schoolyards. The difference between the Chinese SCS and the Western tech industry is that, while the former is clear and upfront about it, the latter is far less visible. In fact, such tech giants often go out of their way to hide what they are doing.

Rating systems, whether those of the SCS or those deployed by the Western tech industry, create perverse incentives and increase the pressure on individuals to conform to the status quo. This creates and contributes to a risk-averse society.

“When doctors in New York were given scores this had unexpected results. Doctors that tried to help advanced cancer patients had a higher mortality rate, which translated into a lower score. Doctors that didn’t try to help were rewarded with high scores, even though their patients died prematurely.” (Tijmen Schep)

Situating the SCS in history and context

The history and context that are crucial to understanding the development of the current SCS are often missing from how the SCS is framed, at least within Western media.

“[social systems] must be viewed whole cloth as open dynamical systems embedded in a physical, historical, and social fabric” (Juarrero, 1999, p. 201)

As far as China’s political tradition goes, morality and authority are inextricably linked. Enforcing moral standards and monitoring and disciplining the conduct of local officials and individual citizens is seen as the role of the state. “Governing the country by virtue” is equated with “governing the country by the law”. Unlike the Western legal system, where the rights, responsibilities and entitlements of private actors and the public sector are relatively easily categorized, such categories are much more blurred within the Chinese legal system. Individual citizens, government officials, communities and businesses are all expected to contribute to overall social and economic harmony and development.

“Chinese political tradition has, for centuries, conceived of society as an organic whole, where harmony can be achieved if all its members conduct themselves as appropriate to their position in public and civil structures. … Critical in this process were ideas about systems theory, derived from natural science and applied in the social context. Influenced by Western scholarship on cybernetics and systems theory, scholars such as Qian Xuesen and Song Jian worked closely with government to develop a conceptual framework for the adoption of systems engineering techniques in governance. Particular regard was given to the role of information flows, not just towards and within government, but also as part of cybernetic feedback loops to create self-correcting responses in society.” (Creemers, 2018, p. 7)

Historically, the Chinese government has experimented with various forms of social control, and self-policing and social-monitoring mechanisms go all the way back to the Song Dynasty.

“An 11th-century emperor instituted a grid system where groups of five to 25 households kept tabs on each other and were empowered to arrest delinquents” (Mistreanu, 2018). The current SCS, then, is an extension of such historical traditions. The difference now is the addition of digital technologies.

From the Chinese authorities’ perspective, the SCS epitomizes a self-correcting feedback loop in which “trustworthiness” and social morality are fostered through incentives and punishments.

This is by no means to argue that the SCS is any less of a digital panopticon. However, by highlighting the history and context often missing from the SCS narrative, we can paint a more complex and nuanced picture of the system (as opposed to the often alarmist pieces that are stripped of context and history). Furthermore, while we are preoccupied with stories of how China is becoming one giant surveillance prison, we miss the indirect and evasive practices happening within our own “civilized” Western systems.

 

Bibliography

Creemers, R. (2018). China’s Social Credit System: An Evolving Practice of Control.
Juarrero, A. (1999). Dynamics in Action: Intentional Behavior as a Complex System. Cambridge, MA: MIT Press.
Mistreanu, S. (2018). Life Inside China’s Social Credit Laboratory. Foreign Policy.

 

 

Christianity and Feminism Are at Odds, by Sirak Temesgen

The feminist movement is, at its core, about enabling women to participate in the economic, social and political spheres as equals to men. It is a movement to end the marginalization women face simply because of their sex. I believe Christianity has contributed greatly to the spread of this marginalization and of the patriarchal order across the world, and that is why I have taken up my pen. The English biologist Richard Dawkins, in his much-discussed book ‘The God Delusion’, bitterly describes the God of the Old Testament as follows:

“The God of the Old Testament is arguably the most unpleasant character in all fiction: jealous and proud of it; a petty, unjust, unforgiving control-freak; a vindictive, bloodthirsty ethnic cleanser; a misogynistic, homophobic, racist, infanticidal, genocidal, filicidal, pestilential, megalomaniacal, sadomasochistic, capriciously malevolent bully”

Professor Dawkins does not say this out of thin air; he backs each of these labels with citations from the books of the Old Testament. I, too, say that anyone who accepts such an inhumane being as their God should not be talking about rights, because the character described in that book is so far removed from humanity. The subjects I raise here, women’s rights and feminism, are among the book’s principal victims. From the Old Testament to the New, the Bible is a volume in which women are presented as oppressed. Let me illustrate with a few examples.

Women of the Old Testament

One of the kings most beloved by God in the Old Testament is King David. This man loved sex and kept many concubines. David had the power to treat women as his personal property, shutting them up in a house, cut off from everyone, until they died (2 Samuel 20:3). At the three annual feasts, the Feast of Unleavened Bread, the Feast of Harvest and the Feast of Ingathering, only men were to appear before God (Exodus 23:14–17). In Moses’ world, natural cycles (childbirth as well as menstruation) are marks of uncleanness for a woman. If she gives birth to a boy she is unclean for seven days; astonishingly, if she gives birth to a girl she is unclean for twice as long (Leviticus 12:1–5). That God puts a price on human beings is strange enough; that women are priced lower than men is even more absurd. In the Old Testament world, a one-month-old baby boy is worth more than a five-year-old girl (Leviticus 27:1–7). Worse still, when Moses was ordered by his God to count the people, women were not counted as persons (Numbers 3:15). It does not end there: according to the law God gave to Moses, when a man dies only his sons inherit his property; daughters inherit only if the deceased has no sons (Numbers 27:8–11). ‘God’s law’ commands that a woman who marries without her virginity be stoned to death (Deuteronomy 22:13–21), while there is no corresponding law punishing a man who is not a virgin. In short, throughout the period known as the Old Testament a woman was an object, not a person. And what does the New Testament say?

Women in the New Testament

Compared with the cruelty of the Old Testament era, Jesus Christ could be called a revolutionary. He is not seen adopting the custom, common among the Jews of his time, of demeaning women. He taught women, and they followed him constantly wherever he went; like the men, he healed them and held them up as examples. It is rather in the teachings of Saint Paul, often described as the foundation of Christianity, that the New Testament turns harsh towards women. In 1 Corinthians 11:3 Paul writes, “But I want you to know that the head of every man is Christ, the head of the woman is man, and the head of Christ is God,” placing woman a rank below man. He goes on to lay down rules about how a woman should wear her hair, and states plainly and emphatically that woman was created for the sake of man. He decrees that wives must submit to their husbands without hesitation, as though they were their husbands’ servants (Ephesians 5:22–23). Women are not permitted to speak where the congregation is gathered; if there is anything they wish to know, Paul advises, they should ask their husbands at home (1 Corinthians 14:34–36). Women are not allowed to teach, nor to have authority over a man (1 Timothy 2:11–15); in effect, Paul preaches “men to the public square, women to the kitchen”. And to cap it all, 1 Peter 3:7 tells us that women are the weaker ones.

Here is the question: how can one talk about women’s rights while holding on to an instrument of oppression that hesitates even to count women as human beings? What, then, is a right? Self-contradiction, after all, is a sickness. To call oneself a feminist without casting off these oppressive religious laws and commandments is, to me, an insult to the movement. If I may offer advice in the manner of Paul, it would be better if believers in this book who call themselves ‘defenders of women’s rights’ took their hands off the feminist movement. As the saying goes, “You can’t have your cake and eat it”; the choice is one or the other. After all, was it not because women like Elizabeth Stanton were weary of this book’s oppressiveness that they set out to revise it as ‘The Woman’s Bible’?

 

Men #mansplain feminism to me


I recently got into some Twitter exchanges about Ethiopian feminism. What started it was seeing a bunch of men telling women that they can’t be both religious and feminist, despite those women arguing otherwise. Let me clarify things in a bit more detail here. Not only are you mistaken, as there are plenty of remarkable Muslim feminists, the arrogance in your tone is unbearable. The real irony, though, was your failure to see the privileged standpoint from which you are speaking: a privilege that lets you assume your opinion on feminism should be more trustworthy than the experience and voice of women who live with sexism and misogyny every day. I am not at all religious, but one doesn’t need to be religious to see how wrong-headed it is for men to alienate and exclude women from feminism based on faith, especially when those women are declaring themselves feminists and providing justifications (note that they felt they needed to) for why it works for them. Do you think they need your approval to qualify as feminists because you have problems letting go of authority? Why should women feel they need to fit your definition of feminism to call themselves feminists? Do you think they need men like you to think for them and tell them what feminism is or should be? Telling a woman that she can’t be both religious and feminist is like the oppressor telling the oppressed what oppression means. If you think women need your approval and validation as they explore what feminism means to them, it is a sign that you have failed to grasp the kind of patriarchal society we live in, and you are likely part of the problem.

I am not advocating for any particular strand of feminism here, nor am I trying to define what feminism is, or who should be categorised as a woman and why. My issue is with you belittling and demeaning women for saying what kind of feminism works for them and what feminism means to them. It doesn’t matter what level of education you have or how enlightened your knowledge of feminism might be (although I highly doubt most men who think they should be in charge of defining feminism know much about it at all): telling a feminist what feminism is or should be defeats the very essence of what feminism stands for, namely women thinking and deciding for themselves. Wanting to be the central voice here not only grants you the very authority feminism is trying to shift; it also disregards and invalidates women’s experiences.

Do you find the idea that women can think and decide for themselves, and that your input comes second, hard to digest? That might be because it has been the accepted norm (thanks to patriarchy) for your voice to be the dominant and authoritative one. Wanting to take the upper hand and explain feminism to women is an indictment of your unquestioned, taken-for-granted privilege as a man. It takes critical reflection on societal structures, and on one’s place within them, to become aware of one’s own privilege.

If you think feminists’ central focus should be the protection of your freedom of speech, then you’ve got it all wrong. And if you can’t see why your rights aren’t the centre of attention on the feminist agenda, then you really are blinded by your male privilege, in which case you urgently need to scrutinise that privilege.

If you truly want to contribute to the movement, learn to critically analyse your place as a man in society and listen carefully to what women have to say. Your knowledge is no good if it is dismissive of women who experience sexism every day. There can only be common ground for discussing your contribution to feminism once you believe and accept that women are capable of leading their own movement and are its primary role-players.

Finally, this is aimed at those men who think their knowledge of feminism is far superior to women’s lived experience of, and say on, feminism. If you are not one of them, then this post doesn’t concern you, and you will most likely agree with me here. If you are, I hope you find it somewhat helpful in clarifying issues that the restricted Twitter exchanges could not.