I recently had the opportunity to attend a multidisciplinary conference where cognitive scientists, philosophers, psychologists, artificial intelligence (AI) researchers, neuroscientists and physicists came together to discuss the self. The conference was, generally speaking, well organized, and most of the talks were interesting. Its theme was the openness of the self: contrary to the traditional essentialist view of the self as fixed, fully autonomous and self-contained, the consensus among attendees was that the self is not a static, discrete entity existing independently of others, but dynamic, changing, co-dependent, and intertwined with others. This intertwinement extends, furthermore, to the social and political forces that play crucial roles in constituting who we are. In this vein, any discussion of self and technology needs to acknowledge the entanglement of social and political factors, and the necessity of diverse input and perspectives.
AI is a very broad field of enquiry which includes, to mention but a few examples, facial recognition technologies, search engines (such as Google), online assistants (such as Siri), and the algorithms used in almost every sphere of society (medical, financial, judicial, and so on). Unfortunately, the view of AI that dominates public as well as academic discourse is a narrow, one-dimensional one, where the concern revolves around the question of artificially intelligent "autonomous" entities. This view is, unsurprisingly, often promoted by a one-dimensional group of people: white, middle-class and male. Questions beyond the creation of artificial "selves" rarely enter the equation. Social, political, and economic factors rarely feature in cognitive science's and interdisciplinary formulations of selfhood and technology, as if technological development emerged in a social, political and economic vacuum. And the conference I attended was no different.
This was apparent during the theme-based group discussions at this conference, where one group took up issues regarding self and technology. The discussion was led by researchers in embodied AI and robotics. The questions revolved around the possibility of creating an artificial self, whether robots or AI systems can be sentient, and, if so, how we might know it. As usual, the preoccupation with abstract concerns and theoretical construction took centre stage, to the detriment of political and social issues. Attempts to direct some attention towards those issues were dismissed as irrelevant.
It is easy to see the appeal of becoming preoccupied with these abstract philosophical questions. After all, we immediately think of "I, Robot"-style robots when we think of AI, and of "self-driving" cars when we think of ethical questions in AI.
The fascination with and preoccupation for autonomous, discrete machines is not unique to current pop culture. The French philosopher René Descartes reportedly had a walking, talking clockwork automaton named after his daughter Francine, who died of scarlet fever at the age of five; the machine was said to simulate her. The eighteenth-century Hungarian author and inventor Wolfgang von Kempelen created the Mechanical Turk, a (fake) chess-playing machine, to impress the Empress Maria Theresa of Austria.
It is not surprising that our perception of AI is dominated by such issues, given the influential role sci-fi pop culture plays in shaping that perception. The same culture feeds on overhype and exaggeration of the state of AI. Researchers themselves are often just as responsible for miscommunication and misunderstanding about the state of the art of the field. And the more hyped a piece of work is, the more attention it is given: look no further than the narrative surrounding Sophia, an excessively anthropomorphized and overhyped machine.
Having said that, the problem goes further than misleading coverage and overhype. The narrow, one-dimensional view of AI as concerned with questions of artificial selves and "self-driving" cars detracts from more nuanced, more important, and more pressing issues in AI: those that impact the poor, the disenfranchised, and the socially and economically disadvantaged. For example, in the current data economy, insurance systems reward and offer discounts to those who are willing to be tracked and to provide as much information as possible about their activities and behaviours. Consumers who want to withhold all but the essential information from their insurers will pay a premium. Privacy, increasingly, will come at a cost only the privileged can afford.
Underlying this narrow, one-dimensional view of AI, and the preoccupation with the creation of an artificial self, is an implicit assumption that AI is some sort of autonomous, discrete entity separate from humans, and not a disruptive force for society or the economy. Sure, if your idea of AI revolves around sentient robots, that assumption might bear some truth. But it seems, to me, a hangover from Cartesian dichotomous thinking, one that persists even among scholars within the embodied and enactive traditions who believe their perspectives account for complex reality. This AI-versus-humans thinking is misleading and unhelpful, to say the least.
AI systems are ubiquitous, and this becomes apparent once you abandon the narrow, one-dimensional view of AI. AI algorithms are inextricably intertwined with our social, legal, health and educational systems; they are not the separate, independent entities we like to envision when we think of AI. The apps that power your smartphone and the automated systems that contribute to decisions about whether you get a loan, whether you are hired, or how much your car insurance premium will cost are all AI, and AI with real impact, especially on society's most vulnerable.
Yet most people working on AI, both in academia and in Silicon Valley, are unwilling to get their hands dirty with the social, economic or political aspects and impacts of AI. The field seems, to a great extent, to be constituted of those who are socially, economically and racially privileged, for whom these issues carry no personal consequences. The AI side of cognitive science is no different, with its focus on first-world problems. Its discussions of the person, or even of society, are devoid of gender, class, race, ability and so on. When scholars in these fields speak of "we", they are rarely inclusive of those outside the status quo, which is mostly white, male, Western, middle-class and educated. If your model of the self is such, how could you, and why would you, be concerned about the class, economic, race and gender issues that emerge from unethical applications of AI? After all, you are unlikely to be affected. Not only is this model of the self unrepresentative of society; there is barely any awareness of that as a problem in the first place. Privilege renders the problem invisible, and diversity and inclusivity of perspectives are treated as irrelevant.
This is not, by any means, a generalization about everyone within AI scholarship. There are, of course, plenty of people who acknowledge political and social forces as central concerns in any discussion of AI. Unsurprisingly, much of the important work in this regard is done by people of colour and women, who unfortunately remain a minority. The field as a whole would do well to make sure that it is inclusive of such voices, and to value their input instead of dismissing it.