Editorial

#5 Spectres of AI

Artificial intelligence (AI) is arguably the new spectre of digital cultures. By filtering information out of existing data, it determines the way we see the world and how the world sees us. Yet the vision algorithms have of our future is built on our past. What we teach these algorithms ultimately reflects back on us, and it is therefore no surprise when artificial intelligence starts to classify on the basis of race, class and gender. This odd ‘hauntology’1 is at the core of what is currently discussed under the labels of algorithmic bias or pattern discrimination.2 By imposing identity on input data in order to filter, that is, to discriminate signals from noise, machine learning algorithms invoke a ghost story that works at two levels. First, it proposes that there is a reality that is not this one, and that is beyond our reach; to consider this reality can be unnerving. Second, the ghost story is about the horror of the past – its ambitions, materiality and promises – returning compulsively and taking on a present form because something went terribly wrong in the passage between one conception of reality and the next. The spectre does not exist, we claim, and yet here it is in our midst, creating fear and re-shaping our grip on reality.3
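To make the mechanism behind this haunting concrete: a classifier trained on records of past decisions will reproduce the discrimination encoded in those records, even when the protected attribute itself is withheld, because correlated features let the past back in. The following is a minimal sketch of this effect, not drawn from any contribution in this issue; the scenario, feature names and scikit-learn setup are our own illustrative assumptions.

```python
# Minimal sketch: past discrimination "returns" through a proxy feature.
# Hypothetical data; assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# 'group' stands in for a protected attribute; 'postcode' is a proxy
# that correlates with it (as residential segregation often ensures).
group = rng.integers(0, 2, n)
postcode = np.where(rng.random(n) < 0.8, group, 1 - group)
skill = rng.normal(size=n)

# Historical labels: past decisions rewarded skill AND favoured group 0.
hired = (skill + 1.5 * (group == 0) + rng.normal(size=n) > 1).astype(int)

# Train WITHOUT the protected attribute - only skill and the proxy.
X = np.column_stack([skill, postcode])
model = LogisticRegression().fit(X, hired)

# At identical skill, the two postcodes still get different scores:
# the proxy smuggles the old discrimination into the 'new' prediction.
probe = np.array([[0.0, 0.0], [0.0, 1.0]])  # same skill, different postcode
print(model.predict_proba(probe)[:, 1])
```

Nothing here is mysterious in the technical sense; the point is that the model’s ‘vision’ of who should be hired is a statistical echo of decisions already made.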

Over the last few years, we have been witnessing a shift in the conception of artificial intelligence: away from so-called ‘expert systems’ and towards the ‘smartification’ of almost everything.4 We find AI in our mobile devices and our urban infrastructure; it powers the recommendation systems that tell us what to listen to, what to buy, what art is, and whom to date. And if you follow current media coverage, you get the impression that artificial intelligence is coming not only for our love lives, but also for our jobs, dreams and brains. With this issue of spheres we want to focus on current discussions around AI, automation, robotics and machine learning from an explicitly political perspective. Instead of invoking and, therefore, perpetuating the spectre of artificial intelligence as a ‘programmed vision’5 built on our past, we are interested in tracing human and non-human agency within automated processes, discussing the ethical implications of machine learning, and exploring the ideologies behind the imaginaries of AI. With these impulses as starting points, we sought contributions that deal with AI at three different levels of analysis:

First, this issue gathers reflections on theoretical (re-)conceptualisations of artificial intelligence. What genealogies do terms such as artificiality, intelligence, learning, teaching and training have, and what are their hidden assumptions? How can the interrelation between human and machine intelligence be understood, and how is intelligence operationalised within AI? In his contribution, Matteo Pasquinelli addresses these questions from the perspective of the often neglected technical limitations of artificial intelligence. Tracing a methodology of error, he asks “[w]hat does it mean for intelligence and, in particular, for Artificial Intelligence to fail, to make a mistake, to break a rule?” In his comment, Pablo Velasco responds to this question by stating that error has become an integral part of “an ideology of improvement” typical of the current AI paradigm. What we see with machine intelligence is the idea that “failure is subsumed to an idea of progress” and, therefore, normalised as an optimisation problem. Manan Asif, in his piece, tackles this problem from a philological perspective. He contextualises data science applications in US drone warfare within the substantial colonial pre-histories of naming and knowing the ‘other’, as determined through the discipline of ‘area studies’. Noopur Raval extends his analysis by offering “other histories, older and newer, to point to the fundamentally violent heart of techno-science as a historical, colonial enterprise.” Raval goes on to say that this is not new but “bears repeating because it brings into question whether repurposing historically violent disciplines, knowledge projects and technologies might realize the decolonial futures we want.” In her paper, Emma Stamm examines the utility of interpretative phenomenology in the psychedelic sciences to critically engage with epistemic positions within the artificial intelligence discourse. She writes: “If psychedelic drugs do in fact bring forth new insights on the psyche, psychedelic science is poised to inform conceptions about mentality which prevail across various fields of scientific research and practice, including artificial intelligence.”

Second, the issue concerns the implications of artificial intelligence, in terms of both its making and its real-world effects. What kinds of data analysis and algorithmic classification are being developed, and what are their parameters? How do these decisions get made, and by whom? Along these lines, Simon Crowe takes a closer look at the micropolitics of recommender systems, which he identifies as “a producer of subjectivity, a resident of planet-spanning cloud computing infrastructures, a conveyor of inscrutable semiotics and a site of predictive control” (a dynamic sketched in the toy example after this paragraph). The question of control is picked up by Ariana Dongus and Pedro Oliveira, who write about biometric technologies in the context of AI. Dongus presents the history of biometrics, showing how the Iraqi city of Fallujah serves as a testing ground for a present and future regime of biometric identification and control. In keeping with her examination of how individuals have been turned into “biometric data points”, Oliveira introduces his art practice, which investigates Europe’s and, in particular, Germany’s use of accent recognition software within border control regimes. According to him, “[b]iometric technologies are calibrated within a set of normative assumptions that, in effect, convey white supremacist modes of seeing and listening”. Adnan Hadzi and Denis Roio situate AI within the military-industrial complex, asking at what point a seemingly intelligent and self-conscious hardware/software system might be considered a ‘person’, and what implications this might have for “restorative justice for AI crimes and how the ethics of care could be applied to AI technologies.” Claire Larsonneur, in her contribution, focuses on machine translation systems such as Google Translate and DeepL. From a material rather than speculative perspective, she investigates the making of neural machine translation (NMT) in order to identify the genealogy and specificity of translation tools, uncover the current sociology and geography of NMT agents, and examine their impact on our relation to language.
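As a side note on Crowe’s observation, the predictive-control dynamic of recommender systems can be shown with a deliberately crude toy model: a system that recommends whatever was clicked most in the past ends up manufacturing the very popularity it predicts. This simulation is our own hedged illustration, not taken from Crowe’s paper; all rules and numbers are assumptions.

```python
# Toy feedback loop: recommending from past clicks reshapes future clicks.
# All parameters are invented for illustration; assumes numpy is installed.
import numpy as np

rng = np.random.default_rng(1)
n_items = 20
true_interest = np.full(n_items, 1.0 / n_items)  # users like all items equally
clicks = np.ones(n_items)                        # start from a uniform history

for _ in range(1000):
    # Recommender policy: always show the historically most-clicked item.
    shown = int(np.argmax(clicks + rng.normal(0, 1e-6, n_items)))  # noisy tie-break
    if rng.random() < 0.5:
        clicks[shown] += 1.0        # exposure drives most clicks...
    else:
        clicks[rng.choice(n_items, p=true_interest)] += 0.1  # ...interest adds little

# Despite perfectly uniform interest, one item absorbs most attention.
print(f"top item's share of clicks: {clicks.max() / clicks.sum():.2f}")  # >> 1/20
```

The users in this sketch have no preferences at all, yet the system’s output looks like a strong, stable preference – a small illustration of how prediction can produce the subjectivity it claims to measure.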

Third, we consider imaginaries that reveal the ideas shaping artificial intelligence. How do pop-cultural phenomena reflect the current reconfiguration of human-machine relations? What can they tell us about the techno-capitalist unconscious working behind the scenes of AI systems? In their essay “AI and the Imagination to Overcome Difference”, Christoph Ernst, Jens Schröter and Andreas Sudmann revisit a longstanding ambition of AI computing: Turing’s universal – and universalising – machine. They examine how, across various applications, AI technology is imagined as singular and unified, addressing an astonishing diversity of nuanced social conditions such as language translation, work and the automation of war. They show how this happens through the flattening of the differential abilities of human and machine. In response, our guest editor, Maya Ganesh, expands on the notion of this difference through its synonyms: ‘gap’, ‘distinction’, ‘discrimination’ and ‘diversity’. Taking applications such as natural language processing (NLP) in hate speech identification and autonomous weapon systems, Ganesh shows that they erase differences between humans rather than acknowledging our uniqueness. In their contributions, Sy Taffel and Yeawon Kim ask how artistic practices can respond to a situation dominated by algorithmic decision-making and increasing automation. In the documentary film “Automating Creativity”, Taffel explores how workers in the creative industries, and academics who study technology and culture, understand the existing and emerging relationships between automation and creativity, and how these relationships inform contemporary communication, media and culture. Kim’s “Insectile Indices”, on the other hand, is an explicitly speculative design project that considers how electronically augmented insects could be trained to act as sophisticated data sensors, working in groups as part of a neighbourhood predictive-policing initiative in the Los Angeles of 2027. Both projects reflect on the histories, modes and imaginary futures associated with varying forms of artificial intelligence.

To return to questions of spectres and hauntings: this issue brings together essays and artistic contributions about AI as something that is not present, that has not come to be – assuming we believe that AI will be a fully sentient, unified, machinic superintelligence, and is not actually the rudimentary prototypes and broken toys we see around us now. Even in this partial state, it creates social relations around things that are similarly spectral – race, gender, caste, culpability, ‘killability’ – which are entirely socially constructed and yet have material forms and embodied consequences. Our work, then, is to resolve the haunting and find ways to reconcile with ghosts.

References
1 Jacques Derrida, Spectres de Marx, Paris, Galilée, 1993.
2 For example: Safiya Umoja Noble, Algorithms of Oppression, New York, NYU Press, 2018; Virginia Eubanks, Automating Inequality, New York, Picador, 2019; Clemens Apprich, Wendy Hui Kyong Chun, Florian Cramer and Hito Steyerl (eds.), Pattern Discrimination, Lüneburg/Minneapolis, meson press/University of Minnesota Press, 2019.
3 Mark Fisher, “What Is Hauntology?”, Film Quarterly, 66 (1), 2012, pp. 16–24. Available at: http://www.jstor.org/stable/10.1525/fq.2012.66.1.16 [accessed November 17, 2019].
4 See Orit Halpern, Robert Mitchell and Bernard Dionysius Geoghegan, “The Smartness Mandate: Notes toward a Critique”, Grey Room, 68, 2017, pp. 106–129. Available at: https://doi.org/10.1162/GREY_a_00221 [accessed November 17, 2019].
5 Wendy Hui Kyong Chun, Programmed Visions: Software and Memory, Cambridge, MA, MIT Press, 2011.

Maya Indira Ganesh is a technology researcher, writer and speaker who works with arts and cultural organisations, academia and NGOs. She is working on a PhD at Leuphana University on the cultural-computational shaping of the notion of machine ‘autonomy’ and the evolving role of the human within it. Her other areas of research expertise include gender, feminism and technology; big data and discrimination; digital security and privacy in human rights defence; and online activism. She has worked with Tactical Tech, the Citizen Lab at the University of Toronto, UNICEF India, and the APC Women’s Rights Program.

Stina Lohmüller recently earned her M.A. in the “Culture, Arts and Media” programme at Leuphana University. Within the broad field of digital cultures, her research interests focus on the status of digital media technologies in socio-cultural processes and on a critical questioning of recent digitalisation policies. Her bachelor’s thesis explored new forms of citizenship in smart city environments.