AI and the Imagination to Overcome Difference

The history of AI is essentially characterised by high expectations. Much has been written about these expectations and the disappointments they result in.[1] This is because future-oriented ideas of what is technically feasible have always been closely related to the ways in which (popular) culture has imagined different applications of AI. Perhaps we are now confronted for the first time with a historical situation in which the divide between AI as science fiction and AI as empirical research has become so narrow that distinguishing the two realms is no longer an easy task. With this contribution, we seek to demonstrate that these high expectations, manifest both in historical research and in the imagination of AI in (popular) culture, share a substantial similarity: various “sociotechnical imaginaries” of overcoming difference.[2]

Given the rapid development of diverse social applications of AI-based technologies, this essay discusses how idealisations of AI as a ‘universal’ technology in key fields of current debates mirror the imaginations (or even phantasms) of overcoming social and cultural differences in particular, and the difference between humans and machines in general. In the following, we give an overview of the concept of a universal translation of language and of the idea of machines erasing their difference from human labour, and we discuss the notion of ‘autonomy’ in debates on autonomous weapons systems. Of course, the articulation of overcoming difference varies across these scenarios. We believe, however, that there are significant similarities and relations between those articulations that reveal important aspects of how AI has been and continues to be imagined and explored.

Papert’s Critique of Universal AI Mechanisms

The difficulties that AI research still faces today may have something to do with what Seymour Papert discussed in an article for the journal Daedalus back in the 1980s. In his essay, Papert criticises how the competing paradigms of AI, the symbolic rule-based approach and artificial neural networks (aka connectionism), are both “engaged in a search for mechanisms with a universal application.”[3] However, as he stresses, there is no “privileged and universal mechanism on any psychologically relevant level”.[4] He illustrates this point with the following analogy:

“An evolutionary biologist might try to understand how tigers came to have stripes. And a molecular biologist might try to understand the origin of life in some primeval soup. But how life started gives you no information about how a tiger looks. Yet this fallacy pervades the intellectual discourse of connectionists and programmers. The connectionists talk about experiments on the level of small groups of simulated neurons and then, almost in the same breath, talk about how one can walk and think at the same time. Multiprocessing is assumed to be the same kind of enterprise in both cases. Information processing experts display rule systems that match the behavior of people and computers solving logical problems, and jump from there to statements like Allen Newell’s: ‘Psychology has arrived at the possibility of a unified theory of cognition’.”[5]

As this analogy suggests, Papert assumes that both approaches commit a categorical error if they believe “that the existence of a common mechanism provides both an explanation and a unification of all systems, however complex, in which this mechanism might play a central role.”[6] For this reason, he argues in favour of AI research that is not only devoted to the similarities between AI tasks but also addresses their specifics and differences. Indeed, it is easy to show that current AI research continues to be determined by the idea of pursuing universalistic concepts to meet various techno-social challenges, as one can observe, for example, with regard to advanced machine translation systems.

Scenario 1: Language and Universal Translators

A particularly relevant case here is Google’s Neural Machine Translation system (GNMT), which has been the basis of the online application Google Translate since November 2016. According to the company, it has reduced the error rate of translations by 60 per cent compared to the statistical method Google previously used.[7]

In technological terms, the system is based on an ANN architecture called LSTM (Long Short-Term Memory).[8] This method is characterised by the ability to store information over both short and long time spans and thus to cope with more sophisticated machine learning tasks. With LSTM technology, the translation system can analyse a sentence and memorise its sequence of words. This procedure differs from the previous statistical methods, which divided sentences into individual phrases and words but could not analytically evaluate the chronological order of the sentence elements.
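To make the mechanism more tangible, the following minimal sketch shows how an LSTM encoder consumes a sentence word by word while carrying along a hidden state and a cell state, the latter serving as the long-term memory of the sequence. It assumes PyTorch; the toy vocabulary and layer sizes are our own illustrative choices and have nothing to do with Google’s actual configuration:

```python
import torch
import torch.nn as nn

# Toy vocabulary and one sentence encoded as word indices (illustrative only).
vocab = {"<pad>": 0, "wie": 1, "geht": 2, "es": 3, "dir": 4}
sentence = torch.tensor([[1, 2, 3, 4]])  # batch of one sentence: "wie geht es dir"

embedding = nn.Embedding(num_embeddings=len(vocab), embedding_dim=16)
lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)

embedded = embedding(sentence)        # shape: (batch, seq_len, 16)
outputs, (h_n, c_n) = lstm(embedded)  # the words are processed in order

# h_n is the hidden state after the last word; c_n is the cell state, the
# "long short-term memory" that lets information from early words survive
# to the end of the sentence. A decoder LSTM would be initialised with
# these states to generate the translation word by word.
print(outputs.shape, h_n.shape, c_n.shape)
```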

At its introduction, the GNMT system operated with eight languages. In the future, the system is intended to work with the 100+ languages that Google Translate currently includes, which would require it to handle over 10,000 language pairs. According to Google, however, precisely such individual customisation is not necessary. Instead, one single learning method is used for all language pairs. A technical prerequisite for this is the so-called zero-shot learning method, which allows the system to translate between language pairs for which it has never been trained with sample data before.[9] In other words, Google’s system is designed to technologically eradicate the difference between languages and thus also their cultural specificity.
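The published trick behind this single-model design is remarkably simple: an artificial token naming the target language is prepended to every source sentence, so that one shared encoder-decoder network learns all directions at once, and translation between a pair never seen together during training emerges as ‘zero-shot’ behaviour. A minimal sketch of this data preparation step (the token format and the helper function are our own illustration, not Google’s production code):

```python
def prepare_example(source_sentence: str, target_lang: str) -> str:
    # One shared model for all pairs: the only per-pair "customisation"
    # is an artificial token telling the network which language to emit.
    return f"<2{target_lang}> {source_sentence}"

# Directions covered by training data, e.g. English<->Japanese, English<->Korean:
print(prepare_example("How are you?", "ja"))     # "<2ja> How are you?"
print(prepare_example("Annyeonghaseyo.", "en"))  # "<2en> Annyeonghaseyo."

# Zero-shot request at inference time: Japanese -> Korean, a direction for
# which the system has never seen a single training example.
print(prepare_example("Konnichiwa.", "ko"))      # "<2ko> Konnichiwa."
```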

Unsurprisingly, the cultural desire to produce such a universal translation machine is anything but new. Long before modern research in machine translation (MT) took off in the 1950s, seventeenth-century philosophers and scholars like Beck, Leibniz or Descartes were already interested in the idea of a universal language, attempting to develop a ‘numerical dictionary’. Furthermore, popular media[10] depicted universal translating devices decades before a system like Google’s became a reality.[11]

A central function of the universal translator in popular media is to provide narrative efficiency and plausibility, for example regarding instant communication between humans and alien species. At the same time, as widely discussed among fans of science fiction featuring this device, there can be logical problems connected to it: in films or TV series, for example, the alien’s mouth moves in sync with the translated language the audience hears. Hence, some suspension of disbelief is needed to accept the narrative plausibility of such technology. The critical point, however, is that the universal translator’s central ability is to learn unknown languages quickly, and modern machine translation systems like Google’s are at least trying to achieve the same. In a certain sense, then, current AI research is increasingly capable of turning the science fiction fantasies of the past into present reality. As the case of Google’s translation system demonstrates, this also includes the dream of creating an AI with universal capabilities.

However, it makes a difference whether one is confronted with a universal AI system used for a classification task like distinguishing triangles from circles, or with one that is supposed to function as a universal machine translator able to deal with the cultural and historical specifics of a language.

For instance, if you ask Google’s system to translate the German word “Blitzkrieg” into English, the system’s response is “flash war” instead of simply keeping the German word. However, if you repeat the translation request adding “Zweiter Weltkrieg” (Second World War), the system provides the right output. Hence, one can conclude that the system needs a bit of context to translate appropriately. Unfortunately, Google Translate is still not sophisticated enough to take contextual information adequately into account. For example, if you add “Gewitter” or “Unwetter” (thunderstorm, storm) to “Blitzkrieg”, the output remains “Blitzkrieg”. Of course, one can think of a complete sentence in which the word in its untranslated form still makes sense. Yet in this particular case, the phrase would more likely refer to a literal “war of lightning flashes” (“ein Krieg der Blitze”).
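Readers who want to reproduce this little experiment programmatically could do so along the following lines, assuming the google-cloud-translate client library and configured API credentials; since Google’s models are continuously updated, today’s outputs may well differ from those reported above:

```python
# Probing how much context Google Translate needs, assuming the
# google-cloud-translate (Basic/v2) client and configured credentials.
from google.cloud import translate_v2 as translate

client = translate.Client()

probes = [
    "Blitzkrieg",                    # the bare word
    "Blitzkrieg Zweiter Weltkrieg",  # historical context added
    "Blitzkrieg Gewitter",           # meteorological context added
]

for text in probes:
    result = client.translate(text, source_language="de", target_language="en")
    print(f"{text!r} -> {result['translatedText']!r}")
```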

As this example demonstrates, Google’s system still has trouble dealing with the cultural and historical specifics of a language. At the same time, it is quite likely that Google, or some other company providing machine translation systems, will soon find ways to cope better with this profound challenge.

However, as of now, there is still a significant discrepancy between claim and reality – or, if you will, between imagination and reality – of what machine translation systems can achieve. This discrepancy can have serious consequences, as became clear in the case of Microsoft’s Twitter chatbot Tay in 2016. Within only 24 hours of its initial launch, users were able to trick the adaptive system into making racist, antisemitic and misogynistic statements of all kinds. As a result, Microsoft had to shut down the chatbot immediately.[12] Chatbots are, of course, a different use case of AI than machine translation systems like Google Translate. That does not change the fact, however, that the example of Tay demonstrates the limits of machinic language understanding. Moreover, as long as AI systems do not capture the ideological dimension of language in general, or of a single word like “Blitzkrieg” in particular, they remain unsuitable for important tasks.

Scenario 2: The Work of Humans and Machines

The desire and imagination that AI can overcome difference through operations such as learning, planning, reasoning or the hierarchisation of knowledge are also central to the recent, rather nervous discussions about the so-called ‘future of work’.[13]

The difference presumably erased in this case is the difference between man and machine regarding their capacity to work. There is much debate on what such an erasure would mean for the future of our societies, which are – as e.g. Hannah Arendt and Michel Foucault pointed out several times – centrally structured around work.[14] Even more, the erasure of this difference would pose unsolvable problems at least for the economic order of world capitalism, which has come to be completely naturalised: if work were to become scarce, how should people earn the money to buy products? All of a sudden, AI seems to be a severe problem for the continued existence of naturalised social forms.

It is important to remind ourselves that the discussion about the technological substitution of work is quite old: it can be traced back to Aristotle and is later found in Marx, but also in Norbert Wiener, Hannah Arendt, and many others. There are, moreover, important counterarguments as to why this substitution might not happen.[15] With the renewed interest in AI and AI-driven robotics, however, this discussion has become urgent again. In movies like Blade Runner 2049 (USA 2017), cities are crowded – besides humans – with intelligent and stunningly lifelike machines: AI-driven holograms that behave like humans, as well as ‘replicants’ like the prostitute Mariette. In a sense, one can read Blade Runner 2049 as a complex meditation on the future of work. The sex robot makes the work of real prostitutes superfluous, while the main character (‘K’) is himself a replicant whose task is to catch and destroy older replicants. Although it remains unclear how the economy of this future might work, it becomes evident that the difference between human and technological agents is almost non-existent. At one point, the protagonist’s holographic girlfriend even hires the replicant prostitute and ‘overlays’ herself onto her – in what might be the first augmented reality sex scene in cinema – to give K the feeling of having real sex. It is therefore no surprise that the plot is driven by the enigma that, years before, a female replicant had given birth to a baby – a phenomenon thought to be impossible. At stake here is not only the further erasure of the difference between human and artificial intelligence but also the machinisation of reproductive work. The crucial point, however, is the following: if you have such human-like machines, with all difference seemingly erased, why should those machines not do all the manual work left over from automation by more ‘primitive’ robots and AIs? In any case, Blade Runner 2049 suggests a world in which hardly any work whatsoever is left.

In the paper “Do Androids Dream of Surplus Value?”, whose title alludes to the Philip K. Dick novel that served as the basis of Blade Runner, Atle Mikkola Kjøsen argues that such a technology would pose fundamental questions. On the one hand, it is often argued that capitalism could not persist because the circuit of wage labour and consumption would be interrupted.[16] On the other hand, through a meticulous reading of Marx, Kjøsen argues that such highly developed AI-based robots might themselves produce surplus value. A purely post-anthropocentric capitalism, able to reproduce itself without humans, might therefore emerge, leaving the latter to die out.[17]

However, one does not have to share such radical speculations to see that the often imaginatively exaggerated scenarios of AI pose disturbing questions regarding the future of the economy, including the realm of politics. Srnicek and Williams regard the possible disappearance of work neither as a problem nor as the kind of apocalyptic scenario Kjøsen suggests. Instead, they see in it a significant opportunity to get rid of tedious or dangerous work.[18] AI and robotics, they argue, should be developed in a direction that makes all work superfluous, thereby erasing another difference: the difference between work and leisure. This demand for ‘full unemployment’ of course presupposes deep social transformations towards a ‘post-capitalism’; a universal basic income would only be the beginning.[19] Furthermore, new ideas about an algorithmic and ‘post-monetary’ economy rest on perhaps somewhat exaggerated conceptions of AI:

“Once all payments are recorded and all purchases and sales are settled, economic processes can be controlled with the help of algorithms and artificial intelligence. This will not be to our detriment, because there is some evidence to suggest that cashless procedures deliver far better results for the distribution of goods and activities than the current financial system.”[20]

However, most of this is speculation: it might never come to a post-monetary economy, a post-capitalism, or a post-anthropocentric capitalism without any necessity to work. Instead, as some recent studies suggest, it is quite likely that at least one of the more immediate effects of such a development will be an increase in social inequality:

“Digitisation and automation will not lead to mass unemployment. The problem is not unemployment, but greater inequality and stagnating real wages in the middle of the wage spectrum. So far, the use of robots has only had a weak impact on wages. But with the advent of artificial intelligence and other digital technologies, things could get worse.”[21]

In this light, the imagination of overcoming the difference between worker and working machine might in fact mask the difference between rich and poor – and, of course, between the Global North and the Global South: if production is transferred back to the global centres because a general AI makes production more lucrative than cheap labour from the Global South (while today it is still less expensive to exploit children in Cambodia or to use hidden microwork than to use AI), then many opportunities to work – even drastically underpaid or dangerous ones – will disappear. This is another reason to critically question imaginations of overcoming the difference between human and machine work in current discourses on AI.

Scenario 3: Autonomy and Autonomous Weapon Systems

The topic of using AI in so-called ‘autonomous weapon systems’ (AWS) has captured the public imagination in recent years. Rarely does a scientific talk, documentary, press article, or television show fail to refer to The Terminator (USA 1984) and Terminator 2: Judgment Day (USA 1991). The predominant motif is the fear of losing human control over machines.[22]

However, while the public is concerned with the prospect of AWS taking over their lives, experts in the field voice far more mundane concerns. For authors such as Mary L. Cummings, a former fighter pilot of the US Air Force and director of the Humans and Autonomy Laboratory at Duke University, the real problem is that the “global defense industry is falling behind its commercial counterparts in terms of technology innovation […].”[23] The development of AWS technology is deeply intertwined with civilian uses of AI. This constellation points to the blurring of distinctions such as civilian and military via a shifting relationship between man and machine. At the heart of this looms the imagination of ‘autonomy’ as the self-determined ability of machines to reason, act, and decide in an indefinite number of new situations and contexts.[24]

Essential to the development of AWS is the step from ‘automated’ to ‘autonomous’ operations. As Cummings notes: “Unlike automated systems, when given the same input autonomous systems will not necessarily produce the same behavior every time; rather, such systems will produce a range of behaviors.”[25] To conceptualise this,[26] she maps four types of information processing onto increasing levels of uncertainty.[27] According to her model, the most basic forms, with the fewest uncertainties and the lowest level of complexity, are skill-based actions like navigating through unknown terrain. The next level, where AWS are currently situated, is rule-based reasoning, as in autonomous driving. This will be surpassed by knowledge-based reasoning and fully ‘autonomous’ expert behaviours. Although such developments are imaginable, for Cummings they exceed the abilities of existing technologies for the foreseeable future.[28]
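Cummings’ distinction can be compressed into a few lines of code (our own illustrative reduction, not her formalism): an automated controller is a fixed mapping from input to output, whereas an ‘autonomous’ controller selects from a range of admissible behaviours, so identical inputs need not produce identical outputs:

```python
import random

def automated_response(obstacle_distance: float) -> str:
    # Automated system: the same input always produces the same behaviour.
    return "brake" if obstacle_distance < 10.0 else "continue"

def autonomous_response(obstacle_distance: float) -> str:
    # "Autonomous" system in Cummings' sense: the same input yields one of
    # a range of behaviours, here weighted by an internal policy.
    if obstacle_distance < 10.0:
        return random.choices(
            ["brake", "swerve_left", "swerve_right"], weights=[0.6, 0.2, 0.2]
        )[0]
    return "continue"

# Querying both controllers repeatedly with the identical input:
print({automated_response(5.0) for _ in range(10)})   # always {'brake'}
print({autonomous_response(5.0) for _ in range(10)})  # usually several behaviours
```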

Cummings’ model provides a plausible differentiation. However, one can challenge the model’s underlying notion of autonomy with regard to the related concept of AI. Her premise for comparing the autonomy of man and machine follows the well-known loop of sensing, processing/reasoning, and acting. The analogy of media-based sensing, world-model-based reasoning, and subsequent acting serves as a tertium comparationis between machines and humans:

“[…] the world must be perceived (or sensed through cameras, microphones and/or tactile sensors) and then reconstructed in such a way that the computer ‘brain’ has an effective and updated model of the world it is in before it can make decisions. The fidelity of the world model and the timeliness of its updates are the keys to an effective autonomous system.”[29]
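What Cummings describes here is the classic sense-model-act loop. A schematic rendering (our own minimal sketch, not drawn from her text) makes visible how this architecture locates autonomy entirely inside the individual agent and its world model – precisely the premise the following paragraphs call into question:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # The computer "brain": an internal reconstruction of the world,
    # here reduced to a map from sensor names to last-sensed values.
    world_model: dict = field(default_factory=dict)

    def sense(self, observations: dict) -> None:
        # Perceive the world (cameras, microphones, tactile sensors)
        # and update the internal model with the latest readings.
        self.world_model.update(observations)

    def decide(self) -> str:
        # Reason over the world model alone: the loop locates autonomy
        # entirely inside the individual agent.
        if self.world_model.get("obstacle_distance", float("inf")) < 10.0:
            return "brake"
        return "continue"

    def act(self, action: str) -> None:
        print(f"executing: {action}")

agent = Agent()
for observation in [{"obstacle_distance": 25.0}, {"obstacle_distance": 6.0}]:
    agent.sense(observation)    # sense
    agent.act(agent.decide())   # reason, then act
```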

However, what happens if one considers autonomy not as an inherent potential of the cognitive faculties and ‘world model’ of an individually perceiving and reasoning entity, but rather as an effect of socio-technical configurations – focusing on the “capacities for action that arise out of particular socio-technical systems”?[30]

Lucy Suchman and Jutta Weber have proposed such an alternative view of autonomy.[31] In their discussion of autonomy in robotics and AI research, they seek to show how “the project of machine intelligence is built upon, and reiterates, older notions of agency as an inherent attribute and autonomy as a property of individual actors.”[32] These notions continue to exist in the design of “putatively intelligent, autonomous machines” today.[33] The linear understanding of cognition underlying them – the loop of sensing, processing/reasoning, and acting – is historically aligned with the idea of strong symbolic, rule-based artificial intelligence:[34]

“Accordingly, symbolic artificial intelligence repeats traditional, rational-cognitive conceptions of human intelligence in terms of planning. It does not promote the idea of autonomy of technical systems in the sense of the randomly based, self-learning behaviour of so-called new artificial intelligence.”[35]

Although Cummings is aware that the era of symbolic AI is heavily contested, her model does not reflect what follows from this. It considers neither the impact of ‘new AI’ (ANNs, machine learning) nor a different understanding of human cognition for conceptualising the man-machine difference.[36] Even when calling for a new understanding of the man-machine relation as a form of cooperation,[37] Cummings follows a cognitive model from the era of classic symbolic AI.

If we look back at the Terminator analogy, we have to think of AWS not as individual Terminators but in terms of “human-machine-assemblages”.[38] AWS do not act as a consequence of an autonomous reasoning process over a ‘world model’ in any traditional philosophical sense.[39] Their ‘autonomy’ consists in independently identifying and seizing opportunities created by the external structural capacities of a ‘kill chain’ comprised of human and non-human cognitive abilities. Just as it is not sufficient to think of media merely as cameras and sensors, it is not enough to conceive the autonomy of AWS in traditional cognitive terms. Instead, we have to think of it as a shifting man-machine relation on an infrastructural level. The idea of an AI based on a universal human ability to behave ‘autonomously’ is therefore misguided: it conceals the problem that there is no such concept of human autonomy in the first place.

Conclusion

The purpose of this essay was to identify the different ways in which AI research has been and continues to be characterised by the desire and imagination to overcome difference through universalistic expectations and strategies concerning the capabilities of AI-based technologies. As we have demonstrated, these expectations and strategies can be explained in close relation to the imaginations of AI in the history of popular media.

Evidently, the concepts of difference inherent in the three scenarios discussed here themselves differ. On the one hand, we have demonstrated, with regard to the case of machine translation, how AI research has been and continues to be driven by the idea of overcoming the cultural differences and specific ambiguities connected to certain tasks (like language understanding) through universal technological strategies. On the other hand, we have explored whether current AI research and innovations might eliminate the difference between human and machinic work. Obviously, the concept of difference in the latter example is much broader and at the same time much more fundamental, since what is at stake is not only the technological question of adequately dealing with the cultural differences and specificities of data as input for AI systems based on modern machine learning. Instead, it addresses the old and fundamental question of how AI research (in the light of its historical experiences) attempts to handle the ontological difference between humans and machines. Further, as the example of autonomous weapon systems illustrated, it could be fatal to try to overcome the difference between man and machine by transferring universal notions of human learning, understood as an autonomous activity, to scenarios in which robots or AWS are supposed to act independently. Human learning has never been autonomous, nor are we likely to face a future of technology without any humans involved. In other words, our concepts of learning are determined by a problematic way of thinking difference that positions human beings as the autonomous other in relation to machines and technology.

In sum, two interdependent positions or attitudes characterise the thinking of difference in AI research. Evidently, imaginaries of overcoming difference are related to an instrumental-pragmatic account of technology: AI technology is meant to find universal solutions for different problems, and researchers in AI are, or at least believe themselves to be, able to achieve this task. Yet if one considers different use cases of AI research like those examined in this essay, it turns out that the imagination of overcoming difference through AI-based technology is much more than that: it is not only a strategy to reconceptualise the ‘universal machine’, but also an ideology that disguises the real socio-cultural and material differences of our empirical world.

References
1 Cp. Andreas Sudmann, “Zur Einführung. Medien, Infrastrukturen und Technologien des maschinellen Lernens”, in Christoph Engemann and Andreas Sudmann (ed.), Machine Learning. Medien, Infrastrukturen und Technologien der Künstlichen Intelligenz, Bielefeld, transcript, 2018, pp. 9–23.
2 Cp. Sheila Jasanoff, “Future Imperfect: Science, Technology, and the Imaginations of Modernity”, in Sheila Jasanoff and Sang-Hyun Kim (eds.), Dreamscapes of Modernity. Sociotechnical Imaginaries and the Fabrication of Power, Chicago, Chicago Univ. Press, 2015, pp. 1–33.
3 Seymour Papert, “One AI or Many?” in Stephen R. Graubard (ed.), The Artificial Intelligence Debate. False Starts, Real Foundations, Cambridge, Mass, MIT Univ. Press, Second Edition, 1989 [1988], pp. 1–14, here: p. 2.
4 Ibid.
5 Ibid.
6 Ibid.
7 Cp. Yonghui Wu et al., “Google’s Neural Machine Translation System. Bridging the Gap between Human and Machine Translation”, September 26, 2016. Available at: http://arxiv.org/abs/1609.08144 [accessed June 28, 2017].
8 Cp. Sepp Hochreiter and Jürgen Schmidhuber, “Long Short-Term Memory”, Neural Computation, 9 (8), 1997, pp. 1735–1780.
9 Cp. Wu et al., “Google’s Neural Machine Translation System.”
10 Murray Leinster’s novella First Contact, published in 1945, already contains the idea of a universal translator. Another important reference is of course the Babel fish in Douglas Adams’ saga The Hitchhiker’s Guide to the Galaxy, next to popular depictions of universal translating devices in science fiction films and series like the Star Trek franchise (e.g. Star Trek: TOS, USA, NBC, 1966–69) or Men in Black (USA 1997).
11 Precisely for this reason it is also important to focus on how technology is imagined in popular representations, since these imaginations potentially inspire and form the desire and ideas of engineers, scientists and business owners outside the world of books and screens. See David Kirby, “The Future is Now: Diegetic Prototypes and the Role of Popular Films in Generating Real-world Technological Development”, Social Studies of Science, 40 (1), 2010, pp. 41–70.
12 Cp. James Vincent, “Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day”, The Verge, March 24, 2016. Available at: https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist [accessed March 25, 2016].
13 See only e.g. Carl Benedikt Frey and Michael A. Osborne, “The Future of Employment: How Susceptible are Jobs to Computerisation?”, Oxford Martin School, 2013. Available at: https://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf [accessed August 7, 2018]; Daron Acemoglu and Pascual Restrepo, “Robots and Jobs: Evidence from US Labor Markets”, MIT Economics, 2017. Available at: https://economics.mit.edu/files/12763 [accessed September 18, 2017].
14 Cp. Hannah Arendt, The Human Condition, 2nd edition, Chicago, Chicago University Press, 1998; Michel Foucault, The Order of Things. An Archaeology of the Human Sciences, London, New York NY, Routledge, 2005 [1966], pp. 240–245.
15 See Jens Schröter, “Digitale Medientechnologien und das Verschwinden der Arbeit”, in Caja Thimm and Thomas Bächle (eds.), Die Maschine: Freund oder Feind? Mensch und Technologie im digitalen Zeitalter, Wiesbaden, VS Verlag für Sozialwissenschaften, 2019 [in print] for an overview of the discussion.
16 Cp. Atle Mikkola Kjøsen, “Do Androids Dream of Surplus Value?”, 2012. Available at: https://www.academia.edu/2455476/Do_Androids_Dream_of_Surplus_Value [accessed August 7, 2018]; cp. also Nick Dyer-Witheford, Atle Mikkola Kjosen and James Steinhoff, Inhuman Power. Artificial Intelligence and the Future of Capitalism, London, Pluto, 2019.
17 Ibid.
18 Cp. Nick Srnicek and Alex Williams, Inventing the Future. Postcapitalism and a World without Work, London, Verso, 2015.
19 See also Paul Mason, PostCapitalism. A Guide to Our Future, London, Allen Lane, 2015 and as a critique Rainer Fischbach, Die schöne Utopie. Paul Mason, der Postkapitalismus und der Traum vom grenzenlosen Überfluss, Köln, Papyrossa, 2017.
20 Stefan Heidenreich, Geld. Für eine non-monetäre Ökonomie, Berlin, Merve, 2017, p. 8.
21 Jens Südekum, “Digitalisierung und die Zukunft der Arbeit”, Wirtschaftspolitisches Zentrum, 2018. Available at: http://www.wpz-fgn.com/wp-content/uploads/PA19DigitalisierungZukunftArbeit20180726.pdf [accessed August 7, 2018], p. 1.
22 Cp. Christoph Ernst, “Beyond Meaningful Human Control? – Interfaces und die Imagination menschlicher Kontrolle in der zeitgenössischen Diskussion um autonome Waffensysteme (AWS)”, in Caja Thimm and Thomas Bächle (eds.), Die Maschine: Freund oder Feind? Mensch und Technologie im digitalen Zeitalter, Wiesbaden, VS Verlag für Sozialwissenschaften, 2019 [in print]. Widely noticed was the Open Letter published by the Future of Life Institute and signed by a who’s who of researchers in AI and related fields. The intent of the letter was to raise public awareness of the potentially devastating effects of AI-based autonomous weapons. Cp. Future of Life Institute, “Autonomous Weapons: An Open Letter from AI & Robotics Researchers”, Future of Life Institute, 2015. Available at: http://futureoflife.org/open-letter-autonomous-weapons [accessed August 8, 2018].
23 Mary Cummings, “Artificial Intelligence and the Future of Warfare”, Chatham House, 2017, pp. 1–16, here: p. 1. Available at: https://www.chathamhouse.org/sites/default/files/publications/research/2017-01-26-artificial-intelligence-future-warfare-cummings-final.pdf [accessed August 8, 2018].
24 Lucy Suchman and Jutta Weber, “Human-Machine-Autonomies”, in Nehal Bhuta et al. (eds.), Autonomous Weapons Systems. Law, Ethics, Policy, Cambridge, Cambridge University Press, 2016, pp. 75–102, here: pp. 89–90.
25 Cummings, “Artificial Intelligence and the Future of Warfare”, p. 3.
26 Ibid., pp. 5–6.
27 Cp. Mary Cummings, “Man versus Machine or Man + Machine?”, IEEE Intelligent Systems, 29 (5), 2014, pp. 62-69; Cummings, “Artificial Intelligence and the Future of Warfare”.
28 Cp. Cummings, “Artificial Intelligence and the Future of Warfare”, pp. 6–8.
29 Ibid., p. 4.
30 Suchman and Weber, p. 78.
31 The approach is of interest here, because it does not negate “the possibility […] of taking an operational approach to defining what have been categorized as lethal autonomous weapons” (ibid., p. 77). Thus, ‘operational’ perspectives on AWS like the one of Cummings are very relevant but questionable in their basic epistemology.
32 Ibid., p. 98.
33 Ibid., p. 76.
34 Cp. ibid., pp. 79–86.
35 Ibid., p. 85. This line of critique reaches back to John Haugeland, Artificial Intelligence. The Very Idea, Cambridge, Mass, MIT University Press, 1985.
36 Cp. Susan Hurley, “Perception and Action. Alternative Views”, Synthese, 129 (1), 2001, pp. 3–40.
37 Cp. Cummings, “Man versus Machine or Man + Machine”, pp. 62–69.
38 Cp. Suchman and Weber, p. 78.
39 Ibid., p. 92.

Christoph Ernst, PD Dr., is an adjunct professor (Privatdozent) for media studies at the University of Bonn, currently working in a research project titled »Van Gogh-TV. Cataloguing, Multimedia Documentation and Analysis of their Legacy« (Prof. Anja Stöffler, Hochschule Mainz, University of Applied Sciences & Prof. Dr. Jens Schröter, Rheinische Friedrich-Wilhelms-University Bonn) in the Department of Media Studies at the University of Bonn. Before that, he worked as a substitute professor and research assistant at the Universities of Siegen, Bonn and Erlangen. Current research interests include diagrammatics & media aesthetics of information visualization; theories of implicit knowledge & digital media with regard to interface theory; media theory & media philosophy, especially with a focus on imagination. Selected publications: Diagrammatik – Ein interdisziplinärer Reader, edited by Birgit Schneider and Jan Wöpking, Berlin: De Gruyter 2016, Medien und implizites Wissen, edited by Jens Schröter, Siegen: Universitätsverlag 2017, Diagramme zwischen Metapher und Explikation – Studien zur Medien- und Filmästhetik der Diagrammatik, Bielefeld: transcript 2020 (forthcoming). Further information: www.christoph-ernst.com.

Jens Schröter, Prof. Dr., has been chair for media studies at the University of Bonn since 2015. Since 4/2018, he has been director (together with Anja Stöffler, Mainz) of the DFG research project “Van Gogh TV” (3 years). Since 10/2018, he has been speaker of the project “Society after Money – A Simulation” (4 years, VW foundation; together with Prof. Dr. Gabriele Gramelsberger, Dr. Stefan Meretz, Dr. Hanno Pahl and Dr. Manuel Scholz-Wäckerle). April/May 2014: “John von Neumann” fellowship, University of Szeged, Hungary. September 2014: guest professor, Guangdong University of Foreign Studies, Guangzhou, China. Winter 2014/15: senior fellowship “Media Cultures of Computer Simulation”, Lüneburg. Summer 2017: senior fellowship, IFK Vienna, Austria. Winter 2018: senior fellowship, IKKM Weimar. Summer 2020: fellowship, DFG special research area 1015 “Muße”, Freiburg. Recent publications: (together with “Project Society after Money”): Society after Money. A Dialogue, London/New York: Bloomsbury 2019; (together with Armin Beverungen, Philip Mirowski, Edward Nik-Khah): Markets, Minneapolis/London: University of Minnesota Press and Lüneburg: Meson (Series: In Search of Media); Medien und Ökonomie, Wiesbaden: Springer 2019. Visit www.medienkulturwissenschaft-bonn.de / www.theorie-der-medien.de / www.fanhsiu-kadesch.de

After having worked in Göttingen, Regensburg, Vienna, and Berlin, Andreas Sudmann currently teaches and researches as an adjunct professor (Privatdozent) of media studies at the Ruhr University Bochum, where he also completed his habilitation in 2016. Most recently, he held a guest professorship in media studies at the Philipps University in Marburg followed by a research fellowship at the IFK in Vienna. Current research foci include media-theoretical and historical problems of artificial intelligence (AI), specifically machine-learning methods, aesthetics and politics of media, forms and processes of seriality and documentary, media theory, and media critique. Sudmann is the author of several books and edited collections in the field of AI, media studies, and digital culture studies, among them: The Democratization of Artificial Intelligences. Net Politics in the Era of Learning Algorithms (transcript, 2019), Machine Learning. Medien, Infrastrukturen und Technologien der Künstlichen Intelligenz (Transcript, 2018, edited together with Christoph Engemann) and Serielle Überbietung. Zur televisuellen Ästhetik und Philosophie exponierter Steigerungen (Metzler 2017).