The AI of the Beholder

Book Review

Review of: Simone Natale, Deceitful Media. Artificial Intelligence and Social Life after the Turing Test, Oxford, Oxford University Press, 2021.

Simone Natale’s Deceitful Media is a book whose originality is immediately apparent as you flip through it – starting with the acknowledgements. Here, where one would expect the author to list the persons and institutions to whom he owes a debt of gratitude, Natale instead introduces the (at least temporarily) discarded idea of writing an AI-themed science fiction novel. Although this choice may seem unconventional, it is not accidental: it hints at issues and problems that lie at the heart of his new work. The yet-to-be-written novel is set in a not-too-distant future where human and synthetic speech are virtually indistinguishable. The story revolves around a woman who receives a mysterious phone call one morning from her husband, who had died a few hours earlier. As the woman tragically finds herself in a state of cognitive dissonance, one unsettling enigma remains to be solved: With whom, or what, had she spoken on the phone? Was it her husband, or an AI almost perfectly mimicking his voice?

Echoing such deeply disturbing questions, the introduction begins with an anecdote about Duplex, a Google-developed voice assistant presented to the public at the 2018 edition of Google I/O, the company’s annual developer conference,1 where it was shown to be capable of fooling people on the other end of a phone call into believing they were talking to a real person. For Natale, the ensuing controversy over whether Duplex’s performance was genuine or staged illustrates an underlying ambivalence that has marked the field of AI since its formative days: “either exceptional powers are attributed to it, or it is dismissed as a delusion and a fraud”.2 This brings the author to the remarkable point that sets out the entire intellectual trajectory of his critical enquiry into the history of AI: “[w]e should regard deception not just as a possible way to employ AI but as a constitutive element of these technologies”.3 In other words, deception is not incidental to AI, but crucial to its performance – a calculated false positive causing people to recognize some form of intelligence where in reality there may be none. To rephrase a popular saying, this suggests that AI resides as much in the eye of the beholder as within the machine, if not more so.

With these premises, it may be tempting to retrace accomplishments in AI as just a continuation of the ancient tradition of automaton-making, whose history is full of spectacular machines, humanlike or otherwise, designed with the precise purpose of amazing audiences by giving an impression of intelligence.4 Interestingly, however, Natale avoids doing so. Departing from the often-told stories of uncanny automata and other life-mimicking beings, one of the book’s main merits is to reframe deception in AI as part of a broader genealogy comprising conventional media like music and film. To this end, the author proposes what he refers to as ‘banal deception’ – that is, ordinary and as such not easily discernible mechanisms embedded in everyday uses and applications of AI. In this regard, an illustrative example is cinema, which routinely exploits our susceptibility to deception in such a way that “the impression of movement […] can be given through the fast succession of a series of still images”.5

The opening chapter is devoted to the Turing test and the related idea that user deception is a valid surrogate indicator of machine intelligence – a theme that runs through the rest of the book as a common thread. For good or ill, so much has been said and written about the test that it is difficult to add anything substantially new. Still, Natale’s reading of Turing’s seminal essay entails an original critique of dominant interpretations not just of the eponymous test, but of AI more generally.6 In contrast to widespread (mis)conceptions of AI as a set of technologies believed to be, or to potentially become, somehow autonomous from the human and society, Natale maintains that AI is quintessentially a relational endeavour. As Turing anticipated in 1950, AI, in other words, has less to do with technology in itself than with human-machine interactions – with both sides of the equation contributing in equal shares to defining what AI is and does. For Natale, “Communication Game” would indeed be a more precise descriptor for what Turing called the “Imitation Game”. What is more,

“if the test is an exercise in human-computer interaction, this interaction is shaped by trickery. […] Including lying and deception in the mandate of AI becomes thus equivalent with defining machine intelligence in terms of how it is perceived by human users, rather than in absolute terms.”7

In the second chapter, the author expands the question of deception to encompass practical and theoretical derivations of Turing’s original intuitions, commencing with the field-defining Dartmouth summer workshop on AI. Discussing the work of AI pioneers such as Licklider, Greenberger, and Minsky, among others, Natale convincingly shows how the history of AI has proceeded in parallel with the development of human-computer interfaces, where user deception is key to achieving desired effects – including the illusion of intelligence. Framed in these terms, the oft-lamented opacity of AI – i.e. the gap between what is visible from the outside and what occurs within the machine – is irreducible to either technical illiteracy or inscrutable computational complexity. In human-machine systems, opacity is itself a convenient option in the quest for technologies that appear to be intelligent without necessarily being so. Accordingly, as a corollary to McLuhan’s famous dictum, Natale posits that media are as much extensions of the human as “they are meant to fit humans”.8 And thus, one may add, the proverbial anthropocentrism of AI is due not only to the fact that it is modelled after human intelligence, but also to the fact that it is tailored to the fallacies of human perception and reasoning.9

Natale is careful to remind us that deception is not an inherently bad thing, for it has its uses when it comes to designing user-friendly digital environments and products. In this sense, it “always contributes some form of value to the user”.10 More controversial, however, is the programmatic use of deceptive elements to exploit users’ pattern-seeking tendencies, and especially the all-too-human predisposition to ascribe humanlike agency and sociality to other-than-human entities. This issue is fully addressed in the following chapter through an in-depth analysis of ELIZA, Joseph Weizenbaum’s psychotherapist chatbot and the progenitor of contemporary conversational programs like Siri and Alexa. Perceived as possessing far greater agential capabilities than it possibly could, ELIZA anticipated an important dynamic that continues to the present day: users’ complicity in the AI myth.11
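
To appreciate how thin the machinery behind such attributions can be, a minimal sketch of the keyword-and-reflection technique underlying ELIZA-style chatbots may help. The following Python fragment is an illustrative reconstruction, not Weizenbaum’s original program (which was written in MAD-SLIP); its patterns and canned responses are invented for the example:

```python
import random
import re

# Pronoun "reflection", as in ELIZA's DOCTOR script: swap first and
# second person so that echoing the user's words sounds addressed to them.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

# Keyword rules: a pattern plus templates that echo the captured fragment.
# These patterns and responses are invented for illustration.
RULES = [
    (re.compile(r"i feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.+)", re.I),
     ["Why do you say you are {0}?"]),
    (re.compile(r".*"),  # fallback when no keyword matches
     ["Please tell me more.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    """Swap pronouns in the user's fragment before echoing it back."""
    return " ".join(REFLECTIONS.get(word, word)
                    for word in fragment.lower().split())

def respond(utterance: str) -> str:
    """Return the first matching rule's template, filled with the echo."""
    for pattern, templates in RULES:
        match = pattern.match(utterance)
        if match:
            echo = reflect(match.group(1)) if match.groups() else ""
            return random.choice(templates).format(echo)
    return "Please go on."  # unreachable: the fallback rule always matches

print(respond("I feel lost without my husband"))
# Possible output: "Why do you feel lost without your husband?"
```

Trivial as this mechanism is, its output can feel recognizably attentive – which is precisely the dynamic Natale describes: the intelligence perceived in the exchange is supplied largely by the user.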

As the book progresses, the author further elaborates on the interplay between deception, anthropomorphic projection, and technical agency. In chapter four, the main argument is that people tend to attribute agentive powers to even relatively unsophisticated computer programs, and all the more so “when software permits actual interactions between computers and users”.12 In making this claim, Natale embraces a socioecological perspective on AI, recognizing context as integral to human-machine communication.13 This, in turn, leads him to question the circumstances under which deception is more likely to succeed, including situations where people knowingly suspend their disbelief for the sake of enjoyment. A case in point is the interaction between human- and computer-controlled characters in videogames, “where part of the pleasure comes from falling into the illusion of entertaining a credible conversation with nonhuman agents”.14

Focusing on the Loebner Prize – an annual contest awarding prizes to the chatbots that best mimic humans’ conversational skills – chapter five continues to advance the notion that contextual variables affect people’s perception and assessment of AI. Persuasively, Natale holds that, ever since its launch in 1991, the competition has been less effective at evaluating progress in AI than at exploring the psychological and situational factors leading to the attribution of intelligence and sociality to machines. Oftentimes dismissed as “obnoxious and stupid”,15 the Loebner Prize, Natale contends, has nonetheless provided the ideal conditions for developing, testing and refining the deceptive tactics presently encoded into the mundane AI devices inhabiting the most intimate corners of our everyday spaces. In anticipation of what will follow, the author thus explains that

“even voice assistants such as Alexa and Siri have adopted some of the strategies developed by the Loebner Contestants – with the difference that the ‘tricks’ are aimed not at making users believe that the vocal assistant is human but, more subtly, at sustaining the peculiar experience of sociality that facilitates engagements with these tools.”16

Deceitful Media culminates in a sharp-eyed diagnosis of the political stakes involved in the design and use of contemporary voice assistants like Siri, Alexa, and Google Assistant. Drawing on the analytical tools introduced gradually throughout the monograph, in the closing chapter Natale delivers a powerful examination of the full implications and subtleties of banal deception in a number of key areas. Presented in a comprehensive and systematic manner, these range from the normalization of class and gender stereotypes to the ‘default whiteness’ of technology and its users, and from the constitution of new power relations to the opacity of software-mediated information access. Succinctly, Natale illustrates how

“[b]anal deception operates by concealing the underlying functions of digital machines through a representation constructed at the level of the interface. A critical analysis of banal deception, therefore, requires examination of the relationship between the two levels: the superficial level of the representation and the underlying mechanisms that are hidden under the surface”.17

In line with the above, in this chapter Natale manages to combine socio-phenomenological and techno-materialist perspectives seamlessly, whereas in the preceding chapters one gets the impression that greater emphasis is placed on the former to the detriment of the latter. It is interesting to note that the author arrives at conclusions converging with those of scholars who approach the same subject from quite different angles – notably Crawford and Joler.18 By exposing the tension between the personification of Amazon Alexa as an individuated piece of software and the layered complexities occurring within nonrepresentational realms, what Natale seems to convey is that if we were to remove the veil of deception and look carefully behind the scenes, we would not find anything like a humanlike machinic intelligence or superintelligence. Rather, the haunting presence concealed behind the AI façade consists of

“a complex assemblage of infrastructures, hardware artifacts, and software systems, not to mention the dynamics of labor and exploitation that remain hidden from Amazon’s customers”.19

As an attentive reader may recall from the book’s introduction, Natale makes it clear from the outset that his scope is limited to software applications of AI as distinct from “embodied physical artifacts”.20 Nonetheless, as the book approaches its end, he eventually recognizes the extent to which, even when software-based, AI technologies are anything but immaterial.21 From the avid consumption of the Earth’s resources to the exploitation of unrecognized forms of labor,22 these technologies always engender all-too-embodied consequences extending far beyond the immediate proximity of their end-use.

All in all, Deceitful Media provides a significant contribution to the field through a nuanced investigation into the political, epistemological and ontological consequences of deception as inherent to the very theory and practice of AI. It almost goes without saying that, like any attempt to chronicle a subject as complex, varied, and continuously evolving as AI, the book can only offer a partial account. Beyond its admitted focus on a particular subcategory of AI, namely interactive AIs, it should be borne in mind that, in the wake of Turing, Deceitful Media traces past and present approaches in which, for a variety of reasons, anthropomorphic imitation is what is ultimately asked of intelligent machines. Alternatively, or perhaps in support of the same arguments Natale is making, one could have included historical research paths concerned less with whether machines could be made as intelligent as humans, or at least be perceived as such, and more with producing new understandings of “intelligence” to begin with23 – including modalities and scales that we may not easily recognize precisely because they do not conform to our anthropomorphic expectations.24 Natale goes well out of the usual way in his book to demonstrate the illusory nature of AI. This is in itself a priceless accomplishment, but one could also argue that the oft taken-for-granted equivalence between intelligence and human thinking is no less an illusion. At a time when opinion on AI is sharply polarized, both perspectives are relevant, and a dialogue between the two appears all the more urgent: the former helps dispel the hype surrounding the existing technologies we have recently become accustomed to calling AI, while the latter offers insights into possible future developments that may only disclose themselves once we abandon hard-to-eradicate assumptions about what constitutes intelligence and how it operates.

Through an in-depth examination of the historical role of the human observer as the one giving meaning to human-machine communication, Natale convincingly identifies deception as the structuring principle of our relationship with AI. As significant and well-reasoned as this argument is, prudence is nonetheless needed not to draw hasty conclusions from it – a risk the author himself cautions us against. The non-correspondence between our perception and the material reality of AI requires us to be vigilant, even suspicious. Still, this should not be taken as sufficient ground for discarding scientific and technological achievements in the field altogether, nor for leaping into an anti-technology position. Aware of the perils involved in this sort of thinking, Natale makes the case for a greater engagement of the humanities and the social sciences with computing and engineering. In this vein, he advocates an ‘ethics of fairness and transparency’, urging all of us to actively interrogate “how the technology works”25 and to trace “the outcomes of different design features and mechanisms embedded in AI”.26

By revealing the very humanness, including our fallacies, behind anthropomorphic AIs, Deceitful Media offers a much-needed complement, rather than an alternative, to recent scholarly efforts to theorize the cognitive capabilities – both affordances and limitations – of contemporary computational media, as well as the composite assemblages they co-constitute with us.27 In our present moment, when automated decision-making is ubiquitous across the full spectrum of human affairs, there are two possible responses to AI. One is to reclaim human intellectual superiority (and autonomy) as a practical benchmark and a moral imperative to which we should relentlessly appeal. The other, which I endorse, is to embrace the unsettling provocation that intelligence is always “distributed across human and technical agencies”.28 But if we are to break with inherited views of intelligence and agency as attributes of the sovereign subject only, the question remains open as to how to formulate anew adequate ethico-political frameworks beyond the terms dictated by our liberal tradition.29

A timely, compelling, and well-documented book, Deceitful Media is a must-read for anyone who seeks an overarching grasp of AI, human psychology, and their mutually productive relationship. Marshalling fresh insights from media history, science and technology studies, social psychology, and communication studies, the book fully delivers on what is ultimately expected of it: making us more sophisticated AI users.

 

References
1 See: Yaniv Leviathan and Yossi Matias, “Google Duplex: An AI System for Accomplishing Real-World Tasks Over the Phone”, Google AI Blog, May 8, 2018. Available at https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html [accessed February 14, 2022].
2 Simone Natale, Deceitful Media. Artificial Intelligence and Social Life after the Turing Test, Oxford, Oxford University Press, 2021, p. 2.
3 Ibid, p. 2.
4 Cp. Minsoo Kang, Sublime Dreams of Living Machines: The Automaton in the European Imagination, Cambridge, MA, Harvard University Press, 2012; cp. Jessica Riskin (ed.), Genesis Redux: Essays in the History and Philosophy of Artificial Life, Chicago, University of Chicago Press, 2007.
5 Natale, p. 12.
6 Alan M. Turing, “Computing Machinery and Intelligence”, Mind, 59, 1950, pp. 433–460.
7 Natale, pp. 27-28.
8 Ibid, p. 39.
9 Cp. Beth Preston, “AI, Anthropocentrism, and the Evolution of ‘Intelligence’”, Minds and Machines, 1(3), 1991, pp. 259–277.
10 Natale, p. 128.
11 Cp. Sherry Turkle, Life on the Screen: Identity in the Age of the Internet, London, Phoenix Paperback, 1997.
12 Natale, p. 74.
13 On the notion of ecological approaches to intelligence, see Sy Taffel’s discussion of Bateson featured in this issue. Cp. Sy Taffel, “Automating Creativity – Artificial Intelligence and Distributed Cognition”, spheres – Journal for Digital Cultures, 5, 2019. Available at: https://spheres-journal.org/contribution/automating-creativity-artificial-intelligence-and-distributed-cognition/ [accessed March 28, 2022].
14 Natale, p. 85.
15 This definition is attributed to Marvin Minsky, as cited in: John Sundman, “Artificial Stupidity”, Salon, February 26, 2003. Available at: https://www.salon.com/2003/02/26/loebner_part_one/ [accessed March 20, 2022].
16 Natale, p. 105.
17 Ibid, p. 109.
18 Kate Crawford and Vladan Joler, “Anatomy of an AI System – The Amazon Echo as an anatomical map of human labor, data and planetary resources”, Share Lab and AI Now Institute, 2018. Available at: https://anatomyof.ai/ [accessed March 22, 2022]. See also: Vladan Joler and Matteo Pasquinelli, “The Nooscope Manifested: AI as Instrument of Knowledge Extractivism”, visual essay, KIM HfG Karlsruhe and Share Lab, 1 May, 2020. Available at: http://nooscope.ai [accessed March 22, 2022].
19 Natale, p. 109.
20 Ibid, p. 11.
21 Cp. Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, New Haven and London, Yale University Press, 2021.
22 Cp. Florian Alexander Schmidt, “Crowdsourced Production of AI Training Data. How Human Workers Teach Self-Driving Cars How to See”, Working Paper Nr. 155 – Hans-Böckler-Stiftung, 2019. Available at: https://d-nb.info/1197839895/34 [accessed June 22, 2022].
23 Cp. Orit Halpern, Beautiful Data: A History of Vision and Reason Since 1945, Durham and London, Duke University Press, 2014.
24 Cp. Katherine N. Hayles, “Cognition Everywhere: The Rise of the Cognitive Nonconscious and the Costs of Consciousness”, New Literary History, 45 (2), 2014, pp. 199–220; cp. Benjamin H. Bratton, “Outing Artificial Intelligence: Reckoning with Turing Tests”, in: Matteo Pasquinelli (ed.), Alleys of Your Mind: Augmented Intelligence and Its Traumas, Lüneburg, meson press, 2015, pp. 69–80.
25 Natale, p. 132.
26 Ibid, p. 132.
27 Cp. Katherine N. Hayles, Unthought: The Power of the Cognitive Nonconscious, Chicago and London, University of Chicago Press, 2017; cp. Casey R. Lynch and Vincent J. Del Casino, “Smart Spaces, Information Processing, and the Question of Intelligence”, Annals of the American Association of Geographers, 110 (2), 2020, pp. 382–390.
28 Cp. Louise Amoore, “Introduction: Thinking with Algorithms: Cognition and Computation in the Work of N. Katherine Hayles”, Theory, Culture & Society, 0 (0), 2019, pp. 1–14, here: 4.
29 Cp. Louise Amoore, Cloud Ethics: Algorithms and the Attributes of Ourselves and Others, Durham and London, Duke University Press, 2020; cp. Connal Parsley, “Automating Authority: The Human and Automation in Legal Discourse on the Meaningful Human Control of Lethal Autonomous Weapons Systems”, in: Shane Chalmers and Sundhya Pahuja (eds.), Routledge Handbook of International Law and the Humanities, London, Routledge, 2021, pp. 432–445.

Fabio Iapaolo is Research Fellow in AI, Data and Society at the Institute for Ethical AI, Oxford Brookes University. His work emerges at the intersection of digital geographies, the philosophy of science and technology, computing, the materiality of information, and the politics of automation. He has previously served as Post-Doctoral Researcher at the University of Turin, joining the research cluster DIGGEO – Digital Geographies of Socio-political and Economic Spaces. He holds a PhD in Urban and Regional Development from the Polytechnic University of Turin, with a one-year Visiting PhD Fellowship at Karlsruhe University of Arts and Design as a member of the research group KIM – Critical AI Studies.