Comment

The Difference that Difference Makes

Christoph Ernst, Jens Schröter, and Andreas Sudmann’s essay, “AI and the Imagination to Overcome Difference”, examines how the imagination of AI systems emerges from the instrumentalization of technology: the expectation that a singular, unified technology will address an astonishing diversity of nuanced social conditions, from language translation to work to the automation of war. There is a flattening of differences, they say, between human and machine, one that ignores the social, cultural and political dimensions of these complex technologies. In this comment piece, I want to think through ‘difference’ in terms of some of its synonyms, such as ‘gap’, ‘distinction’, ‘diversity’ and ‘discrimination’; and to consider differences not just between human and machine, but also between humans, and thus the further implications of AI technologies in society.

Sometimes difference might be about ‘discrimination’, in the sense of how machine learning systems discern, or ‘see’, people clearly and categorize them, and how this accentuates differences between humans to the point of disadvantage. There is a memorable quote by Donna Haraway in Primate Visions:

“Children, artificial intelligence (AI) computer programs, and nonhuman primates all here embody ‘almost minds’. Who or what has fully human status? […] What is the end, or telos, of this discourse of approximation, reproduction, and communication, in which the boundaries among and within machines, animals, and humans are exceedingly permeable? Where will this evolutionary, developmental, and historical communicative commerce take us in the techno-bio-politics of difference?”1

In using the interesting phrase “almost minds”, Haraway is reminding us of the history of some people – ‘natives’, ‘slaves’, women, among others – not having complete human status because they were not believed to have ‘full’ minds. The history of modern technologies like photography, the archive, statistics and physiognomic measurement is one of discrimination between people: recording and comparing physical differences between people as a way to develop a taxonomy of character and identity.2 Nineteenth-century image-making in particular, in Europe and North America, was used to establish typologies of people, to identify the ‘other’, to determine quickly, “in the dangerous and congested spaces of the nineteenth century city”3, mental illness, social deviance and pathology, and ultimately, “incorrigible and pliant criminals, and the disciplined conversion of the reformable into ‘useful’ proletarians.”4 These pseudo-scientific practices became a sort of handmaiden to a capitalism that was shaped by the idea of “individual cleverness and cunning.”5

Now, technology, in the form of speeded-up systems of classification, becomes a kind of many-eyed monster seeing us in terms of similar kinds of superficial characteristics. Stanford academics Wang and Kosinski trained an algorithm on more than 30,000 profile pictures of self-identified gay and lesbian people, scraped (without explicit permission to do so) from Facebook and a dating site.6 Based on this, the algorithm developed a model of what a gay face and a lesbian face are; thus, when exposed to a new set of faces, the model should be able to discern a gay or lesbian person from one who is not. The authors apparently wanted to demonstrate the dangers of how facial recognition technologies could be deployed to persecute queer people. As critics have argued, the study has serious design flaws, not to mention how unethical it is; and the algorithm performs inaccurately in identifying gay men, and is even more inaccurate in identifying queer women.7 The study is under ethical review.8 People who have been discriminated against, and literally not allowed to be seen for who they are, are now, in an ugly and ironic twist, hyper-visible, but not on their own terms. But there is strong resistance to these technologies of discrimination. Artist Zach Blas’ ‘Fag Face Mask’, part of his Facial Weaponization Suite, is a direct challenge to Wang and Kosinski: a 3-D printed mask made from queer men’s facial recognition data. Facial Weaponization Suite also includes masks made from the facial data of people of colour, migrants and women. All the masks are rendered as pink blobs that are unintelligible as faces, and as humans, to machines.9

Since the beginning of the public internet, queer people have taken to it, and to social media, to find each other, form communities, and to enjoy and shape the visibility of their identities for themselves and each other. However, the negotiation between visibility and unwanted exposure is a constant challenge, given that lateral surveillance that could lead to being outed is a real threat.10 Could hyper-visibility through the machine be offset by the inclusion of more diverse people from marginalized backgrounds in the design of technologies? Not necessarily, say the groups behind the “please don’t include us”11 approach:

“For instance, if we look at the popular rise of facial recognition tools over the past few years, people of color have been excluded from the design and implementation process. The tools are often discriminatory, fail to recognize people of color, and at times, misgender them. However, in parallel, facial recognition technology is increasingly integrated into police and state surveillance tools, and perfecting that technology could significantly further impact communities that are already over-policed and over-surveilled.”12

We humans do not like the un-inflected, flat, electronic voices used in machine systems, so engineers at Google developed a new product called Duplex, a digital voice assistant that is made to sound more human.13 They did this by introducing ‘disfluencies’, the ‘umms’, ‘ahhs’ and other kinds of hesitation that are common in human speech. This is thought to ease the use of digital assistants, making it easier for us to ignore the discomfort provoked by how different machine speech is.
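Google has not published Duplex’s internals, but the general idea of disfluency injection can be pictured as a small pre-processing step before speech synthesis. The following is a minimal sketch only; the hesitation markers, insertion rate and function name are illustrative assumptions, not Google’s implementation.

import random

# Hypothetical disfluency injection: pepper a scripted utterance with
# hesitation markers before sending it to a text-to-speech engine.
# The marker list and rate are invented for illustration.
HESITATIONS = ["um,", "uh,", "mm-hmm,"]

def add_disfluencies(utterance: str, rate: float = 0.15) -> str:
    """Randomly insert a hesitation marker before some words."""
    words = []
    for word in utterance.split():
        if random.random() < rate:
            words.append(random.choice(HESITATIONS))
        words.append(word)
    return " ".join(words)

print(add_disfluencies("I would like to book a table for four at seven pm"))
# e.g. "I would like to um, book a table for four at uh, seven pm"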
What is the standard of appropriate, comfortable, familiar speech? What happens when some humans just sound like themselves? Consider the case of machine learning in natural language processing (NLP) being used to identify hate speech online.

The n-word, ending in -er, was used by White slave owners to refer to Black slaves; and Black slaves used it with its ‘schwa’14, that is, with an -a rather than an -er ending, to refer to themselves.15 But despite this negative historical reference, the word circulates freely in popular culture, Black comedy, hip-hop and rap music. It has multiple functions: as counter-language; as a form of solidarity; as emblematic of cultural, affective and spiritual practices of survival;16 and it is used performatively to discuss the conditions of racism, poverty, institutionalized violence and class discrimination that Black people in the US struggle through. Perhaps the most critical aspect of African American culture and politics is the notion of ‘double consciousness’: that personal Black identity is shaped by communal Black solidarity, as well as by the reality of nationalist White supremacist ideology.17 Thus, the n-word is not just a form of recognition within a community; its use also acknowledges the ongoing interpellation of African Americans as former slaves – evidence of the persistence of White supremacy in US society. In this context, then, while the n-word in hip-hop may seem like ‘just entertainment’, it is in fact “ritual drama”, Rahman says, in the discursive construction of Blackness.18

However, the n-word’s use in African American Vernacular English (AAVE) and in popular culture presents an acute and particular problem for algorithmic speech and content moderation practices: algorithmic identification and monitoring of online speech cannot distinguish between a contextual use of this word (for example, when Black people are speaking to each other, or rapping) and its use as a racial slur (for example, when someone uses n-, b- or h-words to be abusive to an individual or about a community). This is because, working at scale, NLP cannot identify who is speaking and in what context without additional information beyond the text. As a result, AAVE gets classified as ‘toxic’ by NLP systems because of the presence of these words.19 And, as Pedro Oliveira writes in this issue, there is “a divorce of sound from meaning”, underscoring what Ernst, Schröter and Sudmann argue: in reaching for a universalizing technology, the development of AI risks eradicating the cultural uniqueness of language and people.
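The structural problem can be made concrete with a deliberately reductive sketch of keyword-driven toxicity scoring. Nothing here corresponds to any platform’s actual system, and the flagged terms are placeholders; the point is simply that a model that sees only text, and nothing of speaker, audience or social context, gives reclaimed in-group usage the same score as a targeted slur.

# A deliberately reductive sketch of keyword-based toxicity scoring.
# Real moderation systems are statistical rather than a fixed word list,
# but the structural limitation is the same: only the text is visible,
# not the speaker or the context of use.
FLAGGED_TERMS = {"slur_a", "slur_b"}  # placeholder tokens, not a real lexicon

def toxicity_score(text: str) -> float:
    """Share of tokens that appear in the flagged-terms list."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    hits = sum(1 for token in tokens if token in FLAGGED_TERMS)
    return hits / len(tokens)

in_group_use = "yo slur_a what's good"  # conversational, reclaimed use
targeted_abuse = "you are a slur_a"     # abusive, targeted use
# Both receive the same non-zero score; the classifier cannot tell them apart.
print(toxicity_score(in_group_use), toxicity_score(targeted_abuse))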

‘Difference’ is also ‘distinction’ in the sense of how cultural techniques work.20 The distinction between human and machine made by autonomous weapon systems (AWS) operates on the fault lines of what we might consider unique about human-ness: our capacity for moral reasoning and complex decision-making. AWS are thought to make fewer errors, and to limit the risk to humans on the field of battle. The Campaign to Stop Killer Robots argues that AWS transfer the decision about who lives and who dies to computational logics and to big data harvested in questionable ways, and erase human oversight. However, in making this claim, the campaign ends up repeating the logic of the distinction between human and machine in autonomous systems, instead of implicating both in conditions of co-production and entanglement. As Karppi, Böhlen and Granata argue, the language of advocacy against AWS deploys a “teleology of techno-determinism [that] implies a distinction between human and machine, as it seems to offer a clear ‘evolutionary’ break, or a categorical distinction, between humans-in-control of machines versus autonomous weapons as machines-in-control-of-themselves.”21 Moreover, Karppi et al. note that the assumption that human ethical decision-making is somehow ‘better’ reinforces the distinction between human and machine; and that the act of making the right ethical or proportionate decision is itself one of making distinctions, of marking the difference between one kind of target and another, one that is ‘killable’22 and another that is not. These distinctions are always shifting and unstable, and in each distinction made, AWS as cultural techniques are also ontic, and thus are making the world.23

Difference, distinctions, gaps and discrimination are states and techniques that amplify the noise of the world. Ernst, Schröter and Sudmann urge us to seek those places of amplification.

References
1 Donna Haraway, Primate Visions: Gender, Race and Nature in the World of Modern Science, New York, Routledge, 1989, p. 376.
2 Cp. Allan Sekula, “The Body and the Archive”, October, 39, 1986, pp. 3–64.
3 Ibid., p. 11.
4 Ibid., pp. 7–14.
5 Ibid., p. 12.
6 Cp. Yilun Wang and Michal Kosinski, “Deep Neural Networks Are More Accurate than Humans at Detecting Sexual Orientation from Facial Images”, Journal of Personality and Social Psychology, 114 (2), 2017, pp. 246–257.
7 Cp. Greggor Mattson, “Artificial Intelligence Discovers Gayface. Sigh.”, Personal Blog, September 9, 2017. Available at: https://greggormattson.com/2017/09/09/artificial-intelligence-discovers-gayface/ [accessed November 3, 2019].
8 Cp. Adrianne Jeffries, “That Study on Artificially Intelligent ‘Gaydar’ is now Under Ethical Review”, The Outline, September 11, 2017. Available at: https://theoutline.com/post/2228/that-study-on-artificially-intelligent-gaydar-is-now-under-ethical-review-michal-kosinski?zd=2&zi=nfnaxzqb [accessed November 3, 2019].
9 Cp. Zach Blas, “Facial Weaponization Suite”, 2011–2014. Available at: http://www.zachblas.info/works/facial-weaponization-suite/ [accessed November 3, 2019].
10 Cp. Maya Indira Ganesh, Jeff Deutsch and Jennifer Schulte, Privacy, Visibility, Anonymity: Dilemmas in Tech Use by Marginalised Communities. Making All Voices Count, Brighton, Institute for Development Studies, 2016. Available at: https://opendocs.ids.ac.uk/opendocs/bitstream/handle/20.500.12413/12110/TacticalTech_Online_FINAL3.pdf [accessed November 3, 2019].
11 Cp. “Call for Participation: Please Don’t Include Us”, Digital Justice Lab, 2019.
12 Ibid.
13 Cp. Yaniv Leviathan and Yossi Matias, “Google Duplex: An AI System for Accomplishing Real-World Tasks Over the Phone”, Google AI Blog, May 8, 2018. Available at: https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html [accessed November 3, 2019].
14 Cp. “Schwa”, Wikipedia, 2019. Available at: https://en.wikipedia.org/wiki/Schwa [accessed November 3, 2019].
15 Cp. Jaqueline Rahman, “The N word: Its History and Use in the African American Community”, Journal of English Linguistics, 40 (2), 2012, pp. 137–171, here: pp. 138–139.
16 Cp. ibid.
17 Cp. W.E.B. du Bois, cited in André Brock, “From the Blackhand Side: Twitter as a Cultural Conversation”, Journal of Broadcasting & Electronic Media, 56 (4), 2012, pp. 529–549, here: p. 532.
18 Cp. Kathryn A. Woolard and Bambi B. Schieffelin, “Language Ideology”, Annual Review of Anthropology, 23, 1994, pp. 55–82. Available at: https://www.annualreviews.org/doi/abs/10.1146/annurev.an.23.100194.000415 [accessed November 3, 2019].
19 Cp. Anna Chung, “How Automated Tools Discriminate Against Black Language”, Medium, February 28, 2019. Available at: https://onezero.medium.com/how-automated-tools-discriminate-against-black-language-2ac8eab8d6db [accessed November 3, 2019].
20 Cp. Bernhard Siegert, “Cultural Techniques: Or the End of the Intellectual Postwar Era in German Media Theory”, Theory, Culture & Society, 30 (6), 2013, pp. 48–65. Available at: https://doi.org/10.1177%2F0263276413488963 [accessed November 3, 2019]; Geoffrey Winthrop-Young, “Cultural Techniques: Preliminary Remarks”, Theory, Culture & Society, 30 (6), 2013, pp. 3–19. Available at: https://doi.org/10.1177%2F0263276413500828
21 Tero Karppi, Marc Böhlen and Yvette Granata, “Killer Robots as Cultural Techniques”, International Journal of Cultural Studies, 21 (2), 2018, pp. 107–123, here: p. 111.
22 Cp. ibid., pp. 116–118.
23 Cp. ibid., p. 118.

Maya Indira Ganesh is a technology researcher, writer and speaker who works with arts and cultural organisations, academia and NGOs. She is working on a PhD at Leuphana University about the cultural-computational shaping of the notion of machine ‘autonomy’, and the evolving role of the human in this. Her other areas of research expertise include gender, feminism and technology; big data and discrimination; digital security and privacy in human rights defence; and online activism. She has worked with Tactical Tech, the Citizen Lab at the University of Toronto, UNICEF India, and the APC Women’s Rights Program.