November 20, 2019

Automating Creativity – Artificial Intelligence and Distributed Cognition

Recent years have seen a surge of interest in how digital automation may transform labour and society in the near future. While automation has historically been associated with machines conducting routine and repetitive mechanical tasks, advances in artificial intelligence (AI) and machine learning have led to predictions that soon many ‘creative’, decision-making processes will largely be automated.[1] Positing the existence of creative and intelligent machines produces a sense of automation anxiety surrounding an imagined opposition between machines and humans, an anxiety heightened by the contemporary context of precarious employment. Such rhetoric additionally challenges the ontological claims of anthropocentric Western philosophies that situate humans as uniquely creative and rational animals. At the same time, however, contemporary creative work is heavily reliant upon digital technologies and upon specific processes and practices of automation that enable contemporary production techniques for video, photography, music and games.

Automating Creativity is a documentary film that explores how workers in the creative industries and academics who study technology and culture understand the existing and emerging relationships between automation and creativity, and how these relationships inform contemporary communication, media and culture. The excerpt from the documentary published here focuses upon questions surrounding the histories, modes and biases associated with varying forms of artificial intelligence. This accompanying text aims to expand upon some of the key lines of argumentation, specifically focussing upon the questions of whether intelligence and creativity are attributable to individuals or assemblages, how AI departs from other modes of intelligence, and how computational systems that are often assumed to be neutral and objective frequently have racist, sexist and classist values embedded within them.

Intelligent Agents or Assemblages

One question immediately posed by the notion of AI surrounds how we understand intelligence, and the kinds of actors we designate as being intelligent. Within Western philosophy, intelligence has traditionally been attributed to the sovereign human being, the rational animal who possesses free will and agency and so stands in distinction to the determinate automatons that comprise the natural and technical worlds.[2] This Cartesian perspective on intelligence, which rests upon Judeo-Christian human exceptionalism, became the normative model for early AI research, which sought to replicate human intelligence in machinic forms. The paradigmatic mid-20th century test for machine intelligence, the Turing test, developed by the pioneering British mathematician Alan Turing in 1950, sought to answer the question, “Can machines think?”, through an imitation game. In this game, a machine is designated as intelligent if it can participate in a text-based conversation such that an interrogator cannot accurately identify which of the participants is a human.[3] The question of machinic intelligence is thereby reduced to whether a computer can resemble human intelligence in one specific situation – that of conversation. Consequently, a quite reasonable critique of the Turing test, one that Turing himself noted, is that it exclusively equates intelligence with human intelligence. In other words, it is the Cartesian model of intelligence – which we should situate as a male, white, bourgeois model disguised beneath an ideological veil of universalism – which is being sought here.

For Turing, the question “Can machines think?” is “too meaningless to deserve discussion”;[4] it merely invites abstract philosophical speculation. By contrast, the imitation game provides a quantitative answer by replacing the original question with a proxy. This process of substitution is commonplace within contemporary paradigms of computer modelling, simulation, machine learning and AI, whereby the complex, seemingly chaotic and indecipherable patterns of social and ecological life are reduced to particular proxy markers, which are designed to provide workable approximations for problems that exist at scales that are impossible or impractical to directly observe or measure. This process of substitution is essential to important work such as global climate modelling and molecular modelling for drug discovery. However, substitution also presents problems: the map is not the territory, so important discrepancies can exist between proxies and real-world phenomena.[5] In the specific case of the Turing test, the ability of computational systems to perform well in the imitation game is a poor proxy for human intelligence.

The chatbots that compete in Turing test competitions partake in discussions but have no knowledge or understanding of how to drive a car, feed a child, play football, or undertake any of the myriad other tasks that humans routinely perform. The Turing test, then, employs a very limited understanding of what intelligence is, albeit one that, as of 2018, remains beyond what computers have accomplished. While in 2014 a program called “Eugene Goostman” passed a limited-duration Turing test (the test only ran for five minutes, one fifth of the typical duration), the chatbot posed as a 13-year-old Ukrainian boy for whom English was a second language. This problematises the abstract and homogenous notion of human intelligence posited in the Turing test; a child who can barely converse in a given language sets up quite different conversational expectations from an adult speaking a language they have been exposed to since birth.[6] Taken to extremes, why not have a chatbot impersonate a six-week-old baby or a coma patient and simply type nothing?

A very different approach to the question “Can a computer think?” is found in the work of the anthropologist, cyberneticist and ecologist Gregory Bateson, who argued:

“The computer is only an arc of a larger circuit which always includes a man and an environment from which information is received and upon which efferent messages from the computer have effect. This total system, or ensemble, may legitimately be said to show mental characteristics. It operates by trial and error and has creative character.”[7]

Whereas Turing sought to transform the question into a quantitatively measurable outcome via an anthropomorphic proxy, Bateson instead challenges the premise of the question, which posits the computer as a discrete entity capable of thought or intelligence. Highlighting the epistemological errors associated with hegemonic forms of competitive individualism in the mid-to-late 20th century,[8] Bateson proposed an ecological approach to intelligence. Instead of individual humans, computers or other actors, what requires attention is the system or ensemble, what we may otherwise term the assemblage, which includes humans (we should note with dismay the usage of the term ‘man’ to stand in for ‘humanity’), technical entities and an environment. Intelligence and creativity are therefore both understood as relational capacities, qualities that are only actualized within more-than-human assemblages,[9] rather than innate characteristics exclusively possessed by human beings. From this perspective, rather than Mark Zuckerberg and Steve Jobs being individual geniuses responsible for Facebook’s and Apple’s successes, they are recast as bit-part players within assemblages whose spatio-temporal dimensions far exceed those of individual human beings. Technocultural creativity is reliant upon our human ancestors and peers, nonhuman prostheses and geological materialities; however, these are typically unacknowledged by accounts that foreground individual human agency. The ideologies of humanism and competitive individualism provide apertures that misidentify the relevant scales at which creativity and intelligence operate.

An ecological model of intelligence marks a decisive departure from the Cartesian model of the subject, contending that the mind is immanent in circuits that exceed the human body, extending into the environment and technologies. Technology is thus no longer an external and isolatable actor; it is understood as something entangled with human societies and cultures. This position resembles that of the French philosopher of technology Bernard Stiegler, for whom cultural change results from a process of ‘epiphylogenesis’, a form of evolution that takes place outside of the genome, occurring through the exteriorised organology of technics and technologies.[10] If contemporary humans are more intelligent than our cave-dwelling ancestors, it is not the result of genetic alterations, but of an epiphylogenetic process that has gradually seen the construction of more sophisticated systems of distributed cognition.[11] Approaching AI in this way questions whether anthropogenic intelligence is ever artificial or organic; contra Haraway, we have always been cyborgs.[12] This asks us to consider how technologies impact and enhance human intelligence, as the species has constructed environments that exteriorise knowledge in increasingly complex ways and allow for greater systemic processes of learning to occur, whilst simultaneously contemplating how human memory and knowledge have become increasingly corporatized and commodified. Intelligence, then, is necessarily a collective and more-than-human process.

Computational Affordances and Errors

What the Turing test does not recognise as intelligent behaviours are the capacities of networked computational systems to perform calculations at speeds which enormously exceed the capabilities of humans, to use these calculations to derive patterns from enormous datasets, and to communicate results at speeds close to the speed of light, allowing ‘real-time’ monitoring and predictive forms of dataveillance. Emblematic of such activity is the behaviour of the high-frequency trading (HFT) algorithms that today account for approximately half of all stock market trades. HFT systems execute tasks “that no human could ever hope to attempt”; whereas it takes a human around 200 milliseconds to perceive a change, let alone respond to it, HFT algorithms execute trades in just a few milliseconds.[13] Consequently, network latency is a key bottleneck for HFT systems, and shaving off milliseconds – by drilling through mountains to shorten fibre-optic cable runs, or by installing microwave networks that outpace fibre-optic connections – justifies enormous infrastructural investment.[14] HFT systems execute huge volumes of trades, and although each transaction produces minuscule profits, the massive number of minute quantities adds up to significant amounts, while greatly increasing the overall volume of financial exchanges. Whereas in 1945 US stocks were held for an average of four years, by 2011 this had decreased to a mere 22 seconds.[15] HFTs demonstrate another narrow form of intelligence, albeit one that, unlike chatbots, does not imitate human capacities but leverages the affordances of networked digital systems.
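
To make the latency arithmetic concrete, the sketch below simulates a race between a human trader and two HFT systems responding to the same price signal; whichever order reaches the exchange first captures the fleeting opportunity. This is a toy illustration only: the traders, reaction times and link latencies are invented for the example, not drawn from the Foresight report cited above.

```python
# A toy illustration of why milliseconds matter in high-frequency trading.
# The latencies and reaction times below are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Trader:
    name: str
    reaction_ms: float   # time to compute a response to a price signal
    link_ms: float       # one-way network latency to the exchange

    def order_arrival_ms(self, signal_ms: float) -> float:
        """Time at which this trader's order reaches the exchange: the signal
        travels out, the trader reacts, and the order travels back."""
        return signal_ms + self.link_ms + self.reaction_ms + self.link_ms

# A human trader versus two competing HFT systems on different links.
traders = [
    Trader("human trader", reaction_ms=200.0, link_ms=5.0),
    Trader("HFT via fibre", reaction_ms=0.1, link_ms=6.5),
    Trader("HFT via microwave", reaction_ms=0.1, link_ms=4.0),
]

signal_ms = 0.0  # a price discrepancy appears at the exchange at t = 0
arrivals = sorted((t.order_arrival_ms(signal_ms), t.name) for t in traders)

for arrival, name in arrivals:
    print(f"{name:>20}: order arrives after {arrival:.1f} ms")

# Only the first order captures the fleeting price; with these invented
# numbers the microwave-linked system wins by a few milliseconds, and the
# human is out of the race before they have even perceived the change.
```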

While under most circumstances HFTs add liquidity to markets due to the increased volume of transactions, there are also situations where they have contributed to sudden shortages of liquidity and the formation of ‘flash crashes’: episodes where enormous sums have been wiped off the value of global stocks. Within these episodes HFT algorithms not only contribute to the speed of flash crashes, they also behave ‘erratically’, buying stocks at ‘stub quotes’ of one cent or $100,000 (the lowest and highest possible prices), quotes that are never intended to be executed and never would be by the human traders being displaced by algorithmic trading systems. This exemplifies the kind of artificial stupidity Sean Cubitt mentions in the excerpt from the film: when these systems deviate from their intended behaviour, they often do so in ways that are very different from human errors. Furthermore, the speed at which HFTs operate entails that real-time governance of these periodically erratic and destructive algorithmic agents is impossible.

Automating tasks can often mean a higher success rate when compared to human labour, but humans are quite good at knowing where, when and why human errors are likely to occur. Conversely, automated systems make errors that are inexplicable, often due to a reliance on proprietary code or machine learning processes that are opaque.[16] In some circumstances, recommendation or pattern recognition algorithms make spectacular but harmless errors; however, this is not always the case, as when the pattern recognition algorithm in Google’s Photos app repeatedly misidentified African-American people as gorillas, unintentionally aligning its categorisation with colonialist histories and contemporary practices of racism.[17]

Machine learning systems are ‘trained’ to classify information by inductively finding patterns within the particular dataset used for training. This can lead to two main sources of error: underfitting and overfitting. Underfitting refers to situations where the algorithm fails to acquire a useful pattern from the training data, and so is unable to reliably identify the target object. Overfitting, conversely, occurs when algorithms correctly identify patterns present within the training data, but those patterns diverge from the ones found in the real world. Overfitting is what happened with the Google Photos algorithm; the training images of humans all had light skin, so, faced with dark-skinned people, the algorithm misclassified them as gorillas.[18]
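
To make the distinction between underfitting and overfitting concrete, the following minimal sketch fits polynomials of different degrees to noisy synthetic data and compares the error on the training set with the error on unseen data. It is a generic statistical illustration under invented parameters, not a reconstruction of the Google Photos classifier.

```python
# A minimal sketch of underfitting and overfitting using polynomial regression
# on synthetic data; the data and parameters are invented for illustration.

import numpy as np

rng = np.random.default_rng(42)

def make_data(n):
    """Noisy samples of an underlying sine-shaped relationship."""
    x = rng.uniform(0, 1, n)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, n)
    return x, y

x_train, y_train = make_data(30)   # the dataset the model is 'trained' on
x_test, y_test = make_data(300)    # fresh data standing in for the real world

for degree in (1, 4, 15):
    # Fit a polynomial of the given degree to the training data.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_error = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_error = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:>2}: train error {train_error:.3f}, "
          f"test error {test_error:.3f}")

# Typical outcome: degree 1 underfits (similarly high error on both sets, no
# useful pattern captured); degree 4 generalises reasonably; degree 15 overfits,
# matching the training points closely while doing far worse on unseen data.
```

A model that is too simple misses the pattern entirely, while a model with too much freedom memorises the accidents of its training sample and generalises poorly, which is the structural analogue of a classifier trained only on unrepresentative images.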

This case speaks to the forms of racial discrimination that have long been pervasive within media representations and technologies; whiteness, alongside a male gaze and bourgeois ideology, has long been problematically universalised and invisibly normalised. For example, colour photographic and cinematic film stocks throughout the 20th century were calibrated for how they rendered white skin tones and typically required extra lighting for black subjects. Television cameras were colour-calibrated using ‘Shirley’ reference cards that until recently exclusively featured white women, similarly privileging whiteness as the universal standard to which technical parameters were tuned.[19] In a culture where racist, sexist and classist biases have long been integrated into technologies as well as media representations, we should not expect this to simply dissipate in the face of computational systems. Far from being neutral and objective individual actors, computational systems inhabit the same prejudiced distributed cognitive circuits as the society that designed them.[20]

The example of Google Photos is far from an isolated incident. Important recent work on systemic biases within automated digital systems has examined how Google’s search engine produces sexualised or pornographic results for the term ‘black girls’ while displaying a relatively homogenous set of white women for the term ‘beauty’, how pattern recognition systems produce positive feedback loops that send more police to poor neighbourhoods, thereby further increasing incarceration rates in those areas (and leading the system to be considered a success), and how impoverished parents are profiled for extra child welfare scrutiny based upon the harm that poverty inflicts upon children.[21] Across these diverse examples, the books explore algorithms designed to enable predictive policing, calculate the probability of convicts reoffending, rank knowledge online, and organise welfare and insurance claims. The consistent finding is that “automated decision-making systems are disproportionately harmful to the most vulnerable and the least powerful, who have little ability to intervene in them.”[22] Addressing these systemic problems requires more than just reprogramming particular algorithms; it entails addressing the techno-cultural assemblages that continue producing them.
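
The feedback loop described above can be sketched in a few lines of code. The toy simulation below is not a model of any deployed predictive policing system; the areas, rates and constants are invented purely to show the structural point: if patrols are sent where past records are highest, and incidents are only recorded where patrols are present, then initially small disparities lock in and grow even when underlying behaviour is identical everywhere.

```python
# A toy simulation of a predictive-policing-style feedback loop.
# All areas, rates and numbers are invented; this is not any deployed system.

import random

random.seed(1)

true_rate = 5.0                      # identical underlying offence rate everywhere
areas = ["A", "B", "C", "D", "E"]
# Historical records differ slightly, e.g. through past over-policing or chance.
recorded = {a: 20 + random.randint(-3, 3) for a in areas}
patrols_available = 2                # patrols only cover the top-ranked areas

for year in range(1, 6):
    # The 'predictive' step: rank areas by cumulative recorded incidents.
    targeted = sorted(areas, key=lambda a: recorded[a], reverse=True)[:patrols_available]
    for area in targeted:
        # Offences are only observed and logged where police are sent, so the
        # targeted areas accumulate ever more records despite identical rates.
        recorded[area] += true_rate
    print(f"year {year}: targeted {targeted}, records {recorded}")
```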

Conclusion: Intelligent and Creative (but prejudiced and inhuman) Assemblages

The way we typically conceptualise AI is as flawed as the individualist approaches to human intelligence that posit cognition as an individual affair constrained by the boundaries of the body. Intelligence and creativity are distributed processes that encompass assemblages of humans, technologies and ecosystems. The precise types of intelligence, thought and action that are possible are significantly modified by digital technologies that calculate, communicate and act at speeds that greatly exceed human capacities and which often exemplify the inhuman logic of short-term capital accumulation to the detriment of equity and sustainability, traits that are both illustrated by HFT.

AI and computationally aided creativity are not inherently bad things; they are pivotal to a range of thoughtful, provocative, challenging and beautiful modes of artistic production. However, in a technoculture where racism, sexism and classism are still unfortunately commonplace, it is unsurprising that pattern recognition-based computational systems identify those patterns and re-inscribe them into technologies that are mistakenly believed to be neutral and objective. Put another way, big data relies upon collecting information from society and analysing it, so if that society features inequalities and discrimination, then so will the data about it. Consequently, automated decision-making systems frequently reify pre-existing patterns of discrimination, while making it harder to challenge them because they are thought to be objectively derived.

Additionally, we should recognise and challenge the problematic forms of representation within the creative and tech industries, which are all too often dominated by white middle-class men. As Jennifer Whitney states in the excerpt from the film, reducing the male, white, middle-class biases in AI and the creative industries means having workforces that are not predominantly staffed by white male university graduates. If there were more diversity within these sectors, they would be less likely to reinforce existing hierarchies while espousing a flawed logic of universalism that denies the problems associated with those inequalities.

We need more, however, than just better diversity in industry and more widely diffused digital literacies. Rethinking automation and AI means fundamentally reconsidering how we understand intelligent behaviour. An intelligent system is not one premised upon the fallacy of infinite growth on a finite planet. Neither does it seek to externalise the costs of unsustainable contemporary consumption upon future generations of humans and nonhumans. Consequently, thinking about how technology augments circuits of distributed cognition should be orientated toward revising our relationships with other humans, technologies and our environment, so that we assemble systems that are driven by equity and sustainability, not short-term profitability and efficiency.

References
1 Cp. Carl Benedikt Frey and Michael A. Osborne, “The Future of Employment: How Susceptible Are Jobs to Computerisation?”, Technological Forecasting and Social Change, 114, 2017, pp. 254–280.
2 Contemporary research into animal intelligence provides a wealth of empirical evidence that strongly contests these anthropocentric assumptions. For example, see: Donald R. Griffin, Animal Minds: Beyond cognition to consciousness, Chicago, University of Chicago Press, 2013; Keith E. Stanovich, “Why humans are (sometimes) less rational than other animals: Cognitive complexity and the axioms of rational choice”, Thinking & Reasoning, 19 (1), 2013, pp. 1–26.
3 More precisely, Turing’s imitation game asked whether an interrogator C could correctly identify the gender identities of a man A and a woman B, and asked “What will happen when a machine takes the part of A in this game? Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?” The gendered dimension of Turing’s original test is typically ignored by contemporary tests. Alan M. Turing, “Computing Machinery and Intelligence”, in Robert Epstein, Gary Roberts and Grace Beber (eds.), Parsing the Turing Test, Springer, 2009 [1950], pp. 23–65, here: p. 25.
4 Ibid., p. 42.
5 As Hito Steyerl argues, within digital culture there exists a politics of proxies. Cp. Hito Steyerl, “Proxy Politics: Signal and Noise”, E-Flux, 60, 2014. Available at: https://www.e-flux.com/journal/60/61045/proxy-politics-signal-and-noise/ [accessed October 23, 2018].
6 Turing does raise questions of educational development and machine learning: “Instead of trying to produce a program to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain. Presumably the child-brain is something like a notebook as one buys it from the stationers.” (Ibid., p. 60) Turing’s model of the brain resembling a blank notebook is, however, far removed from contemporaneous models of neuroplasticity, synaptogenesis and synaptic pruning, where the infant brain undergoes a rapid growth in the number of synapses, which is then followed by almost half of those synapses withering away during childhood. We should note also how mind and brain are employed as interchangeable terms, denoting how for Turing mind is reducible to the brain. Differentiation between different levels of cognitive development are notably absent from the imitation game itself.
7 Gregory Bateson, Steps to an ecology of mind: Collected essays in anthropology, psychiatry, evolution, and epistemology, Chicago, University of Chicago Press, 1972, p. 323.
8 We should note that Bateson argues that competitive individualism, which pits individuals against one another and humans against the environment, coupled with a misplaced belief that technology would solve any arising problems, were dominant social values in Western cultures prior to the rise of neoliberal economics and political parties.
9 Cp. Manuel DeLanda, Intensive Science and Virtual Philosophy, London, Continuum, 2002.
10 Cp. Bernard Stiegler, Technics and Time 1: The fault of Epimetheus, Stanford, California, Stanford University Press, 1998.
11 Cp. N. Katherine Hayles, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, Chicago, University of Chicago Press, 1999, p. 289.
12 Cp. Donna Haraway, Simians, Cyborgs, and Women, New York, Routledge, 1991.
13 Cp. John Beddington, Clara Furse, Philip Bond, Dave Cliff, Charles Goodhart, Kevin Houstoun, Oliver Linton, and Jean-Pierre Zigrand, “Foresight: the future of computer trading in financial markets”, Final Project Report, The Government Office for Science, London, 2012, p. 33.
14 Cp. Donald MacKenzie, Daniel Beunza, Yuval Millo, and Juan Pablo Pardo-Guerra, “Drilling through the Allegheny Mountains: Liquidity, materiality and high-frequency trading”, Journal of Cultural Economy, 5 (3), 2012, pp. 279–296; Matthew Zook and Michael H. Grote, “The microgeographies of global finance: High-frequency trading and the construction of information inequality”, Environment and Planning A: Economy and Space, 49 (1), 2017, pp. 121–140.
15 Cp. Alberto Toscano, “Gaming the plumbing: High-frequency trading and the spaces of capital”, Mute Magazine, 3 (4), 2013. Available at: http://www.metamute.org/editorial/articles/gaming-plumbing-high-frequency-trading-and-spaces-capital [accessed March 29, 2019].
16 Cp. Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information, Cambridge, MA/London, UK, Harvard University Press, 2015.
17 Cp. Loren Grush, “Google engineer apologizes after Photos app tags two black people as gorillas”, The Verge, 2015. Available at: https://www.theverge.com/2015/7/1/8880363/google-apologizes-photos-app-tags-two-black-people-gorillas [accessed October 23, 2018].
18 Cp. Adam Greenfield, Radical Technologies: The Design of Everyday Life, New York/London, Verso Books, 2017, p. 218.
19 Cp. Brian Winston, “A whole technology of dyeing: A note on ideology and the apparatus of the chromatic moving image”, Daedalus, 1989, pp. 105–123; Lorna Roth, “Looking at Shirley, the Ultimate Norm: Colour Balance, Image Technologies, and Cognitive Equity”, Canadian Journal of Communication, 34 (1), 2009, pp. 111–136.
20 Cp. Bryce Goodman and Seth Flaxman, “European Union Regulations on Algorithmic Decision-Making and a ‘Right to Explanation’”, AI Magazine, 38 (3), 2017.
21 Cp. Safiya Umoja Noble, Algorithms of Oppression: How search engines reinforce racism, New York, NYU Press, 2018; Cathy O’Neil, Weapons of math destruction: How big data increases inequality and threatens democracy, Great Britain, Allen Lane, 2016; Virginia Eubanks, Automating inequality: How high-tech tools profile, police, and punish the poor, New York, St. Martin’s Press, 2018.
22 Noble, Algorithms of Oppression, p. 47.

Sy Taffel is a senior lecturer in media studies and co-director of the Political Ecology Research Centre at Massey University, Aotearoa New Zealand. He has published work on political ecologies of digital media, media and materiality, hacktivism, automation, and pervasive/locative media. He is the author of Digital Media Ecologies (Bloomsbury 2019) and a co-editor of Ecological Entanglements in the Anthropocene (Lexington, 2017 with Nicholas Holm).