Restorative Justice in Artificial Intelligence Crimes

In order to lay the foundations for a discussion around the argument that the adoption of artificial intelligence (AI) technologies benefits the powerful few,1Cp. G. Chaslot, “YouTube’s A.I. was divisive in the US presidential election”, Medium, November 27, 2016. Available at: https://medium.com/the-graph/youtubes-ai-is-neutral-towards-clicks-but-is-biased-towards-people-and-ideas-3a2f643dea9a#.tjuusil7d [accessed February 25, 2018]; E. Morozov, “The Geopolitics Of Artificial Intelligence”, FutureFest, London, 2018. Available at: https://www.youtube.com/watch?v=7g0hx9LPBq8 [accessed October 25, 2019]. focussing on their own existential concerns,2Cp. M. Busby, “Use of ‘Killer Robots’ in Wars Would Breach Law, Say Campaigners”, The Guardian, August 21, 2018. Available at: https://web.archive.org/web/20181203074423/https://www.theguardian.com/science/2018/aug/21/use-of-killer-robots-in-wars-would-breach-law-say-campaigners [accessed October 25, 2019]. we decided to narrow down our analysis of the argument to jurisprudence (i.e. the philosophy of law), considering also the historical context. This paper is an edited version of Adnan Hadzi’s text on Social Justice and Artificial Intelligence,3Cp. A. Hadzi, “Social Justice and Artificial Intelligence”, Body, Space & Technology, 18 (1), 2019, pp. 145–174. Available at: https://doi.org/10.16995/bst.318 [accessed October 25, 2019]. exploring the notion of humanised artificial intelligence4Cp. A. Kaplan and M. Haenlein, “Siri, Siri, in my Hand: Who’s the Fairest in the Land? On the Interpretations, Illustrations, and Implications of Artificial Intelligence”, Business Horizons, 62 (1), 2019, pp. 15–25. https://doi.org/10.1016/j.bushor.2018.08.004; S. Legg and M. Hutter, A Collection of Definitions of Intelligence, Lugano, Switzerland, IDSIA, 2007. Available at: http://arxiv.org/abs/0706.3639 [accessed October 25, 2019]. in order to discuss potential challenges society might face in the future. The paper does not discuss current forms and applications of artificial intelligence since, so far, there is no AI technology that is self-conscious and self-aware, able to deal with emotional and social intelligence.5N. Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford, Oxford University Press, 2014. It is a discussion around AI as a speculative, hypothetical entity. One could then ask: if such a speculative self-conscious hardware/software system were created, at what point could one talk of personhood? And what criteria could there be in order to say that an AI system was capable of committing AI crimes?

In order to address AI crimes, the paper will start by outlining what might constitute personhood through a discussion of legal positivism and natural law. Concerning what constitutes AI crimes, the paper uses the criteria given in Thomas King et al.’s paper Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions,6Cp. T. King, N. Aggarwal, M. Taddeo and L. Floridi, “Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions”, SSRN Scholarly Paper No. ID 3183238, Rochester, NY, Social Science Research Network, 2018. Available at: https://papers.ssrn.com/abstract=3183238 [accessed October 25, 2019]. in which King et al. coin the term “AI crime” and map five areas in which AI might, in the foreseeable future, commit crimes, namely:

  • commerce, financial markets, and insolvency;
  • harmful or dangerous drugs;
  • offences against persons;
  • sexual offences;
  • theft and fraud, and forgery and personation.
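To make this taxonomy concrete, the sketch below encodes King et al.’s five areas as a simple enumeration, with naive keyword tagging of incident descriptions. It is a minimal illustration only: the class name, helper function and keyword lists are our own hypothetical choices, not part of King et al.’s paper or of any existing incident-logging system.

```python
# A hypothetical encoding of King et al.'s five AI-crime areas.
from enum import Enum, auto

class AICrimeArea(Enum):
    COMMERCE_FINANCE_INSOLVENCY = auto()
    HARMFUL_DRUGS = auto()
    OFFENCES_AGAINST_PERSONS = auto()
    SEXUAL_OFFENCES = auto()
    THEFT_FRAUD_FORGERY_PERSONATION = auto()

# Illustrative keywords per area; a real system would need far richer criteria.
KEYWORDS = {
    AICrimeArea.COMMERCE_FINANCE_INSOLVENCY: ["market", "trading", "insolvency"],
    AICrimeArea.HARMFUL_DRUGS: ["drug", "narcotic"],
    AICrimeArea.OFFENCES_AGAINST_PERSONS: ["harassment", "assault"],
    AICrimeArea.SEXUAL_OFFENCES: ["sexual"],
    AICrimeArea.THEFT_FRAUD_FORGERY_PERSONATION: ["theft", "fraud", "forgery", "impersonation"],
}

def classify_incident(description: str) -> set:
    """Tag an incident description with every matching AI-crime area."""
    text = description.lower()
    return {area for area, words in KEYWORDS.items() if any(w in text for w in words)}

print(classify_incident("trading bot manipulated the market through impersonation"))
# -> the commerce/finance area and the theft/fraud/forgery/personation area
```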

We discuss the construction of the legal system through the lens of the political involvement of what one may want to consider to be ‘powerful elites’7P. Mason, Clear Bright Future, London, Allen Lane Publishers, 2019. In doing so we will demonstrate that it is difficult to prove that the adoption of AI technologies is undertaken in a way that mainly serves a powerful class in society. Nevertheless, analysing the culture around AI technologies with regard to the nature of law, with a philosophical and sociological focus, enables us to demonstrate a utilitarian and authoritarian trend in the adoption of AI technologies. We will narrow down our discussion of utilitarian and authoritarian trends through the use of Tim Crook’s notion of power elites,8Cp. T. Crook, Comparative Media Law and Ethics, London, Routledge, 2009; T. Crook, “Power, Intelligence, Whistle-blowing and the Contingency of History”, paper presented at the Annual Conference of the Institute of Communication Ethics, The Foreign Press Association, London, November 3, 2010. Available at: https://www.gold.ac.uk/media-communications/staff/crook/ [accessed October 25, 2019]. and Paul Mason’s analysis of power elites through four main ethical systems,9Cp. Mason, Clear Bright Future. drawing on Karl Marx’s class concept.10Cp. K. Marx, “Estranged Labour. London: Karl Marx Economic and Philosophical Manuscripts”, 1844. Available at: https://www.marxists.org/archive/marx/works/1844/manuscripts/labour.htm [accessed October 25, 2019]. Namely, Mason discusses, with regard to power elites: utilitarianism, social justice, Nietzsche’s ‘higher men’ approach, and finally Aristotle’s virtue ethics. Mason argues that “virtue ethics is the only ethics fit for the task of imposing collective human control on thinking machines” (Mason, Clear Bright Future, p. 166) and AI. We will apply virtue ethics to our discourse around artificial intelligence and ethics. Furthermore, Mason brings forward the notion of radical humanism, and in three points outlines how AI could be designed and implemented:

  1. The most comprehensive human-centric ethical system for AI has to be one based on virtue. All other systems – for example safety codes or ‘maximum happiness’ objectives – would have to be sub-systems of an ethical approach based on virtue, which instructs the technology to create and maintain human freedom.
  2. You resolve the class, gender, national and other competing claims through democracy and regulation (i.e. a form of social contract [restorative justice] more prescriptive than the one required by fairness ethics).
  3. You need industry standards regulated by law and should refrain from developing AI without first signing up to these standards; nor should you deploy it into any rules-free space.11Cp. ibid.

AI safety expert Steve Omohundro believes that AI systems are “likely to behave in antisocial and harmful ways unless they are very carefully designed.”12S. Omohundro, “Autonomous Technology and the Greater Human Good”, Journal of Experimental & Theoretical Artificial Intelligence, 26 (3), 2014, pp. 303–315, here: p. 303. It is through virtue ethics that this paper proposes such a design be centred on restorative justice, in order to take control over AI and thinking machines, following Mason’s radical defence of the human and his critique of current thought within trans- and post-humanism as a submission to machine logic.

Following Mason and Crook we introduce our discussion around power elites with the notions of legal positivism and natural law, as discussed in the academic fields of philosophy and sociology. The paper will then look, in a more detailed manner, into theories analysing the historical and social systematisation, or one may say disposition, of laws, and the impingement of neo-liberal tendencies upon the adoption of AI technologies.13Cp. P. Parikh, “On Liberalism and Neoliberalism”, Medium, October 21, 2017. Available at: https://medium.com/@pparikh1/on-liberalism-and-neoliberalism-5946523aa2ca [accessed January 4, 2019]. Salvador Pueyo demonstrates those tendencies with a thought experiment around superintelligence in a neoliberal scenario.14Cp. S. Pueyo, “Growth, Degrowth, and the Challenge of Artificial Superintelligence”, Journal of Cleaner Production, 197, 2018, pp. 1731-1736. https://doi.org/10.1016/j.jclepro.2016.12.138 [accessed October 25, 2019]. In Pueyo’s thought experiment the system becomes techno-social-psychological with the progressive incorporation of decision-making algorithms and the increasing opacity of such algorithms,15Cp. J. Danaher, “The Threat of Algocracy: Reality, Resistance and Accommodation”, Philosophy & Technology, 29 (3), 2016, pp. 245–268. Available at: https://doi.org/10.1007/s13347-015-0211-1 [accessed October 25, 2019]. with human thinking partly shaped by firms themselves.16Cp. J.K. Galbraith, The New Industrial State, Oxford, Princeton University Press, 2015.

The regulatory, self-governing potential of AI algorithms17Cp. S. Poole, “Arabic, Algae and AI: The Truth About ‘Algorithms’”, The Guardian, September 20, 2018. Available at: https://web.archive.org/web/20181119100303/https://www.theguardian.com/books/2018/sep/20/from-arabic-to-algae-like-ai-the-alarming-rise-of-the-algorithm- [accessed October 25, 2019]; D. Roio, “Algorithmic Sovereignty”, Thesis, University of Plymouth, 2018. Available at: https://pearl.plymouth.ac.uk/handle/10026.1/11101 [accessed October 25, 2019]; A. Smith, “Franken-Algorithms: The Deadly Consequences of Unpredictable Code”, The Guardian, August 30, 2018. Available at: https://web.archive.org/web/20190105054549/https://www.theguardian.com/technology/2018/aug/29/coding-algorithms-frankenalgos-program-danger [accessed October 25, 2019]. and the justification by authority of the current adoption of AI technologies within society, mainly through investments into AI implementation within the armed forces, surveillance technologies,18Cp. Mason, Clear Bright Future. and the military-industrial complex, will be analysed next. The paper will conclude by proposing an alternative, practically unattainable, approach to the current legal system by looking into restorative justice for AI crimes,19Cp. C. Cadwalladr, “Elizabeth Denham: ‘Data Crimes are Real Crimes’”, The Guardian, July 15, 2018. Available at: https://web.archive.org/web/20181121235057/https://www.theguardian.com/uk-news/2018/jul/15/elizabeth-denham-data-protection-information-commissioner-facebook-cambridge-analytica [accessed October 25, 2019]. and how the ethics of care could be applied to AI technologies. In conclusion the paper will discuss affect20Cp. B. Olivier, “Cyberspace, Simulation, Artificial Intelligence, Affectionate Machines and Being Human”, Communicatio, 38 (3), 2012, pp. 261–278. https://doi.org/10.1080/02500167.2012.716763 [accessed October 25, 2019]; E.A. Wilson, Affect and Artificial Intelligence, Washington, University of Washington Press, 2011. and humanised artificial intelligence with regard to the emotion of shame, when dealing with AI crimes.

Legal Positivism and Natural Law

In order to discuss AI in relation to personhood this paper follows the descriptive psychology method21Cp. P.G. Ossorio, The Behavior of Persons, Ann Arbor, Descriptive Psychology Press, 2013. Available at: http://www.sdp.org/sdppubs-publications/the-behavior-of-persons/ [accessed October 25, 2019]. of the paradigm case formulation22Cp. J. Jeffrey, “Knowledge Engineering: Theory and Practice”, Society for Descriptive Psychology, 5, 1990, pp. 105–122. developed by Peter Ossorio.23Cp. P.G. Ossorio, Persons: The Collected Works of Peter G. Ossorio, Volume I, Ann Arbor, Descriptive Psychology Press, 1995. Available at: http://www.sdp.org/sdppubs-publications/persons-the-collected-works-of-peter-g-ossorio-volume-1/ [accessed October 25, 2019]. Similar to how some animal rights activists call for certain animals to be recognised as non-human persons,24Cp. M. Mountain, “Lawsuit Filed Today on Behalf of Chimpanzee Seeking Legal Personhood”, Nonhuman Rights Blog, December 2, 2013. Available at: https://www.nonhumanrights.org/blog/lawsuit-filed-today-on-behalf-of-chimpanzee-seeking-legal-personhood/ [accessed January 8, 2019]; M. Midgley, “Fellow Champions Dolphins as ‘Non-Human Persons’”, Oxford Centre for Animal Ethics, January 10, 2010. Available at: https://www.oxfordanimalethics.com/2010/01/fellow-champions-dolphins-as-%E2%80%9Cnon-human-persons%E2%80%9D/ [accessed January 8, 2019]. this paper speculates on the notion of AI as a non-human person being able to reflect on ethical concerns.25Cp. R. Bergner, “The Tolstoy Dilemma: A Paradigm Case Formulation and Some Therapeutic Interventions”, in K.E. Davis, F. Lubuguin and W. Schwartz (eds.), Advances in Descriptive Psychology, Vol. 9, 2010, pp. 143–160. Available at: http://www.sdp.org/sdppubs-publications/advances-in-descriptive-psychology-vol-9; P. Laungani, “Mindless Psychiatry and Dubious Ethics”, Counselling Psychology Quarterly, 15 (1), 2002, pp. 23–33. Available at: https://doi.org/10.1080/09515070110102305 [accessed October 26, 2019]. Here Wynn Schwartz argues that “it is reasonable to include non-humans as persons and to have legitimate grounds for disagreeing where the line is properly drawn. In good faith, competent judges using this formulation can clearly point to where and why they agree or disagree on what is to be included in the category of persons.”26W. Schwartz, “What Is a Person and How Can We Be Sure? A Paradigm Case Formulation”, SSRN Scholarly Paper No. ID 2511486, Rochester, NY: Social Science Research Network, 2014, pp. 27–34. Available at: https://papers.ssrn.com/abstract=2511486 [accessed October 25, 2019].

According to Ossorio a deliberate action is a form of behaviour in which a person a) engages in an intentional action, b) is cognizant of that, and c) has chosen to do that.27Ossorio, The Behavior of Persons. Ossorio gives four classifications of fundamental motivation: ethical, hedonic, aesthetic, and prudent. Ethical motivations, as well as aesthetic motivations, can be distinguished from prudent (and hedonic) motivations because the agent makes a choice: “In the service of being able to choose, and perhaps think through the available options, a person’s aesthetic and ethical motives are often consciously available.”28W. Schwartz, “What Is a Person and How Can We Be Sure? A Paradigm Case Formulation”, p. 30.
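Ossorio’s paradigm case lends itself to a compact formal rendering. The sketch below is a hedged illustration, not Ossorio’s own notation: it encodes the three criteria of deliberate action and the four classes of fundamental motivation, with the field and function names being our hypothetical choices.

```python
# A speculative rendering of Ossorio's paradigm case of deliberate action.
from dataclasses import dataclass
from enum import Enum

class Motivation(Enum):
    ETHICAL = "ethical"
    HEDONIC = "hedonic"
    AESTHETIC = "aesthetic"
    PRUDENT = "prudent"

@dataclass
class Behaviour:
    intentional: bool   # (a) the agent engages in an intentional action
    cognizant: bool     # (b) the agent is cognizant of doing so
    chosen: bool        # (c) the agent has chosen to do so
    motivation: Motivation

def is_deliberate(b: Behaviour) -> bool:
    """Deliberate action requires all three paradigm-case criteria."""
    return b.intentional and b.cognizant and b.chosen

def consciously_available(b: Behaviour) -> bool:
    """Ethical and aesthetic motives involve choice and so are often
    consciously available, per Schwartz's reading of Ossorio."""
    return b.motivation in (Motivation.ETHICAL, Motivation.AESTHETIC)
```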

In the fields of philosophy and sociology countless theories have been advanced concerning the nature of law, addressing questions such as: Can unethical law be binding? Should there be a legal code for civil society? Can such a legal code be equitable, unbiased, and just, or, is the legal code always biased? In the case of AI technologies we ask whether the current vision for the adoption of AI technologies, a vision which is mainly supporting the military-industrial complex through vast investments in army AI,29Cp. Mason, Clear Bright Future. is a vision that benefits mainly powerful elites.

To address the question, we need to discuss the idea of equality. Here we refer to Aristotle’s account of how the legal code should be enacted in an unbiased manner.30Cp. Aristotle and T.J. Saunders, The Politics, London, Penguin UK, 1981. Aristotle differentiated between an unbalanced and a balanced application of the legal code, pointing out that the balanced juridical discussion of a case should be courteous. Here, as with the above-mentioned animal rights activists, Alasdair MacIntyre argued in Dependent Rational Animals,31Cp. A. MacIntyre, Dependent Rational Animals: Why Human Beings Need the Virtues, revised edition, Chicago, Open Court, 2001. drawing on Thomas Aquinas’s discussion of misericordia,32Cp. T. Aquinas, Summa Theologiae: Volume 33, Hope: 2a2ae, Cambridge, Cambridge University Press, 2006, pp. 17–22. for the recognition of our kinship to some species, calling for the “virtues of acknowledged dependence”33A. MacIntyre, After Virtue, London, A&C Black, 2013, p. xi. John Austin, on the other hand, suggests that the legal code is defined by a higher power, “God”, to establish justice over society. For Austin the legal code is an obligation, a mandate to control society.34Cp. J. Austin, The Province of Jurisprudence Determined: And, The Uses of the Study of Jurisprudence, Indianapolis, Hackett Publishing, 1998.

Herbert Lionel Adolphus Hart goes on to discuss the social aspect of legal code and how society apprehends the enactment of such legal code.35Cp. H.L.A. Hart, The Concept of Law, Oxford, Oxford University Press, 1961. Hart argues that the legal code is a strategy, a manipulation of standards accepted by society. Contrary to Hart, Ronald Dworkin proposes that the legal code allow for non-rule standards reflecting the ethical conventions of society.36Cp. R. Dworkin, A Matter of Principle, Oxford, Clarendon Press, 1986. Dworkin discusses legislation as an assimilation of these conventions, where legislators do not define the legal code, but analyse the already existing conventions to derive conclusions, which in turn define the legal code. Nevertheless, Dworkin fails to explain how those conventions come into being. For Hans Kelsen, by contrast, legal code is a product of the political, cultural and historical circumstances society finds itself in.37Cp. H. Kelsen, Pure Theory of Law, Los Angeles, University of California Press, 1967; H. Kelsen, General Theory of Law and State, New Jersey, The Lawbook Exchange, Ltd., 2009. For Kelsen the legal code is a standardising arrangement which defines how society should operate.38Cp. H. Kelsen, General Theory of Norms, Oxford, Clarendon Press, 1991.

The theories discussed above serve to explain and analyse how legal codes deal with the emergence of legal issues concerning AI technologies or AI crimes. Nevertheless, in trying to evaluate the argument that the adoption of AI technologies is a process controlled by powerful elites who wield the law to their benefit, we also need to discuss the notion of power elites.

William Chambliss and Robert Seidman argue that powerful interests have shaped the writing of legal codes for a long time.39Cp. W.J. Chambliss and R.B. Seidman, Law, Order, and Power, London, Addison-Wesley Publishing Company, 1982. However, Chambliss and Seidman also state that legislation derives from a variety of interests, which are often in conflict with each other. We need to extend our analysis not only to powerful elites, but also to examine the notion of power itself, and the extent to which power shapes legislation, or whether, on the contrary, it is legislation itself that controls power.

In an attempt to identify the source of legislation, Max Weber argues that legal code is powerfully interlinked with the economy. Weber goes on to argue that this link is the basis of our capitalist society.40Cp. M. Weber, Economy and Society: An Outline of Interpretive Sociology, Los Angeles, University of California Press, 1978. Here we can refer back to Marx’s idea of materialism and the influence of class society on legislation.41Cp. K. Marx, Capital, Vol 1, London, Penguin Books Limited, 1990. For Marx, legislation and legal code are an outcome of the capitalist mode of production.42Cp. M. Harris, “Glitch Capitalism: How Cheating AIs Explain Our Stagnant Present”, New York Magazine, April 23, 2018. Available at: http://nymag.com/selectall/2018/04/malcolm-harris-on-glitch-capitalism-and-ai-logic.html [accessed May 16, 2018]. Marx’s ideas have been widely discussed with regard to the ideology behind the legal code. Nevertheless, Marx’s argumentation limits legal code to the notion of class domination.

Colin Sumner extended Marx’s theories regarding legislation and ideology, discussing the legal code as an outcome of political and cultural discussions based on economic class domination.43Cp. C. Sumner, Reading Ideologies: An Investigation Into the Marxist Theory of Ideology and Law, London, Academic Press, 1979. Sumner expands the conception of the legal code not only as a product of the ruling class but also as bearing the imprint of other classes, including blue-collar workers, through culture and politics. Sumner argues that with the emergence of capitalist society, “the social relations of legal practice were transformed into commercial relations”44Sumner, Reading Ideologies, p. 51. However, Sumner does not discuss why parts of society are side-lined by legislation, or how capitalist society not only impacts on legislation but also has its roots in the neo-liberal writing of legal code.

To apprehend how ownership, property and intellectual rights became enshrined in legal code and adopted by society we turn to Locke’s theories.45Cp. J. Locke, Political Writings, London, Mentor, 1993. Locke argued that politicians ought to look after ownership rights and to support circumstances allowing for the growth of wealth (capital). Following Locke one can conclude that contemporary society is one in which politicians influence legislation in the interest of a powerful upper class – a neo-liberal society. Still, we need to ask: should this be the case, and should powerful elites have authority over the legal code and over how legislation is enacted and maintained?

The Disciplinary Power of Artificial Intelligence

In order to discuss these questions, one has to analyse the history of AI technologies leading to the kind of ‘humanised’ AI system this paper posits. Already in the 1950s, Alan Turing, the inventor of the Turing test,46Cp. J. Moor, The Turing Test: The Elusive Standard of Artificial Intelligence, New York, Springer Science & Business Media, 2003. stated that:

“We may hope that machines will eventually compete with men in all purely intellectual fields. But which are the best ones to start with? Even this is a difficult decision. Many people think that a very abstract activity, like the playing of chess, would be best. It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child. Things would be pointed out and named, etc. Again, I do not know what the right answer is, but I think both approaches should be tried. We can only see a short distance ahead, but we can see plenty there that needs to be done.”47A.M. Turing, “Computing Machinery and Intelligence”, Mind, 59 (236), 1950, pp. 433–460, here: p. 460.

The old-fashioned approach,48Cp. M. Hoffman and R. Pfeifer, “The Implications of Embodiment for Behavior and Cognition: Animal and Robotic Case Studies”, in W. Tschacher and C. Bergomi (eds.), The Implications of Embodiment: Cognition and Communication, Exeter, Andrews UK Limited, 2015, pp. 31–58. Available at: https://arxiv.org/abs/1202.0440. which some may still call the contemporary approach, was primarily to research ‘mind-only’49N.J. Nilsson, The Quest for Artificial Intelligence, Cambridge, Cambridge University Press, 2009. AI technologies/systems. Relying on high-level reasoning, researchers were optimistic that AI technology would quickly become a reality.

Those early AI technologies took a disembodied approach, using high-level logical and abstract symbols. By the end of the 1980s researchers found that this disembodied approach could not even achieve low-level tasks that humans perform with ease.50Cp. R. Brooks, Cambrian Intelligence: The Early History of the New AI, Cambridge, MA, A Bradford Book, 1999. During that period many researchers stopped working on AI technologies and systems, and the period is often referred to as the “AI winter”.51Cp. D. Crevier, AI: The Tumultuous History of the Search for Artificial Intelligence, New York, Basic Books, 1993; H.P. Newquist, The Brain Makers, Indianapolis, Ind: Sams., 1994.

Rodney Brooks then came forward with the proposition of “Nouvelle AI”,52Cp. R. Brooks, “A Robust Layered Control System for a Mobile Robot”, IEEE Journal on Robotics and Automation, 2 (1), 1986, pp. 14–23. Available at: https://doi.org/10.1109/JRA.1986.1087032 [accessed October 25, 2019]. arguing that the old-fashioned approach did not take into consideration motor skills and neural networks. Only by the end of the 1990s did researchers develop statistical AI systems without the need for any high-level logical reasoning;53Cp. Brooks, Cambrian Intelligence. instead AI systems were ‘guessing’ through algorithms and machine learning. This signalled a first step towards humanised artificial intelligence, as it resembles how humans make intuitive decisions;54Cp. R. Pfeifer, “Embodied Artificial Intelligence”, presented at the International Interdisciplinary Seminar on New Robotics, Evolution and Embodied Cognition, Lisbon, November, 2002. Available at: https://www.informatics.indiana.edu/rocha/publications/embrob/pfeifer.html [accessed October 25, 2019]. here researchers suggest that embodiment improves cognition.55Cp. T. Renzenbrink, “Embodiment of Artificial Intelligence Improves Cognition”, Elektormagazine, February 9, 2012. Available at: https://www.elektormagazine.com/articles/embodiment-of-artificial-intelligence-improves-cognition [accessed January 10, 2019]; G. Zarkadakis, “Artificial Intelligence & Embodiment: Does Alexa Have a Body?”, Medium, May 6, 2018. Available at: https://medium.com/@georgezarkadakis/artificial-intelligence-embodiment-does-alexa-have-a-body-d5b97521a201 …

With embodiment theory Brooks argued that AI systems would operate best when computing only the data that was absolutely necessary.56Cp. L. Steels and R. Brooks, The Artificial Life Route to Artificial Intelligence: Building Embodied, Situated Agents, London/New York, Taylor & Francis, 1995. Further, in Developing Embodied Multisensory Dialogue Agents, Michal Paradowski argues that without considering embodiment, e.g. the physics of the brain, it is not possible to create AI technologies/systems capable of comprehension, and that AI technology

“could benefit from strengthened associative connections in the optimization of their processes and their reactivity and sensitivity to environmental stimuli, and in situated human-machine interaction. The concept of multisensory integration should be extended to cover linguistic input and the complementary information combined from temporally coincident sensory impressions.”57M.B. Paradowski, “Developing Embodied Multisensory Dialogue Agents”, presented at the AISB/IACAP 2012 Symposium, Birmingham, November, 2011. Available at: http://events.cs.bham.ac.uk/turing12/ [accessed October 25, 2019].

Today we have reached the point where AI technology is being deployed by the armed forces on a large scale:

“The Pentagon and the U.S. government have been put on notice that the only way to mitigate the risk of being at a technological disadvantage is by investing billions […] in artificial intelligence, machine learning and future technologies that will require no support from civilian companies.”58B. Ladd, “The Military Industrial Complex Is in a Massive Battle Against Big Tech”, Observer, May 24, 2019. Available at: https://observer.com/2019/05/military-industrial-complex-big-tech/ [accessed October 25, 2019].

With this historical analysis in mind we can now discuss the paper’s focus on power elites. Joseph Raz studied the procedures through which elites attain disciplinary power in society.59Cp. J. Raz, The Authority of Law: Essays on Law and Morality, Oxford, OUP Oxford, 2009. Raz argues that the disciplinary power of elites in society is interchangeable with the disciplinary power of legislation and legal code. For Raz legal code is perceived by society as the custodian of public order. He further explains that by precluding objectionable actions, legislation directs society’s activities in a manner appropriate to jurisprudence. Nevertheless, Raz did not demonstrate how legislation impacts on personal actions. This is where Michel Foucault’s theories on discipline and power come in. According to Foucault the disciplinary power of legislation leads to a self-discipline of individuals.60Cp. M. Foucault, Discipline and Punish: The Birth of the Prison, London, Vintage Books, 1995. Foucault argues that the institutions of courts and judges motivate such a self-disciplining of individuals,61S. Chen, “AI Research Is in Desperate Need of an Ethical Watchdog”, Wired, September 18, 2017. Available at: https://www.wired.com/story/ai-research-is-in-desperate-need-of-an-ethical-watchdog/ [accessed October 25, 2019]. and that self-disciplining rules serve “more and more as a norm”62M. Foucault, The History of Sexuality, Vol. 1, London: Harmondsworth, Penguin, 1981, p. 144.

Foucault’s theories are especially helpful in discussing how the “rule of truth” has disciplined civilisation, allowing for an adoption of AI technologies which seems to benefit mainly the upper class. But should we then think of a notion of ‘deep-truth’ as the unwieldy product of deep-learning AI algorithms? Discussions around truth, Foucault states, form legislation into something that “decides, transmits and itself extends upon the effects of power”63M. Foucault, “Disciplinary Power and Subjection”, in S. Lukes (ed.), Power, New York, NYU Press, 1986, pp. 229–242, here: p. 230. Foucault’s theories help to explain how legislation, as an institution, is rolled out throughout society with very little resistance, or “proletarian counter-justice”64M. Foucault, Power, edited by C. Gordon, London, Penguin, 1980, p. 34. Foucault explains that this has made the justice system and legislation a for-profit system. With this understanding of legislation, and of social justice, one needs to reflect further on Foucault’s notion of how disciplinary power seeks to express its distributed nature in the modern state. Namely, one has to analyse the distributed nature of those AI technologies, especially through networks and protocols, so that the link can be made to AI technologies becoming ‘legally’ more profitable in the hands of the upper class.

If power generates new opportunities rather than simply repressing them, then, following Foucault, more interaction and participation can extend, and not simply challenge, power relations.65Cp. Foucault, Power. Foucault offers a valuable insight into power relationships that is relevant also to AI technologies.66Cp. M. Foucault, “The Subject and Power”, Critical Inquiry, 8 (4), 1982, pp. 777–795. This insight is the product of research Foucault undertook over a period of more than twenty years. Foucault uses the metaphor of a chemical catalyst for a resistance which can bring to light power relationships, and thus allow an analysis of the methods this power uses: “[r]ather than analysing power from the point of view of its internal rationality, it consists of analysing power relations through the antagonism of strategies.”67Foucault, “The Subject and Power”, p. 780.

In Protocol, Alexander Galloway describes how these protocols changed the notion of power and how “control exists after decentralization”68A.R. Galloway, Protocol: How Control Exists After Decentralization, Cambridge, MA, MIT Press, 2004, p. 81. . Galloway argues that protocol has a close connection to both Deleuze’s concept of control and Foucault’s concept of biopolitics69Cp. M. Foucault, The Birth of Biopolitics: Lectures at the Collège de France, 1978–1979, London, Pan Macmillan, 2008. by claiming that the key to perceiving protocol as power is to acknowledge that “protocol is an affective, aesthetic force that has control over life itself.”70Galloway, Protocol, p. 81. Galloway suggests that it is important to discuss more than the technologies, and to look into the structures of control within technological systems, which also include underlying codes and protocols, in order to distinguish between methods that can support collective production, e.g. sharing of AI technologies within society, and those that put the AI technologies in the hands of the powerful few.71Cp. Galloway, Protocol, p. 147. Galloway’s argument in the chapter Hacking is that the existence of protocols “not only installs control into a terrain that on its surface appears actively to resist it”72Galloway, Protocol, p. 146. , but goes on to create the highly controlled network environment. For Galloway hacking is “an index of protocological transformations taking place in the broader world of techno-culture.”73Galloway, Protocol, p. 157.

In order to regulate networks and AI technologies, control and censorship mechanisms are introduced by applying them to devices and nodes. This form of surveillance, or dataveillance, might constitute a development akin to Michel Foucault’s concept of “panopticism”74Cp. M. Foucault, Discipline and Punish: The Birth of the Prison, New York, Pantheon, 1977. or “panoptic apparatus”75Cp. M. Zimmer, “The Panoptic Gaze of Web 2.0: How Web 2.0 Platforms Act as Infrastructure of Dataveillance”, Kulturpolitik, 2, July 1, 2009, p. 5. Available at: http://michaelzimmer.org/files/Zimmer%20Aalborg%20talk.pdf [accessed October 26, 2019]. defined as both the massive collection and storage of vast quantities of personal data and the systemic use of such data in the investigation or monitoring of one or more persons. Laws and agreements like the Anti-Counterfeiting Trade Agreement,76Cp. European Commission, “The Anti-Counterfeiting Trade Agreement (ACTA)”, 2007. Available at: http://ec.europa.eu/trade/creating-opportunities/trade-topics/intellectual-property/anti-counterfeiting/ [accessed December 30, 2010]; J. Lambert, “Statement on Adoption of Joint Resolution on ACTA”, November 24, 2010. Available at: http://www.jeanlambertmep.org.uk/news_detail.php?id=620 [accessed December 15, 2010]. the Digital Economy Act and the Digital Millennium Copyright Act require surveillance of the AI technologies that consumers use in their “private spheres”,77Cp. C. Fuchs, Social Networking Sites and the Surveillance Society, Vienna, Austria, Verein zur Förderung der Integration der Informationswissenschaften, 2009; A. Medosch, “Post-Privacy or the Politics of Labour, Intelligence and Information”, The Next Layer, January 15, 2010. Available at: http://thenextlayer.org/node/1237 [accessed January 19, 2010]; C. Wolf, The Digital Millennium Copyright Act, Washington, Pike & Fischer – A BNA Company, 2003. and can be used to silence “critical voices”.78Cp. L.B. Movius, “Surveillance, Control, and Privacy on the Internet: Challenges to Democratic Communication”, Journal of Global Communication, 2 (1), 2009, pp. 209–224. The censorship of truth, and the creation of fear of law through moral panics, stand in opposition to the development of a healthy democratic use of AI technologies. Issues regarding the ethics of AI arise from this debate.79Cp. Berkman Klein Center, “Ethics and Governance of AI”, 2018. Available at: https://cyber.harvard.edu/topics/ethics-and-governance-ai [accessed September 22, 2018]; J. Clark, “AI and Ethics: People, Robots and Society”, Washington Post, March 3, 2018. Available at: http://www.washingtonpost.com/video/postlive/ai-and-ethics-people-robots-and-society/2018/03/20/ffdff6c2-2c5a-11e8-8dc9-3b51e028b845_video.html [accessed September 22, 2018]; P. Green, “Artificial Intelligence and …

Peter Fitzpatrick expands on Foucault’s theory, investigating the “symbiotic link between the rule of law and modern administration”80P. Fitzpatrick, The Mythology of Modern Law, London, Routledge, 2002, p. 147. Here again we can make the link to ethically questionable advances in AI technologies. Legislation, or legal code, Fitzpatrick argues, corrects “the disturbance of things in their course and reassert the nature of things”81Fitzpatrick, The Mythology of Modern Law, p. 160. For Fitzpatrick legislation is not an all-embracing, comprehensive concept as argued by Dworkin and Hart,82Cp. Dworkin, A Matter of Principle; Hart, The Concept of Law. but rather legislation is defined by elites. For Fitzpatrick legislation “changes as society changes and it can even disappear when the social conditions that created it disappear or when they change into conditions antithetical to it.”83Fitzpatrick, The Mythology of Modern Law, p. 6. Furthermore, Robin West suggests that the impact of disciplinary power, exercised through legislation, on the belief system of individuals does not allow for an analytical, critical engagement by individuals with the issues at stake. Legislation is simply regarded as given.84Cp. R. West, Narrative, Authority, and Law, Michigan, MI, University of Michigan Press, 1993.

John Adams and Roger Brownsword give a more nuanced view of contemporary legislation. They argue that legislation aims to institute public order: it sets up authoritative mechanisms whereby social order can be established and maintained, social change managed, disputes settled, and policies and goals for the community adopted.85Cp. J.N. Adams and R. Brownsword, Understanding Law, New York, Sweet & Maxwell, 2006, p. 11. Adams and Brownsword go on to argue that legal code is skewed in favour of the upper class and those who engage more with politics in society – examples of which could be the corporate sector producing AI technologies and business elites seeking to use AI technologies for profit. According to Adams and Brownsword there seems to be no unbiased, fair legislation or legal code, and the maintenance of public order must simply reproduce an unfair class society. If this is the case, following Adams and Brownsword’s argumentation, one can argue that the adoption of AI technologies does not follow a utilitarian ethical code benefiting society, but rather conforms to the interests of a small group: those owning AI technologies.

A further discussion of disciplinary power within the process of writing legal code is that of William Chambliss and Robert Seidman, who argue that legislation is not produced through a process characterised by balanced, fair development, but rather by powerful elites writing legal code by themselves.86Cp. Chambliss and Seidman, Law, Order, and Power. Translating this back to the adoption of AI technologies, it becomes evident that the freedom to engage with those technologies is left to those who have the financial means, and with it the legal means, to do so. According to Chambliss and Seidman, in a culture dominated by economics, legislation and technologies are being outlined and modelled by those powerful elites.

The analysis of the theories above has attempted to show that the implementation of AI technologies might be construed as a project deriving from, and serving the interests of, the dominant class; following Foucault’s terminology, this is achieved using the disciplinary power of legislation, through regimes of truths, over individuals. AI technologies, rather than benefiting society, could very well be implemented against society. The implementation of AI technologies follows legislation set out by elites, raising issues connected with privacy, national security, or intellectual property laws.

We will conclude our analysis of the disciplinary power of AI technologies by discussing issues concerning privacy and secrecy laws,87Cp. Nicole Moreham, The Law of Privacy and the Media, Oxford, Oxford University Press, 2016. as examples of how powerful elites use such legislation to safeguard their political and economic influence in the implementation of AI technologies. Crook argues that a fear of legislation is being cultivated as a check on the analysis of how elites abuse their power. For Crook the “moral panic of invasion of privacy has been constructed as a mischief perpetrated by media when there is scant scrutiny of the state’s invasion of personal privacy by surveillance, covert investigation, collection and misuse of data.”88T. Crook, Comparative Media Law and Ethics, London, Routledge, 2009, p. 115.

With the implementation of AI technologies come national security concerns. The legislation covering national security, in the UK for example the Official Secrets Acts, was initiated stressing the notion of the security of the nation state. Nevertheless, Crook states that the “Official Secrets Acts have been repeatedly used by governments to suppress revelations that were, and are, politically embarrassing rather than genuine threats to national security.”89Crook, Comparative Media Law and Ethics, p. 322. Crook explains further that the Official Secrets Act is being used not only to censor, but also to spy on citizens. As AI technology is deeply implemented within the army, we cannot but wonder if this legislation is only safeguarding the interests of the political elite. For Steve Warner, too, the Official Secrets Act is legislation used “to suppress embarrassing or controversial revelations and to undermine the public’s right to know.”90Steve Warner, “Secrets, Spies and Whistleblowers”, Article 19 and Liberty, London, 2000. Available at: https://www.article19.org/data/files/pdfs/publications/secrets-spies-and-whistleblowers.pdf [accessed November 1, 2019]. Warner argues that legislation in the hands of the power elites is profoundly against democratic principles, and therefore criticises the lack of support for whistle-blowers who bring to light such disciplinary use of power. The censorship of truth stands in opposition to the development of a healthy democratic use of AI technologies.

AI Technologies and Restorative Justice: The Ethics of Care

Most institutions concerned by the debate on the ethics of automatisation today have resorted to the adoption of the “Open World Assumption” principle,91Cp. C. Maria Keet, “Open World Assumption”, in Werner Dubitzky, Olaf Wolkenhauer, Kwang-Hyun Cho and Hiroki Yokota (eds.), Encyclopedia of Systems Biology, New York, NY, Springer New York, 2013, p. 1567. Available at: https://doi.org/10.1007/978-1-4419-9863-7_734 [accessed November 1, 2019]. which provides a sort of safety valve: a last-resort civil right to raise a flag and ask for the intervention of a human in the analysis and consideration of judicial decisions. In such a case institutional operators can always override decisions established by automated systems, though this poses risks of a different nature to the integrity of the process.
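The safety valve described above amounts to a small piece of decision architecture: an automated outcome that a flagged party can escalate to a human operator, who may override it. The sketch below is a speculative illustration under that reading; the pipeline, names and trivial decision logic are hypothetical, not an account of any deployed judicial system.

```python
# A minimal sketch of a human-override "safety valve" over automated decisions.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    outcome: str
    decided_by: str  # "automated" or "human"

def decide(case: dict,
           model: Callable[[dict], str],
           flag_raised: bool,
           human_review: Optional[Callable[[dict, str], str]] = None) -> Decision:
    proposed = model(case)
    if flag_raised and human_review is not None:
        # Last-resort civil right: a human reconsiders and may override.
        return Decision(human_review(case, proposed), decided_by="human")
    return Decision(proposed, decided_by="automated")

# Usage: a trivial model and an operator who overturns the automated outcome.
verdict = decide({"id": 1}, model=lambda c: "liable",
                 flag_raised=True,
                 human_review=lambda c, p: "not liable")
print(verdict)  # Decision(outcome='not liable', decided_by='human')
```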

Having said this, the prospect could be raised that restorative justice might offer “a solution that could deliver more meaningful justice”92Crook, Comparative Media Law and Ethics, p. 310. With respect to AI technologies, and the potential inherent in them for AI crimes, instead of following a retributive legislative approach, an ethical discourse,93Cp. R. Courtland, “Bias Detectives: The Researchers Striving to Make Algorithms Fair”, Nature, 558, 2018, pp. 357–360. Available at: https://doi.org/10.1038/d41586-018-05469-3 [accessed October 25, 2019]. with a deeper consideration for the sufferers of AI crimes, should be adopted.94Cp. H. Fry, “We Hold People With Power to Account. Why Not Algorithms?”, The Guardian, September 17, 2018. Available at: https://web.archive.org/web/20190102194739/https://www.theguardian.com/commentisfree/2018/sep/17/power-algorithms-technology-regulate [accessed October 25, 2019]. Acting ethically is more difficult than ever,95Cp. J. Ito, “Resisting Reduction: A Manifesto”, Journal of Design and Science, 3, Cambridge MA, MIT Press, 2017. Available at: https://doi.org/10.21428/8f7503e4 [accessed October 25, 2019]. due to the hyper-expansion of big data and artificial intelligence.96Cp. J. Bridle, “Rise of the Machines: Has Technology Evolved Beyond our Control?”, The Guardian, June 15, 2018. Available at: https://web.archive.org/web/20190111222310/https://www.theguardian.com/books/2018/jun/15/rise-of-the-machines-has-technology-evolved-beyond-our-control- [accessed October 25, 2019]; P.J. Singh, “AI Superpower or Client Nation?”, The Hindu, July 27, 2018. Available at: https://www.thehindu.com/opinion/op-ed/ai-superpower-or-client-nation/article24523017.ece … Following the semantic slide from the noun “University” to that of “Enterprise”,97Cp. Giorgio Agamben, The Kingdom and the Glory: For a Theological Genealogy of Economy and Government, Stanford, CA, Stanford University Press, 2011. research into artificial intelligence has gone from being a public service undertaken mainly at universities to being run (and regarded) as a business, dominated by big corporations such as Alphabet (parent company of Google) and Facebook, created to generate profit.98Cp. R. Keeble, Ethics for Journalists, London, Routledge, 2008. These companies need to attract a large number of paying customers. AI technologies have become workers in the market economy, rarely following any ethical guidelines.99Cp. M. Kieran, Media Ethics, London, Psychology Press, 1998. We ask: could restorative justice offer an alternative way of dealing with the occurrence of AI crimes?100Cp. O. Etzioni, “How to Regulate Artificial Intelligence”, The New York Times, January 20, 2018. Available at: https://www.nytimes.com/2017/09/01/opinion/artificial-intelligence-regulations-rules.html [accessed October 25, 2019]; A. Goel, “Ethics and Artificial Intelligence”, The New York Times, December 22, 2017. Available at: https://www.nytimes.com/2017/09/14/opinion/artificial-intelligence.html [accessed October 25, 2019].

Dale Miller and Neil Vidmar described two psychological perceptions of justice.101Cp. N. Vidmar and D.T. Miller, “Socialpsychological Processes Underlying Attitudes Toward Legal Punishment”, Law and Society Review, 1980, pp. 565–602. The first is behavioural control: following the legal code as strictly as possible and punishing any wrongdoer.102Cp. M. Wenzel and T.G. Okimoto, “How Acts of Forgiveness Restore a Sense of Justice: Addressing Status/Power and Value Concerns Raised by Transgressions”, European Journal of Social Psychology, 40 (3), 2010, pp. 401–417. The second is restorative justice, which focuses on restoration where harm was done. An alternative approach for the ethical implementation of AI technologies, with respect to legislation, might thus be to follow restorative justice principles. Restorative justice would allow for AI technologies to learn how to care about ethics.103Cp. N. Bostrom and E. Yudkowsky, “The Ethics of Artificial Intelligence”, in K. Frankish and W.M. Ramsey (eds.), The Cambridge Handbook of Artificial Intelligence, Cambridge, Cambridge University Press, 2014, pp. 316–334. Julia Fionda describes restorative justice as a conciliation between victim and offender, during which the offence is deliberated upon.104Cp. J. Fionda, Devils and Angels: Youth Policy and Crime, London, Hart, 2005. Both parties try to come to an agreement on how to restore the situation to what it was before the crime (here an AI crime) happened. Restorative justice advocates compassion for the victim and offender, and a consciousness on the part of the offenders as to the repercussions of their crimes.
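Read side by side, the two perceptions differ in what a response to an AI crime computes: a sanction looked up against the offence, or an agreement negotiated between the parties. The toy sketch below only illustrates that structural contrast; the functions and their outputs are hypothetical and implement no real legal procedure.

```python
# A toy contrast between the two perceptions of justice described above.
from dataclasses import dataclass

@dataclass
class AICrime:
    offender: str  # the (speculative) AI system at fault
    victim: str
    harm: str

def behavioural_control(crime: AICrime) -> str:
    """Punitive model: the offence is matched to a fixed sanction."""
    return f"Sanction imposed on {crime.offender} for {crime.harm}."

def restorative_conference(crime: AICrime) -> str:
    """Restorative model: victim and offender deliberate on the offence
    and agree on how to restore the situation before the crime."""
    agreement = f"{crime.offender} undertakes to repair {crime.harm} for {crime.victim}"
    return f"Conference outcome: {agreement}."

crime = AICrime(offender="trading-agent-7", victim="pension fund",
                harm="market manipulation losses")
print(behavioural_control(crime))
print(restorative_conference(crime))
```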

One can argue that such evils are becoming more evident nowadays with the advance of AI technologies. For AI crimes, punishment in the classical sense may seem to be adequate.105Cp. R. Montti, “Google’s ‘Don’t Be Evil’ No Longer Prefaces Code of Conduct”, Search Engine Journal, May 20, 2018. Available at: https://www.searchenginejournal.com/google-dont-be-evil/254019/ [accessed September 22, 2018]. Robert Duff argues that using a punitive approach to punish offences educates the public.106Cp. R.A. Duff, Punishment, Communication, and Community, Oxford, Oxford University Press, 2003. Tyler Okimoto and Michael Wenzel refer to Emile Durkheim’s studies on the social function of punishment, which serves to establish a societal awareness of what ought to be right or wrong.107Cp. Wenzel and Okimoto, “How Acts of Forgiveness Restore a Sense of Justice”; E. Durkheim, The Rules of Sociological Method, New Delhi, Vani Prakashan, 1960. Nils Christie, however, criticises this form of execution of the law. He argues that, through conflict, there is the potential to discuss the rules given by law, allowing for a restorative process rather than one characterised by punishment and a strict following of rules. Christie states that those suffering most from crimes suffer twice: although it is the offenders who are put on trial, the victims have very little say in courtroom hearings, where mainly lawyers argue with one another. It boils down to guilty or not guilty, with no discussion in between. Christie argues that running restorative conferencing sessions helps both sides to come to terms with what happened. The victims of AI crimes would not only be placed in front of a court, but also be offered engagement in the process of seeking justice and restoration.108Cp. Nils Christie, “Conflicts as Property”, The British Journal of Criminology, 17 (1), 1977, pp. 1–15.

Restorative justice might support victims of AI crimes better than the punitive legal system, as it allows the sufferers of AI crimes to be heard in a personalised way, one that could be adapted to the needs of the victims (and offenders). As victims and offenders represent themselves in restorative conferencing sessions, these become much more affordable,109Cp. J. Braithwaite, “Restorative Justice and a Better Future”, in E. McLaughlin and G. Hughes (eds.), Restorative Justice: Critical Issues, London, SAGE, 2003, pp. 54–67. meaning that the financial barrier to seeking justice would be partly eliminated, allowing poorer parties to contribute to the process of justice. This would benefit wider society, and AI technologies would not be defined solely by a powerful elite. Restorative justice could hold the potential not only to discuss the AI crimes themselves, but also to get to the root of the problem and discuss the cause of an AI crime. For John Braithwaite restorative justice makes re-offending harder.110Cp. J. Braithwaite, Crime, Shame and Reintegration, Cambridge, Cambridge University Press, 1989.

In such a scenario, a future AI system capable of committing AI crimes would need to have knowledge of ethics around the particular discourse of restorative justice. The implementation of AI technologies will lead to a discourse around who is responsible for actions taken by AI technologies. Even clearly defined ethical guidelines might be difficult to implement,111Cp. A. Conn, “Podcast: Law and Ethics of Artificial Intelligence”, Future of Life, March 31, 2017. Available at: https://futureoflife.org/2017/03/31/podcast-law-ethics-artificial-intelligence/ [accessed September 22, 2018]. due to the competitive pressure AI systems find themselves under.

That said, this speculation is restricted to humanised artificial intelligence systems. The main hindrance to AI technologies being part of a restorative justice system might be the very human emotion of shame. Without a clear understanding of shame it will be impossible to resolve AI crimes in a restorative manner.112Cp. A. Rawnsley, “Madeleine Albright: ‘The Things that are Happening are Genuinely, Seriously Bad’”, The Guardian, July 8, 2018. Available at: https://web.archive.org/web/20190106193657/https://www.theguardian.com/books/2018/jul/08/madeleine-albright-fascism-is-not-an-ideology-its-a-method-interview-fascism-a-warning [accessed October 25, 2019]. Thus one might want to think about a humanised symbiosis between humans and technology,113Cp. D. Haraway, “A Cyborg Manifesto”, Socialist Review, 15 (2), 1985. Available at: http://www.stanford.edu/dept/HPS/Haraway/CyborgManifesto.html [accessed October 25, 2019]; C. Thompson, “The Cyborg Advantage”, Wired, March 22, 2010. Available at: https://www.wired.com/2010/03/st-thompson-cyborgs/ [accessed October 25, 2019]. along the lines of Garry Kasparov’s advanced chess,114Cp. J. Hipp et al., “Computer Aided Diagnostic Tools Aim to Empower Rather than Replace Pathologists: Lessons Learned from Computational Chess”, Journal of Pathology Informatics, 2, 2011. Available at: https://doi.org/10.4103/2153-3539.82050 [accessed October 25, 2019]. as a kind of advanced jurisprudence:115Cp. J. Baggini, “Memo to Those Seeking to Live for Ever: Eternal Life Would be Deathly Dull”, The Guardian, July 8, 2018. Available at: https://web.archive.org/web/20181225111455/https://www.theguardian.com/commentisfree/2018/jul/08/live-for-ever-eternal-life-deathly-dull-immortality [accessed October 25, 2019]. a legal system where human and machine work together on restoring justice, for social justice.

Furthering this perspective, we suggest that reflections brought by new materialism should also be taken into account: not only as a critical perspective on the gendered and anthropomorphic representation of AI, but also to broaden the spectrum of what we consider to be justice in relation to the whole living world. Without this new perspective, the idealised image of AI as a non-living intelligence that deals with enormous amounts of information risks serving the abstraction of anthropocentric views into a computationalist acceleration, with deafening results. Rather than such an implosive perspective, the application of law and jurisprudence could take advantage of AI’s enhanced computational and sensorial capabilities by including all information gathered from the environment, including that produced by plants, animals and soil.

References
1 Cp. G. Chaslot, “YouTube’s A.I. was divisive in the US presidential election”, Medium, November 27, 2016. Available at: https://medium.com/the-graph/youtubes-ai-is-neutral-towards-clicks-but-is-biased-towards-people-and-ideas-3a2f643dea9a#.tjuusil7d [accessed February 25, 2018]; E. Morozov, “The Geopolitics Of Artificial Intelligence”, FutureFest, London, 2018. Available at: https://www.youtube.com/watch?v=7g0hx9LPBq8 [accessed October 25, 2019].
2 Cp. M. Busby, “Use of ‘Killer Robots’ in Wars Would Breach Law, Say Campaigners”, The Guardian, August 21, 2018. Available at: https://web.archive.org/web/20181203074423/https://www.theguardian.com/science/2018/aug/21/use-of-killer-robots-in-wars-would-breach-law-say-campaigners [accessed October 25, 2019].
3 Cp. A. Hadzi, “Social Justice and Artificial Intelligence”, Body, Space & Technology, 18 (1), 2019, pp. 145–174. Available at: https://doi.org/10.16995/bst.318 [accessed October 25, 2019].
4 Cp. A. Kaplan and M. Haenlein, “Siri, Siri, in my Hand: Who’s the Fairest in the Land? On the Interpretations, Illustrations, and Implications of Artificial Intelligence”, Business Horizons, 62 (1), 2019, pp. 15–25. https://doi.org/10.1016/j.bushor.2018.08.004; S. Legg and M. Hutter, A Collection of Definitions of Intelligence, Lugano, Switzerland, IDSIA, 2007. Available at: http://arxiv.org/abs/0706.3639 [accessed October 25, 2019].
5 N. Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford, Oxford University Press, 2014.
6 Cp. T. King, N. Aggarwal, M. Taddeo and L. Floridi, “Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions”, SSRN Scholarly Paper No. ID 3183238, Rochester, NY, Social Science Research Network, 2018. Available at: https://papers.ssrn.com/abstract=3183238 [accessed October 25, 2019].
7 P. Mason, Clear Bright Future, London, Allen Lane Publishers, 2019.
8 Cp. T. Crook, Comparative Media Law and Ethics, London, Routledge, 2009; T. Crook, “Power, Intelligence, Whistle-blowing and the Contingency of History”, paper presented at the Annual Conference of the Institute of Communication Ethics, The Foreign Press Association, London, November 3, 2010. Available at: https://www.gold.ac.uk/media-communications/staff/crook/ [accessed October 25, 2019].
9 Cp. Mason, Clear Bright Future.
10 Cp. K. Marx, “Estranged Labour. London: Karl Marx Economic and Philosophical Manuscripts”, 1844. Available at: https://www.marxists.org/archive/marx/works/1844/manuscripts/labour.htm [accessed October 25, 2019].
11 Cp. ibid.
12 S. Omohundro, “Autonomous Technology and the Greater Human Good”, Journal of Experimental & Theoretical Artificial Intelligence, 26 (3), 2014, pp. 303–315, here: p. 303.
13 Cp. P. Parikh, “On Liberalism and Neoliberalism”, Medium, October 21, 2017. Available at: https://medium.com/@pparikh1/on-liberalism-and-neoliberalism-5946523aa2ca [accessed January 4, 2019].
14 Cp. S. Pueyo, “Growth, Degrowth, and the Challenge of Artificial Superintelligence”, Journal of Cleaner Production, 197, 2018, pp. 1731-1736. https://doi.org/10.1016/j.jclepro.2016.12.138 [accessed October 25, 2019].
15 Cp. J. Danaher, “The Threat of Algocracy: Reality, Resistance and Accommodation”, Philosophy & Technology, 29 (3), 2016, pp. 245–268. Available at: https://doi.org/10.1007/s13347-015-0211-1 [accessed October 25, 2019].
16 Cp. J.K. Galbraith, The New Industrial State, Oxford, Princeton University Press, 2015.
17 Cp. S. Poole, “Arabic, Algae and AI: The Truth About ‘Algorithms’”, The Guardian, September 20, 2018. Available at: https://web.archive.org/web/20181119100303/https://www.theguardian.com/books/2018/sep/20/from-arabic-to-algae-like-ai-the-alarming-rise-of-the-algorithm- [accessed October 25, 2019]; D. Roio, “Algorithmic Sovereignty” Thesis, University of Plymouth, 2018. Available at: https://pearl.plymouth.ac.uk/handle/10026.1/11101 [accessed October 25, 2019]; A. Smith, “Franken-Algorithms: The Deadly Consequences of Unpredictable Code”, The Guardian, August 30, 2018. Available at: https://web.archive.org/web/20190105054549/https://www.theguardian.com/technology/2018/aug/29/coding-algorithms-frankenalgos-program-danger [accessed October 25, 2019].
18 Cp. Mason, Clear Bright Future.
19 Cp. C. Cadwalladr, “Elizabeth Denham: ‘Data Crimes are Real Crimes’”, The Guardian, July 15, 2018. Available at: https://web.archive.org/web/20181121235057/https://www.theguardian.com/uk-news/2018/jul/15/elizabeth-denham-data-protection-information-commissioner-facebook-cambridge-analytica [accessed October 25, 2019].
20 Cp. B. Olivier, “Cyberspace, Simulation, Artificial Intelligence, Affectionate Machines and Being Human”, Communicatio, 38 (3), 2012, pp. 261–278. https://doi.org/10.1080/02500167.2012.716763 [accessed October 25, 2019]; E.A. Wilson, Affect and Artificial Intelligence, Washington, University of Washington Press, 2011.
21 Cp. P.G. Ossorio, The Behavior of Persons, Ann Arbor, Descriptive Psychology Press, 2013. Available at: http://www.sdp.org/sdppubs-publications/the-behavior-of-persons/ [accessed October 25, 2019].
22 Cp. J. Jeffrey, “Knowledge Engineering: Theory and Practice”, Society for Descriptive Psychology, 5, 1990, pp. 105–122.
23 Cp. P.G. Ossorio, Persons: The Collected Works of Peter G. Ossorio, Volume I, Ann Arbor, Descriptive Psychology Press, 1995. Available at: http://www.sdp.org/sdppubs-publications/persons-the-collected-works-of-peter-g-ossorio-volume-1/ [accessed October 25, 2019].
24 Cp. M. Mountain, “Lawsuit Filed Today on Behalf of Chimpanzee Seeking Legal Personhood”, Nonhuman Rights Blog, December 2, 2013. Available at: https://www.nonhumanrights.org/blog/lawsuit-filed-today-on-behalf-of-chimpanzee-seeking-legal-personhood/ [accessed January 8, 2019]; M. Midgley, “Fellow Champions Dolphins as ‘Non-Human Persons’”, Oxford Centre for Animal Ethics, January 10, 2010. Available at: https://www.oxfordanimalethics.com/2010/01/fellow-champions-dolphins-as-%E2%80%9Cnon-human-persons%E2%80%9D/ [accessed January 8, 2019].
25 Cp. R. Bergner, “The Tolstoy Dilemma: A Paradigm Case Formulation and Some Therapeutic Interventions”, in K.E. Davis, F. Lubuguin and W. Schwartz (eds.), Advances in Descriptive Psychology, Vol. 9, 2010, pp. 143–160. Available at: http://www.sdp.org/sdppubs-publications/advances-in-descriptive-psychology-vol-9; P. Laungani, “Mindless Psychiatry and Dubious Ethics”, Counselling Psychology Quarterly, 15 (1), 2002, pp. 23–33. Available at: https://doi.org/10.1080/09515070110102305 [accessed October 26, 2019].
26 W. Schwartz, “What Is a Person and How Can We Be Sure? A Paradigm Case Formulation”, SSRN Scholarly Paper No. ID 2511486, Rochester, NY, Social Science Research Network, 2014, pp. 27–34. Available at: https://papers.ssrn.com/abstract=2511486 [accessed October 25, 2019].
27 Ossorio, The Behavior of Persons.
28 Schwartz, “What Is a Person and How Can We Be Sure? A Paradigm Case Formulation”, p. 30.
29 Cp. Mason, Clear Bright Future.
30 Cp. Aristotle and T.J. Saunders, The Politics, London, Penguin UK, 1981.
31 Cp. A. MacIntyre, Dependent Rational Animals: Why Human Beings Need the Virtues, revised edition, Chicago, Open Court, 2001.
32 Cp. T. Aquinas, Summa Theologiae: Volume 33, Hope: 2a2ae, Cambridge, Cambridge University Press, 2006, pp. 17–22.
33 A. MacIntyre, After Virtue, London, A&C Black, 2013, p. xi.
34 Cp. J. Austin, The Province of Jurisprudence Determined: And, The Uses of the Study of Jurisprudence, Indianapolis, Hackett Publishing, 1998.
35 Cp. H.L.A. Hart, The Concept of Law, Oxford, Oxford University Press, 1961.
36 Cp. R. Dworkin, A Matter of Principle, Oxford, Clarendon Press, 1986.
37 Cp. H. Kelsen, Pure Theory of Law, Los Angeles, University of California Press, 1967; H. Kelsen, General Theory of Law and State, New Jersey, The Lawbook Exchange, Ltd., 2009.
38 Cp. H. Kelsen, General Theory of Norms, Oxford, Clarendon Press, 1991.
39 Cp. W.J. Chambliss and R.B. Seidman, Law, Order, and Power, London, Addison-Wesley Publishing Company, 1982.
40 Cp. M. Weber, Economy and Society: An Outline of Interpretive Sociology, Los Angeles, University of California Press, 1978.
41 Cp. K. Marx, Capital, Vol 1, London, Penguin Books Limited, 1990.
42 Cp. M. Harris, “Glitch Capitalism: How Cheating AIs Explain Our Stagnant Present”, New York Magazine, April 23, 2018. Available at: http://nymag.com/selectall/2018/04/malcolm-harris-on-glitch-capitalism-and-ai-logic.html [accessed May 16, 2018].
43 Cp. C. Sumner, Reading Ideologies: An Investigation Into the Marxist Theory of Ideology and Law, London, Academic Press, 1979.
44 Sumner, Reading Ideologies, p. 51.
45 Cp. J. Locke, Political Writings, London, Mentor, 1993.
46 Cp. J. Moor, The Turing Test: The Elusive Standard of Artificial Intelligence, New York, Springer Science & Business Media, 2003.
47 A.M. Turing, “Computing Machinery and Intelligence”, Mind, 59 (236), 1950, pp. 433–460, here: p. 460.
48 Cp. M. Hoffman and R. Pfeifer, “The Implications of Embodiment for Behavior and Cognition: Animal and Robotic Case Studies”, in W. Tschacher and C. Bergomi (eds.), The Implications of Embodiment: Cognition and Communication, Exeter, Andrews UK Limited, 2015, pp. 31–58. Available at: https://arxiv.org/abs/1202.0440.
49 N.J. Nilsson, The Quest for Artificial Intelligence, Cambridge, Cambridge University Press, 2009.
50 Cp. R. Brooks, Cambrian Intelligence: The Early History of the New AI, Cambridge, MA, A Bradford Book, 1999.
51 Cp. D. Crevier, AI: The Tumultuous History of the Search for Artificial Intelligence, New York, Basic Books, 1993; H.P. Newquist, The Brain Makers, Indianapolis, Sams, 1994.
52 Cp. R. Brooks, “A Robust Layered Control System for a Mobile Robot”, IEEE Journal on Robotics and Automation, 2 (1), 1986, pp. 14–23. Available at: https://doi.org/10.1109/JRA.1986.1087032 [accessed October 25, 2019].
53 Cp. Brooks, Cambrian Intelligence.
54 Cp. R. Pfeifer, “Embodied Artificial Intelligence”, presented at the International Interdisciplinary Seminar on New Robotics, Evolution and Embodied Cognition, Lisbon, November 2002. Available at: https://www.informatics.indiana.edu/rocha/publications/embrob/pfeifer.html [accessed October 25, 2019].
55 Cp. T. Renzenbrink, “Embodiment of Artificial Intelligence Improves Cognition”, Elektormagazine, February 9, 2012. Available at: https://www.elektormagazine.com/articles/embodiment-of-artificial-intelligence-improves-cognition [accessed January 10, 2019]; G. Zarkadakis, “Artificial Intelligence & Embodiment: Does Alexa Have a Body?”, Medium, May 6, 2018. Available at: https://medium.com/@georgezarkadakis/artificial-intelligence-embodiment-does-alexa-have-a-body-d5b97521a201 [accessed January 10, 2019].
56 Cp. L. Steels and R. Brooks, The Artificial Life Route to Artificial Intelligence: Building Embodied, Situated Agents, London/New York, Taylor & Francis, 1995.
57 M.B. Paradowski, “Developing Embodied Multisensory Dialogue Agents”, presented at the AISB/IACAP 2012 Symposium, Birmingham, November 2011. Available at: http://events.cs.bham.ac.uk/turing12/ [accessed October 25, 2019].
58 B. Ladd, “The Military Industrial Complex Is in a Massive Battle Against Big Tech”, Observer, May 24, 2019. Available at: https://observer.com/2019/05/military-industrial-complex-big-tech/ [accessed October 25, 2019].
59 Cp. J. Raz, The Authority of Law: Essays on Law and Morality, Oxford, Oxford University Press, 2009.
60 Cp. M. Foucault, Discipline and Punish: The Birth of the Prison, London, Vintage Books, 1995.
61 S. Chen, “AI Research Is in Desperate Need of an Ethical Watchdog”, Wired, September 18, 2017. Available at: https://www.wired.com/story/ai-research-is-in-desperate-need-of-an-ethical-watchdog/ [accessed October 25, 2019].
62 M. Foucault, The History of Sexuality, Vol. 1, Harmondsworth, Penguin, 1981, p. 144.
63 M. Foucault, “Disciplinary Power and Subjection”, in S. Lukes (ed.), Power, New York, NYU Press, 1986, pp. 229–242, here: p. 230.
64 M. Foucault, Power, edited by C. Gordon, London, Penguin, 1980, p. 34.
65 Cp. Foucault, Power.
66 Cp. M. Foucault, “The Subject and Power”, Critical Inquiry, 8 (4), 1982, pp. 777–795.
67 Foucault, “The Subject and Power”, p. 780.
68 A.R. Galloway, Protocol: How Control Exists After Decentralization, Cambridge, MA, MIT Press, 2004, p. 81.
69 Cp. M. Foucault, The Birth of Biopolitics: Lectures at the Collège de France, 1978–1979, London, Pan Macmillan, 2008.
70 Galloway, Protocol, p. 81.
71 Cp. Galloway, Protocol, p. 147.
72 Galloway, Protocol, p. 146.
73 Galloway, Protocol, p. 157.
74 Cp. M. Foucault, Discipline and Punish: The Birth of the Prison, New York, Pantheon, 1977.
75 Cp. M. Zimmer, “The Panoptic Gaze of Web 2.0: How Web 2.0 Platforms Act as Infrastructure of Dataveillance”, Kulturpolitik, 2, July 1, 2009, p. 5. Available at: http://michaelzimmer.org/files/Zimmer%20Aalborg%20talk.pdf [accessed October 26, 2019].
76 Cp. European Commission, “The Anti-Counterfeiting Trade Agreement (ACTA)”, 2007. Available at: http://ec.europa.eu/trade/creating-opportunities/trade-topics/intellectual-property/anti-counterfeiting/ [accessed December 30, 2010]; J. Lambert, “Statement on Adoption of Joint Resolution on ACTA”, November 24, 2010. Available at: http://www.jeanlambertmep.org.uk/news_detail.php?id=620 [accessed December 15, 2010].
77 Cp. C. Fuchs, Social Networking Sites and the Surveillance Society, Vienna, Austria, Verein zur Förderung der Integration der Informationswissenschaften, 2009; A. Medosch, “Post-Privacy or the Politics of Labour, Intelligence and Information”, The Next Layer, January 15, 2010. Available at: http://thenextlayer.org/node/1237 [accessed January 19, 2010]; C. Wolf, The Digital Millennium Copyright Act, Washington, Pike & Fischer – A BNA Company, 2003.
78 Cp. L.B. Movius, “Surveillance, Control, and Privacy on the Internet: Challenges to Democratic Communication”, Journal of Global Communication, 2 (1), 2009, pp. 209–224.
79 Cp. Berkman Klein Center, “Ethics and Governance of AI”, 2018. Available at: https://cyber.harvard.edu/topics/ethics-and-governance-ai [accessed September 22, 2018]; J. Clark, “AI and Ethics: People, Robots and Society”, Washington Post, March 3, 2018. Available at: http://www.washingtonpost.com/video/postlive/ai-and-ethics-people-robots-and-society/2018/03/20/ffdff6c2-2c5a-11e8-8dc9-3b51e028b845_video.html [accessed September 22, 2018]; P. Green, “Artificial Intelligence and Ethics”, Markkula Center for Applied Ethics, Santa Clara University, 2017. Available at: https://www.scu.edu/ethics/all-about-ethics/artificial-intelligence-and-ethics/ [accessed September 22, 2018]; B. Lufkin, “Why the Biggest Challenge Facing AI is an Ethical One”, BBC, March 7, 2017. Available at: http://www.bbc.com/future/story/20170307-the-ethical-challenge-facing-artificial-intelligence [accessed September 22, 2018].
80 P. Fitzpatrick, The Mythology of Modern Law, London, Routledge, 2002, p. 147.
81 Fitzpatrick, The Mythology of Modern Law, p. 160.
82 Cp. Dworkin, A Matter of Principle; Hart, The Concept of Law.
83 Fitzpatrick, The Mythology of Modern Law, p. 6.
84 Cp. R. West, Narrative, Authority, and Law, Michigan, MI, University of Michigan Press, 1993.
85 Cp. J.N. Adams and R. Brownsword, Understanding Law, New York, Sweet & Maxwell, 2006, p. 11.
86 Cp. Chambliss and Seidman, Law, Order, and Power.
87 Cp. N. Moreham, The Law of Privacy and the Media, Oxford, Oxford University Press, 2016.
88 T. Crook, Comparative Media Law and Ethics, London, Routledge, 2009, p. 115.
89 Crook, Comparative Media Law and Ethics, p. 322.
90 S. Warner, “Secrets, Spies and Whistleblowers”, Article 19 and Liberty, London, 2000. Available at: https://www.article19.org/data/files/pdfs/publications/secrets-spies-and-whistleblowers.pdf [accessed November 1, 2019].
91 Cp. C.M. Keet, “Open World Assumption”, in W. Dubitzky, O. Wolkenhauer, K.-H. Cho and H. Yokota (eds.), Encyclopedia of Systems Biology, New York, NY, Springer New York, 2013, p. 1567. Available at: https://doi.org/10.1007/978-1-4419-9863-7_734 [accessed November 1, 2019].
92 Crook, Comparative Media Law and Ethics, p. 310.
93 Cp. R. Courtland, “Bias Detectives: The Researchers Striving to Make Algorithms Fair”, Nature, 558, 2018, pp. 357–360. Available at: https://doi.org/10.1038/d41586-018-05469-3 [accessed October 25, 2019].
94 Cp. H. Fry, “We Hold People With Power to Account. Why Not Algorithms?” The Guardian, September 17, 2018. Available at: https://web.archive.org/web/20190102194739/https://www.theguardian.com/commentisfree/2018/sep/17/power-algorithms-technology-regulate [accessed October 25, 2019].
95 Cp. J. Ito, “Resisting Reduction: A Manifesto”, Journal of Design and Science, 3, Cambridge, MA, MIT Press, 2017. Available at: https://doi.org/10.21428/8f7503e4 [accessed October 25, 2019].
96 Cp. J. Bridle, “Rise of the Machines: Has Technology Evolved Beyond our Control?” The Guardian, June 15, 2018. Available at: https://web.archive.org/web/20190111222310/https://www.theguardian.com/books/2018/jun/15/rise-of-the-machines-has-technology-evolved-beyond-our-control- [accessed October 25, 2019]; P.J. Singh, “AI Superpower or Client Nation?”, The Hindu, July 27, 2018. Available at: https://www.thehindu.com/opinion/op-ed/ai-superpower-or-client-nation/article24523017.ece [accessed October 25, 2019].
97 Cp. G. Agamben, The Kingdom and the Glory: For a Theological Genealogy of Economy and Government, Stanford, CA, Stanford University Press, 2011.
98 Cp. R. Keeble, Ethics for Journalists, London, Routledge, 2008.
99 Cp. M. Kieran, Media Ethics, London, Psychology Press, 1998.
100 Cp. O. Etzioni, “How to Regulate Artificial Intelligence”, The New York Times, September 1, 2017. Available at: https://www.nytimes.com/2017/09/01/opinion/artificial-intelligence-regulations-rules.html [accessed October 25, 2019]; A. Goel, “Ethics and Artificial Intelligence”, The New York Times, September 14, 2017. Available at: https://www.nytimes.com/2017/09/14/opinion/artificial-intelligence.html [accessed October 25, 2019].
101 Cp. N. Vidmar and D.T. Miller, “Social Psychological Processes Underlying Attitudes Toward Legal Punishment”, Law and Society Review, 1980, pp. 565–602.
102 Cp. M. Wenzel and T.G. Okimoto, “How Acts of Forgiveness Restore a Sense of Justice: Addressing Status/Power and Value Concerns Raised by Transgressions”, European Journal of Social Psychology, 40 (3), 2010, pp. 401–417.
103 Cp. N. Bostrom and E. Yudkowsky, “The Ethics of Artificial Intelligence”, in K. Frankish and W.M. Ramsey (eds.), The Cambridge Handbook of Artificial Intelligence, Cambridge, Cambridge University Press, 2014, pp. 316–334; Frankish and Ramsey, The Cambridge Handbook of Artificial Intelligence.
104 Cp. J. Fionda, Devils and Angels: Youth Policy and Crime, London, Hart, 2005.
105 Cp. R. Montti, “Google’s ‘Don’t Be Evil’ No Longer Prefaces Code of Conduct”, Search Engine Journal, May 20, 2018. Available at: https://www.searchenginejournal.com/google-dont-be-evil/254019/ [accessed September 22, 2018].
106 Cp. R.A. Duff, Punishment, Communication, and Community, Oxford, Oxford University Press, 2003.
107 Cp. Wenzel and Okimoto, “How Acts of Forgiveness Restore a Sense of Justice”; E. Durkheim, The Rules of Sociological Method, New Delhi, Vani Prakashan, 1960.
108 Cp. N. Christie, “Conflicts as Property”, The British Journal of Criminology, 17 (1), 1977, pp. 1–15.
109 Cp. J. Braithwaite, “Restorative Justice and a Better Future”, in E. McLaughlin and G. Hughes (eds.), Restorative Justice: Critical Issues, London, SAGE, 2003, pp. 54–67.
110 Cp. J. Braithwaite, Crime, Shame and Reintegration, Cambridge, Cambridge University Press, 1989.
111 Cp. A. Conn, “Podcast: Law and Ethics of Artificial Intelligence”, Future of Life, March 31, 2017. Available at: https://futureoflife.org/2017/03/31/podcast-law-ethics-artificial-intelligence/ [accessed September 22, 2018].
112 Cp. A. Rawnsley, “Madeleine Albright: ‘The Things that are Happening are Genuinely, Seriously Bad’”, The Guardian, July 8, 2018. Available at: https://web.archive.org/web/20190106193657/https://www.theguardian.com/books/2018/jul/08/madeleine-albright-fascism-is-not-an-ideology-its-a-method-interview-fascism-a-warning [accessed October 25, 2019].
113 Cp. D. Haraway, “A Cyborg Manifesto”, Socialist Review, 15 (2), 1985. Available at: http://www.stanford.edu/dept/HPS/Haraway/CyborgManifesto.html [accessed October 25, 2019]; C. Thompson, “The Cyborg Advantage”, Wired, March 22, 2010. Available at: https://www.wired.com/2010/03/st-thompson-cyborgs/ [accessed October 25, 2019].
114 Cp. J. Hipp et al., “Computer Aided Diagnostic Tools Aim to Empower Rather than Replace Pathologists: Lessons Learned from Computational Chess”, Journal of Pathology Informatics, 2, 2011. Available at: https://doi.org/10.4103/2153-3539.82050 [accessed October 25, 2019].
115 Cp. J. Baggini, “Memo to Those Seeking to Live for Ever: Eternal Life Would be Deathly Dull”, The Guardian, July 8, 2018. Available at: https://web.archive.org/web/20181225111455/https://www.theguardian.com/commentisfree/2018/jul/08/live-for-ever-eternal-life-deathly-dull-immortality [accessed October 25, 2019].

Adnan Hadzi is currently working as a resident researcher at the University of Malta. Adnan has been a regular at Deckspace Media Lab for the last decade, a period over which he has developed his research at Goldsmiths, University of London, based on his work with Deptford.TV / Deckspace.TV. It is through Free and Open Source Software and technologies that this research has a social impact. Adnan is currently a participant researcher in the MAZI/CreekNet research collaboration with the boattr project.

Denis Roio, better known by the hacker name Jaromil, is CTO and co-founder of Dyne.org, a software house and think & do tank based in Amsterdam that develops free and open source software with a strong focus on peer-to-peer networks, social values, cryptography, disintermediation and sustainability. Jaromil holds a Ph.D. on “Algorithmic Sovereignty” and received the Vilém Flusser Award at transmediale (Berlin, 2009) while leading the R&D department of the Netherlands Media Art Institute (Montevideo/TBA) for six years. He is the leading technical architect of DECODE, an EU-funded project on blockchain technologies and data ownership, involving pilots in cooperation with the municipalities of Barcelona and Amsterdam.