

Part I Conceptualizing the Digital Human

2 Platform City People

In the course of my research on what I now think of as the transition from smart cities to platform cities, I almost immediately began to ask about the characteristics of the proposed platform citizens, the humans who will live in these places. There has been writing on “smart citizens,” and much of it takes a normative stance, even when critical. On the one hand, many such pieces argue for the displacement or supplementing of the concept of the “smart city” with that of the “smart citizen,” in other words as a form of empowerment or bottom-up development, or even as a way of ameliorating the potentially negative and technocentric effects of smart city development (see e.g. Cardullo and Kitchin 2019; Powell 2021, for critical takes on this genre). On the other hand, some approaches to smart citizenship have taken an integrative approach, considering what people, suitably educated, can add to the smart city, in other words how they can become part of this vision of smart urbanism or smart governance more broadly (e.g. Noveck 2015).

However, the aim of this preliminary and somewhat experimental intervention is not normative but at once empirical and theoretical. I want to concentrate on the way in which the proposed inhabitants of platform cities are imagined by the developers and promoters of these cities, in other words how the developers understand the “nature” of the inhabitants of their specific kind of neoliberal smart urban development. I show that the envisaged inhabitants of such platform cities are a specific kind of human being, not humanity in general but platform city people, who combine a technologically enabled class and political identity (property owning, entrepreneurial, libertarian) with a generic environmental “goodness” that verges on an imagination of transhumanist speciation (data-driven, surveillant, robotic).

2.1 The Platform City

The concept of the platform city is my own and is not (yet) a general one. It is not the primary purpose of this chapter to describe platform cities in general; however, I do need to provide a brief outline of the concept and of the broader argument here. In the planetary age, emerging conjunctions of technology, surveillance, security, and urbanism are networking ordinary objects and infrastructures via the “Internet of Things,” “… a global infrastructure for the Information Society … interconnecting things based on … interoperable information and communication technologies” (ITU 2012). Up until this point, the primary urban instantiation has been termed the “smart city” (Coletta et al. 2018; Hall et al. 2000; Marvin et al. 2015). The smart city is an “urban assemblage” (Venn 2006), characterized by “sociotechnical imaginaries” (Jasanoff and Kim 2015) of pervasive and seamless wireless networks and distributed sensor platforms, from video surveillance to meteorological stations, monitoring flows from sewerage to traffic to criminal activities and providing information in real time or in anticipation of risks. Surveillance and security are uneasy components of these visions, but smart cities are inevitably disciplinary structures (Vanolo 2014) or surveillance cities (Murakami Wood 2015), because technocentric management of urban flows structurally requires data about everything, including people, and simultaneously requires everything, including people, to also be data (Mattern 2021; cf. Kitchin 2014).

“Actually existing smart cities” (Shelton et al. 2015) have often been unimpressive and radically incomplete (Murakami Wood and Mackinnon 2019). An early project, Rio de Janeiro’s IBM-sponsored Smart City control room, was lauded by then Mayor Eduardo Paes as giving him the ability to manage the city from anywhere (Murakami Wood 2013), but it has had little long-term impact on urban security in Rio. Studies of India’s “hundred smart cities” policy have shown that the ambitious scheme is creating very different and not necessarily compatible official and subaltern imaginations of urban futures and concepts of citizenship. In other words, while the official aims may always have been overpromised, citizens themselves have attempted to harness the opportunities offered to generate their own realities (Datta 2018, 2019).

However, before and during the now almost 20-year history of smart cities, there have been pre-existing and parallel histories of other urban forms: Freeports, Charter Cities, Special Investment Zones or Enterprise Zones; leisure/tourist cities like Las Vegas or Dubai; exclusive cities from gated communities to massive projects like Brazil’s Alphavilles (see e.g. Caldeira 2001; Davis and Monk 2008; Graham and Marvin 2001); through to the whole recent history of police and military urbanism (Graham 2011). For precursors, one can look to Malaysia’s Multimedia Super Corridor, originating in the 1990s (Bunnell 2002) and consisting of the twin cities of Putrajaya (administration) and Cyberjaya (business); to the much-cited South Korean business district of Songdo (Halpern et al. 2013); and, of course, to Singapore, a city-state that has come to stand for a great deal in this new model (Calder 2016; Mahizhnan 1999), probably much more than its actual history can bear (Walton 2019). The more educated designers also look to the modernist tabula rasa ideology of Le Corbusier’s Plan Voisin or Ville Radieuse, or Costa and Niemeyer’s Brasilia, but stripped of the latter’s mid-century socialism, which is portrayed now as a naïve post-war dream of a common humanity, lost following the neoliberal turn of the 1970s, which admitted no option but free-market capitalism (see Slobodian 2018).

In various ways these other histories have intersected with smart cities and are often considered simply another aspect of the same phenomenon. What I am examining is one such convergence: billion-dollar projects for new cities or urban neighbourhoods that demonstrate characteristics of several of these ideal urban forms. These hybrids are corporate-oriented and neoliberal whether or not they are created or managed by corporations directly. These are what I term platform cities, because they rest on a foundation of a specific kind of neoliberal capitalism, which has important implications for the place of democracy in relation to the economy, and to which I shall return later.

There are significant differences between these new platform cities, which I explore in other pieces, but what is equally striking are the shared material and ideological elements. Briefly, the first of these is a noticeable, sometimes advertised, degree of separation from the surrounding polity, varying from at least some kind of local economic, social, cultural, or political autonomy through to a full-blown city-state model. Singapore’s “smart nation” and the leisure/investment metropolis of Dubai are of course the inspirations here, but there are many older examples, and even fictional ones, cited by proponents. Singapore is a particular touchstone for this latest wave of platform city developers, mainly for its politics rather than its technological aspects: the concept of an independent city-state appeals at least partly because it is free from the supposedly constrictive embrace of the extended territorial nation-state. However, it is also an example of the second shared element: a highly neoliberal conception of governance in which democracy is de-emphasized or even abandoned in favour of financial freedom and property rights, while remaining visibly multicultural. The third is the dependence upon almost total data extractivism and ubiquitous surveillance, underpinned by some kind of Artificial Intelligence (AI). The fourth and final shared element is a bland neoliberal globalist aesthetics, which at its most extreme verges on a neo-colonial presentation. This consists of smooth computer-generated visualizations, smiling putative residents (often white or racially ambiguous), starchitect master planners, and advisory boards consisting of the “usual suspects” from the transnational ruling class in law, urban planning, finance, and consultancy.

2.2 Who Are the Platform City People?

That final shared element leads to an obvious question about the implied “nature” of the inhabitants, in other words the identity and subjectivity not of “actually existing smart citizens” (Shelton and Lodato 2019) but of the proposed platform city people. For the remainder of this chapter, I will concentrate on the shared characteristics of the envisaged new inhabitants of these platform cities, platform city people, based on five cases, a corpus whose publicity documents, plans, and charters or constitutions (where appropriate) I have examined and analysed as part of a preliminary study for a much larger project examining platform cities in an age of planetary crisis and surveillance. I have not given specific references for each of these quotations or paraphrases here. These projects are at differing stages: some remain on the drawing board, some are in development or being built, some have experienced setbacks, and some have failed but nevertheless remain as inspirations for further smart city development.

  1. Sidewalk Labs’ failed Toronto Quayside development;

  2. Nevada’s at least temporarily derailed “Innovation Zones” plan;

  3. The Próspera Platform, still proposed for development, in Honduras;

  4. Saudi Arabia’s massive in-construction NEOM project; and

  5. Japan’s Super City policy, which plans multiple new AI-driven urban developments.

For this chapter, I deployed a simple thematic analysis to uncover the power relations behind talk and text, the genealogical roots of clusters of meaning, and how they have developed over time. This section is framed around declarative sentences that begin “Platform city people are …,” each of which quotes one or more terms used in at least one of the documents produced by the proposed cities or policies that I have been examining. Each declarative sentence is followed by an explanation that develops, in brief, some aspect of the broader discursive formation of which the particular discourse is part.

2.2.1 Platform City People Are Entrepreneurs

Within a context of “test-bed urbanism” (Halpern et al. 2013), platform city people are portrayed as innovative risk-takers. Developing Adam Smith, Michel Foucault (2008) described the subjectivity produced by neoliberalism as homo oeconomicus, a humanness characterized not by intelligence or wisdom but by market relationships. The platform human is the cybernetic upgrade of homo oeconomicus, perfectly adapted for an intensified neoliberal capitalist mode of production filtered through technology-based innovation or disruption. NEOM is explicit that “[r]esidents of NEOM will … embrace a culture of exploration [and] risk-taking …” (NEOM 2020). The platform human is not a fixed subject but a relentless innovator and experimenter: as Eric Schmidt, former CEO of Alphabet, asked us to imagine prior to the Sidewalk Toronto experiment, “all the things you could do if someone would just give us a city and put us in charge” (Williams 2017). But the city and its inhabitants are also themselves the subjects of continuous experiment: part of what it means to be a risk-taker in this context is to take on risk to oneself and to absorb risk for the platform. And, as we have seen with Sam Bankman-Fried and FTX (Roth 2022), large-scale financial failure is always imminent and precipitous in platform capitalist culture.

2.2.2 Platform City People Are Free

As Slobodian (2018) shows, in the dominant “Geneva school” of neoliberalism, nation-states have always been seen as a hindrance to the creation of a true world economy, and high (or even any) taxation and regulation were presented as preventing innovation. The platform city has adopted the idea that the most innovative spaces have always been cities, and that free and independent cities are the best. While none of the examples I have examined makes explicit reference to Paul Romer’s “Charter City” model (Romer 2010), most do have some kind of charter or principles that set out their independence from the surrounding polity, particularly Próspera and the proposed Nevada Innovation Zones policy, but also NEOM, which claims that it will have “a progressive law compatible with international norms and conducive to economic growth” (NEOM 2020). It is a particular and peculiar Randian “freedom from,” which enables the platform human to remove themselves from responsibility and consequences for those unnamed others outside, who are clearly seen as lesser humans, and indeed “freedom from government.” It is notable in this context that the Próspera Platform also includes, on its “advisory team,” Oliver Porter, the founder of Sandy Springs, Georgia, USA, a city notable for being the first to incorporate under a public–private partnership model and the closest extant US city to a charter city (Klein 2007).

2.2.3 Platform City People Are Leaders

NEOM’s (2020) website claims that “[a]s a hub for innovation, entrepreneurs, business leaders and companies will come to research, incubate and commercialize new technologies and enterprises in ground-breaking ways.” The Próspera website (2022) claims that the city “enables entrepreneurs to solve problems structurally and responsibly” and, according to its charter, in the first instance only people with “significant business, management or leadership experience” are eligible to stand for selection to the Council that will run the city. That this conception of leadership serves to embed existing power, prejudices, and inequalities is not a bug; it is a feature. It updates the elitist utopian model of Plato’s Republic, replacing philosopher-kings with entrepreneur-kings or, as we shall see, proprietor-kings.

2.2.4 Platform City People Are Tech Bros

Platform cities are founded in techno-determinism: whatever is wrong now, there is a clear path to a better, more prosperous future through technology. While, in many cases, there is no specific requirement as to the profession of platform city people, technology is almost always clearly implied. When Dan Doctoroff, CEO of Sidewalk Toronto, the development that would be built “from the Internet, up,” addressed the question of who would live in the new neighbourhood and said with a smirk that “they won’t all be tech bros,” his implication was clearly the opposite: that the target of this development was indeed young men in the tech sector (Murakami Wood 2020, 96). Nevada’s Innovation Zones policy attempted no such deception: the failed bill specified the exact kinds of corporations that would be allowed to create an Innovation Zone, those working in blockchain, autonomous technology, IoT, robotics, AI, wireless, biometrics, and renewables. The last area seems to have been added to claim some small amount of eco-credibility in the face of mounting reports of the unsustainability of blockchain and cryptocurrencies, but the important point is that it is still a technology-driven sector.

2.2.5 Platform City People Are Data-driven

Platform city people will find strength through data. Their bodies will be maintained with vigorous and carefully calibrated exercise, assessed through wearable technologies. They will sleep exactly the recommended amount and will wake at precisely the optimal time every day. In Japan’s Super City proposals, all of this data from multiple bodily and environmental sensors will be collected for medical and unspecified “improvement” purposes. NEOM’s Head of Technology and Digital, Joseph Bradley, argues that the city will collect and use “90% of available data” for the benefit of its inhabitants. This benefit will come, as we shall see, from the analysis of all that data by AI.

2.2.6 Platform City People Are Frictionless

As if Gilles Deleuze’s “Postscript on the Societies of Control” (1992) were an instruction manual, platform humans will operate in all ways smoothed, modulated, and unhindered. Nothing will slow down their movements or transactions. Nothing will impede the flow of goods, ideas, or finance. Japan’s Super Cities are envisaged as entirely cashless, running on some kind of blockchain-based virtual currency, although it is unclear exactly what or how. But the spice must flow. Próspera offers virtual citizenship and the ability to access the platform from anywhere in the world: one need not be in Próspera to be part of Próspera. Sidewalk Toronto tried to combine this virtual friction-free flow with material seamlessness, promising total convenience and the integration of transaction and delivery with all services as literal infrastructure: a network of underground tunnels in which a ceaseless traffic of AI-driven autonomous delivery vehicles would ensure that the inhabitants of the rabbit-hutch-like, reduced-size apartments would always get what they wanted, ideally (once Google’s predictive marketing analytics were functioning perfectly) before they even knew they needed it.

2.2.7 Platform City People Are Private

Ironically, the platform human is totally known both to the AI-driven systems that harvest, sort, and sift their data and to those others they choose to be known to, but private and indeed unknowable to the vast majority of ordinary humanity and to the authorities and governments who would wish to tax or regulate them. Their dealings are closed, their tax records sealed, their transactions in offshore banks; indeed, the entire geopolitical point of any of these developments is for them to be “offshore.” That kind of privacy is very important to the platform city person. In other words, drawing on Foucault (2007), this is an inclusive, enfolding biopolitical governmentality – for those inside.

2.2.8 Platform City People Are Safe

In ways both implicit and explicit, total safety is promised by all these platform cities. Clearly the “risk-taking” that is considered so essential to the personality of the platform human does not extend to their own lives and well-being. Platform city people are happy to be checked out, examined, identified, evaluated, and cleared. Platform city people are happy to be under surveillance “for their own good,” and suspicious of those who resist surveillance. The platform city person is not a threat, a terrorist, a criminal, or even anti-social. Platform city people raise no flags; they have no suspicious data-points. Their internet search history is impeccable. Platform city people are smooth, bland, and unthreatening. Their friends and family are just like them. And, despite the libertarian rhetoric, “social credit”-style “assessment with consequences” is hinted at in several schemes and is overt in Japan’s Super City proposals, where good deeds will be rewarded with payment in an internal blockchain-based currency. This is what Chris Gilliard and David Golumbia (2021) call “luxury surveillance” – inside there are only carrots, no sticks.

2.2.9 Platform City People Are Secure

In most of the plans, security is not overt: the multiple control rooms, drone swarms, and special new private security force that NEOM will have do not appear in its promotional literature; rather, it is in security industry publications that one finds references to Mohammed Bin Salman’s billion-dollar investment in security for the linear city (Murakami Wood 2024). It was only implied that Nevada’s Innovation Zones would have had a Sheriff’s Department, since it is by virtue of being politically a “county” that an Innovation Zone has control over local police (Blockchain LLC 2021) – and a Sheriff’s Department in Nevada is the entirety of municipal police, not some lesser rural form.

But, in some cases, particularly in the case of Próspera, security is a named function of the city in Article X of its charter, and here there appear to be no limits in the charter to the defensive rights of the platform city. It can have police, security, and intelligence services and perhaps even an army; and, despite being inside Honduras, it can even request security assistance from other external nation-states. This is unusual not in the depth of the possible security that platform city people will enjoy but in the fact that it is so overt. Ostensibly “small government” platform cities mask increasingly distributed and networked technologies of security and governance. All platform cities are highly securitized in their conception, financially and socially exclusionary, implicitly racialized/eugenic in some cases, metaphorically and legally, if not physically, walled and gated. This is the other side of the interior biopolitical governmentality. For those people outside, platform city governmentality is pure necropolitics (Mbembe 2020): their security/policing objects are not the safe platform humans but those risky external and excluded others.

2.2.10 Platform City People Are Colonists

The platform city is portrayed as an explicit island of safety in an implicit world of chaos. Platform city people are portrayed as brave explorers in a twenty-first-century version of European expansion. In the case of Próspera, the literal island of Roatan lies within one of the most violent nations in the world, Honduras. However, the natives are friendly! In fact, the local inhabitants will be “integrated,” guaranteed jobs at 25 per cent above the local minimum wage, but they will not be residents, even as the city is built on their lands. It is clear that they are not happy about this, but, not being platform city people, their views are discounted (MacDougall and Simpson 2021). This builds on the model operated by Singapore, with its thousands of Malay and other day-labourers who cross the international border morning and night to support this marvel of smart capitalism but who are not allowed to live there; or the armies of temporary workers who constitute 90 per cent of the population of Dubai or Qatar but are entirely outwith their polities.

2.2.11 Platform City People Are Property Owners

A key element of the contract that makes a platform human is their investment in the platform city. The platform human owns property and, given the Lockean worldview that underlies this system, it is this ownership that grants them rights. In contrast, the “locals” will be “willingly” incorporated (Próspera) or removed, like the nomadic desert people who currently inhabit the area proposed to become NEOM (Whitson and Alaoudh 2020), to be imprisoned or simply executed (AFP 2023). It is a return of the colonial doctrine of terra nullius (Fitzmaurice 2007) and the tabula rasa; or, where this doctrine has already been applied with extreme prejudice, as in Nevada, the land owned by corporations seeking to become Innovation Zones was simply assumed to be “uninhabited” and owned entirely by the corporation. Indigenous people are already assumed to be extinct (cf. King 2013).

2.2.12 Platform City People Are Multicultural

The language of “multiculturalism” and “diversity” is ubiquitous in the brochures and websites of platform cities. But, like the swordsman Inigo Montoya’s much-memed remark from The Princess Bride, it does not mean what they think it means. Multiculturalism in platform cities is coded language. It means whiter, less brown, less black, less of whatever the local surrounding population consists of, and safely, blandly international, educated, schooled in, aspiring to, and representing whiteness. It is not so much that platform city people are necessarily visibly white, but rather that the platform human strives toward whiteness as a “habit” of existence or a normative condition of being (cf. Ahmed 2007). They are Kees van der Pijl’s new “transnational ruling class” (Van der Pijl 2005) and they “embody an international ethos” (NEOM 2020): educated, mobile, groomed, comfortably multilingual, and expecting the world they inhabit to conform to their expectations.

2.2.13 Platform City People Are Designed

In the brochure-websites of Próspera and NEOM, the future platform human lives in spaces that conjure the images of the technologies they develop: their preferred environments are created by the best architects, “starchitects” (Knox 2011) like Norman Foster (one of the original advisors to NEOM) and Zaha Hadid Architects (the official architects for Próspera), who will generate sleek, minimal, and weightless living spaces, composed of glass, bamboo, and natural wood in neutral and calming colours, materializing the promise of 1990s techno-utopian hype like Living on Thin Air (Leadbeater 2000). It should also be noted that clutter and visual noise confuse surveillance cameras and biometric recognition technologies (for more on the affordances of modern architecture for surveillance, see Steiner and Veel 2011).

2.2.14 Platform City People Are Sustainable

Platform city people are carbon-neutral and live in communities designed to maximize technological innovation and to work seamlessly and sustainably. They like John Kerry’s May 2021 statement that 50 per cent of reductions in greenhouse gases will come from future technologies, because these are the technologies they are building, and they trust that they are the people who, Kerry argued, would not have to give up their quality of life to stop the climate crisis (see Murray 2021). They know they are not responsible for the unsustainable practices of lesser people. However, like multiculturalism, sustainability is another code and part of an aesthetic politics of marketing. Platform cities can be seen as a form of “becoming war” (Bousquet et al. 2020): a mode of geopolitics and of emerging conflict. There is an unshakeable belief that the platform economy is a clean economy, but its environmental effects are externalized to distant places and to the future, as research on the energy use of server farms and bitcoin mines has shown. Thus, “sustainability” is another marker of inclusion and exclusion between the clean, sustainable inside and the environmentally degraded, unliveable outside, and this division between islands of clean, perfect cities with clean, perfect humans and the “dumb, rude and dirty” old cities (SAP 2013) outside will become a key characteristic of the politics of the Anthropocene, if platform cities are allowed to proliferate.

2.2.15 Platform City People Are “All Watched over by Machines of Loving Grace”

Artificial Intelligence (AI) is a constant in platform city proposals. It was at the heart of what Google was proposing to do with all that data in Sidewalk Toronto, and it is the bet on which Google is staking its entire future (Eliot and Murakami Wood 2022). NEOM will be a “cognitive city” that will make “everyday life seamless through invisible AI-enabled infrastructure that continuously learns and predicts ways to make life easier for residents and businesses.” Japan’s Super Cities will be where “artificial intelligence, big data and other technologies are utilized to resolve social problems” (National Strategic Special Zones 2020). The language of “social physics,” and the idea, highly amenable to technocracy, that social issues will be solved simply by collecting and analysing data rather than through qualitative, participatory democratic deliberation, are everywhere here – and Carlo Ratti, one of the key proponents of such thinking in quantified urbanism, was a member of the original advisory board of NEOM.

2.2.16 Platform City People Are a New Species

With their technologically integrated bodies and lives, platform city people are almost the beginning of the transhumanist speciation of humanity, which a decade ago was argued to be a possible result of then-current trajectories in tech development (Stephan et al. 2012). They will separate themselves from less-human beings; they are better than other human beings. They feel compassion for those who are being left behind, but evolution is inevitable. It was easy to be sceptical of such claims in 2012, despite the longstanding warnings of science-fictional portrayals of the same outcome, from H. G. Wells’ “Morlocks” and “Eloi” in The Time Machine (1895) to Paul J. McAuley’s “Golden” in Four Hundred Billion Stars (1988) and its sequels. These basic building blocks for transhumanism should remind us of the longstanding connection between fascism and the celebration of the machine and of the speed of technological transformation, which caused many Italian Futurists to join Mussolini in the 1920s (Berghaus 1996). Thus it was striking how, just a few years after 2012, one could see Israel rebranding itself as the “Start-up Nation” (Footnote 1) while its Prime Minister, Netanyahu, almost simultaneously argued that the strong and adaptable survive and the weak are destined to be erased (Footnote 2). David Golumbia (2009, 2016) has made similarly convincing observations about the right-wing politics of Bitcoin, arguing indeed that such is the ultimate “cultural logic of computation” more generally. With the emergence of the so-called TESCREAL (Footnote 3) cluster, an increasingly coherent ideological constellation embraced by platform capitalist CEOs like Elon Musk, we see how neo-eugenicism, combined with technological determinism, neoliberal economics, and right-libertarian social policy, would seem to provide the up-front or retrospective justification for many more authoritarian platform city initiatives.

2.2.17 Platform City People Could Be Robots

For platform city people, it is easier to imagine robot rights than to acknowledge human rights for the workers who support their exclusive lifestyle, or the rights of other living beings. The consultants’ report for the NEOM plan included the idea that 50 per cent of the population of the proposed city would be robots (Scheck et al. 2019), building on a rather curious fascination with robots evidenced by the granting of Saudi citizenship to “Sophia,” a rather limited conversation bot, when neither Saudi women nor immigrants have full citizenship rights in the kingdom (Hart 2018). Again, this links into the TESCREAL constellation, with the philosopher Nick Bostrom’s “longtermism” specifically advocating policy directions based on the alleged ethical imperative of maximizing the supposed future trillions of “humans” living as uploaded consciousnesses in machines far beyond our solar system.

2.3 Conclusion

Platform city people are the proposed inhabitants of a new world: a clean, safe, sustainable, technologically advanced, and inventive world of minimal government and maximum empowerment and support for entrepreneurialism and the enjoyment of ownership. The problem is that it is a niche world, an archipelago of enclaves that constitutes only a tiny proportion of a planet in crisis, and its biopolitical exclusivity and violently exclusionary necropolitical character are evidence not of a desire to deal with the crisis itself but rather of a desire to engage in what Mike Davis memorably described as “padding the bunker” (Davis 1999): retreating into the childish denial of an unreal, security-enabled fantasy. This is the California ideology (Barbrook and Cameron 1996; Turner 2010) taken to even greater extremes. This new California ideology (Murakami Wood 2024) is most clearly expressed in the eugenic TESCREALity of Nick Bostrom and Elon Musk, who would see such developments as a form of lifeboat for those most worth saving, the basis for what they regard as the ultimate future of humanity. Their stance is that we should abandon any hopes for real material developments that would benefit the vast majority of actually existing human beings and those in the foreseeable future, like social and environmental justice, if these imperil their imaginary science-fictional future universe. The point here is not even to consider what “we” might lose in this transition, but rather to draw attention to this fracturing of any possibility of a collective “we” as it relates to humanity in the present, and to the concentration on a winnowed, broadly white, supreme, selective “elite.” Platform city people are, therefore, emblematic of a kind of imaginary of the human eco-socio-technological future that anyone interested in an equitable and just world should oppose as vigorously as possible.

3 Robots, Humans, and Their Vulnerabilities

3.1 Where Do We Humans Go?

In digital societies, systems are becoming ever more powerful, and algorithms ever more complex, efficient, and capable of learning. More and more human activities are being taken over by computers, robots, and AI, and these technologies are becoming ever more deeply and extensively integrated into our social practices. It has become impossible to see and understand people, relationships, and social structures independently of these technologies. Especially over the last two years, we have read almost every day in the newspapers, including and especially the serious ones, that AI will lead to the elimination of humans; that the point where AI is more intelligent than we are is approaching. That this has immediate consequences for human life is evident, but it is not just about individual aspects of human life. In a recent article, Acquisti et al. summarize their argument as follows: “Technologies, interfaces, and market forces can all influence human behavior. But probably, and hopefully, they cannot alter human nature” (Acquisti et al. 2021, 202, emphasis mine; see, for the following, Roessler 2021a, 2021b).

What I am interested in here is how we should spell out this claim: what does it mean that we hope technologies do not change our human nature, and what would this human nature be? Or, put differently, what would it mean to change human nature through technologies, and why would it be bad to do so? There has been quite some discussion of this and similar problems in the literature, and the most helpful and intriguing contribution is, to my mind, Frischmann and Selinger’s (2018) Re-Engineering Humanity. Selinger and Frischmann write in an article in The Guardian newspaper (Selinger and Frischmann 2015, emphasis mine): “Alan Turing wondered if machines could be human-like, and recently that topic’s been getting a lot of attention. But perhaps a more important question is a reverse Turing test: can humans become machine-like and pervasively programmable?” This latter question is the topic of their book. In the introduction, they write:

As we collectively race down the path toward smart techno-social systems that efficiently govern more and more of our lives, we run the risk of losing ourselves along the way. We risk becoming increasingly predictable, and, worse, programmable, like mere cogs in a machine.

(Frischmann and Selinger 2018, 1, emphasis mine)

To quote one last passage, this time by Pasquale: “The future that [the robot] Adam imagines … reduces the question of human perfectibility to one of transparency and predictability. But disputes and reflection on how to lead life well are part of the essence of being human” (Pasquale 2020, 209, emphasis mine).

In this picture, we have Turing on the one hand, trying to build a computer which could be mistaken for a human: we need to work on our technological counterpart to make it as good as a human. On the other hand, we have Frischmann, Selinger, and Pasquale, who show us that people – humans – are becoming more and more similar to machines: they contend that we are working on ourselves in order to become ever more perfectly technologically human. In short, we try to improve humans technically so that they become similar to robots; and we try to make robots that become indistinguishably similar to a certain image of the perfect human. Both sides assume – intuitively plausibly – that we know what a “human being” is and where, at least roughly, the limits lie between genuinely being human and technology.

While there is no uncontested concept of human nature, it does seem plausible to argue that human nature is not something purely accidental, historically completely variable, and relative. It is possible to distinguish characteristics which express what is meant by being human, even though these expressions differ historically and culturally. Such a concept or idea of human nature could give us critical guidance for analyzing digital societies without risking calling “human” whatever humans (learn to) do under digitally changing conditions. This concept of human nature can clearly not be reduced to its biological essence: if that were the case, we would not be having this discussion in the first place. The question is what it means to have this very special sort of (biological) human nature and how we would best analyze it.

To engage adequately with this rather complex question, I suggest approaching it through a novel whose very topic is the relation between humans and machines: Ian McEwan’s (2019) Machines Like Me. I want to illustrate the problematic by taking up this different perspective on the technological world because, in this novel, McEwan describes the relationship between a human being and an almost-human being: an extraordinarily well-constructed, sensitive, and intelligent robot whose name is Adam. My hope is that, by reading and interpreting Machines Like Me, we can learn something about how we should think about human beings. Incidentally, it would also be possible to interpret other novels, for example Klara and the Sun by Kazuo Ishiguro (2021) or, to go a little further back, Mary Shelley’s Frankenstein (1831) or H. G. Wells’s The War of the Worlds (1895–1897), but my question would remain the same: what does the image of the robot, of the monster, of the aliens tell us about the idea and characterization of the human being?

The characteristics McEwan describes, I suggest, are generalizable, and I will show during this chapter that they can help us understand the meaning of being human, especially in its relation to the technological world. Furthermore, I will very briefly criticize attempts to transcend this notion of the human being as a finite and vulnerable being, and also attempts to imitate and replace “soft” human characteristics, such as emotions and affects, in robots through technology (HRI, Human–Robot Interaction technologies). Placing the human being in relation to robots and discussing the extent to which social robots can replace humans helps us understand, or so I will argue, which beings we want to and do refer to as human. I will also argue that it is helpful to refer to the phenomenon of the Uncanny Valley to make sense of a clear line of demarcation between robots and humans – this will in any case be my argument in the end.

3.2 Ian McEwan on Robots and Humans

Ian McEwan’s (2019) novel Machines Like Me is set in a rather different, alternative 1982: the Falklands War has been lost, the miners’ strike is still on, unemployment is rising by the day, John Lennon as well as John F. Kennedy are still alive – and, above all, so is Alan Turing (Footnote 1). Turing has been working successfully on AI and the construction of a robot, and the first set of these robots is on sale: 12 Adams and 13 Eves, as they have been subtly called. The protagonist, Charles “Charlie” Friend, spends the little inheritance he received after the death of his mother on buying one of them and, since he is too late for an Eve, he gets an Adam. The plot of the novel has different threads: there is the relationship with Miranda, Charlie’s upstairs neighbour, with whom he fell in love long ago and whom he starts dating. Miranda, after some time, has an affair with Adam; furthermore, she herself has a difficult personal history, which she lies about and which is only revealed little by little, leading to the terrible unfolding of events. This thread in the complicated plot is important because it forces Miranda and Charles to lie – and after Adam has found out about this piece of Miranda’s past, he intends to inform the police since, as a robot, he can’t lie. He must be, and wants to be, relentlessly upright. Therefore, Charlie kills Adam. Also, rather uncannily, in the last third of the novel an increasing number of suicides by some of the Adams and Eves are reported. But the main plot is simple: Charlie buys Adam, programs him together with Miranda, develops a rather friendly relationship with him, and in the end kills him.

Let me emphasize just some points here: first, the idea and the process of programming Adam. With the robots comes a 470-page online handbook on how to program them, but Charlie writes:

I couldn’t think of myself as Adam’s “user”: I’d assumed there was nothing to learn about him that he could not teach me himself. But the manual in my hands had fallen open at chapter 14. Here, the English was plain: preferences; personality parameters. Then a set of headings – Agreeableness. Extraversion. Openness to experience. Consciousness. Emotional stability. … Glancing at the next page I saw that I was supposed to select various settings on a scale of one to ten.

Charlie feels uncomfortable choosing the settings since he is very aware of their reductive character. And it is not only the reductive character of the program; it is also the predictability that comes with it, which goes against our intuition that human beings – although perfectly able to follow rules and rationality – can also be unpredictable, in the sense of being unexpectedly creative when dealing with rules and given programs. Interestingly, Charlie has done a degree in anthropology at college. Why anthropology? Because the subtle (or sometimes not so subtle) subtext is the question of the Anthropos, the borderline between what is and what is not human.

A second point concerns the problem of self-knowledge and decision-making, with the character of Turing declaring at the end of the novel:

I think the A-and-E’s [the Adams and Eves] were ill equipped to understand human decision-making, the way our principles are warped in the force field of our emotions, our peculiar biases, our self-delusion and all the other well-charted defects of our cognition. Soon these Adams and Eves were in despair. They couldn’t understand us because we couldn’t understand ourselves. Their learning programs couldn’t accommodate us. If we didn’t know our own minds, how could we design theirs and expect them to be happy alongside us?

(McEwan 2019, 299)

Emotions, however, often guide humans’ actions, for better or worse. And humans – in McEwan (2019) and in general – see themselves as being defined not only by rationality but also by sentimentality. Furthermore, the suicides of the Adams and Eves exhibit something like an uncanny zone: isn’t it specifically human to kill oneself, to set an end to one’s life? What if there is no clear-cut borderline between humans and robots?

A third intriguing problem I want to point out is the problem of lying, as Turing explains to Charlie:

Machine learning can only take you so far. You’ll need to give this mind some rules to live by. How about a prohibition against lying? … But social life teems with harmless or even helpful untruths. How do we separate them out? Who’s going to write the algorithm for the little white lies that spare the blushes of a friend? … We don’t yet know how to teach machines to lie.

(McEwan 2019, 303)

Lying, we can say, is also a form of creatively, self-reflectively following rules. And lastly, but centrally, I want to emphasize the robotic corporeality of Adam and the relationship between Adam and Miranda, since the fact that Adam has a deceptively human body is a problematic subtext throughout the book. After having slept with Adam, Miranda insists that he is no more than a vibrator in human-like form, that he is “a fucking machine” (McEwan 2019, 92). She points out that she has a purely instrumental relationship to Adam, not a relation of mutual respect. Charlie’s take on the situation, however, is rather different: “‘Listen’, I said, ‘if he looks and sounds and behaves like a person, then as far as I’m concerned, that’s what he is’” (McEwan 2019, 94). But Charlie, as we will learn later in the novel, does not really mean this. He is still convinced that there is a categorical difference between Adam and himself, although he states the opposite, out of jealousy, out of defiance. He is vulnerable, not only in the bodily sense but also in an emotional-mental sense. And, again, his contending that Adam is a person leads us directly into the uncanny field between humans and robots. Where do we draw the line?

All these themes demonstrate not only a human characteristic but at the same time the sociality of human existence: the themes are, each in their own way, meaningful because humans always live in relationships with other humans. And this seems to be vital for understanding the characteristic differences between Charles and Adam, between human beings and robots, and therefore for understanding the essential characteristics of human beings. Embodiment/corporeality, finiteness, vulnerability, and self-knowledge, together with the (subtle, competent, possibly deviant) use of symbols, are among the classic characteristics of the human being. What is at issue in the novel is the messiness of being human, being thrown into the world without a precise ‘program’, and the ever-present possibility of being unable to cope with that world. This messiness expresses itself in emotional as well as bodily vulnerability, something which Adam isn’t conscious of or worried about as he should be – and would be – if he were human.

3.3 Characterizing the Human

It is true that McEwan also puzzles his readers, as we saw, with the fact that some Adams and Eves commit suicide. But this too is ultimately an indication of the meaning of “being human”: the reader’s perception of these suicides is confused and unsure, because suicide is considered a human act par excellence. It is an expression of self-knowledge (or an attempt at it), of autonomy, and precisely not an act following a program – whereas here, in Machines Like Me (McEwan 2019), it is a consequence of a program error.

Earlier we saw that corporeality, finiteness, vulnerability, and the self-reflective use of language are among the classic characteristics of the human being. Based on the McEwan characteristics, and especially with the help of the concept of human vulnerability, I want to analyze in the following how the concept of the human being can best be understood. Vulnerability can serve as a focus for the other elements we found in McEwan. Mackenzie et al., in their volume on vulnerability, argue:

Human life is conditioned by vulnerability. By virtue of our embodiment, human beings have bodily and material needs; are exposed to physical illness, injury, disability, and death; and depend on the care of others for extended periods during our lives. As social and affective beings we are emotionally and psychologically vulnerable to others in myriad ways: to loss and grief; to neglect, abuse, and lack of care; to rejection, ostracism, and humiliation.

Human beings are vulnerable as physical beings, as affective beings, as social beings, and as self-reflective beings, and this human vulnerability cannot be reduced to anything biological, although it cannot be separated from the biological either. Nor can human nature be reduced to the “brain” or “rationality,” that is, to cognitive or mental abilities alone. But we have two different problems here: on the one hand, whether the concept “human being” is clearly and distinctly definable in biological or physiological terms, and thus reducible to these descriptions; on the other hand, the fact that the concept of “human being” seems to carry a normative load, which we would normally understand as an appeal, or maybe even a duty, to manifest a certain attitude toward human beings.

To tackle this two-sidedness of the concept, it is helpful to understand “human nature” as one of the “thick concepts” which Clifford Geertz (1973), Bernard Williams (1985), or Martha Nussbaum (2023) analyze: concepts which are not purely normative or purely descriptive but express elements of both dimensions (see also Setiya 2024). Thick concepts are both action-guiding and world-guided. “If a concept of this kind applies,” Williams writes, “this often provides someone with a reason for action … At the same time, their application is guided by the world” (Williams 1985, 140–141, emphasis mine). So, when we talk about human beings, we are at the same time guided by the world and we have reasons for action – for protecting their vulnerability, for instance. We follow empirical evidence, and we are prepared to follow normative reasons for action: to respect the other as a human being, to recognize their vulnerability, and to acknowledge them as equal. The normative dimension concerns precisely those characteristics I have discussed: vulnerability, finiteness, and the self-reflective dimension of mutually recognizing each other as human.

This normative dimension of the concept of the human becomes especially clear when one looks at contexts in which the very application of the term is denied. Richard Rorty (1998) writes in his essay on human rights how, during the Balkan wars, the Serbs brutally refused to acknowledge the Bosnian Muslims as human beings, not even calling them “human.” Precisely because the use of the concept “human” implies respect for others as equals – as equally human – when the application of the concept is refused, the attitude which goes with it is denied as well (Footnote 2). Sylvia Wynter (1994), in her famous Open Letter to My Colleagues, writes that, when black people were involved in accidents and injured, it was standard practice in the LAPD to report back NHI (no humans involved). Not to refer to humans as humans is a violent form of denial of respect for the other, a refusal to give them the basic recognition that we owe human beings.

So, when I speak in the following of human beings, I have such a “thick” concept in mind: one that contains both descriptive and normative elements. Several authors in the history of philosophy have already pointed out this double-sidedness of the concept of human nature, and it is taken up again in the present, for example by Moira Gatens (2019), who interprets the “human being” in terms of Spinoza’s concept of the “exemplary” (see also Neuhouser 2009 and Williams 1985). When we refer to human beings in daily contexts, we generally have in mind beings which we refer to in biological as well as ethical ways (see Barenboim 2023; Heilinger and Nida-Rümelin 2015). I want to suggest that such a concept of the human being can play an essential role in the critique of the digital society: human beings as human beings always already live in their biological nature, but at the same time in a texture, a fabric, of norms and concepts that determine, govern, or shape the person’s relationship with herself, with others, and with the world. The ways we interpret these facts change over time: in fact, the history of making sense of what a human being is forms part of what it means to be a human being.

This approach obviously does not exclude the possibility that we might simply want to stop using this concept: if we transcend human beings, as some theories propose, we should not speak of humans any longer, and maybe we will not do so in the future. But this is not yet the point.

3.4 Should Humans Be (More) like Robots? Or Should Robots Be (More) like Humans?

We already saw in Chapter 1 that, if the aim is to explore the possible limits of the technicization of the human, then we always need to take up two perspectives: the human becoming a robot and the robot becoming (more) human. I will here briefly remind you of the first perspective and then come back to the latter in a little more detail.

The perspective that humans could be (or even should be) more robot-like covers the approaches which we described in Chapter 1 as, on the one hand, post-phenomenological, represented by Don Ihde and his followers, who argue that we are always already mediated through technology. That is the reason why the idea that we humans are becoming a little more like robots is not intimidating: Verbeek argues that technology is “more than a functional instrument and far more than a mere product of ‘calculative thinking.’ It mediates the relation between humans and world, and thus co-shapes their experience and existence” (Verbeek 2011, 198–199). I agree with Verbeek and Ihde to the extent that their analyses of such mediations help us understand crucial aspects of what it means to be human today (Footnote 3).

However, Ihde, Verbeek, and other post-phenomenologists are not prepared to take a critical perspective here – but where does this mediation between humans and technology end? When does such an amalgam become hazardous or even dangerous for humans, so much so that they lose their humanity? The post-phenomenologists cannot answer these questions. There is, however, a second understanding of the question of why people should become more technologized: the transhumanist understanding, which we also encountered in Chapter 1.

Transhumanists want to extend the phenomenological connection with technology into the perfectibility of humans through technology. They explicitly build on the concept of the human being in its ideal version. Transhumanism endeavors to minimize all the characteristics which I described as typically human: the vulnerability, the dependence on being embodied, and eventually also the finiteness (as we know from Ray Kurzweil’s vision of the singularity; see Bostrom 2002, 2005; Kurzweil 2006). Most transhumanists are not interested in criticizing concepts such as reason or autonomy (Ferrando 2018; Hayles 2008). On the contrary, they desire to get a grip on perfecting human rational and intellectual faculties, thereby overcoming vulnerability technologically and eradicating these human weaknesses, or at least reducing them as far as possible. Again, I would argue that these theories are not in a position to draw a line between what one would still call human (albeit trans-human) and those beings who have given up on the ‘human’ in the concept of the transhuman altogether and are more like robots. Note, again, that I don’t think this is inconsistent or impossible: I only believe that there is a borderline beyond which it is no longer appropriate or meaningful to call such beings human.

We are still left with the opposite perspective: why should it be bad for humans if robots became ever more human-like? This perspective needs some more discussion, and I will therefore look, first, at the research on social robots and, secondly, at the (im)possibility of translating emotions into technology. As we will see, there are still clear limitations in robot–human interaction and in the attempts to make robots look and function like humans. This is particularly obvious when it comes to the expression of emotions: human facial expressions, as well as human emotional life, are so complex that no way of translating feelings into data appears on the horizon.

Research on the meaning of embodiment and affect, and on the possibility of translating them into technology, has recently gained a lot of traction. It is a relatively new development that technological research on robots is no longer just about the cognitive domain – as has now been shown particularly well with ChatGPT – but also about emotions and affects. Emotions not only have a conscious or rational component; they also have an experiential or phenomenal quality which is especially difficult to translate into data (see for the following Loh and Loh Reference Loh and Loh2022; Seifert et al. Reference Seifert, Friedrich and Schleidgen2022a; Weber-Guskar Reference Weber-Guskar2021). So far, social robots, especially in healthcare, have been met with a predominantly critical sentiment: human care should not be replaced by robots. We see this attitude also in research, where several ethical and philosophical approaches argue against this form of anthropomorphizing, from different perspectives.Footnote 4 Nevertheless, research on social robots attempts to technologically reproduce certain human qualities in robots, such that they can be used in healthcare for elderly people or people with dementia. One of these qualities is "hug-quality," for example; another is the ability to speak and thereby to express emotions like affection, sympathy, and care. The idea is that robots should have qualities which make it easier to hug them and easier to be addressed by them. This research on the depth of human communication is looking for developments that can improve the use of robots in care. But all this seems very difficult, following Müller's (Reference Müller, Zalta and Nodelman2023) argument, as:

AI can be used to manipulate humans into believing and doing things, it can also be used to drive robots that are problematic if their processes or appearance involve deception, threaten human dignity, or violate the Kantian requirement of “respect for humanity.”Footnote 5

So, for one thing, if robots are being used in healthcare, all they could ever provide is instrumental care, as opposed to intentional care. What humans typically do when they care for others is intentional care, and it characterizes human interaction in a genuinely different way than instrumental care does. Robots are thus "care robots" only in the behavioral sense of performing tasks in care environments, not in the sense in which a human "cares" for their patients. It appears that the experience of "being cared for" is based on this intentional sense of "care" alone, a form of care which robots cannot provide – or at least not at this moment. This also shows that research on human–robot interaction still falls far short of its aims: emotions, responsiveness, and sympathy cannot yet be translated into data and algorithms. Yet these are human qualities and characteristics which are definitive of social interaction. Weber-Guskar (Reference Weber-Guskar2021) discusses the possibility of using data and algorithms to build emotional robots (what she calls Emotional AI systems) and is critical of both their development and the social function these robots would have in communication. Similarly, Darmanin (Reference Darmanin2019) argues that the attempts to develop robots with facial expressions close to human facial expressions are completely unconvincing. If you look at the examples accessible on the Internet, he seems to be right: emotions cannot be reduced to simple datapoints (you can see examples for different emotions, like happiness, anger, and fear). In any case, these expressions have little to do with human care as we currently still understand it.

This distinction is echoed by Pasquale when he writes that the practice of caring can't be reduced – and shouldn't be reduced – to instrumental relationships expressed by some changes in the expression of the mouth. I agree with Pasquale that a society organizing institutional care for people along those lines would not be a society we would want to live in (see Chapter 4, by Pasquale). If we wanted robots to replace human care, then robots would have to either very obviously replace human care only in the instrumental and basic sense, or be able to express themselves and behave precisely as humans do in providing intentional care for the ill. It is precisely this impossibility of translating human feelings (or should we say: humanity?) into technologies that limits robotization – at least for the time being.

3.5 The Uncanny Valley

Apparently, emotions and lived experiences cannot simply be reduced to data and algorithms, even if algorithms are becoming ever smarter. The emotional as well as physical vulnerability, including the diseases, that we feel (and fear) cannot be translated into technologies in the foreseeable future – whereas in fiction, especially in novels and films, this boundary between humans and robots is played with. The young man who is actually a robot in the film Ich bin dein Mensch, for example, is deceptively similar to other men, and the woman scientist who is supposed to fall in love with him, or at least befriend him, is fundamentally insecure about her attitude toward him (Schrader Reference Schrader2021).

The novel Klara and the Sun also plays with this boundary in unsettling ways: the Artificial Friend (AF) Klara is supposed to be a "friend" of Josie, a young teenage girl studying for her exams (Ishiguro Reference Ishiguro2021). These exams are stressful and her whole future depends on the results. Furthermore, every now and then we get mysterious hints that Josie is ill and that her sister had the same illness when she died. Since the novel is told from Klara's first-person perspective, the reader is inclined to understand her quite well; nor does this seem too difficult. She describes the way she perceives the world in (smaller and larger) squares, and for this perception, for her survival, the sun is necessary – necessary not in the sense of natural needs that must be satisfied for an organism to live, but in the sense of the electricity without which a computer would not function.

Josie, on the other hand – her illness, her relationship with her neighbour Rick, her authoritative mother – remains more of a mystery. While Klara is transparent in her perceptions, Josie stays obscure, even in her fear of illness and death. This seems to be a subtle yet clear indication that Josie is the human of the two. Klara desires to be more human and has very transparent, easy-to-understand emotions, while Josie seems opaque to us, just as people who experience depression and melancholy often do.

In a second step it becomes particularly clear that the difference between robots and humans is essentially based on the latter's vulnerability: Klara, the robot, cannot get ill; she (or it) gets broken. It (or she) cannot be healed, only repaired. Klara doesn't want to break down, and the robot can make that much clear – it needs the sun and is able to express this need, but it needs it as my mobile phone needs charging. It can't even try to survive without charging, as humans do when they don't have food.

At least this is what the reader is led to think. At a party, a woman says to Klara: "One never knows how to greet a guest like you," and adds: "After all, are you a guest at all? Or do I treat you like a vacuum cleaner?" This question pushes us, the readers, headlong into the unsettling problematic of the relation between humans and robots. What rules are we to follow here? Which conventions apply, which conduct should we habitualize? The reader's confusion and insecurity reach even deeper. In Klara's place we – the readers – are ashamed of the woman's outrageous question, we even feel hurt, while on the other hand we know that Klara's "emotions" are alien, not human emotions, and that sympathy with Klara therefore simply doesn't make sense. Ishiguro masterfully balances on the boundary between humans and robots, exploring what it means to be not-quite-human. He moves consistently along the edge of the uncanny valley. This valley itself is mysterious, and I want to take a brief, closer look at it.

The uncanny valley is a surprising dip in an otherwise steadily rising curve that records people's reactions when asked about their feelings toward robots.Footnote 6 In observing human empathy toward human-like objects, we find that the more these objects – robots – resemble humans, the greater the positive response, up to a point where the objects are so human-like, but only human-like, that we enter the uncanny valley: we feel distressed, emotionally unsettled, even extreme unease toward the objects. This shows up in the curve as a deep valley; the valley closes and the curve rises again once robots become indistinguishable from humans. This gap or valley is surprising, since one would expect that robots which were almost (although not yet completely) indistinguishable from humans would give us a reassuring or confidence-inspiring impression. On the other hand, the valley is understandable: intuitively we would always at least like to know whether we are dealing with a human or a robot.
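The shape of this curve can be visualized with a minimal sketch. The numbers below are invented purely to reproduce the relation just described – affinity rising with human-likeness, plunging when something is almost (but only almost) human, and recovering at full indistinguishability; nothing here reflects actual measurement:

import numpy as np
import matplotlib.pyplot as plt

# Invented data points tracing the described shape: a steady rise,
# a sharp dip near (but not at) full human-likeness, then recovery.
likeness = np.array([0.0, 0.2, 0.4, 0.6, 0.75, 0.85, 0.95, 1.0])
affinity = np.array([0.0, 0.2, 0.45, 0.65, 0.70, -0.50, 0.10, 0.90])

x = np.linspace(0.0, 1.0, 200)
plt.plot(x, np.interp(x, likeness, affinity))  # piecewise-linear interpolation
plt.axhline(0.0, color="grey", linewidth=0.5)
plt.xlabel("human-likeness")
plt.ylabel("felt affinity")
plt.title("The uncanny valley (illustrative values only)")
plt.show()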

Nowadays, in our daily digital lives, we seem to be confronted with a number of these uncanny areas: one example is phoning a company and no longer knowing whether we are being served by humans or by algorithms, since the voices are indistinguishable. The same holds for automated decision-making and the question of whether there are – or ought to be – "humans in the loop": a question which is also one of dealing with the uncanny valley or field.Footnote 7

I’m sure that in the – maybe far away – future it will be part of the rather normal world to move in this border area between beings clearly identifiable as robots, those that come across as uncanny, and those which are in fact robots but no longer identifiable as such. Novels such as Machines Like Me or Klara and the Sun, or films like Ich bin dein Mensch or Her, describe such a world impressively. The most recent example I came across is a short film by Diego Marcon (Reference Marcon2021), The Parents’ Room, a brief clip which is haunting and truly uncanny not only because of the music and the lyrics (a father has just killed his son, daughter, and wife, and is about to kill himself) but mostly because it is not entirely clear whether the figures are human, papier mâché, or a mixture of both. Isabella Achenbach writes about this film: “[T]hat extreme representative realism evokes a response of repulsion and restlessness. … Marcon creates characters that give the viewer goosebumps with simultaneous feelings of aversion and unsettling familiarity” (Achenbach Reference Achenbach2022, 293). Precisely this mixture of rejection and reluctance (aversion) with sympathy and compassion (familiarity) characterizes the territory of the uncanny valley.

3.6 The Self-reflective Finiteness of Humans

The critique or investigation of what it means to be human belongs broadly to the area of anthropological criticism. This criticism enriches the practical-normative discourse with thick descriptions of human life and helps us criticize certain digital practices, with a whole web of related thick and normative concepts, such as care, love, emotion, autonomy and freedom, respect, and equality. Taken together, they enable us to form further criteria or standards for the good, the right human life in the digital society; and in this way, we build a net of anthropological criticism and ethical-political criticism. In trying to explore and search for irreducible characteristics of human life, one can be guided and inspired by imaginations, by novels, by films, as we have already seen. They can help us to develop plausible narratives and thus to ask where the limits should lie beyond which technologies should not further interfere with human life. Or, if they do, we would no longer be speaking of humans – this is one of the critical questions which Setiya, for instance, asks when he criticizes David Chalmers for analogizing virtual reality and reality as we humans know it (Chalmers Reference Chalmers2022; Setiya Reference Setiya2022).

These questions – including the question of who the “we” is which I use throughout here – are controversial and must again and again be openly discussed, contested, balanced, and determined in liberal-democratic discourses. But the criteria, characteristics, and basic normative framework discussed here must form the background for these disputes. In the following, by way of concluding, I want to point out that there are certain normative narratives that we can use to explore the boundaries between robots and humans – and others which we would not use in this way. The aim is to be able to refer critically to those contexts in which the use of robots would not concern specific human vulnerabilities. For instance: should we use robots in care contexts, or shouldn’t we? Should robots be used as teachers? As traffic officers? As police officers? At the checkout at the supermarket?

These are questions which are already being researched at many universities and other public institutions, as well as by private companies, and they will occupy us even more in the future. I have argued that these questions can best be discussed if we do not simply present a short and precise definition of the human being but seek the help of normative narratives which take up the thick concepts I discussed above. We can then identify contexts within which we do or do not want to use robots, and give reasons by describing the characteristics of human beings and of human relationships with these thick concepts, such that the gains and losses of using robots become visible and can be discussed. Let me raise two critical points.

Firstly, what could be the source of the feeling of uncanniness in the uncanny valley? The reason many people feel insecure vis-à-vis an almost-human-like robot is, I suggest, grounded in their vulnerabilities: the suspicion that equal, respectful, emotional relationships with such a being are impossible makes it appear as a possible dehumanization of relationships. Such dehumanization is frightening and perceived as threatening, since we are mostly frightened of the non-natural nonhuman (especially when it pretends to be human). We feel fear of these creatures, fear of being hurt in unknown ways. Humans have central characteristics which by definition robots do not have: we are finite, vulnerable, self-reflective beings, always already living with other humans and having relationships with them. If we want to or must expose our vulnerability, then we want to be intuitively sure that we are dealing with another person.Footnote 8 Even stronger: we always already presuppose that the other is human when we expose ourselves as deeply vulnerable beings.

The uncanny consequences of not being able to make this presupposition become clear, secondly, when we are uncertain about yet another aspect of this boundary. Remember Adam and Charles in McEwan’s novel: Adam’s appearance is not uncanny, because he is indistinguishable from a human. Rather, it is his behavior which is uncanny: he cannot lie, and he seems mentally and, at first, physically invulnerable. Therefore, when Charles kills him, it seems, at first, rather human that he does so without the sort of considerations one would expect him to have if he saw Adam as human. But paradoxically, Charles kills and has regrets; he feels pangs of conscience. Does having feelings of remorse and responsibility tell us more about what it means to be human than any clear definition of ‘human’ or precise instruction for a robot ever could?

4 Cultural Foundations for Conserving Human Capacities in an Era of Generative Artificial Intelligence: Toward a Philosophico-Literary Critique of Simulation

Within a few years, machine-written language may become “the norm and human-written prose the exception” (Kirschenbaum Reference Kirschenbaum2023).Footnote 1 Generative Artificial Intelligence is now poised to create profiles on social media sites and post far more than any human can – perhaps by orders of magnitude.Footnote 2 Unscrupulous academics and public relations firms may use article-generating and -submitting artificial intelligence (AI) to spam journals and journalists. The science fiction magazine Clarkesworld closed down its open submission window in 2023 because of a deluge of content likely created by generative AI. There is already evidence of the weaponization of social media, and AI promises to supercharge it (Jankowicz Reference Jankowicz2020; Singer Reference Singer2018).

AI is also poised to play a dramatically more intimate and important role in parasocial and social relationships, displacing human influencers, entertainers, friends, and partners. Not only is technology becoming more capable of simulating human thought, will, and emotional response, but it is doing so at an inhuman pace. A mere human manipulator can only learn from a limited number of encounters and resources; algorithms can develop methods of manipulation at scale, based on the data of millions. This again affords computation, and those in control of its most advanced methods and widespread deployments, an outsized role in shaping future events, preferences, and values.

Despite such clear and present dangers, many fiction and non-fiction works gloss over the problem of artificial intelligence overpowering natural thought, feeling, and insight. They instead present robots (and even operating systems and large language models) as sympathetic and vulnerable, deserving rights and respect now accorded to humans.Footnote 3 Questioning such media representations of AI is a first step toward achieving the cultural commitments and sensibilities that will be necessary to conserve human capacities amidst the growing influence of what Lyotard (Reference Lyotard1992) deemed “the inhuman”: systems that presume and promote the separability of the body from memory, will, and emotion. What must be avoided is a drift toward an evolutionary environment where individual decisions to overvalue, over-empower, and overuse AI advance machinic and algorithmic modes of thought to the point that distinctively human and non-algorithmic values are marginalized. Literature and film can help us avoid this drift by structuring imaginative experiences which vividly crystallize and arrestingly illuminate the natural tendencies of individual decisions.Footnote 4

I begin the argument in Section 4.1 by articulating how Rachel Cusk’s (Reference Cusk2017) novel Transit and Maria Schrader’s film I’m Your Man suggest a range of ways to regard emerging AIs which simulate human expression. Each sympathetically describes a man and a woman (respectively) comforted and intrigued by AI communications. Yet each work leaves no doubt that the AI and robotics it treats have done much to create the conditions of alienation and loneliness they promise to cure. Section 4.2 examines the long-term implications of such alienation, exploring works that attempt to function as a “self-preventing prophecy”: Hari Kunzru’s (Reference Kunzru2020) Red Pill and Lisa Joy and Jonathan Nolan’s Westworld. Section 4.3 concludes with reflections on the politico-economic context of professed emotional attachments to AI and robotics.

Before diving into the argument, one prefatory note is in order. The sections that follow touch upon a wide range of cultural artefacts. There are spoilers, so if you intend to read, view, or listen to one of the works discussed, without being forewarned of some critical plot twist or character development, it may be wise to stop reading when it is mentioned. Unlike computers, we cannot simply delete the spoiler from memory, and natural processes of human forgetting are notoriously unpredictable.

4.1 Curing or Capitalizing upon Alienation?

At the beginning of Rachel Cusk’s (Reference Cusk2017) novel, Transit, the narrator opens a scam email from an astrologer, or from an algorithm imitating one. The narrator describes a richly detailed, importuning missive, full of simulated sentiment. “She could sense … that I had lost my way in life, that I sometimes struggled to find meaning in my present circumstances and to feel hope for what was to come; she felt a strong personal connection between us,” (2) the narrator relates. “What the planets offer, she said, is nothing less than the chance to regain faith in the grandeur of the human: how much more dignity and honor, how much kindness and responsibility and respect, would we bring to our dealings with one another if we believed that each and every one of us had a cosmic importance?” (2).

It’s a humane sentiment, both humbling and empowering, like much else in the email. Cusk’s narrator deftly summarizes the email, rather than quoting it, giving an initial impression of the narrator’s identification with its message and author. So how did Cusk’s narrator divine the scam? After relating its contents, the narrator states that “It seemed possible that the same computer algorithms that had generated this email had also generated the astrologer herself: her phrases were too characterful, and the note of character was repeated too often; she was too obviously based on a human type to be, herself, human” (3).Footnote 5 The astrologer-algorithm’s obvious failure is an indirect acknowledgement of the author’s anxieties: what if her own fictions turn out to be too characterful? Carefully avoiding that, and many other vices, Cusk, in Transit (and the two other novels in her Outline trilogy), presents characters who are strange or unpredictable enough to surprise or enlighten us, to respond to tense scenarios with weakness or strength and to look back on themselves with defensiveness, insight, and all manner of other fusions of cognition and affect, judgement, and feeling.

One facet of Cusk’s genius is to invite readers to contemplate the oft-thin line between compassion and deception, comfort and folly. The narrator finds the algorithmic astrologer impersonator hackish but, almost as if to check herself, immediately relates the views of a friend who found solace in mechanical expressions of concern:

A friend of mine, depressed in the wake of his divorce, had recently admitted that he often felt moved to tears by the concern for his health and well-being expressed in the phraseology of adverts and food packaging, and by the automated voices on trains and buses, apparently anxious that he might miss his stop; he actually felt something akin to love, he said, for the female voice that guided him while he was driving his car, so much more devotedly than his wife ever had. There has been a great harvest, he said, of language and information from life, and it may have become the case that the faux-human was growing more substantial and more relational than the original, that there was more tenderness to be had from a machine than from one’s fellow man.

(3)

Cusk’s invocation of an “oceanic” chorus calls to mind Freud’s discussion of the “oceanic feeling” in Civilization and Its Discontents – or, more precisely, his naturalization of Romain Rolland’s metaphysical characterization of a yearned-for “oceanic feeling” of bondedness and unity with all humanity. For Freud, such a feeling is an outgrowth of infantile narcissism, an enduring desire for the boundless protection of the good parent.Footnote 6

Marking the importance of this oceanic metaphor in both style as well as substance, Cusk’s story of the astrologer’s letter has a tidal structure. Like an uplifting wave, the letter sweeps us up into reflections on fate and belief. And, like any wave, it eventually crashes down to earth, suddenly undercut by the revelation that insights once appraised as mystical or compassionate are mere fabrications of a bot. Then another rising wave of sentiment appears, wiser and more distant, calling on readers to reflect on whether they have discounted the value of bot language too quickly. The speaker is vulnerable and thoughtful: someone “depressed in the wake of his divorce,” who acknowledges that the very idea of a diffuse “oceanic chorus” of algorithmically arranged concern is “maddening” (3).

Rather than crashing, this subtler, second plea for the value of the algorithmic recedes. Cusk does not leave us rolling our eyes at this junk email. She welcomes a voice in the novel that, in a sincere if misguided way, submits to an algorithmic flow of communication, embracing corporate communication strategy as concern. Cusk refuses to dismiss the idea, or to bluntly depict it as a symptom of some pathological misapprehension of the world. Her patience is reminiscent of Sarah Manguso’s (Reference Manguso2018) apothegm: “Instead of pathologizing every human quirk, we should say: By the grace of this behaviour, this individual has found it possible to continue” (44). Weighed down by depression, savaged by loneliness, a person may well seek scraps of solace wherever they appear. There are even now persons who profess to love robots (Danaher and Macarthur Reference Danaher and Macarthur2017; Levy Reference Levy2008) or treat them with the respect due to a human. Indeed, a one-time Google engineer recently expressed his belief that a large language model offered such eerily human responses to queries that it might be sentient (Christian Reference Christian2022; Tangermann Reference Tangermann2022).

And yet there is a clue in the novel of how a Freudian hermeneutic of suspicion may be far more appropriate than a Rollandian hermeneutic of charity when interpreting whatever oceanic feeling may be afforded by bot language. Cusk includes a self-incriminating note in the divorcee’s earnest endorsement of the “oceanic chorus” of machines: the casual contrast, and implicit demand, in the phrase “he actually felt something akin to love, he said, for the female voice that guided him while he was driving his car, so much more devotedly than his wife ever had” (Cusk Reference Cusk2017, 3). A robotic voice can always sound kind, patient, devoted, or servile – whatever its controller wants from it. As the film Megan depicts, affective computing embedded in robotics will have a remarkable capacity for rapidly pivoting and refining its emotional appeals. It is not realistic to expect such relentless, data-informed support from a person, even a parent, let alone a life partner. Yet the more robotic and AI “affirmations” are taken to be sincere and meaningful, the more human deviation from such scripts will seem suspect. Like the Uber driver constantly graded against the Platonic ideal of a perfect 5-star trip, persons will be expected to mimic the machines’ perpetual affability, availability, and affirmation, whatever their actual emotional states and situational judgements.

For a behaviourist, this is no problem: what is the difference between the outward signs of kindness and patience and such virtues themselves? This is perhaps one reason why John Danaher (Reference Danaher2020, 2023) has proposed “ethical behaviourism” as a mode of “welcoming robots into the moral circle”. In this framework, there is little difference between the given and the made, the simulated and the authentic. Danaher proposes that:

  1. If a robot is roughly performatively equivalent to another entity whom, it is widely agreed, has significant moral status, then it is right and proper to afford the robot that same status.

  2. Robots can be roughly performatively equivalent to other entities whom, it is widely agreed, have significant moral status.

  3. Therefore, it can be right and proper to afford robots significant moral status (Danaher Reference Danaher2020, 2026).

The qualifier “can” in the last line may be doing a lot of work here, denoting ample moral space to reject robots’ moral status. And yet it still seems wise to resist any attempts to blur the boundary between persons and things. The value of so much of what persons do is inextricably intertwined with their free choice to do it. Robots and AI are, by contrast, programmed. The idea of a programmed friend is as oxymoronic as that of a paid friend. Perhaps some forms of coded randomization could simulate free choice via AI. But they must be strictly limited. If robots were to truly possess something like the deep free will that is a prerogative of humans – the ability to question and reconfigure any optimization function they were originally programmed with – they would be far too dangerous to permit. They would pose all the threats now presented by malevolent humans but would not be subject to the types of deterrence honed in centuries of criminal law based on human behaviour (and even now very poorly adapted to corporations).

Unconvincing in their efforts to characterize robots as moral agents, behaviourists might then try to characterize robots and AI as moral patients, like a baby or a harmless animal which deserves our regard and support. Nevertheless, the programming problem still holds: a robotic doll that cries to, say, demand a battery recharge could be programmed not to do so; indeed, it could just as plausibly convey anticipated pleasure at the “rest” afforded by time spent switched off. For such entities, emotion and communication have, stricto sensu, no meaning whatsoever. Their “expression” is operational, functional, or, in Dan Burk’s (Reference Burk2025) apt characterization, “asemic” (189).

To be sure, humans are all to some extent “programmed” by their families, culture, workplaces, and other institutions. Free will is never absolute. But a critical part of human autonomy consists in the ability to reflect upon and revise such values, commitments, and habits, based on the sensations, thoughts, and texts that are respectively felt, developed, and interpreted through life. The ethical behaviourist may, in turn, point out that a robot equipped with a connection to ChatGPT’s servers may be able to “process” millions more texts than a human could read in several lifetimes, and say or write texts that we would frequently accept as evidence of thought in humans. Nevertheless, the lack of sensation motivating both perception and affect remains, and it is hard to imagine a transducer capable of overcoming it (Pasquale Reference Pasquale2002). More importantly, robot “thoughts” produced via current generative AI are far from human ones, as they are mere next-word or next-pixel predictions.
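The sense in which such outputs are “mere next-word predictions” can be made concrete. The toy sketch below uses an invented, hard-coded distribution in place of a real model (which would compute these probabilities with a neural network), but the generation loop has the same structure: condition on recent context, pick a likely continuation, repeat:

# Toy sketch of next-token generation with an invented, hard-coded
# conditional distribution; real models compute such probabilities
# with a neural network, but the loop has the same structure.
toy_model = {
    "the robot": {"cares": 0.1, "says": 0.6, "breaks": 0.3},
    "robot says": {"it": 0.7, "nothing": 0.3},
    "says it": {"cares": 0.8, "hurts": 0.2},
}

def next_token(context: str) -> str:
    # Condition on the last two words only (a crude bigram-style context).
    key = " ".join(context.split()[-2:])
    dist = toy_model.get(key)
    if dist is None:
        return "<end>"
    return max(dist, key=dist.get)  # greedy decoding: take the most probable token

text = "the robot"
for _ in range(4):
    token = next_token(text)
    if token == "<end>":
        break
    text += " " + token

print(text)  # -> "the robot says it cares"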

Consider also the untoward implications of ethical behaviourism if persons and polities try to back their professed moral regard for robots and AIs with concrete ethical decisions and commitments of resources. If a driver must choose between running over a robot and a child, should they really worry about choosing the former? (Birhane et al. Reference Birhane, van Dijk and Pasquale2024). If behaviour, including speech, is all that matters, are humans under some moral obligation to promote “self-reports” or other evidence of well-being by AI and robots? In some accelerationist and transhumanist circles, the ultimate purpose and destiny of humans is to “populate” galaxies with as many “happy” simulations or emulations of human minds as possible.Footnote 7 On this utilitarian framework, what matters is happiness, as verified behaviouristically: if a machine “says” it is happy, we are to take it at its word. But such a teleology is widely recognized as absurd, especially given the pressing problems now confronting so many persons on earth.

While often portrayed as a cosmopolitan openness to the value of computers and AI, the embrace of robots as deserving of moral regard is more accurately styled as part of a suite of ideologies legitimating radical and controversial societal reordering. As Timnit Gebru and Emile Torres (Reference Gebru and Torres2024) have explained, there is a close connection between Silicon Valley’s accelerationist visions and a bundle of ideologies (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism) which they abbreviate as TESCREAL. Once ideologies like transhumanism and singularitarianism have breached the boundary between persons’ and computers’ well-being (again assuming that the idea of computer well-being makes any more sense than, say, toaster well-being), long-term policy may well include and prioritize the development of particularly powerful and prevalent computation (such as “artificial general intelligence” or “superintelligence”) over human well-being, just as some humans are inevitably helped more than others by any given policy. An abstract utilitarian meta-ethical stance, already far more open to wildly variant futures than more grounded virtue-oriented, natural law, and deontological approaches, becomes completely open-ended once the welfare of humans fails to be the fixed point of its individualistic, maximizing consequentialism.

Ethical behaviourism also reflects a rather naïve political economy of AI and robotics. A GPS system’s simulation of kindness is far less a mechanization of compassion (if such a conversion of human emotion into mechanical action can even be imagined) than a corporate calculation to instil brand loyalty. Perhaps humans can learn something from emotion AI designed to soothe, support, and entertain.Footnote 8 But the more such emotional states or manners are faked or forced, the more they become an operational mode of navigating the world, rather than an expression of one’s own feelings. Skill degradation is one predictable consequence of many forms of automation; pilots, for example, may forget how to fly a plane manually if they over-rely on autopilot. Skill degradation in the realm of feeling, or of articulating one’s feelings, is a troubling fate, foreshadowing a mechanization of selfhood, outsourced to the algorithms that tell a person what or how to feel (Pasquale Reference Pasquale2015). Allison Pugh expertly anticipates the danger of efforts to automate both emotional and connective labour, given the sense of meaning and dignity that such work confers on both givers and receivers of care and concern (Pugh Reference Pugh2024).

The entertaining and intellectually stimulating German film I’m Your Man (2021), directed by Maria Schrader, explores themes of authentic and programmed feeling as its protagonist (an archaeologist named Alma) questions the blandishments of the handsome robotic companion (Tom) whom she agrees to “test out” for a firm. Tom can “converse” with her about her work, anticipate her needs and wants, and simulate concern, respect, friendship, and love.Footnote 9 The robot is also exceptionally intelligent, finding an obscure but vital academic reference that upends one of Alma’s research programs. Alma occasionally enjoys the attention and expertise that Tom provides and tries to reciprocate. But she ultimately realizes that what Tom offers is programmed, not freely chosen, and is thus fundamentally different from the risk and reward inherent in true human companionship and love.

Alma realizes that, even if no one else knew Tom’s nature, her ongoing engagement with it would be dangerous not only on an affective but also on an epistemic level.Footnote 10 As Charles Taylor (Reference Taylor1985b, 49) has explained, “experiencing a given emotion involves experiencing our situation as bearing a certain import, where for the ascription of the import it is not sufficient just that I feel this way, but rather the import gives grounds or basis for the feeling.”Footnote 11 Simply feeling a need for affirmation is not a solid ground or basis for someone else to express affirming emotions. Barring extreme situations of emotional fragility, the other needs to be able to independently decide whether to affirm oneself for that affirmation to have meaning. If the simulated expression of such emotions by a thing is done, as is likely, to advance the commercial interest of the thing’s owner, there is no solid basis for feeling affirmed either. We can all go from the “wooed” to the “waste” (in Joseph Turow’s memorable phrasing) of a firm in the flash of a business-model shift. Of course, we can also imagine a world in which “haphazardly attached” persons find some solace in the words emitted by LLMs, whatever their nature.Footnote 12 But the way such technology fits or functions in such a scenario is far more an indictment (and, ironically, stabilization) of its alienating environment than a testament to its own excellence or value. As Rob Horning has observed, from an economic perspective, large technology firms “must prefer the relative predictability of selling simulations to the uncontrollable chaos of selling social connection. They would prefer that we interact with generated friends in generated worlds, which they can engineer entirely to suit their ends” (Horning Reference Horning2024).

While many advocates of “artificial friends” based on affective computing claim that they will alleviate alienation, they are more likely to do the opposite: lure the vulnerable away from truly restorative, meaningful, and resonant human relationships, and into a virtual world. As Sherry Turkle has observed:

[chatbots] haven’t lived a human life. They don’t have bodies and they don’t fear illness and death … AI doesn’t care in the way humans use the word care, and AI doesn’t care about the outcome of the conversation … To put it bluntly, if you turn away to make dinner or attempt suicide, it’s all the same to them.

Like the oxymoronic “virtual reality” of Ready Player One, the oxymoronic “artificial empathy” of an “AI friend” is a far-from-adequate individual compensation for the alienating social world such computation has helped create.

4.2 Self-preventing Prophecy

Despite cautionary tales like Her and I’m Your Man, myriad persons already engage with “virtual boyfriends and girlfriends” (Ding Reference Ding2023).Footnote 14 As reported in 2023 about just one firm providing these services, Replika:

Millions of people have built relationships with their own personalized instance of Replika’s core product, which the company brands as the “AI companion who cares.” Each bot begins from a standardized template – free tiers get “friend,” while for a $70 premium, it can present as a mentor, a sibling or, its most popular option, a romantic partner. Each uncanny valley-esque chatbot has a personality and appearance that can be customized by its partner-slash-user, like a Sim who talks back.

Chastened in its metaversal ambitions, Meta has marketed celebrity chatbots to simulate conversation online. Millions of persons follow and interact with “virtual influencers,” who may be little more than a stylish avatar backed by a PR team (Criddle Reference Criddle2023).

For any persons who believe they are developing relationships with bots, online avatars, or robots, the arguments in Section 4.1 are bitter pills to swallow. The blandishments of affective computing may well reinforce alienation overall, but sufficiently simulate its relief (for any particular individual) to draw the attention and interest of many desperate, lonely, or merely bored persons. The abstractions of theory cannot match the importuning eyes, perfectly calibrated tone of voice, or calculatedly attractive appearance of online avatars and future robots. Yet human powers of imagination can still divert a critical mass of persons away from the approximations of Nozick’s “experience machine” dreamed of by too many in technology firms.

Consider the complexities of human–robot interaction envisioned in the hit HBO series Westworld. When asked if it sometimes questions the nature of its reality, the robot named Dolores Abernathy states in Season 1, “Some people choose to see the ugliness in this world. The disarray. I choose to see the beauty. To believe there is an order to our days, a purpose.” This refrain could describe a typical product launch for affective computing software, with its bright visions of a happier world streamlined with tech that always knows just what to say, just how to open and close your emails, just what emoji to send when you encounter a vexing text. Westworld envisions a theme park where calculated passion goes well beyond the world of bits, culminating in simulated (and then real) murders. The promise of the park is an environment where every bright, dark, or lurid fantasy can be simulated by androids almost indistinguishable from humans. It is the reductio ad absurdum (or perhaps proiectio ad astra) of the affective surround fantasized by Cusk’s depressed divorcee, deploying robotics to achieve what text, sound, and image cannot.

By the third season of Westworld’s Möbius strip chronology, Dolores breaks out of the park, driven to reveal to humans of the late twenty-first century that their fates are silently guided by a vast, judgemental, and pushy AI. While the last season of the show was an aesthetic mess, its reticulated message – of humans creating a machine to save themselves from future machines – was a philosophical challenge. How much do we need more computing to navigate the forbiddingly opaque and technical scenarios created by computing itself?

For transhumanists, the answer is obvious: human bodies and brains as we know them are just too fragile and fallible, especially when compared with machines. “Wetware” transhumanists envision a future of infinite replacement organs for failing bodies, and brains jacked into the internet’s infinite vistas of information. “Hardware” transhumanism wants to skip the body altogether and simply upload the mind into computers. AIs and robots will, they assume, enjoy indefinite supplies of replacement parts and backup memory chips. Imagine Dolores, embodied in endless robot guises, “enminded” in chips as eternal as stars.Footnote 15

The varied and overlapping efficiencies that advanced computation now offers make it difficult to reject this transhumanist challenge out of hand. A law firm cannot ignore large language models and the chatbots based on them, because these tools may not only automate simple administrative tasks now but may also become a powerful research tool in the future. Militaries feel pressed to invest in AI because technology vendors warn it could upend current balances of power, even though the great power conflicts of the 2020s seem far more driven by basic industrial capacities. Even tech critics have Substacks, Twitter accounts, and Facebook pages, and they are all subject to the algorithms that help determine whether they have one, a hundred, or a million readers. In each case, persons with little choice but to use AI systems are donating more and more data to advance the effectiveness of AI, thus constraining their future options even more. “Mandatory adoption” is a familiar dynamic: it was much easier to forgo a flip phone in the 2000s than it is to avoid carrying a smartphone today. The more data any AI system gathers, the more it becomes a “must-have” in its realm of application.

Is it possible to “say no” to ever-further technological encroachments?Footnote 16 For key tech evangelists, the answer appears to be no. Mark Zuckerberg has fantasized about direct mind-to-virtual reality interfaces, and Elon Musk’s Neuralink also portends a perpetually online humanity. Musk’s verbal incontinence may well be a prototype of a future where every thought triggers AI-driven responses, whether to narcotize or to educate, to titillate or to engage. When integrated into performance-enhancing tools, such developments also spark a competitive logic of self-optimization. A person who could “think” their strategies directly into a computing environment would have an important advantage over those who had to speak or type them. If biological limits get in the way of maximizing key performance indicators, transhumanism urges us toward escaping the body altogether.

This computationalist eschatology provokes a gnawing insecurity: that no human mind can come close to mastering the range of knowledge that even a second-rate search engine indexes, and simple chatbots can now summarize, thanks to AI. Empowered with foundation models (which can generate code, art, speech, and more), chatbots and robots seem poised to topple humans from their heights of self-regard. Given Microsoft’s massive investments in OpenAI, we might call this a Great Chain of Bing: a new hierarchy placing the computer over the coder, and the coder over the rest of humans, at the commanding heights of political, economic, and social organization.Footnote 17

Speculating about the long-term future of humanity, OpenAI’s Sam Altman (Reference Altman2017) once blogged about a merger of humans and machines, perhaps as a way for the former to keep the latter from eliminating them outright. “A popular topic in Silicon Valley is talking about what year humans and machines will merge (or, if not, what year humans will get surpassed by rapidly improving AI or a genetically enhanced species),” he wrote. “Most guesses seem to be between 2025 and 2075.” This logic suggests a singularitarian mission to bring on some new stage of “human evolution” in conjunction with, or into, machines. Just as humans have used their intelligence to subdue or displace the vast majority of animals, on this view, machines will become more intelligent than humans and will act accordingly, unless we merge into them.

But is this a story of progress, or one of domination? Interaction between machines and crowds is coordinated by platforms, as MIT economists Erik Brynjolfsson and Andrew McAfee have observed. Altman leads one of the most hyped ones. To the extent that CEOs, lawyers, hospital executives, and others assume that they must coordinate their activities by using large language models like the ones behind OpenAI’s ChatGPT, they will essentially be handing over information and power to a technology firm to decide on critical future developments in their industries (Altman Reference Altman2017). A narrative of inevitability about the “merge” serves Altman’s commercial interests, as does the tidal wave of AI hype now building on Elon Musk’s X, formerly known as Twitter.

The middle-aged novelist who narrates Hari Kunzru’s (Reference Kunzru2020) Red Pill wrestles with this spectre of transhumanism and is ultimately driven mad by it. Suffering from writer’s block, he travels from his home in Brooklyn to Berlin for a months-long retreat. Lonely and unproductive at the converted mansion where he is staying, he becomes both horrified and fascinated by a nihilistic drama called Blue Lives, which features brutal cops at least as vicious as the criminals they pursue. With dialogue sprinkled with quotes from Joseph de Maistre and Emil Cioran, Blue Lives appears to the narrator as something both darker and deeper than the average police procedural. He gradually becomes obsessed with the show’s director, Anton.

Anton is an alt-rightist, fully “red pilled,” in the jargon of transgressive conservatism. He also dabbles in sociobiological reflections on the intertwined destiny of humans and robots. The narrator relates how Anton described his views in a public speaking tour:

[Anton] spoke about his “program of self-optimization.” He worked out and took a lot of supplements, but when it came to bodies, he was platform-agnostic. Whatever the substrate, carbon-based or not, he thought the future belonged to those who could separate themselves out from the herd, intelligence-wise … Everything important would be done by a small cognitive elite of humans and AIs, working together to self-optimize. If you weren’t part of that, even selling your organs wasn’t going to bring in much income, because by then it would be possible to grow clean organs from scratch.

(207)

In a narcissistic short film celebrating himself, Anton announces that: “Around us, capital is assembling itself as intelligence. That thought gives me energy. I’m growing stronger by the day” (206).

The brutal logic here is obvious: some will be in charge of the machines, perhaps merging with them; most will be ordered around by the resulting techno-junta.Footnote 18 Dismissing “unproductive” humans as so many bodies is the height of cruelty (207). But it also fits uncomfortably well with a behaviourist robot-rights ideology which claims that what an entity does is all that matters, not what it is (the philosophical foundation of Anton’s “platform agnosticism”). Nick Cave elegantly refutes this behaviourism in an interview exploring his recent work:

Maybe A.I. can make a song that’s indistinguishable from what I can do. Maybe even a better song. But, to me, that doesn’t matter – that’s not what art is. Art has to do with our limitations, our frailties, and our faults as human beings. It’s the distance we can travel away from our own frailties. That’s what is so awesome about art: that we deeply flawed creatures can sometimes do extraordinary things. A.I. just doesn’t have any of that stuff going on. Ultimately, it has no limitations, so therefore can’t inhabit the true transcendent artistic experience. It has nothing to transcend! It feels like such a mockery of what it is to be human.

As Leon R. Kass (Reference Kass2008) articulates, “Like the downward pull of gravity without which the dancer cannot dance, the downward pull of bodily necessity and fate makes possible the dignified journey of a truly human life.” For “make a song” in Cave’s passage, we could substitute so many other human activities: run a mile, play a game of chess, teach a class, console a mourning person, order a drink. We are so much more than what we do and make, bearing value that Anton appears unable or unwilling to recognize.

Alarmed by the repugnance of Anton’s message, the narrator becomes distressed by his success. He argues with him at first, accusing him of trying to “soften up” his Blue Lives audience to accept a world where “most of us [are] fighting for scraps in an arena owned and operated by what you call a ‘cognitive elite’.” (Kunzru Reference Kunzru2020, 208). He calls out Anton’s fusion of hierarchical conservatism and singularitarianism as a new Social Darwinism. But he cannot find a vehicle to bring his own counter-message to the world. The accelerationist logic of vicious competition, first among humans, then among humans enhanced by machines, and finally by machines themselves, signalling the obsolescence of the human form, is just too strong for him.Footnote 20 By the end of the novel, his attempt at a cri de coeur crumples into capitulation:

With metrication has come a creeping loss of aura, the end of the illusion of exceptionality which is the remnant of the religious belief that we stand partly outside or above the world, that we are endowed with a special essence and deserve recognition or protection because of it. We will carry on trying to make a case for ourselves, for our own specialness, but we will find that arrayed against us is an inexorable and inhuman power, manic and all-devouring, a power thirsty for the total annihilation of its object, that object being the earth and everything on it, all that exists.

(Kunzru Reference Kunzru2020, 227)

The intertwined logic of singularitarianism, de Maistrean conservatism, and contempt for humanity seems to him inescapable. But Kunzru has his narrator come to this “realization” just as he is slipping into madness.

There are some visions of the future one must simply reject and cannot really argue with; their premises lie too far outside the bounds of moral probity.Footnote 21 Eugenicist promotion of a humanity split by its degree of access to technology is among such visions. It is a dystopia (as depicted in series like Cyberpunk: Edgerunners and films like Elysium), not a rational policy proposal. The task of the intellectual is not to toy with such secular eschatologies, calculating the least painful glidepath toward them or the amelioration of their worst effects, but to refute and resist them in order to prevent their realization. The same can be said of “longtermist” rationales for depriving currently disadvantaged persons of resources in the name of the eventual construction of trillions of virtual entities (Torres Reference Torres2021, Reference Torres2022). Considering them too deeply, for too long, means entertaining a devaluation of the urgent needs of humanity today – and thus of humanity itself.

4.3 Conclusion

It will take a deep understanding of political economy, ethics, and psychology (and their mutual influence) to bound our emotional engagement with ever more personalized and persuasive technology. In an era of alexithymia, machines will increasingly promise to name and act upon our mental states.Footnote 22 Broad awareness of the machines’ owners’ agendas will help prevent a resulting colonization of the lifeworld by technocapital (Pasquale Reference Pasquale2020a). Culture can help inculcate that awareness, as the films and novels discussed have shown.Footnote 23

The chief challenge now is to maintain critical distinctions between the artificial and the natural, the mechanical and the human. One foundation of computational thinking is “reformulating a seemingly difficult problem into one we know how to solve, perhaps by reduction, embedding, transformation, or simulation” (Wing Reference Wing2004, 33). Yet there are fundamental human capacities that resist such manipulation, and that particularly put us on guard against simulation. Reduction of an emotional state to, say, one of six “reaction buttons” on Facebook often leaves out much critical context.Footnote 24 Simulation of care by a robot does not amount to care, because it is not freely chosen. Carissa Veliz’s (Reference Veliz2023) suggestion that chatbots not use emojis is wise because it helps expose the deception inherent in the representation of non-existent emotional states.
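How much such a reduction discards can be shown with a deliberately crude sketch. The mapping below is invented purely for illustration (a platform’s actual heuristics or models are not public and are certainly more elaborate), but the information loss is structural: very different states collapse into the same signal:

# Illustrative only: a fixed six-way reaction vocabulary forces very
# different emotional states into the same bucket, discarding context.
REACTIONS = ["like", "love", "haha", "wow", "sad", "angry"]

def to_reaction(feeling: str) -> str:
    # An invented, deliberately crude mapping standing in for whatever
    # heuristics or models a platform might actually use.
    feeling = feeling.lower()
    if any(w in feeling for w in ("grief", "mourning", "wistful")):
        return "sad"
    if any(w in feeling for w in ("furious", "betrayed")):
        return "angry"
    return "like"  # the default bucket absorbs everything else

for f in ["quiet grief for a friend", "wistful nostalgia", "mild approval"]:
    print(f"{f!r} -> {to_reaction(f)}")
# 'quiet grief for a friend' -> sad
# 'wistful nostalgia' -> sad   (very different states, same signal)
# 'mild approval' -> like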

To be obliged to listen to robots as if they were persons, or to care about their “welfare,” is to be distracted from more worthy ends and more apt ways of attending to the built environment. Emotional attachments to AI and robotics are not merely dyadic, encapsulated in a person’s and a machine’s interactions. Rather, they reflect a social milieu, where friendships may be robust or fragile, work/life balance well-respected or non-existent, conversations with persons free-flowing or clipped. It should be easy enough to imagine in which of those worlds robots marketed as “friends” or “lovers” would appear as plausible as human friends and lovers. That says more about their nature than whatever psychic compensations they afford.

5 Surveillance and Human Flourishing: Pandemic Challenges

For humans to flourish in a digital world, three emerging issues should be addressed, each of which was amplified by the global Coronavirus pandemic of 2020–2022. The first is that the use of data to solve human problems is frequently compromised by the failure to understand the character of the “human” problems at hand. The second, which extends beyond the pandemic, is to acknowledge that a key factor informing and galvanizing “datafied” responses is surveillance capitalism, whose emergence predated the pandemic. Shoshana Zuboff (Reference Zuboff2019) highlights some “human” consequences of this phenomenon. The third issue is to retrieve some sense of what “human flourishing” might mean, specifically as it relates to surveillance, and how this might affect how surveillance is done. For this, Eric Stoddart’s (Reference Stoddart2021) notion of the “common gaze” is briefly discussed as a starting point.

5.1 Human Problems, Surveillant Responses: The COVID-19 Pandemic

The Coronavirus pandemic that began in 2020 broke out in a digital world. This context is significant because, like the virus itself, it was novel. Even SARS in 2002 or H1N1 in 2009 did not occur in conditions that were recognized as “surveillance capitalism,” although the seeds of that conjunction were already sown (Mosco Reference Mosco2014; Zuboff Reference Zuboff2015). It is important because widespread “datafication” was increasingly characterized by dataism, “the widespread belief in the objective quantification and potential tracking of all kinds of human behaviour and sociality through online media technologies” (van Dijck Reference van Dijck2014, 198). Described in several other venues as having “religious” qualities, dataism accompanies descriptions of “Big Data” and further catalyzes phenomena such as “tech solutionism,” in which digital technology – that is, technology based in the computing sciences – is assumed to be the answer to human problems, prior to any full understanding of the problem in question (Morozov Reference Morozov2013).

Dataism, which was clearly evident in the “security” responses to 9/11, ballooned once again worldwide in 2020–2021 as a crucial response to the global pandemic. It is visible in the massive turn to apps, devices, and networked data systems that occurred as soon as the pandemic was recognized as such by the WHO in March 2020. Public health data, clearly believed to be vital to the accurate assessment and prediction of trends, was used to track the course of the virus; apps were developed to assist in the essential task of contact-tracing; and devices from wearables to drones were launched as means of policing quarantine and isolation. At the same time, other surveillant systems also expanded rapidly, not just to provide platforms connecting those obliged to remain at home but also to monitor the activities of working, learning, and shopping from home, thus drawing those activities into the gravitational field of surveillance capitalism. Some also began to suspect that all this digital activity would not dissipate once the pandemic was over; government, healthcare, and commerce would entrench the new surveillant affordances within their organizations on a permanent basis (Lyon Reference Lyon, Viola and Laidler2022b).

Thus, dataveillance, or surveillance-using-data,Footnote 1 received an unprecedented, though unevenly distributed, boost at a global level during the COVID-19 pandemic. Its impact – positive and negative – on human flourishing was widespread. Positively, it is reported that dataveillance permitted relatively rapid information about pandemic conditions to reach citizens in each locale. Negatively, in the name of accelerating pandemic responses, liberties were taken with data use, which had the effect of, among other things, diminishing the responsibility of data-holders to so-called data-subjects, the human beings whose activities produce the data in the first place.

In Ontario, Canada, for instance, privacy laws purportedly designed to give citizens control over the surveillance technologies that watch them were modified to allow commercial entities new access to public health data, in order to enable better statistical understanding of the pandemic. The definition of “deidentification” of data was also changed to accommodate new technological developments, even though the ability of data analytics to reidentify such data is also expanding (Scassa Reference Scassa2020). This allowed, for example, for new levels of data integration on the Ontario Health Data Platform, which was newly established in 2020 to “detect, plan and respond to the COVID-19 outbreak.”Footnote 2 Such changes were minor, however, when compared with similar activities in some other countries.
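To see why weakened “deidentification” standards matter, consider the classic linkage attack: records stripped of names can often be re-matched to individuals by joining them with a second dataset on shared quasi-identifiers. What follows is a minimal, hypothetical sketch in Python; the datasets, field names, and values are all invented for illustration and do not describe the Ontario platform or any real system.

```python
# Illustrative sketch only: a toy "linkage attack" showing why nominally
# deidentified records can remain identifiable. All data here are invented.
import pandas as pd

# "Deidentified" health records: direct identifiers removed, but
# quasi-identifiers (postal prefix, birth year, sex) retained.
health = pd.DataFrame({
    "postal_prefix": ["K7L", "M5V", "K7L"],
    "birth_year": [1958, 1990, 1958],
    "sex": ["F", "M", "F"],
    "diagnosis": ["diabetes", "asthma", "hypertension"],
})

# A public or commercial dataset containing names alongside the same
# quasi-identifiers (e.g. a voter roll or a marketing list).
public = pd.DataFrame({
    "name": ["A. Smith", "B. Jones"],
    "postal_prefix": ["K7L", "M5V"],
    "birth_year": [1958, 1990],
    "sex": ["F", "M"],
})

# Joining on the shared quasi-identifiers reattaches names to diagnoses:
# in practice, the "deidentified" data was only ever pseudonymous.
reidentified = health.merge(public, on=["postal_prefix", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```

The point of the sketch is structural rather than empirical: the more quasi-identifying fields a “deidentified” dataset retains, and the more auxiliary datasets exist to join against, the weaker any legal assurance built on the deidentification label becomes.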

5.2 Public Health Dataveillance

In January 2020, someone infected with COVID-19 criss-crossed the city of Nanjing, China, on public transit, putting many others at risk of infection en route. Authorities were able to track the person’s route, minute-by-minute, from the subway journey record. Details were published on social media, with warnings that others on the route should be checked. Facial recognition, security cameras, and social media, plus neighbourhood monitors and residential complex managers, together added up to an impressive surveillance arsenal, quickly adapted for the pandemic. A patient in Zhejiang province denied having had contact with anyone from Wuhan, but data analysis revealed contacts with at least three such persons. When cellphones are linked with national ID numbers, officials can easily make connections. But ordinary citizens could also use, for example, digital maps to check retrospectively whether they had been near known infected persons (Chin and Lin Reference Chin and Lin2022). Some privacy has to be sacrificed in such an emergency, Chinese lawyers argued (Lin Reference Lin2020).

But such trade-offs significantly shift the experience of being human in the digital world. They suggest that some circumstances demand that normal (at least in liberal democracies) expectations of privacy or data protection be downplayed or denied in favour of technocratic institutional control. In the case of the pandemic, where panicked responses seemed common, such demands were often made in haste. Moreover, the lack of transparency – such as the obscuring of significant changes within catch-all legislative action – makes it even harder both to identify and to resist the constraints placed on humans as objects in the data system. Some obvious objections relate to the risks of rapidly adding new dimensions to surveillance and to the fact that, lacking clear and respected sunset clauses, such changes may settle and solidify into longer-term laws. After all, just such patterns occurred following 9/11, producing what proved to be permanent “states of exception,” especially in the United States (Ip Reference Ip2013).

However, trade-offs also give the impression that some aspects of human life are at least temporarily dispensable for some greater good. This is surely a questionable if not dangerous assumption, given that many of the technologies mobilized against the virus were relatively untested, with unproven benefits, and that the risks they present to society may be considerable and long-term. As Rob Kitchin (Reference Kitchin2020, 1) argued early in the pandemic, the mantra should not be “public health or civil liberties” but both, and simultaneously. Of course, great efforts should be made to reduce the scourge of a global pandemic that causes so much human suffering and death. But health is just one feature of human flourishing – freedom from undue government interference and a sense of fairness in everyday social arrangements being two others. It would certainly be odd for a government to argue that, while strenuous efforts were being made to ensure freedom and equality, public healthcare concerns would be suspended or reduced.

This draws attention to the value of an over-arching sense of the significant conditions for human flourishing. It is worth considering carefully, then, what substantial aspects of being human should be underscored in a digital era. In what follows I touch on some that were relevant a generation ago, as well as some sparked by today’s pandemic context. A generation ago the technology was far less developed – the word “digital” was not used with today’s frequency, for instance – and the specific example discussed next pre-dates today’s “autonomous vehicles.”

Jeff Reiman’s (Reference Reiman1995) thoughtful discussion of the “Intelligent Vehicle Highway System (IVHS)” in Driving to the Panopticon, for example, drew attention to the fact that surveillance not only makes people visible but does so from a single point. What a contrast with today’s surveillance situation, where corporations gather data promiscuously from “public” or “private” sources to identify and profile us from multiple points! For Reiman, 30 years ago, privacy protection was not merely about “strengthening windows and doors” but about remembering that information collection gathers pieces of our public lives and makes them visible from a single point. It is almost quaint to recall that he considered privacy – “the condition in which others are deprived of access to you” – to be a right (1995, 30). But Reiman’s instincts were admirable. He was not toying with ideas about how drivers of “intelligent vehicles” might wish to restrict access to their personal data in ways that might disadvantage them as consumers, but asking what the possible consequences of such vehicle-use might be for human dignity.

Reiman (Reference Reiman1995) reminded readers that the IVHS would not exist in an information vacuum but in relation to a “whole complex of information” gathering across many government departments and organizations – an “informational panopticon,” as he thought of it. This, he avers, threatens both extrinsic and intrinsic freedom, carries symbolic risks, and may even produce what he called “psycho-political metamorphosis” (Reiman Reference Reiman1995, 40). On this last point, he pondered a surveillance future in which humans become less noble, interesting, and worthy of respect – deprived of dignity. “As more of your inner life is made sense of from without,” Reiman wrote, “… the need to make your own sense out of your inner life shrinks” (Reiman Reference Reiman1995, 41). But that same healthy inner life is required for political life in a democracy, and for judging between different political parties or policy options. The risks to privacy – the loss that comes from knowing that one is visible to unknown others – arise from datafied systems often set up for what were believed to be beneficent purposes.

Reiman argued that while one needs formal conditions for privacy – such as rights – one also needs material conditions, by which I think he means systems that are privacy protective by design and operation, precisely because they are an essential part of an environment that allows humans to exercise agency and experience dignity. Perhaps because he was writing a generation ago, his comments now seem almost quaint, and yet strangely relevant. Quaint, because they antedate the Internet in its interactive phase, social media, and surveillance capitalism. Relevant, in an even more urgent way, because of what happens when global pandemics – or other global crises – are unleashed on a world of already existing surveillance capitalism.

The COVID-19 pandemic was marked by a dataism-inspired celebration of tech solutionism by both corporate and government actors, who often seemed willing to play down the impact on the human beings in the system of the “privacy” implications of technical and legal shifts. Reiman’s comments are relevant, too, in a world of social media in which the platforms’ profit motive not only colonizes the “inner life” further but also undermines previous democratic practice, as the same profit-oriented social media boost political polarization and simultaneously threaten social justice, with apparent impunity. Each of these is a threat to human flourishing.

Many thoughtful people sense that some larger questions have to be answered to ensure that humans living in the emerging surveillance system can thrive, rather than merely working within the more familiar frames of privacy and data protection, valuable though those have been and still are. As someone who has been working in Surveillance Studies more-or-less since its inception (in the 1990sFootnote 3), I have found much inspiration among those who frame the issues – and thus the critical analysis of the human impact in actual empirical situations – in terms of data ethics in general and data justice in particular. This is consonant with my own long-term quest to understand, for example, the “social sorting” dynamics of much if not all surveillance today.

Such sorting scores and ranks individuals within arcane categories, leading to differential treatment (Lyon Reference Lyon2003). These practices are common to all forms of surveillance, from commercial marketing to policing and government, and they unavoidably and profoundly affect everyday human life in multiple contexts. Many cite so-called social credit systems in China as extreme examples of such social sorting by government departments, in tandem with well-known major corporations (Chin and Lin Reference Chin and Lin2022). However, while few governments enjoy such direct use and control of sorting systems – combined, in the Chinese and a few other cases, with the use of informers and spies (e.g. Pei Reference Pei2021) – such sorting is carried out constantly in countries around the world, with more random but no less potentially negative results. This is exacerbated today by the increasing use of AI, whose algorithms are often distorted from the outset by inadequate machine learning based on poor data sources. Black and poorer people in the United States, for instance, suffer systematic discrimination when sorting outcomes depend in part on AI. A striking case is that of facial recognition systems, which are notoriously limited in their capacity to distinguish major categories of targets. Joy Buolamwini, whose PhD at the Massachusetts Institute of Technology demonstrated the failures of facial recognition systems, especially in the case of Black women, responded by founding the Algorithmic Justice League. She speaks explicitly of “protecting the human” from the negative effects of AI (Buolamwini Reference Buolamwini2022).
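The structural point – that differential treatment flows from opaque scores built on proxy variables – can be made concrete in a few lines of code. The following is a deliberately toy sketch in Python; the weights, thresholds, postal prefixes, and applicants are all invented, and real sorting systems are vastly more complex, but the mechanism of quietly penalizing a proxy variable is the same in kind.

```python
# Illustrative sketch only: how an opaque scoring rule "social sorts"
# people into categories that determine their treatment. Every value
# below is hypothetical.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    postal_code: str   # a proxy variable often correlated with race/income
    years_employed: int

# Hypothetical "high-risk" areas baked into the model - in effect,
# an arcane category doing the real sorting work.
HIGH_RISK_PREFIXES = {"X1A", "X2B"}

def sort_applicant(a: Applicant) -> str:
    score = a.years_employed * 10
    if a.postal_code[:3] in HIGH_RISK_PREFIXES:
        score -= 50  # the hidden penalty attached to the proxy variable
    return "fast-track" if score >= 30 else "manual review"

applicants = [
    Applicant("A", "X1A 0A1", 6),  # longer employment, penalized by address
    Applicant("B", "M5V 2T6", 4),
]
for a in applicants:
    print(a.name, "->", sort_applicant(a))
# A -> manual review; B -> fast-track: differential treatment despite
# A's stronger employment record.
```

Neither the applicant nor, often, the institution deploying such a system can easily see that the postal-code penalty, not the employment record, determined the outcome – which is precisely why social sorting so readily escapes scrutiny.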

So while, 30 years ago, Reiman’s (Reference Reiman1995) concern for the “inner life” in intensifying surveillance conditions was justified – compare Zuboff’s (Reference Zuboff2019) critique of such “inner” manipulation via surveillance capitalism – today’s surveillance equally affects the “outer life” of material conditions, of social disadvantage, by means of social sorting, dubious data-handling methods, biased algorithms, and so on (Cinnamon Reference Cinnamon2017). Let me comment on the work of Linnet Taylor (Reference Taylor2017), as a starting point for discussion, before taking this further, using other pandemic surveillance challenges to point to a larger context within which people’s “inner” and “outer” lives might be placed.

With respect to the pandemic, Taylor (Reference Taylor2020) observes that data are far from certain – even death rates are hard to calculate accurately – and yet are often treated as accurate and objective proxies for human experiences and understandings (due, arguably, to data’s status-inflation under dataism). Thus, she turns to an ethics of care that is embodied, that takes account of what can be known about the person within the system, and that considers the problems to be overcome from there. People are seen as members of collectives, bound by responsibilities to others, not as mere data points defined by their responses to rational incentives.

This prompts a quest to understand those people made invisible or out of focus by official statistics – the elderly-in-care, prisoners, migrant workers, and the like, each of whom has their own reasons for mobility, or lack thereof, among other pandemic-related factors. Much “pandemic data” was created by policy, rather than vice versa. Thus, as Taylor shows, even reducing the number of deaths can become a policy target in some circumstances, as occurred under President Trump in his first term. He proposed to keep deaths under 100,000 in a highly instrumental fashion that allowed for data-collection practices confirming the goal. Something similar marked Boris Johnson’s pursuit of a “herd immunity” policy in the United Kingdom, warning of untimely deaths while declining to restrict the size of public gatherings. The dynamics of data collection and use are very uneven and work to obscure the human aspects of the problems that data are being mobilized to solve. As Taylor (Reference Taylor2020) says, “statistical normality is abnormal – it is the minority position. There is no ‘herd,’ only a mosaic of different vulnerabilities” that are experienced in the context of each human life.

Such a perspective builds on one of Linnet Taylor’s (Reference Taylor2017) earlier contributions, on data justice. The granular data sources enabling companies and government departments to sort, categorize, and intervene in people’s lives are seldom yoked to a social justice agenda. Especially among marginalized groups, distributed visibility has consequences. In What Is Data Justice? The Case for Connecting Digital Rights and Freedoms Globally, Taylor (Reference Taylor2017) carefully outlines various approaches to data justice and proposes that it may be defined as “fairness in the way people are made visible, represented and treated as a result of the production of digital data.” She also outlines three “pillars” of data justice, building on case-studies and discussions of the theme around the world: (in)visibility, (dis)engagement with technology, and antidiscrimination (Taylor Reference Taylor2017). And she pleads not merely for “responsible” but for “accountable” technology, which, arguably, would make transparency and therefore trust much more meaningful realities.

These reflections on the “pandemic challenges” to questions of surveillance and human flourishing certainly go beyond what Reiman was arguing in the mid-1990s, and yet they still resonate with his core argument about the challenges to humanness in what was then termed a time of “information technology.” Today’s challenge is to confront the data-driven character of surveillance, which in turn is strongly associated with the profit-driven activities of surveillance capitalism, now deeply implicated in responses to the COVID-19 pandemic. People are now made visible in ways of which Reiman did not even dream. And these have consequences that relate not only to the potential power of government and the erosion of the “inner life,” but also to the production and reproduction of social inequalities, both local and global. People are “made visible, represented and treated” by surveillance, and such activities demand viable ethical practices suited to each human context.

The global COVID-19 pandemic demonstrated the need for data justice and data ethics in new and stark ways, again both locally and globally. Never before had so much information circulated, accurately or otherwise, about a pandemic, and never before had so much attention been paid to data-driven statistics. No doubt, within the swirling data currents, some accurate and helpful moves were made in public health. But, all too often, the lines of familiar, historical disadvantage were traced once more, sometimes reinforcing their hold.

Vulnerability is surely linked with the use of Big Data, a term more often associated with the merely technical “Vs” of volume, velocity, and variety. This applies to rich countries like Canada as well as to much poorer ones, such as India. In others, such as China, it is harder to tell just how far pandemic surveillance actually alleviated the contagion – although the social costs were clearly high (see e.g. Ollier-Malaterre Reference Ollier-Malaterre2024; Xuecun Reference Xuecun2023). Arguably, in a more human world, public health, as well as access to health and other data, would be under much more local guidance and control, leaving less space for profit and manipulation.

5.3 Surveillance Capitalism

Dataism featured strongly in the public health responses to the pandemic, and it also characterizes the surveillance capitalism that was at the heart of many pandemic interventions, often to the detriment of the people intended to be served. Dataism has become part of the cultural imaginary (van Dijck Reference van Dijck2014) of many contemporary societies, where its dynamics but not its inner workings are commonly understood. By that I mean two things. Firstly, data, hyped by data analysts from the late twentieth century, by tech responses to 9/11, and especially during the pandemic, has acquired a glowing veneer in much of the popular press and media as the source of “solutions” to human crises (Morozov Reference Morozov2013). Secondly, few in authority, including some data analysts, can really claim to understand how algorithms work in practice. Indeed, there is evidence suggesting that the very training of many data analysts is decontextualized. How algorithms might “work in practice” is not necessarily a central concern for computer science students. At least in North America and Europe, they are often taught in ways that assume “algorithmic objectivity” and “technological autonomy.” This kind of thinking tends to privilege technocratic understandings over human experiences of a given phenomenon.

This “disengagement” from the actual human effects and implications of data science is also highly visible in surveillance capitalism, as Shoshana Zuboff (Reference Zuboff2019) observes. In her hands, it has much to do with what she calls “inevitabilism.” This doctrine, propounded by the “proselytizers of ubiquitous computing,” states that what is currently partial will soon become a new phase of history in which data science has relieved humanity of much tedious decision-making (Zuboff Reference Zuboff2019, 194). Forget human agency and the choices of communities; just stand by and watch “technologies work their will, resolutely protecting power from challenge” (Zuboff Reference Zuboff2019, 224). For Google, one route to this end involved Sidewalk Labs, a smart city initiative under the Alphabet (Google’s parent company) umbrella. Such cities, one of which almost began life in Toronto, would have had “technology solve big urban problems” and “make a lot of money” (Zuboff Reference Zuboff2019, 229). Among other things, Toronto’s Sidewalk Labs bid failed because someone asked the questions that Zuboff argues are too often forgotten: “Who knows? Who decides? Who decides who decides?” (Zuboff Reference Zuboff2019, 230).

The costs of this disconnection from the human were evident during the pandemic. Much evidence exists of data science’s disengagement from questions about how algorithms would be used, and of an inevitabilism holding that data science would provide all that was necessary for a promised recovery and return to “normal.” Citizens were often told simply to “listen to the science.”Footnote 4 Governments wished to be seen as “doing something,” and tech companies promised systems and software that would address the public health crisis effectively and rapidly.

A case in point in Canada is the way that the telecom company Telus sold mobile data to the Public Health Agency of Canada from the early stages of the pandemic, something that was not revealed to the public until the end of 2021. This prompted a parliamentary committee to debate the meaning and significance of the move.Footnote 5 Various important questions were raised by the federal Office of the Privacy Commissioner. Among them was the reminder that even nominally “deidentified” data still has personal referents and should still be subject to legal protection. Surveillance frequently requires sensitive regulation – and may indeed need to be dismantled entirely if its results have the potential to threaten human flourishing. In pandemic conditions, inappropriate and avoidable liberties seem to have been taken with commercial data in the hands of a government agency.

5.4 Surveillance as the “Common Gaze”

Here, in summary, are some of the surveillance challenges to human flourishing that were reinforced by the pandemic. Most obvious, perhaps, is the opportunism of tech companies, which coincided with the unreadiness of governments for public health crises. This was fertile soil in which tech solutionism flourished in attempts to slow the spread of the COVID-19 virus. Such opportunism builds easily on the dataism that has been establishing itself as a major feature of the twenty-first-century zeitgeist in many countries. Dataism, built on older forms of technological utopianism, is myopic and misleading in its approach to data. As José van Dijck (Reference van Dijck2014) observes, dataism assumes the objectivity of quantification and the potential for tracking human behaviour and sociality from online data. It also presents (meta)data as raw material to be analyzed and processed into predictive algorithms concerning human behaviour (van Dijck Reference van Dijck2014, 199).

The problems for ordinary human life arise from the strong likelihood that the conditions for flourishing are not fulfilled when data is granted a superior role in indicating and attempting to ameliorate social problems. As Jacques Ellul (Reference Ellul1967) astutely observed of the “technological imperative” in the 1960s, it is frequently the case that ends are made to fit the – now digital – means. Today, this critique is updated by Evgeny Morozov (Reference Morozov2013) as “tech solutionism,” which had a heyday during the pandemic. As many have observed, pandemic responses frequently misconstrued and failed to address human lived realities.

Today, it is relatively easy to find materials for a radical critique of current surveillance practices, dependent as they are on varying degrees of dataism and increasingly underpinned by surveillance capitalism. Less straightforward – and perhaps fraught with more risks – is the task of proposing alternatives to the prevailing practices. It is not that there is a lack of specific suggestions, from many points of view, as to how things might be done differently, but that a coherent general sense of “how to go on,” one that might be agreed upon across such lines, is missing. After all, much of the world’s population lives in increasingly diverse societies, where finding overarching frameworks for living together is a constant challenge (Taylor Reference Taylor2007).

Human beings require many things in order truly to flourish, not least that they be recognized as full persons, with needs and hopes, always located in a relational-social context. In a Canadian context, key thinkers such as Charles Taylor and Will Kymlicka have discussed for decades how to develop an inclusive sense of common nationhood in which different groups are recognized as playing an equal and appropriate part in the nation.Footnote 6 That recognition is vital at several levels but, for both Taylor and Kymlicka, it relates to a sense of basic humanness. Needless to say, their work continues to be debated, importantly by those, especially from feminist and anti-Black racism positions, who consider that it does not go far enough in recognizing some groups.Footnote 7

This brings me to Eric Stoddart’s (Reference Stoddart2021) work, which focuses on the ways in which surveillance, through its categorizing and sorting – characteristics reinforced by dataism and surveillance capitalism – is socially divisive and militates against both recognition and equal treatment. In particular, such sorting often builds on and extends already existing differences within so-called multicultural societies. Stoddart (Reference Stoddart2021) concludes The Common Gaze with an engaged afterword on some ways in which the pandemic experience of surveillance highlights the relevance of his thesis. For instance, he shows how some poorer communities were neglected by healthcare authorities (Stoddart Reference Stoddart2021, 221). His alternative, human-oriented call is for surveillance as a gaze for the common good – surveillance practiced from a position of compassionate solidarity – which would also demand “a preferential optic for the poor,” in which those likely to be marginalized receive special attention rather than being abandoned (Stoddart Reference Stoddart2021, xiii). From here, Stoddart shows how data analytics affects certain vulnerable groups more than others and argues that the common gaze resists the notion that collateral damage to them is somehow acceptable. Rather, surveillance data, gathered and analyzed differently, could support efforts to shine light on the plight of specific groups, such as the elderly.

Strikingly, Stoddart does not shrink from considering people as “living human databases,” as long as this is not done in a reductionist fashion. Rather, it can be a reminder that we all live as “nodes in complex networks of relationships” (Stoddart Reference Stoddart2021, 205). While practices such as self-quantification tend to turn interest inward, the common gaze aims to repair the social fabric, with solidarity rather than mere connection at its heart.

Eric Stoddart’s The Common Gaze (Reference Stoddart2021) is rooted in socio-theological soil; anyone familiar with liberation theology will recognize the notion of a “preferential option for the poor” as coming from Gustavo Gutiérrez (Reference Gutiérrez2001). Stoddart’s neat recycling of the term for use in a surveillance context – “a preferential optic for the poor” – is a timely reminder of the immense power of surveillance in today’s digital world. How we are seen relates directly to how we are represented and treated. Therefore, to question how we see becomes truly critical in more than one sense of the word. And it speaks profoundly to how surveillance studies are performed, insofar as that enterprise is intended to contribute to a more truly human world.

Having noted that the common gaze grows from theological soil, we should also note that the idea of human flourishing, with which it is closely allied, is a concept that actually transcends the barriers sometimes erected – properly, in some senses, to preserve particularity – between different theological positions. As Miroslav Volf (Reference Volf2016) argues in Flourishing, the notion of human flourishing is common to many religions, including the major Abrahamic religions of Jews, Christians, and Muslims. He offers it as a uniting factor, a mark of our common humanity, in a globalized world. If he is correct, and if, beyond that, Stoddart’s (Reference Stoddart2021) work helps us grapple with surveillance in digitized societies under the banner of a common gaze, then this is a goal worth pursuing. Why? Because it offers hope, at a time when hope seems in short supply.

5.5 A Larger Frame

The challenge is how to turn the question of surveillance, human flourishing, and the common gaze into a matter that can be addressed in relation to the everyday lives of citizens. So, what might be said about digital surveillance that connects its practices and discourses with wider debates, ones that are sometimes deemed irrelevant to social scientific or policy-related scholarship?Footnote 8 One observation is that scholars such as José van Dijck (Reference van Dijck2014) use words such as “belief” to describe the power attributed to data in dataism, indicating an almost “religious” commitment to the findings of data scientists. Another is that the theorist of the “common gaze” writes in an explicitly “religious” context of social theology. Such “larger frames” – though they need not be formally religious in any institutional or theological sense – are necessary to social science and policy studies because these disciplines cannot function without making certain assumptions that cannot be “proved” but must nonetheless be presupposed.

And as soon as terms such as “data justice” and especially “human flourishing” come into play, the discussion is again in the realm of “assumptions,” or beliefs about normative matters – about what should be. This does not for a moment mean that such analyses lack rigour, clarity, consistency, or the other qualities rightly expected of scholarly work. It simply means that the assumptions about being human, which are all too often obscured by dataism, should be brought into the open, to be scrutinized, criticized, and debated. Of course, if the assumptions can be traced to a “theological” source, this might taint them in the eyes of some who, like Max Weber, consider themselves “religiously unmusical” (Weber Reference Weber, Lepsius, Mommsen, Rudhard and Schön1994, 25). However, Weber was a Lutheran ChristianFootnote 9 and, while he did not feel qualified to speak “theologically,” his work certainly speaks both to sociology and to theology.

Here, the notion of “human flourishing” has been mobilized, at least in a rudimentary fashion, to indicate a larger frame for considering questions of digital surveillance in the twenty-first century. The term is common to the major Abrahamic religions and will thus resonate with large swathes of the global population. And it may be linked, constructively, with terms used by various surveillance scholars, such as “data justice.” As a goal for refocusing attention on the human in surveillance activities and systems, it deserves serious attention.

Footnotes

2 Platform City People

1 Start-Up Nation Central, https://startupnationcentral.org/

2 Prime Minister of Israel official Twitter account @IsraeliPM, August 29, 2018: https://twitter.com/IsraeliPM/status/1034849460344573952

3 “TESCREAL” stands for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism; the term was coined by Timnit Gebru and Émile P. Torres (cf. Gebru and Torres Reference Gebru and Torres2024).

3 Robots, Humans, and Their Vulnerabilities

1 In Machines Like Me, Alan Turing is referred to as: “Sir Alan Turing, war hero and presiding genius of the digital age” (McEwan Reference McEwan2019, 2).

2 There are many examples of this attitude, most recently from the Israel–Palestine war. Each side denies the other the property of being human; and, like Rorty, Barenboim argues: “But any moral equation that we might set up must have as its basis this fundamental understanding: There are human beings (‘Menschen’) on both sides. Humanity is universal, and recognizing this truth on both sides is the only way. … Of course, especially now, you have to allow for fears, despair and anger – but the moment this leads to us denying each other’s humanity, we are lost. … Both sides must recognize their enemies as human beings and try to empathize with their perspective, their pain and their distress” (Barenboim Reference Barenboim2023, translation mine, B.R.). Many further examples from contemporary politics and warzones could be cited.

3 See, on the debate about the “neutrality” of technology and on the question of whether technologies are the frame, the Gestell, that alienates us from the world, for instance Borgmann (Reference Borgmann1984), Ihde (Reference Ihde1990), and Verbeek (Reference Verbeek2011).

4 See for instance the so-called relational approach by Coeckelbergh (Reference Coeckelbergh2022): Coeckelbergh, too, starts with the idea of human vulnerability and seeks to interpret it normatively; see also Block et al. (Reference Block, Seifi, Hilliges, Gassert and Kuchenbecker2023) on research on “hug robots.”

5 See also Seifert et al. (Reference Seifert, Friedrich and Schleidgen2022a, 189) on the problems of deception and manipulation; the whole article is very informative and convincingly demonstrates the hidden problems in research programs on human–robot interaction.

6 For Freud (Reference Freud and McLintock2003), the uncanny involves more than the mere intellectual uncertainty to which, he argued, Jentsch had reduced it. Both accounts go back to the automaton Olimpia in E. T. A. Hoffmann’s story “The Sandman” – the dancing doll made of wood that seems to have human eyes and with whom the protagonist Nathanael falls in love. See Hoffmann (Reference Hoffmann and Hughes2020); see also Misselhorn (Reference Misselhorn2009).

7 The GDPR Art 22 states that “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her” (European Union, 2016). See the interesting article by Brennan-Marquez et al. (Reference Brennan-Marquez, Levy and Susser2019).

8 This is contested in the case of friendships, and there is indeed research on friendships between humans and AI. Humans can and do have good and satisfying relationships with robots – robots that are clearly recognizable as such. This is palpable in the recent development of AI and the “friendships” that are possible between humans and such intelligent (ro)bots (see Calvo et al. Reference Calvo, D’Mello, Gratch, Kappas, Calvo, D’Mello, Gratch and Kappas2014, and the whole volume they edited; also Block et al. Reference Block, Seifi, Hilliges, Gassert and Kuchenbecker2023). Much research has been done on the ethical-philosophical as well as the technical side of friendships with robots, especially the relation between robots and children: children see them as friends and companions. Many people report that they have good, even trusting and close, relations with their bots, describing them as friends without deceiving themselves about the nature of the relation (see Danaher Reference Danaher2019; Prescott Reference Prescott2021; Ryland Reference Ryland2021). This connects to the ethical idea of different forms of friendship, which goes back to Aristotle, for whom not every friendship relies on or expresses a mutuality of feelings – only those that are to be called true friendships do (Friedman Reference Friedman1993; Roessler Reference Roessler2015).

4 Cultural Foundations for Conserving Human Capacities in an Era of Generative Artificial Intelligence: Toward a Philosophico-Literary Critique of Simulation

1 “Last June, a tweaked version of GPT-J, an open-source model, was patched into the anonymous message board 4chan and posted 15,000 largely toxic messages in 24 hours. … What if … millions or billions of such posts every single day [began] flooding the open internet, commingling with search results, spreading across social-media platforms, infiltrating Wikipedia entries, and, above all, providing fodder to be mined for future generations of machine-learning systems? … We may quickly find ourselves facing a textpocalypse, where machine-written language becomes the norm and human-written prose the exception” (Kirschenbaum Reference Kirschenbaum2023).

2 LLMs coupled with machine vision programs to evade CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart) are the specific disinformation, misinformation, and demoralization threat anticipated here. While disinformation and misinformation involve the weaponization of false information, demoralization of a group or polity can arise when its members or citizens are bombarded by one-sided narratives (of any level of truth) designed to instill shame and doubt about the group or polity, particularly when such narratives are sponsored by authoritarian regimes or extremists who severely limit or eliminate the exposure of their own subjects or followers to similarly demoralizing narratives. This theory of demoralization builds on the account of asymmetrical openness to argument which I developed in earlier work (Pasquale Reference Pasquale, Sætnan, Schneider and Green2018).

3 For a review article compiling many non-fiction works that either reflect or document such sentiments, see Jamie Harris and Jacy Reese Anthis (Reference Harris and Anthis2021). Novelists may also seek to cultivate such sentiments; see, e.g., Kazuo Ishiguro’s (Reference Ishiguro2021) Klara and the Sun.

4 As James Boyd White (Reference White1989, 2016) has argued, “A literary text is not a string of propositions, but a structured experience of the imagination, and it should be talked about in a way that reflects its character.” A “structured experience of the imagination” does not offer us propositional truths about the world. However, it gives us a sense of what it means to “live forwards” (in Kierkegaard’s formulation) even as we understand backwards.

5 Note that computer science researchers are now seeking to detect LLM-generated text via characteristics like “burstiness” (Rogers Reference Rogers2022; Tian Reference Tian2023).

6 For an account critically engaging with this diagnosis, see William B. Parsons (Reference Parsons1998).

7 See Emile Torres (Reference Torres2023) describing and critiquing long-termists’ projection that “humanity can theoretically exist on Earth for another 1 billion years, and if we spread into space, we could persist for at least 10^40 years (that’s a 1 followed by 40 zeros). More mind-blowing was the possibility of these future people living in vast computer simulations running on planet-sized computers spread throughout the accessible cosmos, an idea that [philosopher Nick] Bostrom developed in 2003. The more people who exist in this ‘Matrix’-like future, the more happiness there could be; and the more happiness, the better the universe will become.” See also Jonathan Taplin (Reference Taplin2023).

8 Critical data for today’s affective computing arose in part from efforts to classify human emotions in order to teach social skills to autistic children. This therapeutic origin of the data is a double-edged sword, suggesting both a noble original mission and a danger of improper medicalization once it has been adopted beyond the therapeutic setting.

9 I endorse the use of scare quotes (here, for the word “converse”) to mark actions taken by robots or AI that would be described without such quotation marks if undertaken by a human. The most accurate approach would be to more fully explain the mechanism and optimization functions of the relevant AI (Tucker Reference Tucker2022); here, for example, to describe Tom’s statements as the product of a next-word-prediction algorithm designed to stimulate certain emotional responses from and interaction with Emma. However, given the pressure to describe scenarios expeditiously, and to convey the confusion that is already common in responses to them, the expedient of scare quoting robotic and AI simulations of human action is taken here.

10 The pronoun “it” is important here, forestalling the improper anthropomorphization that a pronoun like “he” or “him” would encourage. Unfortunately, many persons are already referring to digital personal assistants with personifying pronouns; as one study noted, “Only Google Assistant, having a non-human name, is referred to as it by a majority of users. However, users still refer to it using gendered pronouns just under half of the time” (Abercrombie et al. Reference Abercrombie, Curry, Pandya, Rieser, Costa-jussà, Gonen, Hardmeier and Webster2021, 27). This is unfortunate because such anthropomorphization can be profoundly misleading regarding the nature and capacities of AI (Abercrombie et al. Reference Abercrombie, Curry, Dinkar, Rieser and Zakat2023).

11 Taylor (Reference Taylor1985b, 48) also explains that “by import I mean a way in which something can be relevant or of importance to the desires or purposes or aspirations or feelings of a subject; or otherwise put, a property of something whereby it is a matter of non-indifference to a subject.” For more on the epistemic status of emotions, see Martha Nussbaum (Reference Nussbaum2001).

12 For a fuller understanding of the depth of the problem of loneliness, and particularly male loneliness, in the US, see Kathryn Edin et al. (Reference Edin, Nelson, Cherlin and Francis2019); and also Richard V. Reeves (Reference Reeves2022) describing the rise in the percentage of men reporting “no close friends” from 3% in 2001 to 15% in 2015.

13 MIT Professor Sherry Turkle “has grown increasingly concerned about the effects of applications that offer ‘artificial intimacy’ and a ‘cure for loneliness.’ Chatbots promise empathy, but they deliver ‘pretend empathy,’ she said, because their responses have been generated from the internet and not from a lived experience. Instead, they are impairing our capacity for empathy, the ability to put ourselves in someone else’s shoes.”

14 Xiaoice “has leaned into digital humans and avatars. It leads the ‘virtual boyfriend and girlfriend’ market with 8 million users. As part of this stream, Xiaoice has an ‘X Eva’ platform which hosts digital clones of Internet celebrities to provide chat and companionship services.”

15 This and the next several paragraphs are drawn from my Commonweal article “Is AI Poised to Replace Humanity?” (Pasquale Reference Pasquale2023).

16 For an affirmative response in another sociotechnical realm, see Pasquale (Reference Pasquale2010).

17 This hierarchy is expertly analyzed by Jenna Burrell and Marion Fourcade (Reference Burrell and Fourcade2021) and is closely related to the problem of economics’ displacement of other forms of knowledge in policy making. See Marion Fourcade et al. (Reference Fourcade, Ollion and Algan2015).

18 For a critical perspective on this logic of AI, rooted in a Marxian account of automation, see Matteo Pasquinelli (Reference Pasquinelli2023).

19 See also David Means (Reference Means2023): “A.I. will never feel the sense of mortality that forms around an unfinished draft, the illogic and contradictions of the human condition, and the cosmic unification of pain and joy that fuels the artistic impulse to keep working on a piece until it is finished and uniquely my own.”

20 For a fuller articulation (and critique) of this accelerationist vision of future evolution, see Benjamin Noys (Reference Noys2014).

21 As Charles Taylor (Reference Taylor1985b) observes, in the social sciences “in so far as they are hermeneutical there can be a valid response to ‘I don’t understand’ which takes the form, not only ‘develop your intuitions,’ but more radically ‘change yourself.’ This puts an end to any aspiration to a value-free or ‘ideology-free’ social science” (54).

22 For a compelling description of the political entailments of alexithymia, see Manos Tsakiris (Reference Tsakiris2020): “The psychological concept of alexithymia (meaning ‘no words for feelings’) captures this difficulty in identifying, separating or verbally describing our feelings. An emotional prescription (such as ‘you should feel …’) and affect-labelling (such as ‘angry’) can function as the context within which people will construct their emotions, especially when we’re interoceptively dysregulated.”

23 Critics of my approach may question the epistemic status of narratives in developing moral intuitions and policy positions. While space limitations preclude a full response here, I have made a case for the relevance of literature to moral and policy inquiry in Pasquale (Reference Pasquale2020b).

24 For just one of many examples of the type of context that may matter, see Jerome Kagan (Reference Kagan2019). For powerful critiques of reductionism in many affective computing scenarios, see Andrew McStay (Reference McStay2023).

5 Surveillance and Human Flourishing: Pandemic Challenges

1 Surveillance occurs by many means. Human ocular vision for surveillance has been augmented mechanically, especially from the nineteenth century, and digitally, from the later twentieth, in order to make lives “visible” to those seeking such information.

3 The 1990s was when the term “surveillance studies” began to be used. A number of authors had started doing surveillance studies at least from the 1970s, with Michel Foucault’s historical investigations, or James Rule’s more empirical sociology – earlier if one includes the work of Hannah Arendt. See e.g. Xavier Marquez’s (Reference Marquez2012) Spaces of Appearance and Spaces of Surveillance and David Lyon’s (Reference Lyon2022a) Reflections on 40 Years of Surveillance Studies.

4 Government of Canada records show that “listening to the science” was a key pandemic debate in that country. See www.ourcommons.ca/DocumentViewer/en/44-1/house/sitting-45/hansard.

5 The ETHI Committee included a speech by the federal Privacy Commissioner, Daniel Therrien, on February 7, 2022. See: www.priv.gc.ca/en/opc-actions-and-decisions/advice-to-parliament/2022/parl_20220207/

7 See e.g. Yasmeen Abu-Laban and Christina Gabriel (Reference Abu-Laban and Gabriel2008). To hark back to the discussion of pandemic, see Abu-Laban (Reference Abu-Laban2021).

8 See e.g. Lyon et al. (Reference Lyon2022).

9 See e.g. William Swatos and Peter Kivisto (Reference Swatos and Kivisto1991) and Joseph Scimecca (Reference Scimecca2018, 18).

References

Agence France Presse (AFP). “UN Rights Experts Denounce Planned Saudi Executions of Megacity Opponents.” The Guardian, May 3, 2023. www.theguardian.com/world/2023/may/03/un-rights-experts-denounce-planned-saudi-executionsGoogle Scholar
Ahmed, Sara. “A Phenomenology of Whiteness.” Feminist Theory 8, no. 2 (2007): 149168. https://journals.sagepub.com/doi/10.1177/1464700107078139CrossRefGoogle Scholar
Barbrook, Richard, and Cameron, Andy. “The Californian Ideology.” Science as Culture 6, no. 1 (1996): 4472. www.researchgate.net/publication/249004663_The_Californian_Ideology.CrossRefGoogle Scholar
Berghaus, Günter. Futurism and Politics: Between Anarchist Rebellion and Fascist Reaction, 1909–1944. New York: Berghahn Books, 1996.Google Scholar
Blockchain LLC. “Nevada Innovation Zone Facts.” June 19, 2021. https://innovationzonefacts.com/. [archived at: https://web.archive.org/web/20210619211748/https://innovationzonefacts.com/]Google Scholar
Bousquet, Antoine, Grove, Jairus, and Shah, Nisha. “Becoming War: Towards a Martial Empiricism.” Security Dialogue 51, no. 2–3 (2020): 99118. https://journals.sagepub.com/doi/full/10.1177/0967010619895660.CrossRefGoogle Scholar
Bunnell, Tim. “Multimedia Utopia? A Geographical Critique of High-tech Development in Malaysia’s Multimedia Super Corridor.” Antipode 34, no. 2 (2002): 265295. https://ap5.fas.nus.edu.sg/fass/geotgb/Final%20Paper.pdf.CrossRefGoogle Scholar
Caldeira, Teresa P. R. City of Walls: Crime, Segregation, and Citizenship in São Paulo. Oakland, CA: University of California Press, 2001.Google Scholar
Calder, Kent E. Singapore: Smart City, Smart State. Washington, DC: Brookings Institution Press, 2016.CrossRefGoogle Scholar
Cardullo, Paolo, and Kitchin, Rob. “Smart Urbanism and Smart Citizenship: The Neoliberal Logic of ‘Citizen-focused’ Smart Cities in Europe.” Environment and Planning C: Politics and Space 37, no. 5 (2019): 813830. https://journals.sagepub.com/doi/abs/10.1177/0263774X18806508.Google Scholar
Coletta, Claudio, Evans, Leighton, Heaphy, Liam, and Kitchin, Rob, eds. Creating Smart Cities. New York: Routledge, 2018.CrossRefGoogle Scholar
Datta, Ayona. “The Digital Turn in Postcolonial Urbanism: Smart Citizenship in the Making of India’s 100 Smart Cities.” Transactions of the Institute of British Geographers 43, no. 3 (2018): 405419. https://rgs-ibg.onlinelibrary.wiley.com/doi/full/10.1111/tran.12225.CrossRefGoogle Scholar
Datta, Ayona. “Postcolonial Urban Futures: Imagining and Governing India’s Smart Urban Age.” Environment and Planning D: Society and Space 37, no. 3 (2019): 393410. https://journals.sagepub.com/doi/10.1177/0263775818800721?icid=int.sj-abstract.citing-articles.89.CrossRefGoogle Scholar
Davis, Mike. Ecology of Fear: Los Angeles and the Imagination of Disaster. New York: Vintage, 1999.Google Scholar
Davis, Mike, and Monk, Daniel Bertrand. Evil Paradises: Dreamworlds of Neoliberalism. New York: The New Press, 2008.Google Scholar
Deleuze, Gilles. “Postscript on the Societies of Control.” October 59 (1992): 37. www.jstor.org/stable/778828.Google Scholar
Eliot, David, and Wood, David Murakami. “Culling the FLoC: Market Forces, Regulatory Regimes and Google’s (mis)Steps on the Path Away from Targeted Advertising.” Information Polity 27, no. 2 (2022): 259274. https://content.iospress.com/articles/information-polity/ip211535.CrossRefGoogle Scholar
Fitzmaurice, Andrew. “The Genealogy of Terra Nullius.” Australian Historical Studies 38, no. 129 (2007): 115. www.tandfonline.com/doi/abs/10.1080/10314610708601228.CrossRefGoogle Scholar
Foucault, Michel. The Birth of Biopolitics: Lectures at the Collège de France, 1978–1979. New York: Palgrave, 2008.Google Scholar
Foucault, Michel. Security, Territory, Population: Lectures at the Collège de France 1977–1978. London: Picador, 2007.Google Scholar
Gebru, Timnit and Torres, Émile P.. “The TESCREAL Bundle: Eugenics and the Promise of Utopia through Artificial General Intelligence.” First Monday, 29, no. 4 (2024). https://doi.org/10.5210/fm.v29i4.13636.Google Scholar
Gilliard, Chris, and Golumbia, David. “Luxury Surveillance.” Real Life Magazine, July 6, 2021. https://reallifemag.com/luxury-surveillance/.Google Scholar
Golumbia, David. The Cultural Logic of Computation. Cambridge, MA: Harvard University Press, 2009.CrossRefGoogle Scholar
Golumbia, David. The Politics of Bitcoin: Software as Right-wing Extremism. Minneapolis: University of Minnesota Press, 2016.Google Scholar
Graham, Stephen. Cities under Siege: The New Military Urbanism. New York: Verso Books, 2011.Google Scholar
Graham, Stephen, and Marvin, Simon. Splintering Urbanism. New York: Routledge, 2001.Google Scholar
Hall, Robert. E., Bowerman, B., Braverman, J., Taylor, J., Todosow, H., and von Wimmersperg, U.. “The Vision of a Smart City.” Presented at the 2nd International Life Extension Technology Workshop, Paris, France, September 28, 2000. https://digital.library.unt.edu/ark:/67531/metadc717101/.Google Scholar
Halpern, Orit, LeCavalier, Jesse, Calvillo, Nerea, and Pietsch, Wolfgang. “Test-Bed Urbanism.” Public Culture 25, no. 2 (2013): 272306. www.researchgate.net/publication/270637741_Test-Bed_Urbanism.CrossRefGoogle Scholar
Hart, Robert David. “Saudi Arabia’s Robot Citizen Is Eroding Human Rights.” Quartz, February 14, 2018. https://qz.com/1205017/saudi-arabias-robot-citizen-is-eroding-human-rights/.Google Scholar
ITU. “Internet of Things Global Standards Initiative.” ITU. 2012. www.itu.int/en/ITU-T/gsi/iot/Pages/default.aspx.Google Scholar
Jasanoff, Sheila, and Kim, Sang-Hyun, eds. Dreamscapes of Modernity. Sociotechnical Imaginaries and the Fabrication of Power. Chicago: Chicago University Press, 2015.CrossRefGoogle Scholar
King, Thomas. The Inconvenient Indian: A Curious Account of Native People in North America. Minneapolis: University of Minnesota Press, 2013.Google Scholar
Kitchin, Rob. The Data Revolution: Big Data, Open Data, Data Infrastructures and their Consequences. New York: Sage, 2014.Google Scholar
Klein, Naomi. “Disaster Capitalism.” Harper’s Magazine, 2007. https://harpers.org/archive/2007/10/disaster-capitalism/.Google Scholar
Knox, Paul. “Starchitects, Starchitecture and the Symbolic Capital of World Cities.” In International Handbook of Globalization and World Cities, edited by Derudder, Ben, Hoyler, Michael, Taylor, Peter J., and Witlox, Frank, 275283. New York: Edward Elgar Publishing, 2011.Google Scholar
Leadbeater, Charles. Living on Thin Air: The New Economy. London: Penguin, 2000.Google Scholar
MacDougall, Ian, and Simpson, Isabelle. “A Libertarian ‘Startup City’ in Honduras Faces Its Biggest Hurdle: The Locals.” Rest of World, October 5, 2021. https://restofworld.org/2021/honduran-islanders-push-back-libertarian-startup/.Google Scholar
Mahizhnan, Arun. “Smart Cities: The Singapore Case.” Cities 16, no. 1 (1999): 1318. www.sciencedirect.com/science/article/abs/pii/S026427519800050X.CrossRefGoogle Scholar
Marvin, Simon, Luque-Ayala, Andrés, and McFarlane, Colin, eds. Smart Urbanism: Utopian Vision or False Dawn? New York: Routledge, 2015.CrossRefGoogle Scholar
Mattern, Shannon. A City Is Not a Computer: Other Urban Intelligences. Princeton, NJ: Princeton University Press, 2021.Google Scholar
Mbembe, Achille. Necropolitics. Durham, NC: Duke University Press, 2020.CrossRefGoogle Scholar
McAuley, Paul J. Four Hundred Billion Stars. New York: Del Rey, 1988.Google Scholar
Murakami Wood, David. “The Scaling Back of Saudi Arabia’s Proposed Urban Mega-project Sends a Clear Warning to Other Would-be Utopias.” The Conversation, May 5, 2024. https://theconversation.com/the-scaling-back-of-saudi-arabias-proposed-urban-mega-project-sends-a-clear-warning-to-other-would-be-utopias-227852.
Murakami Wood, David. “The Security Dimension.” In Global City Challenges: Debating a Concept, Improving the Practice, edited by Acuto, Michele and Steele, Wendy, 188–201. New York: Palgrave, 2013.
Murakami Wood, David. “Smart City, Surveillance City.” Society for Computers & Law, June 30, 2015. www.scl.org/3405-smart-city-surveillance-city/.
Murakami Wood, David. “Was Sidewalk Toronto a PR Experiment or a Development Proposal?” In Smart Cities in Canada: Digital Dreams, Corporate Designs, edited by Valverde, Mariana and Flynn, Alexandra, 94–101. Toronto: Lorimer, 2020.
Murakami Wood, David, and Mackinnon, Debra. “Partial Platforms and Oligoptic Surveillance in the Smart City.” Surveillance & Society 17, no. 1/2 (2019): 176–182. https://ojs.library.queensu.ca/index.php/surveillance-and-society/article/view/13116.
Murray, Jessica. “Half of Emissions Cuts Will Come from Future Tech, Says John Kerry.” The Guardian, May 16, 2021. www.theguardian.com/environment/2021/may/16/half-of-emissions-cuts-will-come-from-future-tech-says-john-kerry.
National Strategic Special Zones. “Super Cities.” Government of Japan, 2020. www.chisou.go.jp/tiiki/kokusentoc/english/super-city/index.html.
NEOM. “NEOM.” Kingdom of Saudi Arabia, 2020. www.neom.com/index.html.
Noveck, Beth Simone. Smart Citizens, Smarter State: The Technologies of Expertise and the Future of Governing. Cambridge, MA: Harvard University Press, 2015.
Powell, Alison B. Undoing Optimization: Civic Action in Smart Cities. New Haven, CT: Yale University Press, 2021.
Próspera Platform. “Próspera.” Próspera Platform, 2022. https://prospera.hn/.
Romer, Paul. “Technologies, Rules, and Progress: The Case for Charter Cities.” Center for Global Development, March 3, 2010. www.cgdev.org/publication/technologies-rules-and-progress-case-charter-cities.
Roth, Emma. “Here’s Everything That Went Wrong with FTX.” The Verge, November 30, 2022. www.theverge.com/2022/11/30/23484331/ftx-explained-cryptocurrency-sbf-sam-bankman-fried.
SAP. “Intelligent Cities Like Rio Make ‘Dumb, Rude, and Dirty’ Traits of the Past.” SAP Community Blog (blog), May 14, 2013. https://blogs.sap.com/2013/05/14/intelligent-cities-like-rio-make-dumb-rude-and-dirty-traits-of-the-past/.
Scheck, Justin, Jones, Rory, and Said, Summer. “A Prince’s $500 Billion Desert Dream: Flying Cars, Robot Dinosaurs and a Giant Artificial Moon.” Wall Street Journal, July 25, 2019. www.wsj.com/articles/a-princes-500-billion-desert-dream-flying-cars-robot-dinosaurs-and-a-giant-artificial-moon-11564097568.
Shelton, Taylor, and Lodato, Thomas. “Actually Existing Smart Citizens: Expertise and (non)Participation in the Making of the Smart City.” City 23, no. 1 (2019): 35–52. www.tandfonline.com/doi/abs/10.1080/13604813.2019.1575115.
Shelton, Taylor, Zook, Matthew, and Wiig, Alan. “The ‘Actually Existing Smart City’.” Cambridge Journal of Regions, Economy and Society 8, no. 1 (2015): 13–25. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2477482.
Slobodian, Quinn. Globalists: The End of Empire and the Birth of Neoliberalism. Cambridge, MA: Harvard University Press, 2018.
Steiner, Henriette, and Veel, Kristin. “Living Behind Glass Façades: Surveillance Culture and New Architecture.” Surveillance & Society 9, no. 1/2 (2011): 215–232. https://ojs.library.queensu.ca/index.php/surveillance-and-society/article/view/glass.
Stephan, Karl D., Michael, Katina, Michael, M. G., Jacob, Laura, and Anesta, Emily P. “Social Implications of Technology: The Past, the Present, and the Future.” Proceedings of the Institute of Electrical and Electronics Engineers (IEEE) 100, no. 13 (2012): 1752–1781. https://ro.uow.edu.au/eispapers/135/.
Turner, Fred. From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. Chicago, IL: University of Chicago Press, 2010.
Van der Pijl, Kees. Transnational Classes and International Relations. New York: Routledge, 2005.
Vanolo, Alberto. “Smartmentality: The Smart City as Disciplinary Strategy.” Urban Studies 51, no. 5 (2014): 883–898. https://journals.sagepub.com/doi/10.1177/0042098013494427.
Venn, Couze. “The City as Assemblage: Diasporic Cultures, Postmodern Spaces, and Biopolitics.” In Negotiating Urban Conflicts: Interaction, Space and Control, edited by Berking, Helmuth, Frank, Sybille, Frers, Lars, Löw, Martina, Meier, Lars, Steets, Silke, and Stoetzer, Sergej, 41–52. Bielefeld: Transcript Verlag, 2006. https://doi.org/10.1515/9783839404638-003.
Walton, Nicholas. Singapore, Singapura: From Miracle to Complacency. New York: Oxford University Press, 2019.
Wells, H. G. The Time Machine. London: William Heinemann, 1895.
Whitson, Sarah Leah, and Alaoudh, Abdullah. “Mohammed bin Salman’s Bloody Dream City of Neom.” Foreign Policy, April 27, 2020. https://foreignpolicy.com/2020/04/27/mohammed-bin-salman-neom-saudi-arabia/.
Williams, Jake. “Google Wants to Build a City.” StateScoop, May 4, 2017. https://statescoop.com/google-wants-to-build-a-city/.

