
Part III - Technology and Policy

Published online by Cambridge University Press:  11 November 2025

Beate Roessler
Affiliation:
University of Amsterdam
Valerie Steeves
Affiliation:
University of Ottawa

Information

Type: Chapter
Book: Being Human in the Digital World: Interdisciplinary Perspectives, pp. 143–204
Publisher: Cambridge University Press
Print publication year: 2025

This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC 4.0 https://creativecommons.org/cclicenses/

Part III Technology and Policy

10 Exploitation in the Platform Age

Being human in the digital age means confronting a range of disorienting normative challenges. Social problems, such as ubiquitous surveillance, algorithmic discrimination, and workplace automation, feel at once familiar and wholly new. It is not immediately apparent whether the language and concepts we’ve traditionally used to describe and navigate ethical, political, and governance controversies, the distinctions we’ve drawn between acceptable and unacceptable relationships, practices, and exercises of power, or the intuitions we’ve relied on to weigh and balance difficult trade-offs adequately capture the difficult issues emerging technologies create. At some level of abstraction, there is nothing truly new under the sun. But for our language and concepts to be practically useful in the present moment, we have to attend carefully to how they track – and what they illuminate about – the real-world challenges we face.

This chapter considers a common refrain among critics of digital platforms: big tech “exploits” us (Andrejevic 2012; Cohen 2019; Fuchs 2017; Jordan 2015; Muldoon 2022; Zuboff 2019). It gives voice to a shared sense that technology firms are somehow mistreating people – taking advantage of us, extracting from us – in a way that other data-driven harms, such as surveillance and algorithmic bias, fail to capture.

Take gig work, for example. Uber, Instacart, and other gig economy firms claim that their platforms strengthen worker autonomy by providing flexible schedules and greater control over when, where, and how people work. Yet many worry that gig economy – or what Ryan Calo and Alex Rosenblat call the “taking economy” – platforms are, in fact, exploiting workers (Calo and Rosenblat 2017). Regulators warn that gig platforms set prices using “non-transparent algorithms,” charge high fees, shift business risks onto workers, and require workers to pay for overhead expenses that companies normally cover (e.g., car insurance and maintenance costs), allowing platforms to capture an unfair share of proceeds.Footnote 1 Workers are subjected to opaque, even deceptive, terms of employment; “algorithmic labour management” enables fine-grained, potentially manipulative control over work practices (Rosenblat and Stark 2016; Susser et al. 2019; US FTC 2022); and high market concentration leaves workers with few alternative options (US FTC 2022). Especially worrying, some forms of gig work – most notably “crowdwork,” where work assignments are divided into micro-tasks and distributed online, which commonly drives content moderation and the labeling of training data for artificial intelligence (AI) – are reproducing familiar patterns of racial exploitation, with the global north extracting labor, digitally, from workers in the global south. Tech workers in Kenya have recently described these practices as “modern day slavery” and called on the US government to stop big tech firms from “systemically abusing and exploiting African workers.”Footnote 2

Now consider a very different example: the increasingly common practice of algorithmic pricing. Price adjustment is a central feature of market exchange – the primary mechanism through which markets (ideally) optimize economic activity. Sellers set prices in response to – amongst other things – overall economic conditions, competitor offerings, the cost of inputs, and buyers’ willingness to pay. Today, many sellers rely on algorithms to do the work of price-setting, and these new pricing technologies have sparked a number of concerns. Economists worry, in general, that algorithmic pricing drives prices upward for consumers, in some cases by enabling new forms of collusion between firms, and in others simply as a result of feedback dynamics between multiple pricing algorithms (MacKay and Weinstein 2020). But these technologies don’t simply automate price-setting; they can “personalize” it, tailoring prices to individual buyers (Acquisti et al. 2016). “Personalized” (or “customized”) pricing, as industry firms euphemistically call it, is opaque – buyers rarely know when and how prices are personalized, making comparison shopping difficult. And the information used to set prices can include personal information about individual buyers (Seele et al. 2021), leading to concerns that algorithmic pricing helps firms “extract wealth” from consumers and “shift it to themselves” (MacKay and Weinstein 2020, 1).

One more case: “surveillance advertising.” The contemporary digital economy is driven by targeted advertising.Footnote 3 Rather than charge consumers for the services they offer, such as search and social media, companies like Google and Facebook infuse their products with ads. Some argue that this business model is a win–win: users get access to valuable digital services for free, while technology firms earn huge profits monetizing users’ attention.Footnote 4 But many have come to view the ad-based digital economy as a grave threat to privacy, autonomy, and democracy. Because targeted advertising relies on personal information – data about individual beliefs, desires, habits, and circumstances – to place ads in front of the people most likely to be receptive to them, digital platforms have become, effectively, instruments of mass surveillance (Tufekci 2018). And because targeted ads can influence people in ways they don’t understand or endorse, they challenge important values like autonomy and democracy (Susser et al. 2019). Beyond these concerns, however, others argue that the surveillance economy involves an insidious form of extraction. Julie E. Cohen describes the market for personal information as the enclosure of a “biopolitical public domain,” which “facilitates new and unprecedented surplus extraction strategies within which data flows extracted from people – and, by extension, people themselves – are commodity inputs, valuable only insofar as their choices and behaviours can be monetized” (Cohen 2019, 71).Footnote 5

The goal of what follows is to unpack the claims that these platform-mediated practices are exploitative. What does exploitation entail, exactly, and how do platforms perpetrate it? Is exploitation in the platform economy a new kind of exploitation, or are these old problems dressed up as new ones? What would a theory of digital exploitation add to our understanding of the platform age? First, I define exploitation and argue that critics are justified in describing many platform practices as wrongfully exploitative. Next, I focus on platforms themselves – both as businesses and technologies – in order to understand what is and isn’t new about the kinds of exploitation we are witnessing. In some cases, digital platforms perpetuate familiar forms of exploitation by extending the ability of exploiters to reach and control exploitees. In other cases, they enable new exploitative arrangements by creating or exposing vulnerabilities that powerful actors couldn’t previously leverage. On the whole, I argue, the language of exploitation helps express forms of injustice overlooked or only partially captured by dominant concerns about, for example, surveillance, discrimination, and related platform abuses, and it provides valuable conceptual and normative resources for challenging efforts by platforms to obscure or legitimate them.

10.1 Defining Exploitation

What exploitation is and what makes it wrong have been the subject of significant philosophical debate. In its modern usage, the term has a Marxist vintage: the engine and the injustice of capitalism, Marx argued, is the exploitation of workers by the capitalist class. For Marx, labor is unique in its ability to generate value; lacking ownership and control over the means of production, workers are coerced to give over to their bosses most of the value they create. This, in Marx’s view, is the sense in which workers are exploited: value they produce is taken, extracted from them, and claimed, unjustly, by others.Footnote 6

Some media studies and communications scholars have adopted this Marxian framework and applied it in the digital context, arguing that online activity can be understood as a form of labor and platform exploitation as appropriation of the value such labor creates.Footnote 7 For example, pioneering work by Dallas Smythe on the “audience commodity” – the packaging and selling of consumer attention to advertisers – which focused primarily on radio and television, has been extended by theorists such as Christian Fuchs and Mark Andrejevic to understand the internet’s political economy through a constellation of Marxist concepts, including exploitation, commodification, and alienation.Footnote 8 As Andrejevic argues, this work adds a crucial element to critical theories of the digital economy, missing from approaches focused entirely on data collection and privacy (2012, 73).

While these accounts offer important insights, I depart from them somewhat in conceptualizing platform exploitation, for several reasons. Many – including many Marxist theorists – dispute the details of Marx’s account. Specifically, critics have demonstrated that the “labour theory of value” (the idea that value is generated exclusively by labor, that it is more or less homogeneous, and that it can be measured in “socially necessary labour time”), upon which Marx builds his notion of exploitation, is implausible (Cohen 1979; Wertheimer 1996, x). So, the particulars of the orthodox Marxist story about exploitation are probably wrong and building a theory of digital exploitation on top of it would mean placing that theory on a questionable foundation. Still, the normative intuition motivating the theory – that workers are often subject to unjust extraction, that something of theirs is taken, wrongfully, to benefit others – is widely shared, and efforts have been made to put that intuition on firmer theoretical ground (Cohen 1979; Reiman 1987; Roemer 1985).

Moreover, the concept of exploitation is more capacious than the Marxist account suggests. Beyond concerns about capitalist exploitation, we might find and worry about exploitation more broadly, in some cases outside of economic life altogether (Goodin 1987). Feminist theorists, for example, have identified exploitation in sexual and marital relationships (Sample 2003), bringing a wider range of potential harms into view. And, while the exploitation of workers – central to Marxist accounts – continues to be vitally important, as we will see, the incorporation of digital platforms into virtually all aspects of our lives opens the door to forms of exploitation Marxist accounts underemphasize or ignore.

Contemporary theorists define exploitation as taking advantage of someone – using them to benefit oneself. Paradigm cases motivating the philosophical literature include worries about sweatshop labor, commercial surrogacy, and sexual exploitation.Footnote 9 Of course, taking advantage is not always wrong – one can innocently take advantage of opportunities or rightly take advantage of an opponent’s misstep in a game. Much of the debate in exploitation theory has thus centred on its “wrong-making features,” that is, what makes taking advantage of someone morally unacceptable. There are two main proposals: one explains wrongful exploitation in terms of unfairness, the other in terms of disrespect or degradation.

10.1.1 Exploitation as Unfairness

Taking advantage of someone can be unfair either for procedural or substantive reasons. An interaction or exchange is procedurally unfair if the process is defective – for example, if one party deceives the other about the terms of their agreement or manipulates them into accepting disadvantageous terms. Substantive unfairness, by contrast, is a feature of outcomes. Even if the process of reaching an agreement is defect-free, the terms agreed to might be unacceptable in and of themselves. Consider sweatshop labor: a factory owner could be entirely forthright about wages, working conditions, and the difficult nature of the job, and likewise workers could reflect on, understand, and – given few alternative options – decide to accept them. The process is above-board, yet in many cases of sweatshop labor the terms themselves strike people as obviously unfair.

One way to understand what has gone wrong here is via the notion of “social surplus.”Footnote 10 Often when people interact or exchange, the outcome is positive-sum: cooperation can leave everyone better off than they started. In economics, the surplus created through exchange is divided (sometimes equally, sometimes unequally) between sellers and buyers. But the concept of a social surplus need not be expressed exclusively in monetary terms. The idea is simply that when people interact, they often increase total welfare. If I spend my Saturday helping a friend move, he benefits from (and I lose) the labor I’ve provided for free. But we both enjoy each other’s company, feel secure in knowing we’re deepening our relationship, and I derive satisfaction from doing someone a favor.

Exploitation enters the picture when the social surplus is divided unfairly.Footnote 11 Returning to the sweatshop case, for example, the exchange is unfair – despite the absence of procedural issues – because the factory owner claims more than his fair share of the value created. He could afford to pay the factory workers more (by collecting a smaller profit) but chooses not to.Footnote 12 Likewise, we sometimes use the language of exploitation to describe similar dynamics within personal relationships: if one friend always relies on another for help but rarely reciprocates, we say that the first is exploiting the second.
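To make the idea concrete, here is a minimal numeric sketch of a surplus split (all figures hypothetical): an exchange can leave both parties better off while sending nearly the entire surplus to one side.

```python
# Hypothetical figures: a worker's next-best alternative is worth $2/hour to them,
# and their labor generates $20/hour in value for the factory owner.
alternative_wage = 2.0   # value of the worker's best outside option ($/hour)
value_created = 20.0     # value the worker's labor generates ($/hour)
offered_wage = 2.5       # wage the owner actually offers ($/hour)

social_surplus = value_created - alternative_wage   # 18.0: the gain from cooperating at all
worker_share = offered_wage - alternative_wage      # 0.5: the worker is better off, barely
owner_share = value_created - offered_wage          # 17.5: the owner captures the rest

print(social_surplus, worker_share, owner_share)
# Both parties gain, yet roughly 97% of the surplus goes to the owner: the kind of
# substantively unfair split the exploitation charge targets.
```

Both parties gain relative to no exchange at all, which is why consent and mutual benefit alone cannot settle whether the split is fair.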

10.1.2 Exploitation as Degradation

Not all exploitation, however, can be explained in terms of unfairness. Take price gouging, another standard example of exploitation: imagine, say, that a thirsty hiker, lost in the desert, encounters a fellow traveller who offers to part with their extra bottle of water for $1,000.Footnote 13 The seller is perfectly forthright about their product, its condition, and the terms of sale, and the buyer reflects on, understands, and decides to accept them. In other words, there is no procedural unfairness involved. Moreover, if buying the water will save the hiker’s life, he is – in one sense – getting a pretty good deal. Most people value their life at a lot more than $1,000. Indeed, as Zwolinski points out, in such cases there is reason to believe that the hiker is getting far more of the surplus created through the exchange than the greedy seller (the former gets his life, the latter $1,000). So substantive unfairness – unevenly distributing the social surplus – can’t explain the problem here either.

For some theorists, cases like this demonstrate another possible wrong-making feature of exploitation: degradation, or the failure to treat people with dignity and respect. Allen Wood (1995) argues that using another person’s vulnerability to one’s own advantage is instrumentalizing and demeaning. “Proper respect for others is violated when we treat their vulnerabilities as opportunities to advance our own interests or projects. It is degrading to have your weaknesses taken advantage of, and dishonorable to use the weaknesses of others for your ends” (Wood 1995, 150–51). Indeed, for Wood (1995, 154), even in cases like sweatshops, which – as we’ve just seen – can plausibly be explained in terms of unfairness, this kind of degradation is the deeper, underlying evil.

Some argue that exploitation is wrong solely in virtue of one or another of these moral considerations – at bottom, it is either unfair or degrading – and such theorists have worked to show that certain cases intuitively cast in one moral frame can be explained equally well or better through another. For present purposes, I follow theorists who adopt a more pluralistic approach and define wrongful exploitation as Matt Zwolinski (2012) does: taking advantage of someone in an unfair or degrading way.Footnote 14 In some cases, exploitation is wrong because it involves unfairness, in other cases because it involves degradation. Oftentimes more than one wrong-making feature is at play, and digital platforms potentially raise all these concerns.

10.2 Platform Exploitation?

A first question, then, is whether the kinds of practices I described at the start reflect these normative problems. Are platforms exploiting people?

If exploitation is taking advantage of someone in an unfair or degrading way, and what enables exploitation – what induces someone to accept unfair terms of exchange or what makes taking advantage of such terms degrading – is the exploitee’s vulnerability (the fact that they lack decent alternatives), then identifying exploitation is partly an empirical exercise. It requires asking, on a case-by-case basis: Are people vulnerable? What are their options? Are platforms taking advantage of them?

However, that need not prevent us from generalizing a little. Returning to the alleged abuses by gig economy companies, we can now recast them in this frame. Recall the FTC’s concern that gig platforms set prices using “non-transparent algorithms.” Reporting on ethnographic work in California’s gig-based ride-hail industry, legal scholar Veena Dubal describes drivers struggling to understand how the prices they’re paid for individual rides are set, why different drivers are paid different rates for similar rides, or how to increase their earnings. This is not only because the algorithms powering ride-hail apps are opaque, but because they set prices dynamically: “You’ve got it figured out, and then it all changes,” one driver recounts (Dubal 2023, 1964).Footnote 15 Using the language developed in Section 10.1, we can describe this opacity and dynamism as sources of procedural unfairness – whether the terms of exchange reached are fair or not, the process of reaching them is one in which drivers are disempowered relative to the gig platforms they are “negotiating” with.Footnote 16

There is also reason to worry that the terms reached are often substantively unfair, with platforms siphoning off more than their fair share of profits – an unfair distribution of the social surplus. Beyond concerns about how gig apps set prices, or about the ability of drivers to understand and exert agency in the process, the FTC complaint points out that ride hail apps charge drivers high fees, shift risks of doing business – usually absorbed by firms – onto drivers, and require them to pay for overhead expenses that companies normally cover, such as car insurance and maintenance costs. Similarly, crowdworkers in the global content moderation industry describe doing essential but “mentally and emotionally draining work” for little pay and without access to adequate mental health support: “Our work involves watching murder and beheadings, child abuse and rape, pornography and bestiality, often for more than 8 hours a day. Many of us do this work for less than $2 per hour.”Footnote 17

While charges of exploitation may be unwarranted in cases where, for example, ride hail drivers really are just driving for a little bit of extra cash on the side, in the mine run of cases, where gig workers lack other job options and depend on the income earned through gig app work, the charges seem fitting. Moreover, there is reason to believe that gig companies like Uber actively work to create the very vulnerabilities they exploit, by using venture capital funding to underprice competition, pushing incumbents out of the market and consolidating their own position. One reason ride hail drivers often lack alternative options is that Uber has put those alternatives out of business.

Algorithmic pricing in consumer contexts also raises procedural and substantive fairness concerns. Like ride hail drivers navigating opaque, dynamic fare setting systems, consumers are increasingly presented with inconsistent prices for the same goods and services, making it difficult to understand why one is offered a particular price or how it compares to the prices others are offered (Seele et al. 2021). And, because the algorithms determining prices are inscrutable (as in the gig app case), there is an informational asymmetry between buyers and sellers that puts the former at a significant disadvantage, potentially creating procedural fairness problems. How can a buyer decide if prices are competitive without knowing (at least roughly) how they compare to prices others in the marketplace are paying, and how can they comparison shop when prices fluctuate unpredictably?Footnote 18

Personalized pricing makes things even worse. In addition to issues stemming from algorithmic opacity and dynamism, price personalization – or what economists call “first-degree” or “perfect” price discrimination (i.e., the tailoring of prices to specific attributes of individual buyers) – raises the specter that sellers are preying on buyer vulnerabilities. On one hand, as Jeffrey Moriarty (2021, 497) argues, price discrimination is commonplace and generally considered acceptable.Footnote 19 Even highly personalized pricing might be unproblematic, provided buyers know about it and have the option to shop elsewhere.Footnote 20 From an economics perspective, first-degree price discrimination has traditionally been viewed as bad for consumers but good for overall market efficiency. If buyers pay exactly as much as they are hypothetically willing to (their “reservation price”) – and not a cent less – then sellers capture all of the surplus but also eliminate deadweight loss (Bar-Gill 2019).
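The traditional efficiency story can be illustrated with a small sketch (the reservation prices and unit cost below are hypothetical): a single posted price leaves buyers some surplus but also prices out willing buyers (deadweight loss), whereas perfect price discrimination serves everyone whose reservation price exceeds cost and hands the entire surplus to the seller.

```python
# Hypothetical reservation prices for five buyers, and the seller's unit cost.
reservation_prices = [50, 40, 30, 20, 10]
unit_cost = 15

def uniform(price):
    """Seller profit, consumer surplus, and deadweight loss at one posted price."""
    buyers = [r for r in reservation_prices if r >= price]
    profit = sum(price - unit_cost for _ in buyers)
    consumer_surplus = sum(r - price for r in buyers)
    # Surplus lost because willing buyers (reservation price above cost) were priced out.
    deadweight = sum(r - unit_cost for r in reservation_prices if unit_cost <= r < price)
    return profit, consumer_surplus, deadweight

def first_degree():
    """Each buyer whose reservation price covers cost is charged exactly that price."""
    served = [r for r in reservation_prices if r >= unit_cost]
    profit = sum(r - unit_cost for r in served)
    return profit, 0, 0  # the seller takes all surplus; no deadweight loss

print("uniform price 40:", uniform(40))    # (50, 10, 20)
print("first degree:", first_degree())     # (80, 0, 0)
```

Bar-Gill’s point, taken up in the next paragraphs, is that when algorithms also inflate buyers’ perceived willingness to pay, even this “efficient” outcome breaks down.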

Algorithmically personalized pricing changes things. First, as we have seen, it is often opaque and inscrutable – buyers do not know that they are being offered individualized prices, or if they do, how those prices are determined. Thus, even if they could shop elsewhere, they might not know that they should. Second, the above arguments assume that personalized pricing simply attempts to find and target the buyer’s reservation price. But Oren Bar-Gill (2019) points out that the conception of “willingness to pay” underlying these traditional arguments, which imagines the reservation price simply as a function of consumer preferences and budgets, misses an important input: how buyers perceive prices and a product or service’s utility.

People are often mistaken about one or both, misjudging, for example, how much something will cost overall, how often they will use it, the value they will ultimately derive from it, and so on (one can think here of the cliché about gym memberships purchased on January 1). Personalized pricing algorithms can provoke and capitalize on these errors, encouraging people to over-value goods (increasing willingness to pay) and under-predict total cost – that is, they can change people’s reservation prices (Calo 2014). In such cases, Bar-Gill (2019, 221) argues, the traditional economics story is wrong – first-degree price discrimination harms consumers and diminishes overall efficiency, as “cost of production exceeds the actual benefit (but not the higher, perceived benefit).” The only benefit is to sellers, who capture the full surplus (and then some), raising substantive fairness concerns. Thus, the exploitation charge seems plausible in this case too. Though again, much depends on the details. If buyers know prices are being personalized, and they can comparison shop, it is less obvious that sellers are taking advantage of them.

Finally, behavioural advertising. Are data collectors and digital advertisers taking advantage of us? In the United States, commercial data collection is virtually unconstrained, and data subjects have little choice in the matter. Companies are required only to present boilerplate terms of service agreements, indicating what data they will collect and how they plan to use it. Data subjects usually have only two options: accept the terms or forgo the service. As many have argued, this rarely amounts to a real choice.Footnote 21 If, for example, someone is required to use Microsoft Office or Google Docs as part of their job, are they meaningfully free to refuse the surveillance that comes with it? Put another way, many people are in a real sense dependent on digital technologies – for their jobs, at school, in their social lives – and surveillance advertisers, unfairly, take advantage of that dependency for their own gain.

Having said that, it is worth asking further questions about how those gains are distributed – who benefits from this system? Much of the value derived from surveillance advertising obviously flows directly into the industry’s own coffers: revenue from online advertising accounts for the vast majority of profits at Google and Facebook, the two largest industry players (Hwang 2020). But where does the surplus come from? According to one view, elaborated most dramatically by Shoshana Zuboff, the surplus comes from us. It is a “behavioural surplus” – information about our individual desires, habits, and hang-ups, used to steer us toward buying stuff (Zuboff 2019). According to this argument, personal information and the predictions it makes possible are merely conduits, carrying money from regular people’s pockets into the hands of companies running ads (with the surveillance industry taking a cut along the way). In other words, data subjects are being exploited for the benefit of advertisers and sellers.

There is another view, however, according to which this whole system is a sham. Tim Hwang and others argue that behavioural advertising simply doesn’t work – the predictions sold to sellers are largely wrong and the ads they direct rarely get us to buy anything (Hwang 2020).Footnote 22 But, as Hwang points out, that does not mean people do not benefit from online advertising. We benefit from it, enjoying for free all the services digital ads underwrite, which we would have to pay for if the ads went away. On this view, personal data is a conduit carrying money from the advertising budgets of sellers into the hands of app makers and producers of online content (with, again, the surveillance industry collecting its cut along the way). In other words, the companies running ads are being exploited for our benefit.

10.3 What’s Old Is New Again

To this point, I have discussed platforms in general terms, focusing on what they do and whether we ought to accept it rather than on what they are and how they are able to treat people this way. I turn now to the latter: what platforms are, how they can engage in these different forms of exploitation, and what role digital technology specifically is playing in all of this.

The term “platform” is used in multiple registers. In some contexts, it is used to describe a set of companies – for example, Amazon, ByteDance, Meta, or Google. In other contexts, the term is used to describe the heterogeneous set of digital technologies such companies build, deploy, and use to generate revenues – for example, Amazon’s marketplace, the TikTok or Instagram apps, or Google’s digital advertising service. This ambiguity or multiplicity of meaning is neither a mistake nor an accident; platforms are both of these things simultaneously, businesses and technologies, and they must be understood both in economic and sociotechnical terms.

Unlike ordinary service providers, platforms function primarily as social and technical infrastructure for interactions between other parties. TikTok, Instagram, and social media platforms more broadly find audiences for content creators and advertisers who will pay to reach them. Gig economy platforms, like Uber and Lyft, facilitate exchanges between workers and people in need of their labor. As Tarleton Gillespie (2010, 4) points out, the term “platform” misleadingly brings to mind a sense of neutrality: “platforms are typically flat, featureless, and open to all.” In fact, digital platforms work tirelessly to shape the interactions they host and to influence the people involved. As we’ve seen, they do this by carefully designing technical affordances (such as opaque and personalized pricing algorithms) and by pressing economic advantages (when, for example, they leverage venture capital to underprice incumbents and eliminate competition).

So: platforms mediate and structure relationships. Some of these relationships have long existed and have often been sites of exploitation; when platforms enter the picture, they perpetuate and profit from them. Other relationships are new – innovations in exploitation particular to the platform age.

10.3.1 Perpetuating Exploitation

Many platforms profit by creating new opportunities for old forms of exploitation. Platform-mediated work is a case in point: while not all employers exploit their employees, the labor/management relationship is frequently a place where worries about exploitation arise, and digital platforms breathe new life into these old concerns.

Indeed, platforms can increase the capacity of exploiters to take advantage of exploitees by enabling exploitation at scale, expanding the reach of exploitative firms and growing the pool of potential exploitees (Pfotenhauer et al. 2022).Footnote 23 Gig app firms, based in Silicon Valley and operated by a relatively small number of engineers, managers, and executives, profit from workers spread throughout the world – in 2022, for example, Uber had 5 million active drivers worldwide (Biron 2022). Moreover, as we have seen, these dynamics are visible in the broader phenomenon of “crowdwork,” or what Dubal (2020) terms “digital piecework.”Footnote 24 Platforms like Amazon Mechanical Turk (AMT) carve work (such as social media content moderation and labeling AI training data) into small, discrete, distributable chunks, which can be pushed out to workers sitting in their homes or in computer centres, new sites of so-called digital sweatshops (Zittrain 2009). As sociologist Tressie McMillan Cottom (2020) argues, these practices constitute a kind of “predatory inclusion” – one of many ways digital platforms have implicated themselves in broader patterns of racial capitalism.

At a more granular level, digital platforms also facilitate worker exploitation by reconfiguring work, work conditions, and wage determination. A growing body of scholarship explores the nature and functioning of “algorithmic labor management”: the use of digital platforms to control workers and organize work. In contrast with simplistic narratives about automation displacing workers, this research brings to light the myriad ways digital technologies are becoming insinuated in human labor, changing its character, shifting risks, and creating new pathways for discrimination and extraction. Pegah Moradi and Karen Levy (2020) argue, for example, that automation and platform intermediation often increase profits for firms not by producing new efficiencies, but rather by shifting the costs of inefficiencies onto workers. “Just-in-time” scheduling algorithms make it possible to employ workers at narrower intervals dynamically tailored to demand, reducing labor costs by rendering jobs more precarious and less financially dependable for workers (Moradi and Levy 2020). And algorithmic management lets employers “narrowly define work to include only very specific tasks and then pay workers for those tasks exclusively” (Moradi and Levy 2020, 281). Ride-hail drivers, for instance, are compensated only for active rides, not for the time they spend searching for new passengers.
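A minimal sketch of the cost-shifting logic described above (all figures hypothetical): when only narrowly defined task time is compensable and overhead falls on the worker, the advertised hourly rate and the worker’s effective hourly earnings come apart.

```python
# Hypothetical shift: a driver is logged in for 8 hours but paid only for
# time spent on active, passenger-carrying rides.
hours_logged_in = 8.0
hours_on_active_rides = 5.0    # unpaid: waiting, searching, repositioning
pay_per_active_hour = 24.0
expenses = 30.0                # gas, insurance, maintenance borne by the driver

gross_pay = hours_on_active_rides * pay_per_active_hour      # 120.0
advertised_rate = gross_pay / hours_on_active_rides          # 24.0 $/hour
effective_rate = (gross_pay - expenses) / hours_logged_in    # 11.25 $/hour

print(advertised_rate, effective_rate)
# The gap between the two rates is the cost of idle time and overhead
# that the platform has shifted onto the worker.
```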

From a law and policy perspective, platforms also make it easier to exploit workers through legal arbitrage. By creating the appearance of new forms of work, gig economy apps render workers illegible to the law, and, in so doing, they allow firms to ignore worker rights and circumvent existing worker protections. For example, high profile political battles have recently been waged over whether gig workers should be legally classified as independent contractors or as employees of gig economy companies.Footnote 25 Gig economy firms contend that all their platforms do is connect workers to paying customers; the workers don’t work for them, but rather for app users. Gig workers and their advocates argue that firms carefully manage and directly profit from their labor, and as such they ought to be given the same rights, benefits, and protections other workers enjoy. As Dubal writes about app-based Amazon delivery drivers, “In this putative nonemployment arrangement, Amazon does not provide to the DSP [delivery service providers] drivers workers’ compensation, unemployment insurance, health insurance, or the protected right to organize. Nor does it guarantee individual DSPs or their workers minimum wage or overtime compensation” (Dubal 2023, 1932).

10.3.2 Innovations in Exploitation

Different dynamics are at work in cases like algorithmic pricing. Here, the relationship mediated by digital platforms – in the pricing case, the relationship between buyers and sellers – is not normally a site of exploitation.Footnote 26 The introduction of digital platforms transforms the relationship into an exploitative one, making one party vulnerable to the other in new ways, or giving the latter new tools for taking advantage of existing vulnerabilities they couldn’t previously leverage.

As we’ve seen, sellers can use algorithmic pricing technologies to capture more and more of – and perhaps even raise – a buyer’s reservation price, by engaging in increasingly sophisticated forms of first-degree price discrimination. In part, this means utilizing the particular affordances of digital platforms to take advantage of existing vulnerabilities sellers couldn’t previously leverage. Specifically, platforms enable the collection of detailed personal information about each individual buyer, including information about their preferences, finances, and purchasing histories, which are highly relevant to decisions about pricing. And platforms can analyze that information to make predictions about buyer willingness to pay on-the-fly, dynamically adjusting prices in the moment for different buyers (Seele et al. 2021). Thus, while it has always been the case that some buyers were willing to pay more than others for certain goods, sellers haven’t always been able to tell them apart, or to use that information to take advantage of buyers at the point of sale.
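Schematically, the mechanism looks something like the following sketch. It is purely illustrative (the profile features, weights, and pricing function are invented, not any firm’s actual model), but it shows how a routine that estimates an individual’s willingness to pay from collected data can set a different price for each buyer at the moment of sale.

```python
from dataclasses import dataclass

@dataclass
class BuyerProfile:
    # Illustrative signals a platform might infer from collected data.
    past_avg_spend: float      # average spend on similar purchases
    price_sensitivity: float   # 0 = ignores price, 1 = highly price-sensitive
    urgency: float             # 0 = just browsing, 1 = needs it now

BASE_PRICE = 100.0

def predict_willingness_to_pay(b: BuyerProfile) -> float:
    """Toy estimate of an individual reservation price from profile signals."""
    anchor = max(BASE_PRICE, b.past_avg_spend)
    return anchor * (1.0 - 0.3 * b.price_sensitivity) * (1.0 + 0.25 * b.urgency)

def personalized_price(b: BuyerProfile) -> float:
    """Charge just under the predicted reservation price, never below the base price."""
    return round(max(BASE_PRICE, 0.95 * predict_willingness_to_pay(b)), 2)

print(personalized_price(BuyerProfile(150, 0.1, 0.9)))  # 169.33: price-insensitive, urgent buyer
print(personalized_price(BuyerProfile(80, 0.9, 0.1)))   # 100.0: price-sensitive, patient buyer
```

Two buyers facing the same product at the same moment can see different prices, and neither can see the other’s, which is precisely the informational asymmetry discussed in the next paragraph.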

The affordances of digital platforms also create new vulnerabilities, by making prices more inscrutable. Without knowing (or at least being able to make an educated guess about) why a seller has offered a particular price, and without being able to see what prices other buyers in the marketplace are paying, buyers are placed at a significant disadvantage when bargaining with sellers. And lest one think this is “merely” an issue when shopping online, think again: retailers have tested personalized pricing systems for physical stores, where cameras and other tracking technologies identify particular customers and electronic price tags vary prices accordingly (Seele et al. 2021). If sellers deploy such systems, they will deprive buyers of access to information about even more of the marketplace, creating new vulnerabilities sellers can exploit.

Moreover, beyond transforming typically non-exploitative relationships into exploitative ones, platforms can create entirely new social relationships, which exist, at least partly, for the express purpose of enabling exploitation. This is the story of “surveillance capitalism.” Digital advertising platforms have created sprawling, largely invisible ecosystems of data collectors and aggregators, analytics firms, and advertising exchanges, which data subjects – everyday people – know little about. They have brought into being a new set of relationships (e.g., the data aggregator/data subject relationship), designed from the ground up to facilitate one party extracting from the other.

We should expect more of this the more we integrate digital platforms into our lives. As platforms extend their reach, mediating new contexts, relationships, and activities, the data collection that comes with them renders us – and our vulnerabilities – more visible. And as platforms become gatekeepers between us and more of the things we want and need – work, goods and services, information, communication – they create new opportunities to take advantage of what they learn.

10.4 Conclusion

What are we to make of all of this? To conclude, I want to suggest that the language of exploitation is useful not only as a broad indictment against perceived abuses of power by big tech firms. Understanding platforms as vehicles of exploitation helps to illuminate normative issues central to the present conjuncture.

First, theories of exploitation highlight an important but underappreciated truth, which challenges prevailing assumptions in debates about platform governance: exchange can be mutually beneficial, voluntary, and – still – wrong.Footnote 27 Which is to say, two parties can consent to an agreement, the agreement can serve both of their interests, and yet, nonetheless, it can be wrongfully exploitative. This idea, sometimes referred to as “wrongful beneficence,” can be counterintuitive, especially in the United States and other liberal democratic contexts, where political cultures centred on individual rights often treat the presence of consent as settling all questions about ethical and political legitimacy. If two people come to an agreement, there is no deception or manipulation involved, and the agreement is good for both of them (all things considered), many assume the agreement is, therefore, beyond reproach.

Consider again paradigmatic cases of exploitation. When a price gouger sells marked-up goods to someone in need – scarce generators, say, to hurricane survivors – the buyer consents to the purchase and both parties leave significantly better off than they were. Likewise, when a sweatshop owner offers low-paying work in substandard conditions to local laborers and – given few alternatives – they accept, the arrangement is voluntary and serves both the owner’s and the worker’s interests.Footnote 28 Thus, if the price gouger and the sweatshop owner have done anything wrong in these cases, it is not that they have diminished the other parties’ interests or forced them to act against their will. Rather, as we’ve seen, the former taking advantage of the latter is wrongfully exploitative because the treatment is unfair (i.e. the price of the generator is exorbitant, and the sweatshop pay is exceedingly low) and/or degrading (it fails to treat exploitees with dignity and respect).

This insight, that exploitation can be wrong even when mutually beneficial and voluntary, helps explain the normative logic of what Lewis Mumford (1964) called technology’s “magnificent bribe” – the fact that technology’s conveniences seduce us into tacitly accepting its harms (Loeb 2021). Indictments against digital platforms are frequently met with the response that users not only accept the terms of these arrangements but benefit from them. Mark Zuckerberg, for example, famously argued in the pages of the Wall Street Journal that Facebook’s invasive data collection practices are justified because “People consistently tell us that if they’re going to see ads, they want them to be relevant. That means we need to understand their interests.”Footnote 29 In other words, according to Zuckerberg, Facebook users find behaviourally targeted advertising (and the data collection it requires) beneficial, so they choose it voluntarily.Footnote 30 Similarly, as we have seen, gig economy companies deflect criticism by framing the labor arrangements they facilitate as serving the interests of gig workers, both economically and as a means of strengthening worker independence and autonomy.

The language of exploitation shows a way through this moral obfuscation. Implicit in tech industry apologia is the assumption that simply adding to people’s options can’t be wrong. But the price gouging and sweatshop labor cases reveal why it can be: if the only reason someone accepts an offer is that they lack decent alternatives, and if the terms being offered are unfair or degrading, then the offer wrongfully takes advantage of them and their situation. So, while it is true that in many cases digital platforms expand people’s options, giving them opportunities to benefit in ways they would otherwise lack, and which – given few alternatives – they sometimes voluntarily accept, that is not the end of the normative story. If platforms are in a position to provide the same benefits on better terms and simply refuse, they are engaging in wrongful exploitation and ought to be contested.

Second, having said that, the fact that people benefit from and willingly participate in these arrangements should not be ignored – it tells us something about the wider landscape of options they face. When people buy from price gougers or sell their labor to sweatshop factories they do so because they are desperate. From a diagnostic perspective, we can see that taking advantage of someone in such circumstances is morally wrong. But how, as a society, we should respond to that injustice is a more complicated matter. If there aren’t better alternatives available to them, eliminating the option – by, for example, banning price gouging and sweatshop labor, or for that matter, gig work or behavioural advertising – could make the very people one is trying to protect even worse off, at least in the short run (Wood 1995, 156).

As Allen Wood (1995) argues, there are two ways to respond to exploitation: what he terms “interference” and “redistribution.”Footnote 31 Interference focuses on the exploiter, stepping in to prevent them from exercising power to take advantage of others. Fair labor standards, for example, interfere with an employer’s ability to exploit workers, and price controls interfere in the market to prevent gouging. Redistribution, by contrast, focuses on exploitees: rather than directly interfering to keep the powerful in check, redistributive strategies aim to empower the vulnerable. Universal basic income policies, for example, strengthen workers’ ability to decline substandard pay and work conditions. Of course, economic support isn’t the only way to help the vulnerable resist exploitation – one might think of certain education or job training programs as designed to achieve similar ends.

Differentiating between interference and redistribution strategies is useful for weighing the myriad proposals to rein in platform abuse. Some proposals adopt an interference approach, which focuses on constraining the powerful – banning gig economy apps or behavioural advertising, for example, or imposing moratoria on face recognition technology.Footnote 32 Others aim to empower the vulnerable: digital literacy programs, for instance, equip people to make better decisions about how to engage with platforms and forced interoperability policies would enable users to more easily switch platforms if they feel like they’re being treated unfairly.Footnote 33 Some strategies combine interference and redistribution: if successful, efforts to revive antitrust enforcement in the technology industry would diminish the power of monopoly firms, weakening their ability to engage in exploitation, while also empowering users by increasing competition and thus strengthening their ability to refuse unfavorable terms.Footnote 34

There are trade-offs involved in the decision to utilize one or the other type of approach. People voluntarily accept unfair terms of exchange when they lack decent alternatives, so interference strategies could do more harm than good if they aren’t accompanied by redistributive efforts designed to expand people’s options. If people are reliant on crowdwork, for example, because they can’t find better paying or more secure jobs, then limiting opportunities for such work might – on balance – make them worse off rather than better, putting them in an even more precarious financial position than where they started.Footnote 35 Similar concerns have been raised about behavioural advertising. Despite its harms, observers point out that digital advertisement markets are “the critical economic engine underwriting many of the core [internet] services that we depend on every day” (Hwang 2020, 1). Interfering in these markets haphazardly could threaten the whole system.Footnote 36

If we step back, however, these insights together paint a clearer and more damning picture than is perhaps first suggested by the careful way I have parsed them. They suggest that the platform age emerged against a backdrop of deep social and economic vulnerability – a world in which many lacked adequate options to begin with – and platform companies responded by developing technologies and business models designed to perpetuate and exploit that vulnerability. It is a picture, in other words, of many platforms as fundamentally predatory enterprises: high-tech tools for capturing and hoarding value, and not – as their proponents would have us believe – marvels of value creation. This is, I think, the basic normative intuition behind claims that digital platforms are exploitative, and we shouldn’t let our efforts to unspool its implications distract us from the moral clarity driving it.

Moreover, as the Marxist critique emphasizes, what makes exploitation particularly insidious is the thin cover of legitimacy it creates to conceal itself, the veneer of willingness by all parties to participate in the system – their consent and mutual benefit – that obscures the unfairness and degradation hiding just below the surface. As more and more people see through this normative fog, long-held assumptions that digital platforms (as they currently exist) are, at their core, forces for good are losing strength, space is opening up to imagine new, different sociotechnical arrangements, and conditions are improving to advance them.

11 People as Packets in the Age of Algorithmic Mobility Shaping

Mobility has a “hallowed place” in the liberal democratic tradition, providing us with what Blomley has described as “one means by which we can examine the uses to which spaces are put in political life and political relations” (Blomley 2009, 206). Mobility is, accordingly, inherently tied to governance and provides a window on the expansion and contraction of the fundamental rights, such as gender equality (Walsh 2015), that shape the human experience.

In this chapter, we look carefully at how people will move through a fully digitized society. We start by examining how turn-by-turn navigation technologies are automating the human task of driving, and, in doing so, have quietly established a footing for algorithmically controlled mobility systems. Whoever controls the algorithms that route mobility within a system gains de facto control over people and their mobility rights, determining who gets access to mobility, how they access it, and how the associated benefits (e.g. quality of service, comfort, and time to destination) and risks (e.g. exposure to noise and other pollution, traffic congestion, and discrimination) are distributed. In other words, whoever controls those algorithms can deliberately and effectively shape our experience of mobility in ways that were previously unheard of, significantly shifting the experience of being human in the digital age.

Early indicators of this shift seem clear. Turn-by-turn navigation is ubiquitous thanks to the proliferation of smart phones, and the algorithms that power it have become increasingly capable of responding to real-time changes in traffic patterns in order to minimize the time it takes to get to our destinations. That ruthless efficiency – the narrow emphasis on saving time – tempts us to rely on turn-by-turn navigation to get us where we want to be, even when driving in familiar places along familiar routes. Turn-by-turn navigation also powers ride hailing services like Uber and Lyft, and will eventually power more fully automated vehicles, thus restricting the set of navigational decisions available to human drivers and human passengers. These trends demonstrate how we are increasingly delegating navigational decision-making to technologies that, in turn, are (partially) automating the actual person behind the wheel.
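To see what is at stake in controlling a routing algorithm, consider a minimal sketch of time-based route selection over a toy road network (the intersections and travel times are invented). The route a driver is offered is simply the minimum-cost path, so whoever defines the cost function (for instance, by adding a penalty that steers traffic away from, or through, particular streets) effectively decides where people go.

```python
import heapq

# Toy road network: travel times in minutes between intersections.
ROADS = {
    "home": {"main_st": 4, "side_st": 2},
    "main_st": {"downtown": 5},
    "side_st": {"downtown": 6, "school_zone": 1},
    "school_zone": {"downtown": 2},
    "downtown": {},
}

def fastest_route(start, goal, penalty=None):
    """Dijkstra's algorithm over travel times; `penalty` optionally inflates the cost of entering certain roads."""
    penalty = penalty or {}
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, minutes in ROADS[node].items():
            heapq.heappush(queue, (cost + minutes + penalty.get(nxt, 0), nxt, path + [nxt]))
    return None

print(fastest_route("home", "downtown"))
# (5, ['home', 'side_st', 'school_zone', 'downtown'])
print(fastest_route("home", "downtown", penalty={"school_zone": 10}))
# (8, ['home', 'side_st', 'downtown'])
```

A single change to the weights reroutes every driver who follows the app’s guidance, which is the sense in which routing algorithms shape mobility.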

We argue that we are currently in the early days of algorithmically controlled mobility systems, but that, even if it is nascent in its form and reach, mobility shaping – the act of deliberately and effectively controlling mobility patterns using an algorithmically controlled mobility systemFootnote 1 – is raising a set of unresolved ethical, political, and legal issues that have significant consequences for shaping human experience in the future. The specific subset of questions we focus on in this chapter considers the extent to which the people travelling, the vehicles they use, and the geographic spaces through which they move, ought to be treated neutrally in the algorithmically controlled mobility system. By way of analogy, we argue that these emerging normative questions in mobility echo those that have been asked in the more familiar context of net neutrality. We seek to apply some of the ethical and legal reasoning surrounding net neutrality to the newly relevant algorithmically controlled mobility space, while adding some considerations unique to mobility. We also suggest extending some of the legal and regulatory framework around net neutrality to mobility providers, for the purpose of establishing and ensuring a just set of principles and rules for shaping mobility in ways that promote human flourishing.

Section 11.1 provides a brief historical survey of turn-by-turn navigationFootnote 2 and contextualizes the current socio-technical landscape. Section 11.2 examines the net neutrality controversy and legal rationales designed to ensure technical infrastructure creates political and economic relationships that are fair to people as citizens and rights-holders. Section 11.3 provides a comparative analysis between information networks (e.g. the Internet) and mobility networks, to demonstrate the extent to which the analogy helps us anticipate issues of fairness in algorithmically controlled mobility systems. Finally, Section 11.4 raises an additional set of ethical issues arising from mobility shaping, including the uneven distribution of mobility benefits and risks, the values underpinning navigational choices, and the enclosure of public concerns in private data.

11.1 A Brief History of the Automation of Driving Navigation

Prior to the widespread availability of turn-by-turn navigation apps on smartphones,Footnote 3 most drivers navigated via a combination of memory, instinct, road markers, oral directions, and paper maps. In most cases, each driving and navigation decision was shaped by two forces: the decisions of the person driving the vehicle (e.g. what speed to travel, whether to turn, change lanes, or come to a sudden stop) and the decisions of democratic institutions and administrative bodies that are both populated with people who administer the rules (e.g. to build roads, establish speed limits, and place road signs) and accountable to people as electors. Though these two forces remain relevant, new technologies are changing how people conceptualize mobility navigation. Incredibly detailed digital maps, satellite connectivity, and, most crucially, the enormous uptake of smartphones and other smart devices have enabled the near-ubiquity of turn-by-turn navigation systems. This section briefly examines the history of automating in-car navigation.

11.1.1 In-Car Navigation, Then

In-car turn-by-turn navigation systems predate the First World War. In the early days of road travel, motorists could purchase after-market devices such as the “Chadwick Road Guide” to aid in the complex task of navigation (French 2006). This mechanical invention, first available in 1910, featured an interchangeable perforated metal disc (a different disc for each route) intricately connected to one of the vehicle’s wheels. As the vehicle drove, the disc would turn and trigger instructions or warnings, such as “continue straight ahead” or “turn sharply to the left.” The driver could thus be “guided over any highway to [their] destination,” with the device “instructing [them] where to turn and [in] which direction” (French 2006, 270). However, there were obvious drawbacks to the Chadwick Road Guide and its contemporaries. Most significantly, these devices could only offer a limited number of predetermined routes. Besides that, the devices were complicated, delicate, and relatively expensive.

In-car navigation devices continued to evolve slowly over the next several decades (French 2006). Despite improvements, these systems could still not provide real-time information about current driving conditions and lacked accuracy over long distances.

11.1.2 In-Car Navigation, Now

More recently, a suite of technologies, including GPS, digital cameras, cloud computing, vision systems driven by artificial intelligence, and the widespread adoption of smartphones, has enabled the rapid uptake of much more effective navigation systems.Footnote 4 Satellite imagery and computer vision techniques enable the creation of maps so detailed that the fan blades inside rooftop HVAC units can be seen on some buildings in downtown Los Angeles (O’Beirne 2017). Additionally, the advent of multiple satellite positioning systems – collectively known as Global Navigation Satellite Systems (GNSS) – coupled with the widespread adoption of wireless communication systems, allows for far more accurate development, update, deployment, and use of maps.

Smartphones have likely resulted in the most significant changes in automating in-car navigation in recent years. Worldwide, approximately 63 per cent of adults owned smartphones in 2017 (Molla 2017). In the United States, that number is significantly higher, at 81 per cent as of June 2019 (Pew Research Center 2019). There are now more mobile phones (8.58 billion) than people (7.95 billion) on the planet (Richter 2023). According to another recent study, over three-quarters (77 per cent) of US smartphone users “regularly” use navigation apps (Panko 2018).Footnote 5 Eighty-seven per cent of those respondents primarily use the apps for driving directions (as opposed to walking, cycling, public transit, or just as maps), and 64 per cent use the apps while driving (Panko 2018).Footnote 6 Additionally, anecdotal experience suggests that drivers use the apps even in neighbourhoods they know, along routes they often travel. The “nudges” they receive from navigation systems can alert them to poor traffic conditions, and the apps will work out alternative routes if something goes wrong. As drivers incorporate turn-by-turn navigation into their daily driving routines, and delegate navigation decisions to those apps, they are ushering in the age of algorithmically controlled mobility systems.

11.1.3 The Navigation Marketplace

Despite the growing popularity of turn-by-turn navigation, the cost of up-front investment in mapping infrastructure means relatively few companies compete in the market. Alphabet (Google’s parent company) is by far the most significant player in both mapping and turn-by-turn navigation. The Google Maps app, on which 67 per cent of US navigation app users rely, dominates turn-by-turn navigation. Google Maps far outstrips both Apple Maps and the Israeli-founded system Waze, at 11 per cent and 12 per cent, respectively (Panko Reference Panko2018). Moreover, Alphabet purchased Waze in 2013 (Cohan Reference Cohan2013), and so Waze and Google now share the same base map data. Alphabet thus controls nearly 80 per cent of smartphone-assisted turn-by-turn navigation. As a further sign of Alphabet’s dominance, the Google Maps API is currently embedded in more than five million websites, far more than any of its competitors (BuiltWith 2019).Footnote 7

Alphabet has another advantage in the field: the sheer amount of its accumulated data. Google Maps was launched in 2005 and has been collecting mapping data ever since, using aerial photography, satellite images, land vehicles, and individual smartphone data. In 2012, Google had more than 7,000 employees and contractors on its mapping projects, including the Street View cars (Carlson Reference Carlson2012).Footnote 8 Google Maps has more than one billion active users worldwide (Popper Reference Popper2017); at the time of writing, Waze has more than 151 million (Porter Reference Porter2022). Apple Maps, Alphabet’s closest US competitor in the navigation space, has only been active since 2012, and buys its mapping data second-hand, mostly from the Dutch navigation system company TomTom (Reuters 2015). Though expanding, the Europe-based Here WeGo, founded by Nokia and currently owned by a consortium of German car manufacturers, does not yet threaten Alphabet’s dominance (Here Technologies 2019). Likewise, although Uber is conducting mapping projects (Uber 2019), as are Ford and other traditional car manufacturers (Luo Reference Luo2017), Alphabet’s current advantage is undisputed.

Whether or not Alphabet continues to dominate the in-car navigation landscape, turn-by-turn navigation systems will only become more important as connectivity and functionality improve. For example, turn-by-turn navigation has evolved to include other modes of mobility, including walking, cycling, and public transportation, positioning it as the go-to technology for getting around. Fully autonomous vehicles, should they ever come to fruition, will rely on navigation systems to a far greater extent than even the most obedient driver, as they will undoubtedly move within the mobility system according to the rules designed into routing algorithms. Thus, decisions we make now about how to develop, implement, and regulate turn-by-turn navigation systems will fundamentally shape algorithmically controlled mobility systems in the coming decades.

11.2 A Brief History of Net Neutrality

In anticipation of the evolution of mobility towards algorithmically controlled mobility systems, it is useful and instructive to reflect on the net neutrality debates that have shaped our algorithmically controlled information system – also known as the Internet – over the past two decades. These debates matter because they have been an ongoing site of political struggle over the need to infuse tech policy with human-centric values. We consider net neutrality a useful metaphor for thinking about the ethics of algorithmically controlled mobility, primarily because information networks and mobility networks each contain: their own unique units of analysis – packets of information, and packets of people (or mobility); their own paths through the network – wires and roads; and their own control/routing algorithms that determine how to get the packets to their destination. The parallels, and distinctions, between information networks and algorithmically controlled mobility systems can help anticipate and inform ethical design and regulatory responses in the mobility context. They provide a roadmap that encourages designers and policymakers to develop socio-technical design specifications and to consider the ways that regulation can promote or constrain human mobility. This section provides a brief overview of the main technical, political, and ethical issues in net neutrality, including the concepts of “discrimination,” “non-discrimination,” and “neutrality,” and technical and ethical concerns related to Deep Packet Inspection and the legal concept of “common carriage.”

11.2.1 What Are Data Packets?

Data packets, generally consisting of a header and a payload, can be thought of as the basic units of Internet communication. All information sent over the Internet (e.g. emails, movies, cat memes, Instagram posts, and TikToks) is broken up into smaller chunks of data that are packaged up as one or more data packets. If multiple packets are needed to carry the transmitted information, as they usually are, the divisions between packets are made automatically. Each individual packet is then sent to its destination separately, along whatever route is most convenient at the time (Indiana University 2018). Packet headers include high-level routing information, such as the packet’s source and destination IP addresses, along with information instructing how to correctly reassemble multiple packets when they reach their destination (Indiana University 2018). The remainder of the packet is referred to as the payload, containing chunks of the transmitted information.
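
To make this structure concrete, the following minimal sketch (in Python, purely for illustration) models a packet as a header plus a payload and shows how a message might be split into packets and reassembled at its destination. The field names, chunk size, and helper functions are simplifications invented for this example rather than a depiction of any actual Internet protocol.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Header:
    source_ip: str   # where the packet came from
    dest_ip: str     # where the packet is going
    sequence: int    # position of this chunk, used to reassemble the message
    total: int       # how many packets make up the whole message


@dataclass
class Packet:
    header: Header
    payload: bytes   # one chunk of the transmitted content


def packetize(message: bytes, source_ip: str, dest_ip: str,
              chunk_size: int = 1200) -> List[Packet]:
    """Split a message into packets, mimicking the automatic division described above."""
    chunks = [message[i:i + chunk_size] for i in range(0, len(message), chunk_size)]
    return [Packet(Header(source_ip, dest_ip, i, len(chunks)), chunk)
            for i, chunk in enumerate(chunks)]


def reassemble(packets: List[Packet]) -> bytes:
    """At the destination, reorder packets by sequence number and rejoin their payloads."""
    ordered = sorted(packets, key=lambda p: p.header.sequence)
    return b"".join(p.payload for p in ordered)
```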

11.2.2 Packet Discrimination and the Emergence of Net Neutrality

Early in the Internet’s history, communications between people were divided into packets of information that travelled from one place to another with very little oversight. In this early “network of Eden” (Parsons Reference Parsons2013, 14), packets were only subjected to Shallow Packet Inspection (SPI) techniques. As the name implies, SPI is designed only to allow network routers to access high-level information about a packet’s delivery instructions; that is, SPI limits the inspection to the packet headers (Parsons Reference Parsons2013). An Internet Service Provider (ISP) whose routers base routing decisions on SPI might examine the source IP address of the packet, the packet identification number, or the kind of protocol the specific packet uses, but would not typically have access to the packet content itself. SPI is thus used primarily as a routing tool, much like addresses on envelopes travelling through the post.

Because SPI allows for examining destination and source IP addresses, it enables only relatively crude forms of information discrimination, such as blacklisting, firewalling, and others based solely on IP addressing. “Discrimination” in this sense refers to the choices made in routing one packet compared to another. These choices might be automated and algorithmic, or they might be human-driven. Algorithmic discrimination in this sense might be as simple as the automatic “decision” to route a packet along a specific path with no human oversight. Human-driven discrimination in this context, for example, could include a corporate policy of treating packets originating from a source IP that is known to spread viruses as “blacklisted” in the corporate network – that is, preventing untrusted packets from reaching the corporate server as a security measure. SPI-enabled discrimination may have political or moral dimensions; for instance, some corporate firewalls block all packets from social media websites, while government firewalls could prevent citizens from accessing content that challenges the state.
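
The following illustrative sketch shows what SPI-style, header-only discrimination might look like in code. The blacklisted address and the routing decisions are hypothetical; the point is simply that the decision turns on addressing information alone, never on the content being carried.

```python
BLACKLISTED_SOURCES = {"203.0.113.7"}  # hypothetical address known to spread malware


def spi_route(header: dict) -> str:
    """Shallow inspection: the decision sees only header fields, never the content carried."""
    if header["source_ip"] in BLACKLISTED_SOURCES:
        return "drop"     # firewall-style blacklisting based on addressing alone
    return "forward"      # otherwise, send the packet along the most convenient path


# Example: a packet from the blacklisted source is dropped without reading its payload.
print(spi_route({"source_ip": "203.0.113.7", "dest_ip": "198.51.100.20", "protocol": "TCP"}))
```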

Clearly, these kinds of restrictions could have a significant impact on the humans using the system to communicate. An ISP’s decision to block particular packets likely has far-reaching implications for a broad swath of citizens, and an ISP could also manipulate packet routing across the Internet to suit its own purposes. For instance, an ISP could prevent its customers from accessing certain websites or could delay certain packets from one website from reaching their destination as quickly as other websites’ packets. Thus, ISP packet discrimination has the potential to preference certain corporate and state interests over others. When it became clear that many ISPs were using packet discrimination to further their own corporate interests, a public controversy erupted over the role of packet discrimination in anti-competitive market manipulation, precisely because it unevenly, and thus unfairly, constrained communication opportunities for people using the Internet to share content and information.

Concerns about the anti-competitive nature of ISP packet discrimination led Tim Wu to propose “the principle of network neutrality or non-discrimination” (Reference Wu2002, 1). Net neutrality, as Wu imagines it, is a principle that “distinguish[es] between forbidden grounds of discrimination – those that distort secondary markets, and permissible grounds – those necessary to network administration and to [avoid] harm to the network” (Reference Wu2002, 5). For Wu, forbidden grounds are those based on “internetwork criteria”: “IP addresses; domain names; cookie information; TCP port; and others” that can lead to unfair outcomes for (classes of) individuals (Reference Wu2002, 5). The permissible grounds are limited to local network integrity concerns, in particular, bandwidth and quality of service. As Wu describes, rather than blocking access to bandwidth-intensive applications like online gaming sites, and thus distorting information flow in favour of non-blocked applications, an ISP concerned with net neutrality “would need to invest in policing bandwidth usage” as a means of nudging consumers (Reference Wu2002, 6). The result would be a more even playing field for all network applications, shaped primarily by human communication choices, instead of an artificially influenced market sphere set up for the benefit of those controlling the flow of information.

Wu’s (Reference Wu2002) concept and coinage took off and were discussed at the highest levels of the US government (Madrigal and LaFrance Reference Madrigal and LaFrance2014). Moreover, net neutrality is now a proxy for deeper ethical and political issues fundamentally tied to the values of human communication, privacy, surveillance, consumer rights, and freedom of speech. This has direct political consequences. Access to information, an important principle at the core of net neutrality, is recognized in Canada as an implied constitutional right (Ontario (Public Safety and Security) v. Criminal Lawyers’ Association 2010). Further, the ability to lawfully access information is a cornerstone of modern democracy: without a well-informed electorate, the health of a democracy is imperilled (Canada (Information Commissioner) v. Canada (Minister of National Defence) 2011). Globally, content discrimination on the Internet is perhaps “the [free speech] issue of our time” (Hattem Reference Hattem2014), creating a space for political action and resistance.

11.2.3 The Rise of Deep Packet Inspection

Further changes to packet inspection technology amplified a broader set of net neutrality concerns. Deep Packet Inspection (DPI) technology, made possible in 2003 by changes in network router design, enables access to the content of the message itself in real time. Some DPI equipment can monitor hundreds of thousands of packets simultaneously, in effect looking over the shoulder of the people communicating and reading the text of their emails and other communications (Anderson Reference Anderson2007).
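
By contrast, a DPI-style decision reads the payload itself. The sketch below is a deliberately simplified, hypothetical illustration (real DPI systems are vastly more sophisticated); the flagged terms and the "delay" response are invented for the example.

```python
FLAGGED_TERMS = [b"bittorrent", b"protest"]  # hypothetical content-level triggers


def dpi_route(header: dict, payload: bytes) -> str:
    """Deep inspection: the router reads the payload itself, so the decision can turn on
    what is being communicated rather than merely where it is going."""
    if any(term in payload.lower() for term in FLAGGED_TERMS):
        return "delay"    # content-based discrimination: throttle, log, or block
    return "forward"


# Example: identical addressing, different treatment, purely because of the content.
print(dpi_route({"source_ip": "192.0.2.1", "dest_ip": "198.51.100.20"}, b"Meet at the protest"))
```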

The US telecom corporation Comcast provided a striking example of DPI-enabled algorithmic discrimination. In 2007, several public interest groups filed a complaint with the US Federal Communications Commission (FCC), citing Comcast’s practice of secretly “delaying” the transmission of packets from peer-to-peer file-sharing sites (FCC 2008). Comcast argued that severely delaying traffic from these sites was necessary to manage bandwidth requirements, and that earlier rulings and statements from the FCC had merely prohibited outright blocking. The FCC disagreed, holding that the delays in this case were so extreme that they amounted to blocking (FCC 2008). In any event, the FCC noted, “Comcast selectively targeted and terminated the upload connections of … peer-to-peer applications and … this conduct significantly impeded consumers’ ability to access the content and use the applications of their choice” (FCC 2008, para. 44). The FCC ordered Comcast to end its blocking practices in the interest of “the open character and efficient operation of the Internet” (FCC 2008, para. 51).

Although later rulings invalidated the FCC’s order and called the Commission’s jurisdiction into question, Comcast adjusted its network management practices so that no specific application, or category of applications, was targeted by its routing algorithms. Rather, network congestion is now managed by slowing down the connections of specific individuals (heavy bandwidth users) during peak usage periods (Comcast Corporation 2008). Although these practices are still discrimination of a sort, they have become commonplace and seem to fall within the permissible grounds identified by Wu (Reference Wu2002).
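
A rough sketch of this kind of application-agnostic congestion management might look as follows. The load threshold, usage threshold, and speed multipliers are hypothetical values chosen for illustration, not a description of Comcast's actual practices.

```python
def manage_congestion(hourly_usage_gb: dict, network_load: float,
                      heavy_threshold_gb: float = 5.0) -> dict:
    """During peak load, slow the heaviest users rather than targeting any application.

    Returns a speed multiplier per user (1.0 = full speed, 0.5 = halved speed)."""
    if network_load < 0.8:            # no congestion: leave everyone at full speed
        return {user: 1.0 for user in hourly_usage_gb}
    return {user: (0.5 if used > heavy_threshold_gb else 1.0)
            for user, used in hourly_usage_gb.items()}


# Example: only the heavy user is slowed, and only while the network is congested.
print(manage_congestion({"alice": 7.2, "bob": 0.4}, network_load=0.93))
```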

11.2.4 Common Carriage

The principle of net neutrality is partly based on the idea of “common carriage,” a legal concept with long roots in the common law. Common carriage speaks to the need to ensure that infrastructural systems serve the interests of citizens in ways that are recognized as fair.

Common carriage itself arose from the “common calling”: people engaged in what might be called public service professions, such as innkeepers, barbers, and farriers, could be found liable for refusing service to an individual without reasonable justification (Burdick Reference Burdick1911). Those with a common calling made a “general undertaking” to serve the public at large “in a workmanlike manner,” and any failure to do so left them open to legal action under the law of contract (Burdick Reference Burdick1911, 518).

As technology advanced, the common calling expanded to include “common carriers,” particularly railroads, shipping lines, and other transportation organizations. One defining feature of common carriers, as opposed to common callings, is the up-front infrastructure investment that the former requires. Building a railroad requires massive amounts of start-up capital, time, and (typically) political goodwill. These factors make it difficult for competitors to enter the market, thus limiting both competition and consumer options. If someone wishes to travel but does not wish to pay a certain price for a train ticket, their options are limited. They may find alternative means of transport or choose not to go, but (except in the most exceptional circumstances) they cannot build themselves a railroad. Thus, railroads and other common carriers operate as “virtual monopol[ies]” (Wyman Reference Wyman1904, 161) and, though they are often private companies, they are “in the exercise of a sort of public office, [with] public duties to perform” (New Jersey Steam Navigation Co. v. Merchants’ Bank 1848, 47). As a result, their service should be agnostic with respect to the cargo (and people) they transport.

Though the Internet shares features of common carriers, whether the Internet is considered a common carrier depends on national jurisdiction. Canadian policy, for example, is firmly behind the common carrier model, and the need to ensure all people have fair access to infrastructural services. Because of this political commitment to equal treatment of people, the Canadian Radio-television and Telecommunications Commission (CRTC), which assumed telecommunications control from a variety of bodies in 1968, strongly supports the equal treatment of the data those people communicate “regardless of its source or nature” (CRTC 2017, para. 3).Footnote 9

11.3 Net Neutrality and the Ethics of Mobility Shaping

Net neutrality debates, and the discussion of common carriage principles, alert us to many related problems in algorithmically controlled mobility systems. An algorithmically controlled mobility system recalls the distinction between “forbidden” and “permissible” discrimination of information packets, with people reduced to mobility packets composed of specific vehicles and the goods and people within them. Like common carriers, algorithmically controlled mobility systems require substantial up-front investment in both publicly and privately owned and operated infrastructure, creating virtual monopolies (as evidenced by the very few global players in the space and Alphabet’s overwhelming dominance in the market). Indeed, their public–private nature raises complex questions about the governance of algorithmically controlled mobility systems as a public good. Thus, there are many similarities between neutrality in the Internet context and mobility neutrality in the context of algorithmically controlled mobility systems, though there are important distinctions to be drawn as well. This section will examine the applicability of net neutrality concepts to the mobility context in more detail.

11.3.1 Traffic Shaping Is to Information Packets as Mobility Shaping Is to People Packets

To a routing algorithm, there is little difference between a packet of digital information moving through a digital information network (e.g. the Internet) and a packet of people (or goods) moving through a physical mobility network. In an important sense, a map of a digital information network is very similar to a map of a mobility network, with origins, pathways, routing decision points, destinations, and rules about how a packet can move through the system. Just as algorithms control and shape the flow of packets in information systems, often referred to as traffic shaping, algorithms control and shape the flow of people packets through mobility systems, which we refer to as mobility shaping. Thus, there is an ethics of mobility shaping that must be considered when designing the set of rules that govern an algorithmically controlled mobility system. Mobility shaping algorithms, for example, could be designed to move people packets from source to destination according to principles of fairness.
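
To illustrate the point, the sketch below runs a standard shortest-path routine (Dijkstra's algorithm) over a small graph. Nothing in the code cares whether the nodes are routers joined by wires or intersections joined by roads; the example graph and weights are invented for illustration.

```python
import heapq


def shortest_route(graph: dict, origin: str, destination: str) -> list:
    """Dijkstra's algorithm: the same routine can route an information packet across
    network links or a vehicle across road segments; only the edge weights differ."""
    queue = [(0, origin, [origin])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == destination:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, weight in graph.get(node, {}).items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return []


# Edge weights could be link latency (information network) or travel time (road network).
roads = {"A": {"B": 4, "C": 2}, "B": {"D": 5}, "C": {"B": 1, "D": 8}, "D": {}}
print(shortest_route(roads, "A", "D"))  # ['A', 'C', 'B', 'D']
```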

There is a clear analogy to both SPI and DPI in the mobility shaping context. Mobility shaping decisions could be relatively neutral, based only on a set of information containing origin and destination. Mobility shaping could also be complex, intended to support new models of mobility service delivery, and based on detailed information about who (or what) is in the vehicle, such as their socioeconomic status, age, political leaning, gender, purchase history, customer rating, and driving experience preferences, among endless other data breadcrumbs. Indeed, whole new categories of information could be invented to accommodate new forms of mobility shaping. One can imagine different mobility service levels, such as virtual fast lanes, made available to the wealthy (or inaccessible to the poor) or perhaps available to those subscribing to particular loyalty programs (themselves designed to collect more of individuals’ data).
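
A hypothetical sketch of such DPI-like mobility shaping is given below: the cost of the same road segment varies with attributes of the rider, so the routing outcome depends on who is being moved, not just where they are going. The tier names, road classes, and multipliers are all invented for the example.

```python
def edge_weight(base_travel_time: float, road_class: str, rider_tier: str) -> float:
    """Hypothetical mobility-shaping rule: the same road 'costs' different amounts
    depending on who is being routed, steering premium riders into faster corridors."""
    if road_class == "express" and rider_tier != "premium":
        return float("inf")               # virtual fast lane closed to non-subscribers
    if rider_tier == "premium":
        return base_travel_time * 0.9     # premium 'people packets' get a routing preference
    return base_travel_time


# The same ten-minute express segment is usable for one rider and invisible to another.
print(edge_weight(10.0, "express", "premium"), edge_weight(10.0, "express", "standard"))
```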

DPI for mobility shaping is already a fact of everyday life and new possibilities for mobility shaping are emerging as more mobility algorithms are designed to take individuals’ data into account. Individual user preferences that shape mobility, say the choice of avoiding tolls or highways, are commonplace features in turn-by-turn navigation apps. But mobile devices and online platforms linking single user accounts together over multiple services (e.g. Google Search, Gmail, Docs, Photos, Maps, and smart phone location data) are enabling the collection and curation of massive datasets applicable to mobility shaping,Footnote 10 significantly upping the ante when it comes to the potential for lived discrimination and unfairness.

Highly granular geo-physical records of a person’s location and movement can reveal many aspects of their life, especially when linked to other data about that person. For example, Waze collects location data and repackages that data as insights into consumer behaviour. A chart on the “Waze For Brands” website displays “Driving Patterns,” showing “when drivers are most likely to visit different business categories” (Waze n.d.). The categories shown are “Auto,” “Coffee,” “Fast Food,” “Fuel,” and “Retail” and can be sorted by day or by hour. From the chart, we learn that the more than 90 million Waze users worldwide are most likely to visit coffee shops between 8 am and 10 am, and are least likely to go to retail stores on Sundays. These particular facts are not earth-shattering revelations, but they signal the trend toward DPI-based mobility shaping models designed to serve private interests rather than the public good. In addition to access to its entire global database, Waze also provides local marketers with multiple advertising strategies. Among other tools, Waze offers Branded Pins with Promoted Search (large branded corporate symbols that appear on the map when the user is within a certain distance of the promoted location) and Zero-Speed Takeovers (large banner ads that cover the Waze interface if the user stops nearby) (Waze 2019). Thus, the “map” shown to Waze users doubles as a promotional engine intended to shape navigation decisions by nudging a driver in a particular direction.

Mobility shaping, though in its infancy, is on the rise; today’s navigational nudges will be tomorrow’s strategies for absolute control over people’s mobility. However, the principles and rules defining forbidden and permissible mobility shaping are as yet undefined. In Section 11.3.2 we consider to what extent the rules used to distinguish between forbidden and permissible information traffic shaping in the net neutrality debate may help us better understand the importance of regulation in protecting individual mobility rights.

11.3.2 Forms of Inclusion, Exclusion, and Discrimination

Today’s turn-by-turn navigation systems, by design, shape mobility by nudging the human user (i.e. driver, cyclist, etc.) to take a certain path to their destination. Generally speaking, today’s systems are designed to minimize the time it would take to reach a chosen destination – but mobility can be algorithmically shaped according to any number of values and preferences other than minimizing time to destination. Turn-by-turn systems also currently shape mobility by nudging people toward certain destinations rather than others, for example by presenting curated lists of options when drivers search for nearby restaurants, gas stations, or other potential destinations. In this sense, mobility shaping functions as a powerful choice architecture, designed to privilege certain values and preferences over others. Mobility shaping can obscure whole categories of routing options or destinations, keeping vehicles on roads designed for high volumes of traffic or out of neighbourhoods where children tend to play in the streets or wealthy homeowners want privacy. Mobility shaping algorithms can also be designed to maximize returns on investment for those corporations heavily invested in the technology, leveraging vast quantities of individual data and preferencing other interests (e.g. corporate) over the needs of people in ways that remain largely opaque to the public. As we march down the road of automating mobility and delegate more decision making to navigation algorithms (which will eventually have broad power over more automated forms of mobility), nudges morph into pushes. At some point, the idea of people acting as agents entitled to move through space making fine-grained mobility decisions for their own purposes – turning left here instead of right because it’s prettier, switching into this lane versus that one to get a better look at a friend’s new garden, taking Main St. instead of Fifth to avoid passing an ex-partner – fades into the background of the algorithmically controlled mobility system.

Borrowing from Wu’s (Reference Wu2002) net neutrality framework, some of these mobility shaping strategies may be based on permissible discrimination, and some may not. Many of them could mimic the discriminatory practices discussed in the net neutrality context, particularly “blocking,” “zero-rating,” and “throttling,” each of which will now be discussed in turn.

11.3.2.1 Blocking

Blocking, in the net neutrality context, is the simple blacklisting of certain Internet destinations (websites). The Fairplay Canada proposal, in 2018, for example, sought to compel ISPs to block access to any site deemed to contain copyright-infringing content (CRTC 2018; O’Rourke Reference O’Rourke2018). Blocking is also sometimes called “filtering” (generally by its proponents) and is used to prevent access to content that is considered illegitimate.

In the mobility shaping context, blocking strategies are easily imagined. Mobility shaping algorithms could blacklist physical destinations, or origins, with varying degrees of interpretability and transparency. On the clearly permissible end of the spectrum, trying to access a restricted destination (such as a military base) could result in a refusal to navigate to that location. Less straightforwardly permissible are situations in which those hailing rides are refused pickups or drop-offs from/to locations that are, for any number of questionable reasons, blacklisted. More subtly, though, certain destinations might simply be left off the map or excluded from the system’s search function, as is the case with the famous Hollywood sign in Los Angeles (Walker Reference Walker2014). Mobility shapers might use such strategies to artificially restrict access to politicized locations (for example, the meeting points for political protests or abortion clinics) that do not align with corporate or state interests. Combined with DPI, blocking could be targeted at individual mobility users, who find themselves excluded from certain destinations, mobility services, routing options, or mobility service levels, and could prove very difficult to detect.
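
The sketch below illustrates blocking by omission in a hypothetical destination search: blacklisted places never surface, so a user has no easy way to know anything was excluded. The place names are invented for the example.

```python
HIDDEN_DESTINATIONS = {"protest meeting point", "restricted military base"}  # hypothetical blacklist


def search_destinations(query: str, catalogue: list) -> list:
    """Blocking by omission: blacklisted places never appear in results, so the
    exclusion is difficult for an individual user to detect."""
    return [place for place in catalogue
            if query in place and place not in HIDDEN_DESTINATIONS]


catalogue = ["protest meeting point", "protest supplies store", "city park"]
print(search_destinations("protest", catalogue))  # the meeting point silently disappears
```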

11.3.2.2 Zero-rating

The practice of zero-rating is, in some ways, the inverse of blocking. In the net neutrality context, zero-rating involves exempting certain websites or web resources from bandwidth caps. This practice thus encourages an ISP’s customers to consume the “free” resources instead of content that will increase their data consumption levels. Zero-rating is thus an artificial intervention in a secondary market (that is, in content), and one that often benefits the ISP – particularly when the ISP also provides the content.Footnote 11

“Zero-rating”-like strategies are possible in the mobility shaping context. This is not necessarily a bad thing: as in the net neutrality context, there may be reasonable and valid grounds for prioritizing some routes, services, or locations. For instance, cities might choose to subsidize ride-hailing fares to and from hospitals in order to help people get to the hospital more easily and to save on maintaining expensive parking facilities. There could be benefits to zero-rating airports or central transit hubs or to nudging drivers onto major highways rather than along side streets.

However, there may be times when zero-rating could be less permissible – for instance, if a dominant mobility service provider used zero-rating to dissuade people from accessing services or locations associated with a small competitor’s mobility ecosystem. As an example, Google Maps could offer cheaper fares to users hailing Lyft or Uber rides so long as Google Maps powered them both, thus discriminating against users who choose to hail a ride using a non-Google-powered service. As on the Internet, zero-rating in the mobility context could impermissibly affect a secondary market – in this example, ride-hailing providers – in ways that constrain human agency and fair access to services.
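
A simple, hypothetical sketch of zero-rating-like fare shaping follows: some trips are discounted for arguably public-regarding reasons, others to favour an affiliated provider. The destination, provider names, and discount rates are invented for illustration.

```python
SUBSIDIZED_DESTINATIONS = {"general hospital"}  # hypothetical civic zero-rating
AFFILIATED_PROVIDERS = {"partner rides"}        # hypothetical platform-affiliated service


def quoted_fare(base_fare: float, destination: str, provider: str) -> float:
    """Zero-rating analogues: some trips are discounted, nudging riders toward
    favoured destinations or favoured providers."""
    if destination in SUBSIDIZED_DESTINATIONS:
        return base_fare * 0.5   # publicly motivated subsidy (arguably permissible)
    if provider in AFFILIATED_PROVIDERS:
        return base_fare * 0.8   # platform self-preferencing (far harder to justify)
    return base_fare


print(quoted_fare(20.0, "general hospital", "independent rides"))  # 10.0
```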

11.3.2.3 Throttling

Throttling, on the Internet, is the practice of selectively and deliberately either improving or degrading the level of service (i.e. the speed of information transfer) between two internet addresses. Throttling can be similar to blocking, but rather than barring a website or web resource outright, an ISP can merely make that resource very slow or difficult to access. In the Comcast Corporation (2008) complaint discussed in Section 11.2.3, one of Comcast’s arguments was that they were not truly “blocking” peer-to-peer transfers but simply “delaying” them. In that case, however, the FCC (2008) determined that Comcast was essentially engaged in blocking because the “delays” were effectively infinite. Yet even shorter delays can have a significant effect: a Google study from 2016 showed that 53 per cent of mobile device users will abandon a website that takes more than 3 seconds to load (Think with Google Reference An2018). More recent data suggests that the “bounce rate” (the number of visitors who leave a site after viewing only one page) increases dramatically with loading times – users are 90 per cent more likely to leave a site that takes 5 seconds to load than a site that only takes 1 second (An Reference An2018). Worse, if loading the site takes 10 seconds, the user is 123 per cent more likely to bounce than if it only takes 1 second (An Reference An2018). Clearly, website throttling need not cause enormous “real-world” delays to have the same effects as outright blocking, with the same consequences for human agency and choice.

In the mobility context, throttling can be thought of as the deliberate manipulation of the time it takes to travel between two locations; it is the algorithmic creation of fast lanes and traffic jams. Throttling on the Internet is intended to persuade or dissuade customers from accessing a particular resource by making access to the resource feel either seamless and smooth or frustratingly slow. Mobility shaping by throttling could be as simple as nudging certain drivers into “slow lanes” on a multi-lane freeway with common “stay in the right-hand lane” messages, while nudging privileged individuals into less occupied “fast lanes.” More drastic versions could take certain drivers along completely different routes in order to keep “fast lanes” relatively unoccupied. In an automated driving context where drivers are only there to take over in emergencies, or not at all, systems would simply force vehicles into virtually negotiated fast and slow lanes. As with blocking and zero-rating, certain forms of discrimination by throttling could be deemed permissible, perhaps to support democratically accountable initiatives or other essential services like first responders. Others could be more difficult to justify: offering fast lanes as a means of rewarding people who purchase particular vehicle brands, and slow lanes, either through access queuing or slower transit times, for people living in low-income neighbourhoods, could be deemed impermissible.
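
The sketch below gives a hypothetical flavour of throttling as lane assignment; the tiers, purposes, and lane labels are invented for the example, and real systems would of course be far more elaborate.

```python
def assign_lane(rider_tier: str, trip_purpose: str) -> str:
    """Mobility throttling: the algorithm sorts vehicles into virtual fast and slow lanes."""
    if trip_purpose == "emergency response":
        return "fast lane"   # widely accepted as permissible discrimination
    if rider_tier == "premium":
        return "fast lane"   # rewarding brand loyalty: much harder to justify
    return "slow lane"       # everyone else queues behind


print(assign_lane("standard", "commute"), assign_lane("premium", "commute"))
```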

11.4 The Need for an Ethics of Mobility Shaping

Given the centralized control that algorithms will (and to an extent already do) exert over various aspects of human mobility, and the differing qualities of mobility service that individuals might be subject to in an algorithmically controlled system, mobility shaping practices threaten to exacerbate existing mobility inequalities while inventing whole new categories of harm to the people who move through space.

Responding to these new ethical challenges will require a more clearly articulated ethics of mobility shaping. In this section we suggest a few general categories of inquiry that we feel could help lay the ethical groundwork for dealing with the specific issues, several of which we have raised, that arise in the context of mobility shaping. Our goal here is to start a conversation, recognizing that much more work is required to flesh these issues out.

11.4.1 The Just Distribution of Mobility Benefits and Harms

As we have described, mobility shaping can result in the uneven and problematic distribution of mobility benefits and harms. As mobility shaping becomes more prevalent and displaces traditional individual driver-determined forms of navigation, it will be important to examine whether any scheme of altering people’s ability to move from place to place results in a permissible or impermissible distribution of those affordances. Those benefits and harms include access to mobility, accessibility of mobility, quality of mobility service, noise and air pollution, vehicle speed and congestion, and the safety of vulnerable road users (e.g. pedestrians and cyclists) (Millar Reference Millar, Lin, Abney and Jenkins2017). Like other distribution problems we face in society, mobility distribution problems, many of which will be created or exacerbated by mobility shaping, should be decided by careful attention to contextual details to avoid problematic constraints on human agency.

11.4.2 Preserving Individual and Collective Mobility Decision Making

Current mobility shaping algorithms are ruthlessly focused on minimizing the time to destination, while ignoring other individual and social values that are likely worth preserving in the mobility context. At times, for example, drivers might prefer a slower, more scenic, or less busy route along a rural road to increase their well-being, rather than travelling through a busy industrial corridor. They might prefer to avoid quiet neighbourhoods where children often play in the streets, in order to improve safety. Yet most turn-by-turn navigation systems do not allow individual drivers to easily adjust their route to accommodate such values-based considerations. As these systems evolve, it might be important to build them in ways that help preserve and amplify the role of human agency in mobility decision making.
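
One way to make such values-based considerations legible to a routing algorithm is to fold them into the cost function it minimizes. The sketch below is a hypothetical illustration; the weights, the scenic score, and the play-street penalty are invented placeholders for preferences that individuals or communities might set.

```python
def route_cost(travel_minutes: float, scenic_score: float,
               crosses_play_street: bool, weights: dict) -> float:
    """A values-aware cost function: time still matters, but scenery and safety
    preferences can pull the chosen route away from the pure time-minimizing one."""
    cost = weights["time"] * travel_minutes
    cost -= weights["scenery"] * scenic_score      # more scenic roads become 'cheaper'
    if crosses_play_street:
        cost += weights["safety"]                  # penalize cutting through play streets
    return cost


preferences = {"time": 1.0, "scenery": 0.3, "safety": 15.0}  # hypothetical user weights
# A slightly slower rural route can now beat a faster industrial or residential shortcut.
print(route_cost(22.0, scenic_score=8.0, crosses_play_street=False, weights=preferences))  # 19.6
print(route_cost(18.0, scenic_score=1.0, crosses_play_street=True, weights=preferences))   # 32.7
```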

This focus on time-efficiency can also disrupt democratic values, especially given the important role that the public space plays in democratic governance. Local citizens have a democratic interest in traffic planning that apps like Waze undermine. The town of Leonia, New Jersey, for instance, is bordered by Interstate 95 and has always struggled with vehicles cutting through town. But with the arrival of Waze and other efficiency-seeking navigation systems, Leonia saw a massive uptick in rush hour traffic, so extreme that many residents could not leave their driveways. In response, Leonia decided to close nearly all of its streets to non-local traffic during rush hour periods, 7 days a week (Foderaro Reference Foderaro2017). Though this might seem like a happy ending for the people of Leonia, it underscores the immediate impact that corporate mobility shaping can have on people’s experience of space and the systems of democratic accountability in which traffic planning decisions are typically made. At the same time, it points to the incredible potential for more democratic forms of algorithmic mobility shaping.Footnote 12

These concerns reflect a significant difference between the Internet and mobility networks. While the Internet began as many disjunct, semi-private networks, albeit ones often constructed with public funding, most roads began as inherently public. Homer’s (1898) Iliad, for instance, refers to moving “along the public way,” and though private roads were known in the Roman Empire, the “main roads … were built, maintained and owned by the State” (Jacobson Reference Jacobson1940, 103).Footnote 13 Toll roads and private roads are still relatively uncommon. As a result, decision-making about roads has been a critical aspect of public discourse for hundreds, if not thousands of years.

In this emerging era of private digital navigational and mapping data and increasingly automated mobility, which function as the metaphorical routers and packets of algorithmically controlled mobility systems, we are confronting the unanticipated privatization of the roads themselves. Though the roads may remain public, that designation could morph into something quite alien relative to our current understanding of mobility, as private interests drape an invisible yet powerful web of algorithmic control over our physical space. Decisions about mobility are being removed from the democratic sphere, and a fundamental restructuring is occurring with little oversight, debate, or explanation. These forms of digital enclosure – of creating a digital fence around ostensibly public roads and structuring people’s mobility within the network – deserve attention, so that the forms of individual and collective mobility deemed worth preserving are in fact preserved, and so that individual, collective, and private interests in mobility are balanced more transparently and democratically.

11.5 Conclusion

The age of digital connectivity and mobile computing has brought massive changes to human movement. Human drivers increasingly delegate navigational decision making to apps, thus automating significant aspects of driving and enabling early forms of mobility shaping. The similarities between traffic shaping on the Internet and mobility shaping on physical roadways provide a starting point for examining the ethical and legal challenges that turn-by-turn navigation systems are raising in the public sphere. Yet, although compelling, the parallels between communication networks and mobility networks are not the whole story. As we move towards ever greater algorithmic shaping of our mobility, we must recognize that our ability to move freely in the physical world engages some of our most fundamental democratic freedoms, and that access to mobility reflects societal values in ways that distinguish it from access to information, demanding a more rigorous investigation of the ethics of mobility that can account for mobility shaping.Footnote 14 This chapter hopes to spark those investigations and ensuing debates – now is the time to evaluate the permissibility of different forms of mobility shaping, and to lay the normative foundation for tomorrow’s algorithmically controlled mobility systems.

12 Doughnut Privacy: A Preliminary Thought Experiment

Previous chapters in the book have highlighted the ways that data-driven technologies are altering the human experience in the digital world. In this chapter, I explore the implications of the “doughnut” model of sustainable economic development for efforts to strike the appropriate balance between data-driven surveillance and privacy. I conclude that the model offers a useful corrective for policymakers seeking to ensure that the development of digital technologies serves human priorities and purposes.

Among environmental economists and some city planners, Kate Raworth’s (Reference Raworth2017) theory of “doughnut economics” is all the rage. Raworth argues that, in an era when human wellbeing depends on sustainable development rather than on unlimited growth, economics as a discipline can no longer embrace models of welfare oriented exclusively toward the latter. As an alternative model to the classic upward-trending growth curve, she offers the doughnut: an inner ring consisting of the minimum requirements for human wellbeing, a middle band consisting of the safe and just space for human existence, and an ecological ceiling above which continued growth produces planetary disaster.Footnote 1

I will argue, first, that a similarly doughnut-shaped model can advance conceptualization of the appropriate balance(s) between surveillance and privacy and, second, that taking the doughnut model seriously suggests important questions about the uses, forms, and modalities of legitimate surveillance. Foregrounding these questions can help policymakers centre the needs and priorities of humans living in digitally mediated spaces.

A note on definitions: By “surveillance” I mean to refer to sets of sociotechnical conditions (and their associated organizational and institutional practices) that involve the purposeful, routine, systematic, and focused collection, storage, processing, and use of personal information (Murakami Wood Reference Murakami Wood2006). By “privacy” I mean to refer to sets of sociotechnical conditions (and their associated organizational and institutional practices) that involve forbearance from information collection, storage, processing, and use, thereby creating “…(degrees of) spatial, informational, and epistemological open-endedness” (Cohen Reference Cohen2019b, 13). Although conditions of surveillance and privacy are inversely related, they are neither absolute nor mutually exclusive – for example, one can have surveillance of body temperatures without collection of other identifying information or surveillance of only those financial transactions that exceed a threshold amount – and they are capable of great variation in both granularity and persistence across contexts.

12.1 From Framing Effects to Mental Maps: Defining Policy Landscapes

The animating insight behind the doughnut model concerns the importance of mental maps in structuring shared understandings of the feasible horizons for economic and social policymaking. Frames and models create mental maps that foreclose some options and lend added weight to others (van Hulst and Yanow Reference van Hulst and Yanow2016). For that reason, if one wishes to contest existing policy choices, it will generally be insufficient simply to name the framing effects that produced them. Displacing framing effects requires different mental maps.

Specifically, the doughnut model of sustainable development represents an effort to displace an imagined policy landscape organized around the familiar figure of the upward-trending growth curve. The curve depicts (or so it is thought) the relationship between economic growth and social welfare: more is better. That philosophy resonates strongly with the logics of datafication and data extractive capitalism. Unsurprisingly, the imagined topography of policy interventions relating to surveillance is also organized around an upward-trending growth curve, which reflects (or so it is thought) the relationship between growth in data-driven “innovation” and social welfare: here too, more is better. The doughnut model visually reorders policy priorities, producing imagined policy landscapes that feature other human values – sustainability and privacy – more prominently.

12.1.1 Sustainability and the Economic Growth Curve

In economic modelling, the classic upward-trending growth curve links increased growth with increased social welfare. The curve tells us that more economic growth produces more social welfare and, conversely, that increasing social welfare requires continuing economic growth (Raworth Reference Raworth2017). The resulting mental map of feasible and desired policy interventions has produced decades of discussions about economic policy that take for granted the primacy of growth and then revolve narrowly around the twin problems of how to incentivize it and, equally important, how to avoid disincentivizing it.

Although sustainability has emerged over the last half century as a key determinant of social welfare – indeed, an existentially important one – it has no clear place within that imagined landscape. This has become increasingly evident in recent decades. Concerns about long-term sustainability and species survival have fueled increasingly urgent challenges to production and consumption practices that treat resources as infinite and disposable. Those concerns have inspired new approaches to modeling production and consumption as circular flows of resources (Friant et al. Reference Friant, Vermeulen and Salomone2020). In the abstract, however, circular-economy models have trouble escaping the gravitational pull of a policy landscape dominated by the upward-trending growth curve. In many circular-economy narratives, recycling-driven approaches to production and consumption are valuable, and deserving of inclusion in the policy landscape, precisely because they fuel continuing growth (Corvallec et al. Reference Corvallec, Stowell and Johansson2022).

The doughnut model is premised on a more foundational critique of growth-driven reasoning. It deploys ecological and systems thinking to model policy frontiers – outer bounds on growth that it is perilous to transgress. And it represents those boundaries using a crisp, simple visual depiction, offering policymakers a new imagined landscape for their recommendations and interventions (Raworth Reference Raworth2017, 38–45). It compels attention to sustainability considerations precisely because it forces us to look at them – and it demands that ostensibly more precise mathematical models and forecasts organized around growth be dismantled and reorganized around development ceilings calibrated to preserve safe and just space for human existence (Luukkanen et al. Reference Luukkanen, Vehmas and Kaivo-oja2021).

12.1.2 Privacy and the Surveillance Innovation Curve

Imagined policy landscapes also do important work shaping policy outcomes in debates about surveillance and privacy. Most often, that landscape is dominated by a close relative of the economist’s upward-trending growth curve, which models data-driven “innovation” versus social welfare. Like the upward-trending growth curve in economics, the upward-trending surveillance innovation curve suggests that, generally speaking, new ventures in data collection and processing will increase social welfare – and, conversely, that continuing increases in social welfare demand continuing growth in data harvesting and data processing capacities (e.g. Thierer Reference Thierer2014).

Imagined policy landscapes dominated by the upward-trending surveillance innovation curve have proved deeply inhospitable to efforts to rehabilitate privacy as an important social value. Richly textured accounts of privacy’s importance abound in the privacy literature. Some scholars (e.g. Cohen Reference Cohen2012, Reference Cohen2013; Richards Reference Richards2021; Roessler Reference Roessler2005; Steeves Reference Steeves, Kerr, Steeves and Lucock2009) focus on articulating privacy’s normative values; others (e.g. Nissenbaum Reference Nissenbaum2009) on defining norms of appropriate flow; and others (e.g. Post Reference Post1989; Solove Reference Solove2008) on mapping privacy’s embeddedness within a variety of social and cultural practices. But the imagined policy landscape generated by the upward-trending surveillance innovation curve locates “innovation” and its hypothesized ability to solve a wide variety of economic and social problems solidly at centre stage.

As in the case of sustainable development, the doughnut model is an effective visual device for directing attention toward the negative effects of excess surveillance and, therefore, toward the difficult but necessary task of specifying surveillance ceilings. Additionally, theoretical accounts of privacy directed primarily toward rehabilitating it as a value worth preserving typically do not offer enough guidance on how to identify necessary surveillance floors. Claims about appropriate versus inappropriate flow (Nissenbaum Reference Nissenbaum2009) tend to be most open to contestation at times of rapid sociotechnical change, when norms of contextual integrity are unsettled. My own account of post-liberal privacy as an inherently interstitial and structural construct devotes some attention to the technical and operational requirements for implementing privacy safeguards (Cohen Reference Cohen2012, Reference Cohen2013, Reference Cohen2019b) but does not consider how to distinguish between pro-social and anti-social surveillance implementations. The doughnut model productively engages and frames questions about how to identify and manage both kinds of surveillance/privacy frontiers.

12.2 From Mental Maps to Policy Horizons: Mapping Surveillance/Privacy Interfaces

The doughnut model for privacy policymaking defines two distinct “surfaces” over which balance needs to be achieved. The outer perimeter of the doughnut includes sectors representing different threats to safe and just human existence flowing from excesses of surveillance. Conversely, as in the case of the sustainability doughnut, the hole at the centre represents insufficient levels of data-driven surveillance – or privacy afforded to a degree that undermines the social foundation for human wellbeing.
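
Translated into a toy decision rule, the doughnut's two surfaces might be sketched as follows; the notion of a single numeric "surveillance level" and the floor and ceiling thresholds are, of course, drastic simplifications introduced only to illustrate the model's structure.

```python
def doughnut_position(surveillance_level: float, floor: float, ceiling: float) -> str:
    """Classify a practice against the doughnut's two surfaces: below the floor it
    undermines the social foundation; above the ceiling it threatens safe and just
    human existence; between the two lies the viable band."""
    if surveillance_level < floor:
        return "shortfall: missing prosocial surveillance (antisocial privacy)"
    if surveillance_level > ceiling:
        return "overshoot: antisocial surveillance"
    return "within the safe and just band"


print(doughnut_position(0.2, floor=0.3, ceiling=0.7))  # shortfall
```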

12.2.1 The Sustainability Ceiling: Antisocial Surveillance (and Prosocial Privacy)

The growing and increasingly interconnected literatures in surveillance studies, information studies, and law have developed detailed accounts of the ways that excesses of surveillance undermine prospects for a safe and just human existence. Just as the outer perimeter of Raworth’s (Reference Raworth2017) doughnut is divided into sectors representing different kinds of planetary threats, so we can divide the privacy doughnut’s outer perimeter into sectors representing the different kinds of threats to human wellbeing that scholars have identified. Because the literatures on these issues are extensive, I will summarize them only briefly.

Some sectors of the privacy doughnut’s outer perimeter involve surveillance practices that undermine the capacity for self-development. Dominant platform companies such as Google, Meta, Amazon, TikTok, and Twitter, and many other providers of networked applications and information services use browsing, reading, listening, and viewing data to impose pattern-driven personalization, tailoring the information environment for each user to what is already known about that user or inferred based on the behaviours and preferences of similar users. Pattern-driven personalization privileges habit and convenience over more open-ended processes of exploration, experimentation, and play (Cohen Reference Cohen2012, Reference Cohen2013; Richards Reference Richards2021; Steeves Reference Steeves, Kerr, Steeves and Lucock2009). It also facilitates the continual delivery of nudges designed to instill more predictable and more easily monetizable patterns of behavior (Zuboff Reference Zuboff2019).

Other sectors involve surveillance practices that destabilize democratic institutions and practices. In particular, providers of online search and social media services use data about user behaviours and preferences to target and/or uprank flows of information, including both user-generated content and promoted content. Patterns of affinity-based information flow deepen political polarization, and this in turn affords more fertile ground for misinformation to take root and disinformation campaigns to flourish (e.g. Cohen Reference Cohen2019a; Nadler et al. Reference Nadler, Crain and Donovan2018). The persistent optimization and re-optimization of online environments around commercial and narrowly tribal priorities and interests undermines trust in democratic institutions and erodes the collective capacity to define and advance more broadly public-regarding priorities and interests (Farrell and Schneier Reference Farrell and Schneier2018; Viljoen Reference Viljoen2021; see also Chapter 2, by Murakami Wood).

Other sectors involve surveillance practices that reinforce economic power and widen distributive gaps. Many employers use surveillance technologies to monitor employee behavior both in and, increasingly, outside workplaces (Ajunwa et al. Reference Ajunwa, Crawford and Schultz2017). Persistent work-related surveillance magnifies power disparities between employers and workers and raises the barriers to collective organization by workers that might mitigate those disparities (Rogers Reference Rogers2023). The same persistent surveillance of user behaviors and preferences that enables pattern-driven personalization of the information environment also facilitates personalization of prices and non-price terms for consumer goods and services, imposing hierarchical logics within consumer markets (e.g. Cohen Reference Cohen2019a; Fourcade and Healy Reference Fourcade and Healy2017; Zuboff Reference Zuboff2019).

Other sectors involve surveillance practices that compound pre-existing patterns of racialized and/or gendered inequality (e.g. Benjamin Reference Benjamin2019; Citron Reference Citron2022; Richardson and Kak Reference Richardson and Kak2022; see also Chapter 9, by Akbari). Scholars who focus on race, poverty, and their intersections show that privacy tends to be afforded differently to different groups, in ways that reinforce racialized abuses of power and that subjugate the poor while framing poverty’s pathologies as failures of personal responsibility (Bridges Reference Bridges2017; Eubanks Reference Eubanks2018; Gilliom Reference Gilliom2001; Gilman Reference Gilman2012). Data extractive capitalism reinforces and widens these patterns and strengthens linkages between market-based and carceral processes of labeling and sorting (Benjamin Reference Benjamin2019; Browne Reference Browne2017; see also Chapter 5, by Lyon).

Seen through a global prism, many extractive surveillance implementations reinforce pre-existing histories of colonialist exploitation and resource extraction (Couldry and Mejias Reference Couldry and Mejias2019; see also Chapter 9, by Akbari). Recognition of the resulting threats to self-governance and self-determination has fueled a growing movement by scholars and activists in the Global South to assert control of the arc of technological development under the banner of a new “non-aligned technologies movement” (Couldry and Mejias Reference Couldry and Mejias2023).

Last, but hardly least, the surveillance economy also imposes planetary costs. These include both chemical pollution caused by extraction of rare earth metals used in digital devices and air pollution, ozone depletion and other climate effects produced by immense data centres (Crawford Reference Crawford2021). These problems also link back to Raworth’s (Reference Raworth2017) original doughnut diagram; the surveillance economy is both socially and ecologically unsustainable.

12.2.2 The Hole at the Centre: Prosocial Surveillance (and Antisocial Privacy)

If the doughnut analogy is to hold, the hole at the doughnut’s centre must represent too much privacy – privacy afforded to a degree that impedes human flourishing by undermining the social foundation for collective, sustainable governance. Diverse strands of scholarship in law and political theory have long argued that excesses of privacy can be socially destructive. The doughnut model reinforces some of those claims and suggests skepticism toward others. But the privacy doughnut’s ‘hole’ also includes other, more specific surveillance deficits. I will develop this argument by way of two examples, one involving public health and the other involving the public fisc.

Liberal and feminist scholars have long argued that certain understandings of privacy reinforce conditions of political privation and patriarchal social control. The most well-known liberal critique of excess privacy is Hannah Arendt’s (Reference Arendt1958) description of the privation of a life lived only in home spaces segregated from the public life of the engaged citizen. Building on (and also critiquing) Arendt’s account of privacy and privation, feminist privacy scholars (e.g. Allen Reference Allen2003; Citron Reference Citron2022; Roessler Reference Roessler2005) have explored the ways that invocations of privacy also function as a modality of patriarchal social control. It is useful to distinguish these arguments from those advanced by communitarian scholars about the ways that privacy undermines social wellbeing (e.g. Etzioni Reference Etzioni2000). Theorists in the latter group have difficulty interrogating communally asserted power and identifying any residual domain for privacy. The communitarian mode of theorizing about privacy therefore tends to reinforce the imagined policy landscape generated by the upward-trending surveillance innovation curve. In different ways and to different extents, liberal and feminist critiques of excess privacy are concerned with the nature of the balance struck between “public” and “private” spheres of authority and with the ways in which excesses of privacy can impede full inclusion in civil society and reinforce maldistributions of power.

Beyond these important but fairly general objections, excess privacy can also impede human flourishing in more context-specific ways. Here are two examples:

The events of recent years have illustrated that competent and humane public health surveillance is essential for human flourishing even when it overrides privacy claims that might warrant dispositive weight in other contexts (Rozenshtein Reference Rozenshtein2021; see also Chapter 5, by Lyon). A competent system of public health surveillance needs to detect and trace the spread of both infections and viral mutations quickly and capably (Grubaugh et al. Reference Grubaugh, Hodcroft, Fauver, Phelan and Cevik2021). A humane system of public health surveillance must identify and care for those who are sick or subject to preventive quarantine. At the same time, however, such a system must safeguard collected personal information so it cannot be repurposed in ways that undermine public trust, and it must take special care to protect vulnerable populations (Hendl et al. Reference Hendl, Chung and Wild2020). Competent and humane public health surveillance therefore necessitates both authority to collect and share information and clearly delineated limits on information collection and flow.

Some public health surveillance operations clearly cross the doughnut’s outer perimeter. From a Western legal perspective, obvious candidates might include the Chinese regime of mandatory punitive lockdowns (e.g., Chang et al. Reference Chang, Qin, Qian and Chien2022) and (at one point) testing via anal swabs (Wang et al. Reference Wang, Chen, Wang, Geng, Liu and Han2021). But having avoided these particular implementations does not automatically make a system of public health surveillance competent and humane. In the United States and the United Kingdom, for example, information collected for pandemic-related public health care functions has flowed in relatively unconstrained ways to contractors deeply embedded in systems of law enforcement and immigration surveillance, fueling public distrust and fear (No Tech for Tyrants and Privacy International 2020).

Particularly in the current neoliberal climate, however, it has been less widely acknowledged that other kinds of public health surveillance interventions fail the threshold-conditions criterion. The US regime of public health surveillance during the coronavirus pandemic operated mostly inside the doughnut’s hole, relying on patchy, haphazard, and often privatized networks of protocols for testing and tracing backstopped by an equally patchy, haphazard, and often privatized network of other protective and social support measures (Jackson and Ahmed Reference Jackson and Ahmed2022). Some nations, meanwhile, constructed systems of public health surveillance designed to operate within the doughnut. One example is the Danish regime combining free public testing and centralized contact tracing with a public passport system designed to encourage vaccination and facilitate resumption of public and communal social life (Anderssen et al. Reference Anderssen, Loncarevic, Damgaard, Jacobsen, Bassioni-Stamenic and Karlsson2021; see also Ada Lovelace Institute 2020). Additionally, although a responsible and prosocial system of public health surveillance must balance the importance of bodily control claims in ways that respect individual dignity, it should not permit overbroad privacy claims to stymie legitimate and necessary public health efforts (Rozenshtein Reference Rozenshtein2021). Refusal to participate in testing and tracing operations, to comply with humanely designed isolation and masking protocols, and to enroll in regimens for vaccination and related status reporting can fatally undermine efforts to restore the threshold conditions necessary for human flourishing – that is, to return society more generally to the zone of democratic sustainability defined by the doughnut.

As a second example of necessary, public-regarding surveillance, consider mechanisms for financial surveillance. The legal and policy debates surrounding financial and communications surveillance arguably present a puzzle. If, as any competent US-trained lawyer would tell you, speech and money are sometimes (always?) interchangeable, we ought to be as concerned about rules allowing government investigators access to people’s bank statements as we are about rules allowing the same investigators access to people’s communication records. Yet far more public and scholarly attention attaches to the latter. In part, this is because the financial surveillance rules are complex and arcane and the entities that wield them are obscure. In part, however, it is because it is far more widely acknowledged that systemic financial oversight – including some financial surveillance – implicates undeniably prosocial goals.

Financial surveillance authority underpins the ability to enforce tax liabilities, without which important public services necessary for human wellbeing could not be provided (Swire Reference Swire1999). Such services include everything from roads, clean water, and sewage removal to public education, housing assistance, and more. By this I don’t mean to endorse current mechanisms for providing such services or the narratives that surround them, but only to claim that such services need to be provided and need to be funded.

Relatedly, financial surveillance authority enables investigation of complex financial crimes, including not only the usual poster children in contemporary securitized debates about surveillance (organized crime, narcotrafficking, and global terrorism) (Swire Reference Swire1999), but also, and equally importantly, the kleptocratic escapades of governing elites and oligarchies. A wide and growing assortment of recent scandals – involving everything from assets offshored in tax havens (ICIJ 2021) to diverted pandemic aid (AFREF et al. 2021; Podkul Reference Podkul2021) to real estate and other assets maintained in capitalist playgrounds by oligarchs and the uber-rich (Kendzior Reference Kendzior2020; Kumar and de Bel Reference Kumar and de Bel2021) – underscore the extent to which gaps in financial oversight systems threaten social wellbeing. Effective, transnational financial surveillance is an essential piece (though only one piece) of any adequate response.

The inability to perform any of these financial surveillance functions would jeopardize the minimum requisite conditions for human flourishing. And, to be clear, this argument does not depend on the continued existence of nation states in their current form and with their current geopolitical and colonial legacies. If current nation states ceased to exist tomorrow, other entities would need to provide, for example, roads, clean water, and sewage removal, and other entities would need to develop the capacity to support and protect the least powerful.Footnote 2

12.3 Inside the Doughnut: Abolition v./or/and Governance

To (over)simplify a bit: so far, I may seem to have argued only that one can have too much surveillance or not enough. Broadly speaking, that is a familiar problem within the privacy literature, so at this point it may appear that I have not said that much after all. Equally important, I have not yet specifically addressed the characteristic orientations and effects of surveillance models in our particular, late capitalist, insistently racialized society. In practice, surveillance implementations have tended to entrench and intensify extractive, colonialist, and racialized pathologies (Benjamin Reference Benjamin2019; Browne Reference Browne2017; Couldry and Mejias Reference Couldry and Mejias2023; see also Chapter 9, by Akbari), and awareness of that dynamic now underwrites a rapidly growing movement for surveillance abolition whose claims lie in tension with some of my own claims about the doughnut’s inner ring.

12.3.1 An Existential Dilemma

Surveillance abolition thinking rejects as pernicious and wrongheaded the very idea that surveillance technologies might be reoriented toward prosocial and equality-furthering goals. On this view, although beneficial uses are hypothetically possible, the track record of abuse is established and far more compelling. There is no ‘right kind’ of surveillance because all kinds of surveillance – including those framed as luxuries for the well-to-do – will invariably present a very different face to the least fortunate (Gilliard Reference Gilliard2020, Reference Gilliard2022). Drawing an explicit parallel to the campaign for abolition of policing more generally (e.g. McLeod Reference McLeod2019; Morgan Reference Morgan2022), surveillance abolition thinking calls upon its practitioners to imagine and work to create a world in which control over data and its uses is radically reimagined (Milner and Traub Reference Milner and Traub2021). Abolitionist thinkers and activists tend to view proposals for incremental and/or procedural privacy reforms as working only to entrench surveillance-oriented practices and their disparate impacts more solidly.

As one example of the case for surveillance abolition, consider evolving uses of biometric technologies. Facial recognition technology has been developed and tested with brutal disregard for its differential impacts on different skin tones and genders (Buolamwini and Gebru Reference Buolamwini and Gebru2018) and deployed for a wide and growing variety of extractive and carceral purposes (Garvie et al. Reference Garvie, Bedoya and Frankle2016; Hill Reference Hill2020). At the same time, it has been normalized as a mechanism for casual, everyday authentication of access to consumer devices in a manner that creates profound data security threats (Rowe Reference Rowe2020). India’s Aadhaar system of biometric authentication, which relies on digitalized fingerprinting, was justified as a public welfare measure, but works least well for the least fortunate – for example, manual laborers whose fingerprints may have been worn away or damaged (Singh and Jackson Reference Singh and Jackson2017). At the same time, the privatization of the “India stack” has created a point of entry for various commercial and extractive ventures (Hicks Reference Hicks2020).

As a second example of the case for surveillance abolition, consider credit scoring. In the United States, there are deep historical links between credit reporting and racial discrimination (Hoffman Reference Hoffman2021), and that relationship extends solidly into the present, creating self-reinforcing circuits that operate to prevent access to a wide variety of basic needs, including housing (Leiwant Reference Leiwant2022; Poon Reference Poon2009; Smith and Vogell Reference Smith and Vogell2022) and employment (Traub Reference Traub2014). In municipal and state systems nationwide, unpaid fines for low-level offenses routinely become justifications for arrest and imprisonment, creating new data streams that feed back into the credit reporting system (Bannon et al. Reference Bannon, Nagrecha and Diller2010).

The other half of the existential dilemma to which this section’s title refers, however, is that governing complex societies requires techniques for governing at scale. Some functions of good governance relate to due process in enforcement. I do not mean this to refer to policing but rather, more generally, to the ability to afford process and redress to those harmed by private or government actors. For some time now, atomistic paradigms of procedural due process have been buckling under the strain of large numbers. The data protection notion of a “human in the loop” is no panacea for the defects embedded in current pattern-driven processes (e.g. Crootof et al. Reference Crootof, Kaminski and Nicholson Price2023; Green Reference Green2022), but, even if it were, it simply isn’t possible to give every type of complaint that a human being might lodge within a bureaucratic system the kind of process to which we might aspire.

Other functions of good governance are ameliorative. Governments can and do (and must) provide a variety of important public benefits, and surveillance implementations intersect with these in at least three ways. First, surveillance can be used (and misused) to address problems of inclusion. Failure to afford inclusion creates what Gilman and Green (Reference Gilman and Green2018) term “surveillance gaps” in welfare and public health systems. Second, distributing government benefits without some method of accounting for them invites fraud – not by needy beneficiaries too often demonized in narratives about responsibility and advantage-taking, but rather by powerful actors and garden-variety scammers seeking to enrich themselves at the public’s expense (AFREF et al. 2021; Podkul Reference Podkul2021). Third, mechanisms for levying and collecting tax revenues to fund public benefits and other public works invite evasion by wealthy and well-connected individuals and organizations (Global Alliance for Tax Justice 2021; Guyton et al. Reference Guyton, Langetieg, Reck, Risch and Zucman2021; ICIJ 2021). In a world of large numbers, the possibilities for scams multiply. Surveillance has a useful role to play in combating fraud and tax evasion. For example, the Internal Revenue Service, which is chronically under-resourced, spends an outsize portion of the enforcement resources that it does have pursuing (real or hypothesized) tax cheats at the lower end of the socioeconomic scale (Kiel Reference Kiel2019), but training artificial intelligence for fraud detection at the upper end of that scale, where tax evasion is also more highly concentrated (Alstadsaeter et al. Reference Alstadsaeter, Johannesen and Zucman2019), could produce real public benefit.

In short, a basic function of good government is to prevent the powerful from taking advantage of the powerless, and this requires rethinking both what constitutes legitimate surveillance and what constitutes legitimate governance. Current surveillance dysfunctions and injustices suggest powerfully that the root problem to be confronted involves re-learning how to govern, and for whose benefit, before re-learning how to surveil.

The doughnut model is not a cure-all for pathologies of exclusion and exploitation that have deep historical roots, but it does more than simply position privacy problems as matters of degree. It suggests, critically, that one can have too much of the wrong kind of surveillance, and/or not enough of the right kind, and that “wrong” and “right” relate to power and its abuses in ways that have very specific valences. We may make some headway simply by asking more precise questions about the types of surveillance that a just society must employ or should never permit. But not enough. Surveillance implementations are always already situated relative to particular contexts in which power and resources are distributed unequally and, unless very good care is taken, they will tend to reinforce and widen pre-existing patterns of privilege and disempowerment. Even for processes that (are claimed to) occur within the doughnut’s interior, the details matter.

12.3.2 Policymaking inside the Doughnut: Five Legitimacy Constraints

Engaging the abolitionist critique together with the need to govern at scale suggests (at least) five additional constraints that ostensibly prosocial surveillance implementations must satisfy. The first two constraints, sectoral fidelity and data parsimony, are necessary to counteract surveillance mission creep. Policymakers must ask more precise questions about the particular sustainability function to which a proposed implementation relates and must insist on regimes that advance that function and no others. And the formal commitment to sectoral fidelity must be supported by a mandate for parsimonious design that, wherever possible and to the greatest extent possible, prevents collected data from migrating into new surveillance implementations. The third constraint is distributive justice. Policymakers must interrogate existing and proposed surveillance implementations through an equity lens and, as necessary, abandon or radically modify those that reinforce or increase pre-existing inequities. The fourth and fifth constraints, openness to revision and design for countervailing power, work against epistemic closure of narratives embraced to justify surveillance in the first place. Policymakers should create oversight mechanisms that facilitate revisiting and revising policies and practices and should require design for countervailing power in ways that reinforce such mechanisms.

One of privacy law’s most difficult challenges has involved building in appropriate leeway for evolution in data collection and use while still minimizing the risk of surveillance mission creep. The data minimization and purpose limitation principles that underpin European-style data protection regimes represent one articulation of this challenge, but those principles date back to the era of standalone databases and present interpretive difficulties in an era of interconnected, dynamic information systems. Their touchstones – respectively, collection that is “limited to what is necessary in relation to” the stated purpose and further processing that is “compatible” with the original stated purposeFootnote 3 – seem to invite continual erosion. In particular, they have been continually undermined by prevailing design practices that create repositories of data seemingly begging to be repurposed for new uses. Nissenbaum’s (Reference Nissenbaum2009) theory of privacy as contextual integrity represents an attempt to situate the construct of purpose limitation within a more dynamic frame; sometimes, changes in data flow threaten important moral values, but not always. Exactly for that reason, however, the theory of contextual integrity does not adequately safeguard the public against moral hazard and self-dealing by those who implement and benefit from surveillance systems.

Together, the constraints of sectoral fidelity and data parsimony offer a more reliable pathway to maintaining prosocial surveillance implementations while resisting certain predictable and predictably harmful forms of mission creep. To begin, a sectoral fidelity constraint enshrined in law (and reaffirmed with adequate and effective public oversight) would represent a much stronger public commitment to limiting surveillance in the interest of social sustainability. So, for example, such a constraint would allow reuse of data collected for public health purposes for new or evolving public health purposes, but it would forbid mission creep from one sector to another – for example, from health to security – even when data are repurposed for a security-related use that otherwise would fall inside the doughnut. Instances of mission creep in which data collected for public health purposes flow out the back door to be used for national security purposes jeopardize the public trust on which public health surveillance needs to rely. Systems of national security surveillance are necessary in complex societies, but they require separate justification and separate forms of process.
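
To make the intuition concrete, the following is a minimal, purely illustrative sketch (in Python) of how a sectoral fidelity constraint might be operationalized at the data layer. The sector labels, purposes, and function names are hypothetical assumptions of mine, not features of any existing or proposed system; an actual regime would depend on legally defined sectors and independent oversight rather than a lookup table.

```python
# Illustrative sketch only: a sectoral fidelity check encoded at the data layer.
# Every record carries the sector for which it was collected, and reuse requests
# are refused unless the requested purpose falls within that same sector.
# The sector names and example purposes below are hypothetical.
from dataclasses import dataclass

SECTOR_PURPOSES = {
    "public_health": {"contact_tracing", "outbreak_modelling", "vaccination_outreach"},
    "security": {"criminal_investigation", "immigration_enforcement"},
}

@dataclass(frozen=True)
class Record:
    subject_id: str
    payload: str
    collected_for_sector: str

def authorize_reuse(record: Record, requested_purpose: str) -> bool:
    """Permit reuse only for purposes within the record's original sector;
    cross-sector flows (e.g. health to security) are refused outright rather
    than weighed case by case."""
    allowed = SECTOR_PURPOSES.get(record.collected_for_sector, set())
    return requested_purpose in allowed

record = Record("subject-123", "test result: negative", "public_health")
assert authorize_reuse(record, "outbreak_modelling")          # same-sector reuse
assert not authorize_reuse(record, "criminal_investigation")  # mission creep blocked
```

The point of the sketch is simply that the constraint is categorical: cross-sector reuse is refused at the threshold rather than balanced against competing interests after the fact.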

Absent reinforcement by a corresponding design constraint, however, a commitment to sectoral fidelity that is expressed purely as a legal prohibition seems predestined to fail. Because surveillance implementations express, and cannot ever fully avoid expressing, power differentials, they inevitably present temptations to abuse. Where surveillance is necessary for social sustainability, a requirement of design for data parsimony can work to limit mission creep in ways that legal restrictions alone cannot. So, for example, large-grain surveillance proxies that use hashed, locally stored data for credentialing and authentication might facilitate essential governance functions in privacy-protective ways, ensuring access to public services and facilitating access to transit systems without persistent behavioural tracking.
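
A similarly stylized sketch can illustrate the data parsimony idea of hashed, locally stored credentials. In the toy example below, the issuer and verifier roles, the attribute name, and the functions are hypothetical; a deployed system would rely on vetted anonymous-credential or zero-knowledge techniques rather than a bare salted hash. The design intuition, however, is the same: the verifier’s registry retains only an opaque digest, and verification does not require creating a persistent, linkable record of the holder’s behaviour.

```python
# Illustrative sketch only (not a production protocol): a data-parsimonious
# credential check in which the verifier learns only that a locally stored
# token matches an eligibility commitment, not who the holder is.
import hashlib
import hmac
import os

def issue_credential(attribute: str) -> dict:
    """Hypothetical issuer binds an eligibility attribute (e.g. 'transit-concession')
    to a random salt; only the salted hash is retained in the verifier's registry."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + attribute.encode()).hexdigest()
    # The holder stores the salt and attribute locally; the registry stores
    # nothing but the opaque digest.
    return {"holder_token": (salt, attribute), "registry_entry": digest}

def verify_locally(holder_token: tuple, registry_entry: str) -> bool:
    """Verifier recomputes the digest from what the holder presents and compares
    it to the registry entry, without logging identity or location data."""
    salt, attribute = holder_token
    recomputed = hashlib.sha256(salt + attribute.encode()).hexdigest()
    return hmac.compare_digest(recomputed, registry_entry)

credential = issue_credential("transit-concession")
assert verify_locally(credential["holder_token"], credential["registry_entry"])
```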

Neither the sectoral fidelity principle nor the data parsimony principle, however, speaks directly to surveillance-based practices that have powerful differential impacts on privileged and unprivileged groups of people living in the digital age. A legitimacy constraint capable of counteracting the extractive drift of such systems needs to be framed in terms of equity and anti-subordination (cf. Viljoen Reference Viljoen2021). Some kinds of scoring are inequitable because they entrench patterns of lesser-than treatment, and some kinds of goods ought to be distributed in ways that do not involve scoring at all. For example, as Foohey and Greene (Reference Foohey and Greene2022) document, tweaks designed to make the consumer credit scoring system more accurate simply entrench its systemic role as a mechanism for perpetuating distributional inequity. Piecemeal prohibitions targeting particular types or uses of data are overwhelmingly likely to inspire workarounds that violate the spirit of the prohibitions and reinforce existing practices – for example, “ban the box” laws prohibiting inquiry about employment applicants’ criminal records have engendered other profiling efforts that disparately burden young men of color (Strahilevitz Reference Strahilevitz2008). Under such circumstances, the question for policymakers should be how to restrict both the nature and the overall extent of reliance on scoring and sorting as mechanisms for allocation and pricing. The background presumption of inherent rationality that has attached to credit scoring should give way to comprehensive oversight designed to restore and widen semantic gaps; mandate use of data-parsimonious certifications of eligibility; and encourage creation of alternative allocation mechanisms. Where state-driven surveillance implementations must be deployed to address problems of inclusion, equity should be understood as a non-negotiable first principle constraining every aspect of their design.

The fourth and fifth legitimacy constraints – openness to revision and design for countervailing power – follow from the principle of equity. Training surveillance implementations away from the path of least resistance – that is, away from policies and practices that reinforce historic patterns of injustice and inequity – demands institutional and technical design to resist epistemic closure. Too often, proposed regulatory oversight models for surveillance implementations amount to little more than minor tweaks that, implicitly, take the general contours of those implementations as givens. That sort of epistemic closure is both unwarranted (because it cedes the opportunity to contest the validity of data-driven decisions) and self-defeating (because it disables public-regarding governance from achieving (what ought to be) its purposes). More specifically, since failure modes for surveillance are likely to have data-extractive, racialized, and carceral orientations, accountability mechanisms directed toward rejection of epistemic closure need to be designed with those failure modes in mind.

Like strategies for avoiding surveillance mission creep, strategies for embedding a revisionist and equity-regarding ethic of public accountability within surveillance implementations are both legal and technological. On one hand, honoring the principle of openness to revision requires major reforms to legal regimes that privilege trade secrecy and expert capture of policy processes (Kapczynski Reference Kapczynski2022; Morten Reference Morten2023). But surveillance power benefits from technical opacity as well as from secrecy (Burrell Reference Burrell2016), and merely rolling back legal protections for entities that create and operate surveillance implementations still risks naturalizing opaque practices of algorithmic manipulation that ought themselves to be open to question and challenge. An oversight regime designed to resist epistemic closure should mobilize technological capability to create countervailing power wherever surveillance implementations are used. As a relatively simple example, algorithmic processes (that also satisfy the other legitimacy constraints) might be designed to incorporate tamper-proof audit mechanisms that open their operation to public oversight. A more complicated example is Mireille Hildebrandt’s (Reference Hildebrandt2019) proposal for agonistic machine learning – that is, machine learning processes that are designed to interrogate their own assumptions and test alternate scenarios.
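
To illustrate what such an audit mechanism might involve, the sketch below (again illustrative and hypothetical, not drawn from any deployed system) hash-chains each logged decision to the one before it, so that later alteration of any record becomes detectable by an external auditor. Strictly speaking, a hash chain provides tamper evidence rather than literal tamper proofing, and it is useful only alongside the other legitimacy constraints and genuine institutional access for auditors.

```python
# Illustrative sketch only: a tamper-evident, hash-chained audit log for an
# algorithmic decision process. The class name, fields, and example decision
# records are hypothetical.
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    def __init__(self):
        self.entries = []

    def record(self, decision: dict) -> None:
        """Append a decision record chained to the hash of the previous entry."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        payload = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "decision": decision,
            "prev_hash": prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**payload, "hash": digest})

    def verify(self) -> bool:
        """An external auditor can recompute the chain end to end; any altered
        or reordered entry breaks the chain and is detected."""
        prev_hash = "genesis"
        for entry in self.entries:
            payload = {k: entry[k] for k in ("timestamp", "decision", "prev_hash")}
            if entry["prev_hash"] != prev_hash:
                return False
            recomputed = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

log = AuditLog()
log.record({"case": "A-001", "score": 0.42, "outcome": "denied"})
log.record({"case": "A-002", "score": 0.91, "outcome": "approved"})
assert log.verify()
```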

12.4 Conclusion

The doughnut model for privacy suggests important questions about the appropriate boundaries between surveillance and privacy and about the forms and modalities of legitimate data-driven governance that should inform future research and prescriptive work. Living within the doughnut requires appropriate safeguards against forms of data-driven surveillance that cross the outer perimeter, and it also requires data-driven governance implementations necessary to attain the minimum requirements for human wellbeing. In particular, automated, data-driven processes have important roles to play in the governance of large, complex societies. Ensuring that any particular surveillance implementation remains within the space defined by the doughnut rather than drifting inexorably across the outer perimeter requires subjecting it to additional legitimacy constraints, of which I have offered five – sectoral fidelity, data parsimony, equity, openness to revision, and design for countervailing power. Strategies for bending the arc of surveillance toward the safe and just space for human wellbeing must include both legal and technical components – such as, for example, reliance on surveillance proxies such as credentialing and authentication to facilitate essential governance and allocation functions in data-parsimonious ways. Ultimately, governing complex societies in ways that are sustainable, democratically accountable, and appropriately respectful of human rights and human dignity requires techniques that are appropriately cabined in their scope and ambition, equitable in their impacts, and subject to critical, iterative interrogation and revision by the publics whose futures they influence.

Footnotes

10 Exploitation in the Platform Age

1 According to the US Federal Trade Commission (2022, 5): “[G]ig companies may use nontransparent algorithms to capture more revenue from customer payments for workers’ services than customers or workers understand.”

2 “Open Letter to President Biden from Tech Workers in Kenya.” For context, see Haskins (Reference Haskins2024).

3 As Tim Hwang (Reference Hwang2020, 5) writes, “From the biggest technology giants to the smallest startups, advertising remains the critical economic engine underwriting many of the core services that we depend on every day. In 2017, advertising constituted 87 percent of Google’s total revenue and 98 percent of Facebook’s total revenue.”

4 For example, the Interactive Advertising Bureau (IAB), a trade association for the online marketing industry, argued in a recent comment in response to the US Federal Trade Commission’s Notice of Proposed Rulemaking on commercial surveillance: “there is substantial evidence that data-driven advertising actually benefits consumers in immense ways. As explained below, not only does data-driven advertising support a significant portion of the competitive US economy and millions of American jobs, but data-driven advertising is also the linchpin that enables consumers to enjoy free and low-cost content, products, and services online” (IAB 2022, 10).

5 Or, as Shoshana Zuboff (Reference Zuboff2019, 94) puts it, “the essence of the exploitation [typical of ‘surveillance capitalism’] is the rendering of our lives as behavioural data for the sake of others’ improved control of us,” the “self-authorized extraction of human experience for others’ profit” (Zuboff Reference Zuboff2019, 19).

6 For a more complex picture of the relationship between exploitation and capitalist appropriation, especially focusing on its racialized character, see Nancy Fraser (Reference Fraser2016).

7 See, for example, Tiziana Terranova (Reference Terranova2000).

8 For example, see Fuchs (Reference Fuchs2010). For a helpful intellectual history of related work on the political economy of media and communication technology, see Lee McGuigan (Reference McGuigan, McGuigan and Manzerolle2014).

9 On sweatshop labor, see e.g., Jeremy Snyder (Reference Snyder2010); and Matt Zwolinski (Reference Zwolinski2012). On commercial surrogacy, see e.g., Wertheimer (Reference Wertheimer1996). On sexual exploitation, see Sample (Reference Sample2003).

10 For an overview of competing accounts, see Zwolinski et al. (Reference Zwolinski, Ferguson, Wertheimer, Zalta and Nodelman2022).

11 Determining what counts as an unfair division of the social surplus is, unsurprisingly, a matter of some controversy. Hillel Steiner (Reference Steiner1984) argues that the distribution is unfair when it’s the product of historical injustice, while, for John Roemer (Reference Roemer1985), the unfairness derives from background conditions of inequality. On Alan Wertheimer’s (Reference Wertheimer1996) account, the distribution is unfair when one party pays more than a hypothetical “fair market price.”

12 This is another way of framing the normative intuition that motivates Marxist accounts of exploitation: the capitalist class claims an unfair share of the surplus created by the working class. See Roemer (Reference Roemer1985) and Reiman (Reference Reiman1987).

13 This example is borrowed from Zwolinski et al. (Reference Zwolinski, Ferguson, Wertheimer, Zalta and Nodelman2022).

14 For an overview and argument in favor of a pluralist approach, see Snyder (Reference Snyder2010).

15 Veena Dubal (Reference Dubal2023). See also Zephyr Teachout (Reference Teachout2023).

16 Even describing the process as a negotiation is perhaps too generous – drivers simply have the option of accepting a ride and the designated fare or not.

17 “Open Letter to President Biden from Tech Workers in Kenya,” May 22, 2024, www.foxglove.org.uk/open-letter-to-president-biden-from-tech-workers-in-kenya/.

18 For a related discussion, see Ariel Ezrachi and Maurice Stucke (Reference Ezrachi and Stucke2016).

19 Indeed, offering different people different prices may, on balance, benefit the worst off. To use a well-known example, if pharmaceutical companies couldn’t charge different prices to consumers in rich and poor countries, they would have to charge everyone (including those with the fewest resources) higher prices in order to recoup costs. See Jeffrey Moriarty (Reference Moriarty2021).

20 Moriarty (Reference Moriarty2021, p. 498) explicitly argues that under these conditions price personalization is non-exploitative. Etye Steinberg (Reference Steinberg2020) disagrees, arguing that data-driven personalized pricing is unfair on account of concerns about relational equality.

21 For an overview, see Susser (Reference Susser2019).

22 For a more careful investigation into this question and its implications, see Daniel Susser and Vincent Grimaldi (Reference Susser and Grimaldi2021).

23 Pfotenhauer et al. (Reference Pfotenhauer, Laurent, Papageorgiou and Stilgoe2022) describe the inexorable march toward massive scale as “the uberization of everything,” which introduces, they argue, “new patterns of exploitation.”

24 Others describe this as “ghost work.” See Mary L. Gray and Siddharth Suri (Reference Gray and Suri2019) and Veena Dubal (Reference Dubal2020).

25 Or perhaps some third thing. See Valerio De Stefano (Reference De Stefano2016), Orly Lobel (Reference Lobel2019), and Veena Dubal (Reference Dubal2021).

26 We often worry about sellers deceiving buyers or selling them unsafe products, and consumer protection law is designed to prevent such harms. But we don’t normally worry that sellers will exploit buyers.

27 As Joel Feinberg (Reference Feinberg1990, 176) put it, “a little-noticed feature of exploitation is that it can occur in morally unsavory forms without harming the exploitee’s interests and, in some cases, despite the exploitee’s fully voluntary consent to the exploitative behaviour.” Wood (Reference Wood1995), Wertheimer (Reference Wertheimer1996), Sample (Reference Sample2003), and others also emphasize this point.

28 One might want to argue that the buyer in the first case and worker in the second are “coerced by circumstances,” and therefore the exchanges are not truly voluntary. However, as Chris Meyers (Reference Meyers2004) points out, that’s not the price gouger’s or the sweatshop owner’s fault – they didn’t create the desperate conditions, and all they are doing is adding to the sets of options from which the other parties can choose. If in doing so they are wronging them (which, in cases of wrongful beneficence, they arguably are) it is not because they are forcing them to act against their will.

29 Mark Zuckerberg (Reference Zuckerberg2019, January 25) in The Facts About Facebook.

30 Of course, researchers have cast doubt on these claims about user preferences. See Joseph Turow and Chris Jay Hoofnagle (Reference Turow and Hoofnagle2019).

31 Erik Malmqvist and András Szigeti (Reference Malmqvist and Szigeti2021) argue that there is, in fact, a third option – what they term “remediation.” To my mind, remediation is a form of redistribution.

32 Bans and moratoria are frequently proposed, and sometimes implemented, as a strategy for bringing abuse by digital platforms under control. Uber, for example, has been directly banned or indirectly forced out of the market at various times and places (Rhodes Reference Rhodes2017). Regulators, especially in Europe, have made compelling cases to eliminate behavioural advertising, especially when targeted at children. See, for example, www.forbrukerradet.no/wp-content/uploads/2021/06/20210622-final-report-time-to-ban-surveillance-based-advertising.pdf. And a number of cities in the United States have imposed moratoria on the use of facial recognition technology by the police and other public actors, while at the same time it continues to find new applications. See, for example, www.wired.com/story/face-recognition-banned-but-everywhere/

35 Once again, questions about these trade-offs mirror debates about how to respond to exploitative sweatshop labor. For a helpful overview of these debates, see Snyder (Reference Snyder2010).

36 Hwang (Reference Hwang2020) suggests “controlled demolition” instead. For a more nuanced history and political economy of digital advertising markets, see Lee McGuigan (Reference McGuigan2023) in Selling the American People: Advertising, Optimization, and the Origins of Adtech.

11 People as Packets in the Age of Algorithmic Mobility Shaping

1 Mobility, as we conceive of it, refers to all the ways in which people and goods move through built environments. “The mobility system” is thus a broad concept: it encompasses personal vehicles (cars and trucks, as well as scooters, bicycles, and similar vehicles), pedestrian movement, goods transportation, all forms of mass and public transit, and the roads and pathways themselves.

2 For the purposes of this chapter, the terms “GNSS,” “GNSS device,” “navigation system,” “in-car navigation system,” and “turn-by-turn navigation system” are used relatively interchangeably (unless otherwise specified). The acronym “GNSS” stands for “Global Navigation Satellite System,” and includes the US military’s Global Positioning System (GPS), the Russian GLONASS, the recently completed European Galileo system, and the even more recent Chinese BeiDou, among others.

3 This shift occurred sometime around autumn 2010, when Google Maps Navigation, the turn-by-turn enabled mapping app which at the time was distinct from Google Maps, appeared as a free download for all Android and iOS smartphone users.

4 Satellite imagery and computer vision techniques enable the creation of maps so detailed that the fan blades inside rooftop HVAC units can be seen on some buildings in downtown Los Angeles (O’Beirne Reference O’Beirne2017).

5 While these numbers are US-centric, the picture in much of the developed world is likely similar.

6 The remaining 36 per cent simply use the apps to plan routes ahead of time (Panko Reference Panko2018).

7 The next most popular map embedding is the Russian “Yandex Maps” service, on around 400,000 sites (BuiltWith 2019).

8 Due to these resources, Street View is even available at Antarctica’s McMurdo Station, and underwater at the Great Barrier Reef (Google Maps: Street View 2019).

9 In the United States, debate over the Internet’s classification, whether as a common carrier or as a less-regulated “information service,” has raged for nearly two decades (Finley Reference Finley2018). Across the Atlantic, the European Union (EU) adopted the Open Internet Regulation in 2015. The Regulation enshrines non-discrimination and net neutrality in EU law, implicitly invoking the common carriage paradigm for ISPs (EC 2015). However, it is important to note that, in general, the EU comprises civil law jurisdictions which do not explicitly share the common law “common carriage” concept.

10 A recent article describes one researcher’s reaction to seeing his Google dataset, “When he requested his data from Google, he found that it was constantly tracking his location in the background, including calculating how long it took to travel between different points, along with his hobbies, interests, possible weight and income, data on his apps and records of files he had deleted. And that’s just for starters” (Popken Reference Popken2018, para. 4).

11 That was the case in 2015, when a complaint was filed at the Canadian Radio-television and Telecommunications Commission against Bell Mobility, Quebecor Media Inc., and Videotron (CRTC 2019). The companies, which provide Internet access to many millions of Canadian consumers, also provided streaming television services over the websites “Bell Mobile TV” and “illico.tv.” These websites were either exempted from customers’ data plans or severely discounted, in some cases by almost 90 per cent (CRTC 2019). This practice encouraged the ISP’s customers to subscribe to and use Bell and Videotron sites, rather than other audio-visual content sites (such as Netflix). The Commission ruled that the zero-rating practice conferred an “undue and unreasonable preference” on those who subscribed to the ISP’s mobile content services, violating the Telecommunications Act, and conferring a corresponding undue and unreasonable disadvantage on ISP customers who did not subscribe to the ISP’s content offerings (CRTC 2019, para. 61).

12 The Low Down to Hull and Back illustrates another way in which navigational systems disregard local concerns (CBC News 2019). The Low Down is an English-language newspaper based in Gatineau, Quebec, and has used a combination of English and French place names to refer to Quebec locations since its founding. For instance, the paper would refer to “Valley Road” rather than “Chemin de la Vallée de Wakefield,” but use “Lac Philippe” and not “Philippe Lake.” However, because of Google Maps’ conventions, The Low Down has recently changed its standard. The app would not recognize English place names, and reporters sent out to cover stories were getting lost in Gatineau Park (“le parc de la Gatineau”). For consistency, the paper has therefore decided to switch to French for all place names. Norms around choosing between English and French are politically fraught in Quebec, but Google Maps undermines them. The Low Down’s story illustrates a telling detail about our relationship to maps and systems: rather than changing what they saw on the map, the people reluctantly changed themselves.

13 Though these main roads were generally built for the State’s military use, the public at large was not excluded from them.

14 This paper was drafted during the COVID-19 pandemic in Ontario, during the fourth week of mobility restrictions. The inherent value of physical access to public space has rarely been so starkly evident.

12 Doughnut Privacy: A Preliminary Thought Experiment

My thanks to participants in the eQuality Project research workshop “On Being Human in the Digital World” and the 2022 Privacy Law Scholars Conference for their helpful comments, and to Rasheed Evelyn, Conor Kane, and Sherry Tseng for research assistance.

1 You can see Raworth’s (Reference Raworth2017) doughnut diagram at www.kateraworth.com/doughnut/.

2 The implications of these arguments for current experiments in cryptocurrency-based disintermediation of fiat currency are evident but beyond the scope of this paper.

3 Regulation 2016/679 of the European Parliament and of the Council of April 27, 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), O.J. (L 119) 1, art. 5(1)(b)–(c).

References

Acquisti, Alessandro, Taylor, Curtis, and Wagman, Liad. “The Economics of Privacy.” Journal of Economic Literature 54, no. 2 (2016): 442–492.
Andrejevic, Mark. “Exploitation in the Data Mine.” In Internet and Surveillance: The Challenges of Web 2.0 and Social Media, edited by Fuchs, Christian, Boersma, Kees, Albrechtslund, Anders, and Sandoval, Marisol, 71–88. New York: Routledge, 2012.
Bar-Gill, Oren. “Algorithmic Price Discrimination When Demand Is a Function of Both Preferences and (Mis)Perceptions.” The University of Chicago Law Review 86, no. 2 (2019): 217–254.
Biron, Bethany. “Number of Uber Drivers Hits Record High of 5 Million Globally as Cost of Living Soars: With 70% Citing Inflation as Their Primary Reason for Joining the Company.” Business Insider, August 3, 2022. www.businessinsider.com/uber-drivers-record-high-5-million-cost-living-inflation-2022-8.
Calo, Ryan. “Digital Market Manipulation.” The George Washington Law Review 82, no. 4 (2014): 773–802.
Calo, Ryan, and Rosenblat, Alex. “The Taking Economy: Uber, Information, and Power.” Columbia Law Review 117, no. 6 (2017): 1623–1690.
Cohen, Gerald A. “The Labor Theory of Value and the Concept of Exploitation.” Philosophy & Public Affairs 8, no. 4 (1979): 338–360.
Cohen, Julie E. Between Truth and Power: The Legal Constructions of Informational Capitalism. New York: Oxford University Press, 2019.
Cottom, Tressie McMillan. “Where Platform Capitalism and Racial Capitalism Meet: The Sociology of Race and Racism in the Digital Society.” Sociology of Race and Ethnicity 6, no. 4 (2020): 441–449.
De Stefano, Valerio. “The Rise of the ‘Just-in-Time Workforce’: On-Demand Work, Crowd Work and Labour Protection in the ‘Gig-Economy’.” Comparative Labor Law & Policy Journal 37, no. 3 (Spring 2016): 471–504.
Dubal, Veena. “On Algorithmic Wage Discrimination.” Columbia Law Review 123 (2023): 1929–1992.
Dubal, Veena. “The New Racial Wage Code.” Harvard Law and Policy Review 15 (2021): 511–549.
Dubal, Veena. “The Time Politics of Home-Based Digital Piecework.” Center for Ethics Journal: Perspectives on Ethics, July 4, 2020. https://c4ejournal.net/2020/07/04/v-b-dubal-the-time-politics-of-home-based-digital-piecework-2020-c4ej-xxx/.
Ezrachi, Ariel, and Stucke, Maurice E. “The Rise of Behavioural Discrimination.” European Competition Law Review 37, no. 2 (2016): 485–492.
Feinberg, Joel. The Moral Limits of the Criminal Law. Vol. 4, Harmless Wrongdoing. Oxford: Oxford University Press, 1990.
Fraser, Nancy. “Expropriation and Exploitation in Racialized Capitalism: A Reply to Michael Dawson.” Critical Historical Studies 3, no. 1 (2016): 163–178.
Fuchs, Christian. “Labor in Informational Capitalism and on the Internet.” The Information Society 26, no. 3 (2010): 179–196.
Fuchs, Christian. Social Media: A Critical Introduction. London: Sage, 2017.
Gillespie, Tarleton. “The Politics of ‘Platforms’.” New Media & Society 12, no. 3 (2010): 347–364.
Goodin, Robert. “Exploiting a Situation and Exploiting a Person.” In Modern Theories of Exploitation, edited by Reeve, Andrew, 166–200. London: Sage, 1987.
Gray, Mary L., and Suri, Siddharth. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Boston: Harper, 2019.
Haskins, Caroline. “The Low-Paid Humans behind AI’s Smarts Ask Biden to Free Them from ‘Modern Day Slavery’.” Wired, May 22, 2024. www.wired.com/story/low-paid-humans-ai-biden-modern-day-slavery/.
Hwang, Tim. Subprime Attention Crisis: Advertising and the Bomb at the Heart of the Internet. New York: Farrar, Straus and Giroux, 2020.
Interactive Advertising Bureau. “Advance Notice of Proposed Rulemaking for a Trade Regulation Rule on Commercial Surveillance and Data Security,” November 2022. www.iab.com/wp-content/uploads/2022/11/IAB-ANPRM-Comments.pdf.
Jordan, Tim. Information Politics: Liberation and Exploitation in the Digital Society. London: Pluto Press, 2015.
Lobel, Orly. “The Debate Over How to Classify Gig Workers Is Missing the Bigger Picture.” Harvard Business Review, July 24, 2019. https://hbr.org/2019/07/the-debate-over-how-to-classify-gig-workers-is-missing-the-bigger-picture.
Loeb, Zachary. “The Magnificent Bribe.” Real Life Magazine, October 25, 2021. https://reallifemag.com/the-magnificent-bribe/.
MacKay, Alexander, and Weinstein, Samuel. “Dynamic Pricing Algorithms, Consumer Harm, and Regulatory Response,” 2020. www.ssrn.com/abstract=3979147.
Malmqvist, Erik, and Szigeti, András. “Exploitation and Remedial Duties.” Journal of Applied Philosophy 38, no. 1 (2021): 55–72.
McGuigan, Lee. “After Broadcast, What? An Introduction to the Legacy of Dallas Smythe.” In The Audience Commodity in the Digital Age, edited by McGuigan, Lee and Manzerolle, Vincent, 1–22. New York: Peter Lang, 2014.
McGuigan, Lee. Selling the American People: Advertising, Optimization, and the Origins of Adtech. Cambridge, MA: The MIT Press, 2023. https://doi.org/10.7551/mitpress/13562.001.0001.
Meyers, Chris. “Wrongful Beneficence: Exploitation and Third World Sweatshops.” Journal of Social Philosophy 35, no. 3 (2004): 319–333.
Moradi, Pegah, and Levy, Karen. “The Future of Work in the Age of AI: Displacement or Risk-Shifting?” In The Oxford Handbook of Ethics of AI, edited by Dubber, Markus D., Pasquale, Frank, and Das, Sunit, 269–288. New York: Oxford University Press, 2020.
Moriarty, Jeffrey. “Why Online Personalized Pricing Is Unfair.” Ethics and Information Technology 23, no. 3 (2021): 495–503.
Muldoon, James. Platform Socialism: How to Reclaim Our Digital Future from Big Tech. London: Pluto Press, 2022.
Mumford, Lewis. “Authoritarian and Democratic Technics.” Technology and Culture 5, no. 1 (1964): 1–8.
“Open Letter to President Biden from Tech Workers in Kenya,” May 22, 2024. www.foxglove.org.uk/open-letter-to-president-biden-from-tech-workers-in-kenya/.
Pfotenhauer, Sebastian, Laurent, Brice, Papageorgiou, Kyriaki, and Stilgoe, Jack. “The Politics of Scaling.” Social Studies of Science 52, no. 1 (2022): 3–34.
Reiman, Jeffrey. “Exploitation, Force, and the Moral Assessment of Capitalism: Thoughts on Roemer and Cohen.” Philosophy & Public Affairs 16, no. 1 (1987): 3–41.
Rhodes, Anna. “Uber: Which Countries Have Banned the Controversial Taxi App.” The Independent, September 22, 2017. www.independent.co.uk/travel/news-and-advice/uber-ban-countries-where-world-taxi-app-europe-taxi-us-states-china-asia-legal-a7707436.html.
Roemer, John. “Should Marxists Be Interested in Exploitation?” Philosophy & Public Affairs 14, no. 1 (1985): 30–65.
Rosenblat, Alex, and Stark, Luke. “Algorithmic Labor and Information Asymmetries: A Case Study of Uber’s Drivers.” International Journal of Communication 10 (2016): 3758–3784.
Sample, Ruth. Exploitation: What It Is and Why It’s Wrong. Lanham, MD: Rowman and Littlefield, 2003.
Seele, Peter, Dierksmeier, Claus, Hofstetter, Reto, and Schultz, Mario D. “Mapping the Ethicality of Algorithmic Pricing: A Review of Dynamic and Personalized Pricing.” Journal of Business Ethics 170, no. 4 (2021): 697–719. https://doi.org/10.1007/s10551-019-04371-w.
Snyder, Jeremy. “Exploitation and Sweatshop Labor: Perspectives and Issues.” Business Ethics Quarterly 20, no. 2 (2010): 187–213.
Steinberg, Etye. “Big Data and Personalized Pricing.” Business Ethics Quarterly 30, no. 1 (January 2020): 97–117. https://doi.org/10.1017/beq.2019.19.
Steiner, Hillel. “A Liberal Theory of Exploitation.” Ethics 94, no. 2 (1984): 225–241.
Susser, Daniel. “Notice after Notice-and-Consent: Why Privacy Disclosures are Valuable Even if Consent Frameworks Aren’t.” Journal of Information Policy 9 (2019): 37–62.
Susser, Daniel, Roessler, Beate, and Nissenbaum, Helen. “Online Manipulation: Hidden Influences in a Digital World.” Georgetown Law Technology Review 4, no. 1 (2019): 1–45.
Susser, Daniel, and Grimaldi, Vincent. “Measuring Automated Influence: Between Empirical Evidence and Ethical Values.” In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 1–12. New York: ACM, 2021. https://dl.acm.org/doi/proceedings/10.1145/3461702.
Teachout, Zephyr. “Algorithmic Personalized Wages.” Politics and Society 51, no. 3 (2023): 436–458.
Terranova, Tiziana. “Free Labor: Producing Culture for the Digital Economy.” Social Text 18, no. 2 (2000): 33–58.
Tufekci, Zeynep. “Facebook’s Surveillance Machine.” The New York Times, March 19, 2018. www.nytimes.com/2018/03/19/opinion/facebook-cambridge-analytica.html.
Turow, Joseph, and Hoofnagle, Chris. “Mark Zuckerberg’s Delusion of Consumer Consent.” The New York Times, January 29, 2019. www.nytimes.com/2019/01/29/opinion/zuckerberg-facebook-ads.html.
US Federal Trade Commission. “Policy Statement on Enforcement Related to Gig Work,” September 15, 2022. www.ftc.gov/legal-library/browse/policy-statement-enforcement-related-gig-work.
Wertheimer, Alan. Exploitation. Princeton, NJ: Princeton University Press, 1996.
Wood, Allen. “Exploitation.” Social Philosophy and Policy 12, no. 2 (1995): 136–158.
Zittrain, Jonathan. “The Internet Creates a New Kind of Sweatshop.” Newsweek, December 7, 2009. www.newsweek.com/internet-creates-new-kind-sweatshop-75751.
Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs, 2019.
Zuckerberg, Mark. “The Facts about Facebook.” Wall Street Journal, January 24, 2019. www.wsj.com/articles/the-facts-about-facebook-11548374613.
Zwolinski, Matt. “Structural Exploitation.” Social Philosophy and Policy 29, no. 1 (2012): 154–179.
Zwolinski, Matt, Ferguson, Benjamin, and Wertheimer, Alan. “Exploitation.” In The Stanford Encyclopedia of Philosophy, edited by Zalta, Edward N. and Nodelman, Uri. Stanford, CA: Stanford University, 2022. https://plato.stanford.edu/archives/win2022/entries/exploitation/.

References

An, Daniel. “Find Out How You Stack Up to New Industry Benchmarks for Mobile Page Speed.” Think with Google. Internet Archive Wayback Machine. February 2018. https://web.archive.org/web/20190125174538/https:/www.thinkwithgoogle.com/marketing-resources/data-measurement/mobile-page-speed-new-industry-benchmarks/.
Anderson, Nate. “Deep Packet Inspection Meets ‘Net Neutrality, CALEA.” Ars Technica, July 26, 2007. https://arstechnica.com/gadgets/2007/07/deep-packet-inspection-meets-net-neutrality/.
Blomley, Nicholas K. “Mobility, Empowerment and the Rights Revolution.” In Geographical Thought: A Praxis Perspective, edited by Henderson, George and Waterstone, Marvin, 201–215. London: Routledge, 2009.
BuiltWith. “Google Maps Usage Statistics.” Internet Archive Wayback Machine. January 23, 2019. https://tinyurl.com/ydbcqlht.
Burdick, Charles K. “The Origin of the Peculiar Duties of Public Service Companies.” Columbia Law Review 11, no. 8 (1911): 743–764. https://doi.org/10.2307/1110915.
Canada (Information Commissioner) v Canada (Minister of National Defence), 2011 SCC 25, [2011] 2 SCR 306. https://decisions.scc-csc.ca/scc-csc/scc-csc/en/item/7939/index.do.
Carlson, Nicholas. “To Do What Google Does in Maps, Apple Would Have to Hire 7,000 People.” Business Insider Australia, June 27, 2012. www.businessinsider.com/to-do-what-google-does-in-maps-apple-would-have-to-hire-7000-people-2012-6.
CBC News. “Western Quebec Newspaper Changes Policy to Help Google Maps Users.” CBC, January 17, 2019. www.cbc.ca/news/canada/ottawa/outaouais-french-street-names-gps-1.4974821.
Cohan, Peter. “Four Reasons Google Bought Waze.” Forbes, June 11, 2013. www.forbes.com/sites/petercohan/2013/06/11/four-reasons-for-google-to-buy-waze/?sh=2f6ba0a6726f.
Comcast Corporation. “Description of Planned Network Management Practices to be Deployed Following the Termination of Current Practices.” Comcast. 2008. http://downloads.comcast.net/docs/Attachment_B_Future_Practices.pdf.
CRTC. “Asian Television Network International Limited, on Behalf of the FairPlay Coalition: Application to Disable Online Access to Piracy Websites.” Government of Canada. October 2, 2018. https://crtc.gc.ca/eng/archive/2018/2018-384.htm.
CRTC. “Complaint against Bell Mobility Inc. and Quebecor Media Inc., Videotron Ltd. and Videotron G.P. Alleging Undue and Unreasonable Preference and Disadvantage in regard to the Billing Practices for their Mobile TV Services Bell Mobile TV and illico.tv.” Government of Canada. January 29, 2019. https://crtc.gc.ca/eng/archive/2015/2015-26.htm.
CRTC. “Telecom Regulatory Policy CRTC 2017-104.” Government of Canada. April 20, 2017. https://crtc.gc.ca/eng/archive/2017/2017-104.htm.
EC, Regulation (EU) 2015/2120 of the European Parliament and of the Council of 25 November 2015 laying down measures concerning open internet access and amending Directive 2002/22/EC on universal service and users’ rights relating to electronic communications networks and services and Regulation (EU) No 531/2012 on roaming on public mobile communications networks within the Union (Text with EEA relevance), [2015] OJ, L 310/1. https://eur-lex.europa.eu/eli/reg/2015/2120/oj.
Finley, Klint. “A Brief History of Net Neutrality.” WIRED, May 9, 2018. www.wired.com/amp-stories/net-neutrality-timeline/.
Foderaro, Lisa W. “Navigation Apps Are Turning Quiet Neighborhoods into Traffic Nightmares.” The New York Times, December 24, 2017. www.nytimes.com/2017/12/24/nyregion/traffic-apps-gps-neighborhoods.html.
French, R. L. “Maps on Wheels.” In Cartographies of Travel and Navigation, edited by Akerman, James R., 269–270. Chicago: University of Chicago Press, 2006.
Google Maps: Street View. “Where We’ve Been & Where We’re Headed Next.” Google. Internet Archive Wayback Machine. January 22, 2019. https://web.archive.org/web/20190125160029/https:/www.google.ca/streetview/understand/.
Hattem, Julian. “Franken: Net Neutrality Is ‘First Amendment Issue of Our Time’.” The Hill, July 8, 2014. https://thehill.com/policy/technology/211607-franken-net-neutrality-is-first-amendment-issue-of-our-time.
Here Technologies. “About Us.” Here. 2019. www.here.com/company/about-us.
Homer. The Iliad, translated by S. Butler. Book XV. London: Longman Green and Co., 1898. https://omnika.org/texts/875#Book-XV.
Indiana University. “What Is a Packet?” University Information Technology Services, January 18, 2018. https://kb.iu.edu/d/anyq.
Jacobson, Herbert R. “A History of Roads from Ancient Times to the Motor Age.” Master’s Thesis, Georgia School of Technology, 1940. https://repository.gatech.edu/server/api/core/bitstreams/566242b3-8fcf-4d88-a509-56b08323d563/content.
Luo, Wei. “DeepMap Collaborates with Ford on HD Mapping Research for Autonomous Vehicles.” DeepMap, Inc., October 24, 2017. https://deepmap.medium.com/deepmap-collaborates-with-ford-on-hd-mapping-research-for-autonomous-vehicles-e0c444764320.
Madrigal, Alexis C., and LaFrance, Adrienne. “Net Neutrality: A Guide to (and History of) a Contested Idea.” The Atlantic. April 25, 2014. www.theatlantic.com/technology/archive/2014/04/the-best-writing-on-net-neutrality/361237/.
Millar, Jason. “Ethics Settings for Autonomous Vehicles.” In Robot Ethics 2.0, edited by Lin, Patrick, Abney, Keith, and Jenkins, Ryan, 20–34. Oxford: Oxford University Press, 2017.
Molla, Rani. “Two-thirds of Adults Worldwide Will Own Smartphones Next Year.” Vox, October 16, 2017. www.vox.com/2017/10/16/16482168/two-thirds-of-adults-worldwide-will-own-smartphones-next-year.
New Jersey Steam Navigation Co v Merchants’ Bank, 47 US (6 How) 344 (1848). https://supreme.justia.com/cases/federal/us/47/344/.
O’Beirne, Justin. “Google Maps’s Moat.” Justin O’Beirne (blog), 2017. www.justinobeirne.com/google-maps-moat.
Ontario (Public Safety and Security) v Criminal Lawyers’ Association, 2010 SCC 23, [2010] 1 SCR 815. https://decisions.scc-csc.ca/scc-csc/scc-csc/en/item/7864/index.do.
O’Rourke, Patrick. “CRTC Denies Bell-led FairPlay Canada Coalition on ‘Jurisdictional Grounds’.” MobileSyrup, October 2, 2018. https://mobilesyrup.com/2018/10/02/crtc-denies-fairplay-canada-coalition-jurisdictional-grounds/.
Panko, Riley. “The Popularity of Google Maps: Trends in Navigation Apps in 2018.” The Manifest, July 10, 2018. https://themanifest.com/app-development/trends-navigation-apps.
Parsons, Christopher. “The Politics of Deep Packet Inspection: What Drives Surveillance by Internet Service Providers?” PhD Dissertation, University of Victoria, 2013. https://dspace.library.uvic.ca/items/233f1449-c664-40d6-b6d6-dea15058a2c7.
Pew Research Center. “Mobile Fact Sheet.” Pew Research Center, June 12, 2019. www.pewresearch.org/internet/fact-sheet/mobile/.
Popken, Ben. “Worried about What Facebook Knows about You? Check out Google.” NBC News, March 28, 2018. www.nbcnews.com/tech/social-media/worried-about-what-facebook-knows-about-you-check-out-google-n860781.
Popper, Ben. “Google Announces over 2 Billion Monthly Active Devices on Android.” The Verge, May 17, 2017. www.theverge.com/2017/5/17/15654454/android-reaches-2-billion-monthly-active-users.
Porter, Jon. “Google Is Bringing Together Its Waze and Maps Teams as It Pushes to Reduce Overlap.” The Verge, December 8, 2022. www.theverge.com/2022/12/8/23499734/google-maps-waze-development-teams-combined-productivity.
Reuters. “TomTom Shares Jump after Apple Renews Digital Maps Contract.” Reuters, May 17, 2015. www.reuters.com/article/us-tomtom-apple/tomtom-shares-jump-after-apple-renews-digital-maps-contract-idUSKBN0O40GE20150519.
Richter, Felix. “Charted: There Are More Mobile Phones than People in the World.” World Economic Forum, April 11, 2023. www.weforum.org/agenda/2023/04/charted-there-are-more-phones-than-people-in-the-world/.
Think with Google. “Mobile Site Abandonment.” Think with Google. Internet Archive Wayback Machine. August 15, 2018. https://web.archive.org/web/20180815133218/https:/www.thinkwithgoogle.com/data/mobile-site-abandonment-three-second-load/.
Uber. “Uber – Mapping.” January 22, 2019. www.uber.com/info/mapping/.
Walker, Alissa. “Why People Keep Trying to Erase the Hollywood Sign from Google Maps.” Gizmodo, November 21, 2014. https://gizmodo.com/why-people-keep-trying-to-erase-the-hollywood-sign-from-1658084644.Google Scholar
Walsh, Margaret. “Gender and American Mobility: Cars, Women and the Issue of Equality.” In Cultural Histories of Sociabilities, Spaces and Mobilities, edited by Divall, Colin, 2938. London: Routledge, 2015.Google Scholar
Waze Ads Starter Help. “About Ad Formats in Waze.” Waze, Google. n.d. Accessed April 2022. https://support.google.com/wazelocal/answer/9747689?hl=en&ref_topic=6153431&visit_id=637496346364878181-3063296140&rd=1.Google Scholar
Wu, Tim. A Proposal for Network Neutrality. Charlottesville: University of Virginia Law School, 2002.Google Scholar
Wyman, Bruce. “The Law of the Public Callings as a Solution of the Trust Problem.” Harvard Law Review 17, no. 4 (1904): 217247. www.jstor.org/stable/1323312.CrossRefGoogle Scholar

References

Ada Lovelace Institute. “International Monitor: Vaccine Passports and COVID Status Apps.” Ada Lovelace Institute. May 1, 2020. www.adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/.
Ajunwa, Ifeoma, Crawford, Kate, and Schultz, Jason. “Limitless Worker Surveillance.” California Law Review 105 (2017): 735–776.
Allen, Anita. Why Privacy Isn’t Everything: Feminist Reflections on Personal Accountability. Lanham, MD: Rowman & Littlefield, 2003.
Alstadsaeter, Annette, Johannesen, Niels, and Zucman, Gabriel. “Tax Evasion and Inequality.” American Economic Review 109, no. 6 (2019): 2073–2103.
Americans for Financial Reform, Anti-Corruption Data Collective and Public Citizen. “Report: Public Money for Private Equity: Pandemic Relief Went to Companies Backed by Private Equity Titans.” Americans for Financial Reform, September 15, 2021. https://ourfinancialsecurity.org/2021/09/report-public-money-for-private-equity-cares-act.
Anderssen, Pernille Tangaard, Loncarevic, Natasa, Damgaard, Maria B., Jacobsen, Mette W., Bassioni-Stamenic, Farida, and Karlsson, Leena E. “Public Health, Surveillance Policies and Actions to Prevent Community Spread of COVID-19 in Denmark, Serbia, and Sweden.” Scandinavian Journal of Public Health 50, no. 6 (2021): 711–729. https://doi.org/10.1177/14034948211056215.
Arendt, Hannah. The Human Condition. Chicago: University of Chicago Press, 1958.
Bannon, Alicia, Nagrecha, Mitali, and Diller, Rebekah. Criminal Justice Debt: A Barrier to Reentry. New York: Brennan Center for Justice at New York University School of Law, 2010.
Benjamin, Ruha. Race after Technology: Abolitionist Tools for the New Jim Code. New York: Polity Press, 2019.
Bridges, Khiara. The Poverty of Privacy Rights. Redwood City, CA: Stanford University Press, 2017.
Browne, Simone. Dark Matters: On the Surveillance of Blackness. Durham, NC: Duke University Press, 2017.
Buolamwini, Joy, and Gebru, Timnit. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research 81 (2018): 1–15.
Burrell, Jenna. “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms.” Big Data & Society 3, no. 1 (2016): 1–12.
Chang, Agnes, Qin, Amy, Qian, Isabelle, and Chien, Amy C. “Under Lockdown in China.” The New York Times, April 29, 2022. www.nytimes.com/interactive/2022/04/29/world/asia/shanghai-lockdown.html.
Citron, Danielle Keats. The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age. New York: W. W. Norton, 2022.
Cohen, Julie E. Between Truth and Power: The Legal Constructions of Informational Capitalism. New York: Oxford University Press, 2019a.
Cohen, Julie E. Configuring the Networked Self: Law, Code, and the Play of Everyday Practice. New Haven, CT: Yale University Press, 2012.
Cohen, Julie E. “Turning Privacy Inside Out.” Theoretical Inquiries in Law 20, no. 1 (2019b): 1–21.
Cohen, Julie E. “What Privacy Is For.” Harvard Law Review 126 (2013): 1904–1933.
Corvellec, Hervé, Stowell, Alison F., and Johansson, Nils. “Critiques of the Circular Economy.” Journal of Industrial Ecology 26, no. 3 (2022): 421–432.
Couldry, Nick, and Mejias, Ulises A. “Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject.” Television & New Media 20, no. 4 (2019): 336–349.
Couldry, Nick, and Mejias, Ulises A. “The Decolonial Turn in Data and Technology Research: What Is at Stake and Where Is It Heading?” Information, Communication & Society 26, no. 3 (2023): 786–802.
Crawford, Kate. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven, CT: Yale University Press, 2021.
Crootof, Rebecca, Kaminski, Margot E., and Nicholson Price, W. “Humans in the Loop.” Vanderbilt Law Review 76 (2023): 429–510.
Etzioni, Amitai. The Limits of Privacy. New York: Basic Books, 2000.
Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: Macmillan, 2018.
Farrell, Henry, and Schneier, Bruce. “Research Publication No. 2018-7: Common-Knowledge Attacks on Democracy.” Berkman Klein Center for Internet & Society at Harvard University. November 17, 2018. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3273111.
Foohey, Pamela, and Greene, Sara S. “Credit Scoring Duality.” Law and Contemporary Problems 85, no. 3 (2022): 101–122.
Fourcade, Marion, and Healy, Kieran. “Seeing Like a Market.” Socio-Economic Review 15 (2017): 9–29.
Friant, Martin Callisto, Vermeulen, Walter J. V., and Salomone, Roberta. “A Typology of Circular Economy Discourses: Navigating the Diverse Versions of a Contested Paradigm.” Resources, Conservation and Recycling 161 (2020): 1–19.
Garvie, Clare, Bedoya, Alvaro, and Frankle, Jonathan. “The Perpetual Lineup: Unregulated Police Face Recognition in America.” Georgetown Center on Privacy & Technology, 2016. www.perpetuallineup.org/.
Gilliard, Chris. “The Rise of ‘Luxury Surveillance.’” The Atlantic, October 18, 2022. www.theatlantic.com/technology/archive/2022/10/amazon-tracking-devices-surveillance-state/671772/.
Gilliard, Chris. “The Two Faces of the Smart City.” Fast Company, January 20, 2020. www.fastcompany.com/90453305/the-two-faces-of-the-smart-city.
Gilliom, John. Overseers of the Poor: Surveillance, Resistance, and the Limits of Privacy. Chicago: University of Chicago Press, 2001.
Gilman, Michele E. “The Class Differential in Privacy Law.” Brooklyn Law Review 77, no. 4 (2012): 1389–1445.
Gilman, Michele E., and Green, Rebecca. “The Surveillance Gap: The Harms of Extreme Privacy and Data Marginalization.” New York University Review of Law & Social Change 42 (2018): 253–307.
Global Alliance for Tax Justice. “The State of Tax Justice 2021.” Tax Justice Network. November 16, 2021. https://taxjustice.net/reports/the-state-of-tax-justice-2021/.
Green, Ben. “The Flaws of Policies Requiring Human Oversight of Government Algorithms.” Computer Law & Security Review 45 (2022): 1–22. https://doi.org/10.1016/j.clsr.2022.105681.
Grubaugh, Nathan D., Hodcroft, Emma B., Fauver, Joseph R., Phelan, Alexandra L., and Cevik, Muge. “Public Health Actions to Control New SARS-CoV-2 Variants.” Cell 184, no. 5 (2021): 1127–1132.
Guyton, John, Langetieg, Patrick, Reck, Daniel, Risch, Max, and Zucman, Gabriel. “Tax Evasion at the Top of the Income Distribution: Theory and Evidence.” National Bureau of Economic Research, December 2021. www.nber.org/papers/w28542.
Hendl, Tereza, Chung, Ryoa, and Wild, Verina. “Pandemic Surveillance and Racialized Subpopulations: Mitigating Vulnerabilities in COVID-19 Apps.” Journal of Bioethical Inquiry 17 (2020): 928–934.
Hicks, Jacqueline. “Digital ID Capitalism: How Emerging Economies Are Reinventing Digital Capitalism.” Contemporary Politics 26, no. 3 (2020): 330–350.
Hildebrandt, Mireille. “Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning.” Theoretical Inquiries in Law 20, no. 1 (2019): 83–121.
Hill, Kashmir. “The Secretive Company That Might End Privacy as We Know It.” The New York Times, January 18, 2020. www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html.
Hoffman, Tamar. “Debt and Policing: The Case to Abolish Credit Surveillance.” Georgetown Journal of Poverty Law and Policy 29, no. 1 (2021): 93–119.
van Hulst, Merlijn, and Yanow, Dvora. “From Policy ‘Frames’ to ‘Framing’: Theorizing a More Dynamic Approach.” American Review of Public Administration 46, no. 1 (2016): 92–112.
International Consortium of Investigative Journalists (ICIJ). “Offshore Havens and Hidden Riches of World Leaders and Billionaires Exposed in Unprecedented Leak.” International Consortium of Investigative Journalists, October 3, 2021. www.icij.org/investigations/pandora-papers/global-investigation-tax-havens-offshore/.
Jackson, Jason, and Ahmed, Aziza. “The Public/Private Distinction in Public Health: The Case of COVID-19.” Fordham Law Review 90, no. 6 (2022): 2541–2559.
Kapczynski, Amy. “The Public History of Trade Secrets.” U.C. Davis Law Review 55 (2022): 1367–1443.
Kendzior, Sarah. Hiding in Plain Sight: The Invention of Donald Trump and the Erosion of America. New York: Flatiron Books, 2020.
Kiel, Paul. “It’s Getting Worse: The IRS Now Audits Poor Americans at About the Same Rate as the Top 1%.” ProPublica, 2019. www.propublica.org/article/irs-now-audits-poor-americans-at-about-the-same-rate-as-the-top-1-percent.
Kumar, Lakshmi, and de Bel, Kaisa. “Acres of Money Laundering: Why US Real Estate Is a Kleptocrat’s Dream.” Global Financial Integrity, August 2021. https://gfintegrity.org/acres-of-money-laundering-2021/.
Leiwant, Matthew Harold. “Locked Out: How Algorithmic Tenant Screening Exacerbates the Housing Crisis in the United States.” Georgetown Law Technology Review 6 (2022): 276–299.
Luukkanen, Jyrki, Vehmas, Jarmo, and Kaivo-oja, Jari. “Quantification of Doughnut Economy with the Sustainability Window Method: Analysis of Development in Thailand.” Sustainability 13 (2021): 1–18. https://doi.org/10.3390/su13020847.
McLeod, Allegra. “Envisioning Abolition Democracy.” Harvard Law Review 132 (2019): 1613–1649.
Milner, Yeshimabeit, and Traub, Amy. Data Capitalism + Algorithmic Racism. Demos, 2021. www.demos.org/research/data-capitalism-and-algorithmic-racism.
Morgan, Jamelia. “Responding to Abolition Anxieties: A Roadmap for Legal Analysis.” Michigan Law Review 120 (2022): 1199–1224.
Morten, Christopher J. “Publicizing Corporate Secrets.” University of Pennsylvania Law Review 170 (2023): 1319–1404.
Murakami Wood, David, ed. A Report on the Surveillance Society for the Information Commissioner by the Surveillance Studies Network. London: Mark Siddoway/Knowledge House, 2006. https://ico.org.uk/media/about-the-ico/documents/1042390/surveillance-society-full-report-2006.pdf.
Nadler, Anthony, Crain, Matthew, and Donovan, Joan. “Weaponizing the Digital Influence Machine: The Political Perils of Online Ad Tech.” Data & Society, October 17, 2018. https://datasociety.net/library/weaponizing-the-digital-influence-machine/.
Nissenbaum, Helen. Privacy in Context. Stanford: Stanford University Press, 2009.
No Tech for Tyrants and Privacy International. “All Roads Lead to Palantir: A Review of How the Data Analytics Company Has Embedded Itself Throughout the UK.” Privacy International, October 29, 2020. https://privacyinternational.org/report/4271/all-roads-lead-palantir.
Podkul, Cezary. “How Unemployment Insurance Fraud Exploded during the Pandemic.” ProPublica, 2021. www.propublica.org/article/how-unemployment-insurance-fraud-exploded-during-the-pandemic.
Poon, Martha. “From New Deal Institutions to Capital Markets: Commercial Consumer Risk Scores and the Making of Subprime Mortgage Finance.” Accounting, Organizations & Society 34, no. 5 (2009): 654–674.
Post, Robert. “The Social Foundations of Privacy: Community and Self in the Common Law Tort.” California Law Review 77 (1989): 957–1010.
Raworth, Kate. Doughnut Economics: 7 Ways to Think Like a 21st Century Economist. White River Junction, VT: Chelsea Green Publishing, 2017.
Richards, Neil. Why Privacy Matters. New York: Oxford University Press, 2021.
Richardson, Rashida, and Kak, Amba. “Suspect Development Systems: Databasing Marginality and Enforcing Discipline.” University of Michigan Journal of Law Reform 55, no. 4 (2022): 813–883.
Rogers, Brishen. Rethinking the Future of Work: Law, Technology and Economic Citizenship. Cambridge, MA: MIT Press, 2023.
Roessler, Beate. The Value of Privacy. Cambridge, MA: Polity Press, 2005.
Rowe, Elizabeth A. “Regulating Facial Recognition Technology in the Private Sector.” Stanford Technology Law Review 24 (2020): 1–54.
Rozenshtein, Alan Z. “Digital Disease Surveillance.” American University Law Review 70 (2021): 1511.
Singh, Ranjit, and Jackson, Steven J. “From Margins to Seams: Imbrication, Inclusion, and Torque in the Aadhaar Identification Project.” In Proceedings of the 2017 Conference on Human Factors in Computing Systems, 4776–4824. New York: ACM, 2017. https://doi.org/10.1145/3025453.3025910.
Smith, Erin, and Vogell, Heather. “How Your Shadow Credit Score Could Decide Whether You Get an Apartment.” ProPublica, 2022. www.propublica.org/article/how-your-shadow-credit-score-could-decide-whether-you-get-an-apartment.
Solove, Daniel. Understanding Privacy. Cambridge, MA: Harvard University Press, 2008.
Steeves, Valerie. “Reclaiming the Social Value of Privacy.” In Lessons from the Identity Trail: Anonymity, Privacy and Identity in a Networked Society, edited by Kerr, Ian, Steeves, Valerie, and Lucock, Carole, 191–208. New York: Oxford University Press, 2009.
Strahilevitz, Lior J. “Privacy Versus Antidiscrimination.” University of Chicago Law Review 75, no. 1 (2008): 363–381.
Swire, Peter P. “Financial Privacy and the Theory of High-Tech Government Surveillance.” Washington University Law Quarterly 77 (1999): 461–512.
Thierer, Adam. Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom. Arlington County, VA: Mercatus Center at George Mason University, 2014.
Traub, Amy. Discredited: How Employment Credit Checks Keep Qualified Workers out of a Job. Demos, 2014. www.demos.org/research/discredited-how-employment-credit-checks-keep-qualified-workers-out-job.
Viljoen, Salome. “A Relational Theory of Data Governance.” Yale Law Journal 131 (2021): 573–654.
Wang, Yuliang, Chen, Xiaobo, Wang, Feng, Geng, Jie, Liu, Bingxu, and Han, Feng. “Value of Anal Swabs for SARS-COV-2 Detection: A Literature Review.” International Journal of Medical Sciences 18 (2021): 2389–2393.
Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs, 2019.
