Advanced AI (generative AI) poses challenges to the practice of law and to society as a whole. The proper governance of AI remains unresolved but will likely be multifaceted, combining soft law (such as standardisation, best practices and ethical guidelines) with hard law consisting of a blend of existing law and new regulations. This chapter argues that lawyers’ professional codes of conduct (ethical guidelines) provide a governance system that can be applied to the AI industry. The increasing professionalisation of the field warrants treating AI creators, developers and operators as professionals subject to the obligations foisted on the legal profession and other learned professions. Legal ethics provides an overall conceptual structure that can guide AI development, serving the purposes of clarifying potential liabilities for AI developers and building trust among users of AI. Additionally, AI creators, developers and operators should be subject to fiduciary duty law. Applied to these professionals, fiduciary duty law would require a duty of care in designing safe AI systems; a duty of loyalty to customers, users and society not to create systems that manipulate consumers and democratic governance; and a duty of good faith to create beneficial systems. This chapter advocates the use of ethical guidelines and fiduciary law not as soft law but as the basis for structuring private law in the governance of AI.
Education aims to improve our innate abilities, teach new skills and habits, and nurture intellectual virtues. Poorly designed or misused generative AI disrupts these educational goals. This paper proposes strategies for designing generative AI that aligns with education’s aims, including a design for a generative AI tutor that teaches students to question well. I argue that such an AI can also help students learn to lead noble inquiries, achieve deeper understanding, and experience a sense of curiosity and fascination. Students who learn to question effectively through such an AI tutor may also develop crucial intellectual virtues.
This scoping review directs attention to artificial intelligence–mediated informal language learning (AI-ILL), defined as autonomous, self-directed, out-of-class second and foreign language (L2) learning practices involving AI tools. Through analysis of 65 empirical studies published up to mid-April 2025, it maps the landscape of this emerging field and identifies the key antecedents and outcomes. Findings revealed a nascent field characterized by exponential growth following ChatGPT’s release, geographical concentration in East Asia, methodological dominance of cross-sectional designs, and limited theoretical foundations. Analysis also demonstrated that learners’ AI-mediated informal learning practices are influenced by cognitive, affective, and sociocontextual factors, while producing significant benefits across linguistic, affective, and cognitive dimensions, particularly enhanced speaking proficiency and reduced communication anxiety. This review situates AI-ILL as an evolving subfield within intelligent CALL and suggests important directions for future research to understand the potential of constantly emerging AI technologies in supporting autonomous L2 development beyond the classroom.
Artificial Intelligence (AI) has reached memory studies in earnest. This partly reflects the hype around recent developments in generative AI (genAI), machine learning, and large language models (LLMs). But how can memory studies scholars handle this hype? Focusing on genAI applications, in particular so-called ‘chatbots’ (transformer-based instruction-tuned text generators), this commentary highlights five areas of critique that can help memory scholars to critically interrogate AI’s implications for their field. These are: (1) historical critiques that complicate AI’s common historical narrative and historicize genAI; (2) technical critiques that highlight how genAI applications are designed and function; (3) praxis critiques that centre on how people use genAI; (4) geopolitical critiques that recognize how international power dynamics shape the uneven global distribution of genAI and its consequences; and (5) environmental critiques that foreground genAI’s ecological impact. For each area, we highlight debates and themes that we argue should be central to the ongoing study of genAI and memory. We do this from an interdisciplinary perspective that combines our knowledge of digital sociology, media studies, literary and cultural studies, cognitive psychology, and communication and computer science. We conclude with a methodological provocation and by reflecting on our own role in the hype we are seeking to dispel.
This short research article interrogates the rise of digital platforms that enable ‘synthetic afterlives’, with a focus on how deathbots – AI-driven avatar interactions grounded in personal data and recordings – reshape memory practices. Drawing on socio-technical walkthroughs of four platforms – Almaya, HereAfter, Séance AI, and You, Only Virtual – we analyse how they frame, archive, and algorithmically regenerate memories. Our findings reveal a central tension: between preserving the past as a fixed archive and continually reanimating it through generative AI. Our walkthroughs demonstrate how these services commodify remembrance, reducing memory to consumer-driven interactions designed for affective engagement while obscuring the ethical, epistemological and emotional complexities of digital commemoration. In doing so, they enact reductive forms of memory that are embedded within platform economies and algorithmic imaginaries.
Chapter 3 examines the regulatory approaches outlined in the Artificial Intelligence Act (AIA) concerning Emotion Recognition Systems (ERS). As the first legislation specifically addressing ERS, the EU’s AI Act employs a multilayered framework that classifies these systems as both limited and high-risk AI technologies. By categorising all ERS as limited risk, the AIA aims to eliminate the practice of inferring emotions or intentions from individuals without their awareness. Additionally, all ERS must adhere to the stringent requirements set for high-risk AI systems. The use of AI systems for inferring emotions in workplace and educational settings is classified as an unacceptable risk and thus prohibited. Considering the broader context, the regulation of ERS represents a nuanced effort by legislators to balance the promotion of innovation with the necessity of imposing rigorous safeguards. However, this book contends that the AIA should not be seen as the ultimate regulation of MDTs. Instead, it serves as a general framework or baseline that requires further legal measures, including additional restrictions or prohibitions through sector-specific legislation.
Generative AI based on large language models (LLMs) currently faces serious privacy leakage issues due to its vast number of parameters and diverse data sources. When using generative AI, users inevitably share data with the system. Personal data collected by generative AI may be used for model training and leaked in future outputs. The risk of private information leakage is closely tied to generative AI’s inherent operating mechanism, and this indirect leakage is difficult for users to detect because of that mechanism’s high complexity. By focusing on the private information exchanged during interactions between users and generative AI, we identify the privacy dimensions involved and develop a model of privacy types in human–generative AI interactions. This model can serve as a reference for keeping private data out of model training and can help generative AI systems clearly explain how they handle the types of privacy users are concerned about.
This article explores the transformational potential of artificial intelligence (AI), particularly generative AI (genAI) – large language models (LLMs), chatbots, and AI-driven smart assistants yet to emerge – to reshape human cognition, memory, and creativity. First, the paper investigates the potential of genAI tools to enable a new form of human-computer co-remembering, based on prompting rather than traditional recollection. Second, it examines the individual, cultural, and social implications of co-creating with genAI for human creativity. These phenomena are explored through the concept of Homo Promptus, a figure whose cognitive processes are shaped by engagement with AI. Two speculative scenarios illustrate these dynamics. The first, ‘prompting to remember’, analyses genAI tools as cognitive extensions that offload memory work to machines. The second scenario, ‘prompting to create’, explores changes in creativity when performing together with genAI tools as co-creators. By mobilising concepts from cognitive psychology, media and memory studies, together with Huizinga’s exploration of play, and Rancière’s intellectual emancipation, this study argues that genAI tools are not only reshaping how humans remember and create but also redefining cultural and social norms. It concludes by calling for ‘critical’ engagement with the societal and intellectual implications of AI, advocating for research that fosters adaptive and independent (meta)cognitive practices to reconcile digital innovation with human agency.
This research explores concertinaing past, present and future interventional creative and pedagogical practices to address the challenges of the Post-Anthropocene era. We argue that the Post-Anthropocene is marked by biotechnological entanglements, environmental violence and digital overstimulation. The discussions herein critique a hyperattentive achievement society characterised by a scattering of attention, a near-constant screen-mediated stream of digital material and tasks and the commodification of leisure time. Enlisting Byung-Chul Han’s concept of hyperattention and themes and motifs from David Cronenberg’s films, the authors propose “FUTURE PROOF re(image)ining” as a collaborative Cli-Fi narrative concept. The project reimagines objects from an initial art installation with a diffusion-based machine learning model. By drawing on a constellation of Taoist philosophical practices, Zen garden design, scholars’ rocks and Cronenbergian themes, the authors propose an exhibition featuring reimagined cave-like gongshi rock structures and objects. A triangulation of spaces for FUTURE PROOF participants to inhabit facilitates an unfolding contemplative-creative trajectory. The concept includes a sensory deprivation cave, a View-Master cave for focused stereoscopic image viewing and a haiku/soundscape cave to initiate experiences. FUTURE PROOF aims to promote deep contemplation, challenging some of the deleterious aspects of Western digital-algorithmic screen culture and cultivating relationality with an always more-than-human world.
This article constructs an approach to analyzing longitudinal panel data that combines topological data analysis (TDA) and generative AI applied to graph neural networks (GNNs). TDA is deployed to identify and analyze unobserved topological heterogeneities in a dataset. TDA-extracted information is quantified into a set of measures, called functional principal components. These measures are used to analyze the data in four ways. First, the measures are construed as moderators of the data, and their statistical effects are estimated through a Bayesian framework. Second, the measures are used as factors to classify the data into topological classes using generative AI applied to GNNs constructed by transforming the data into graphs. The classification uncovers patterns in the data that are otherwise not accessible through statistical approaches. Third, the measures are used as factors that condition the extraction of latent variables of the data through a deployment of a generative AI model. Fourth, the measures are used as labels for classifying the graphs into classes, offering a GNN-based effective dimensionality reduction of the original data. The article uses a portion of the militarized international disputes (MIDs) dataset (from 1946 to 2010) as a running example to briefly illustrate its ideas and steps.
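To make the first of these four uses concrete, here is a minimal sketch of the pipeline’s opening steps: compute a persistence diagram per panel unit, summarise it into a scalar topological measure, and treat that measure as a moderator in a Bayesian regression. It assumes the ripser and scikit-learn packages are available; the total-persistence summary is an illustrative stand-in for the article’s functional principal components, not the authors’ actual implementation.

```python
# Hypothetical sketch: TDA summaries as moderators in a Bayesian regression.
# Assumes `pip install ripser scikit-learn`; feature choices are illustrative.
import numpy as np
from ripser import ripser                       # Vietoris-Rips persistent homology
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(0)

def total_persistence(points, dim=1):
    """Sum of (death - birth) over the dim-dimensional persistence diagram."""
    dgm = ripser(points, maxdim=dim)["dgms"][dim]
    finite = dgm[np.isfinite(dgm[:, 1])]        # drop any infinite bars
    return float((finite[:, 1] - finite[:, 0]).sum())

# Toy panel: 50 units, each carrying a small point cloud plus a covariate x.
units = [rng.normal(size=(30, 2)) for _ in range(50)]
x = rng.normal(size=50)
topo = np.array([total_persistence(pts) for pts in units])
y = 1.5 * x + 0.8 * x * topo + rng.normal(scale=0.1, size=50)

# Treat the topological measure as a moderator via an interaction term.
X = np.column_stack([x, topo, x * topo])
model = BayesianRidge().fit(X, y)
print("posterior coefficient means:", model.coef_)
```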
Generative AI (GenAI) offers potential for English language teaching (ELT), but it has pedagogical limitations in multilingual contexts, often generating standard English forms rather than reflecting the pluralistic usage that represents diverse sociolinguistic realities. In response to mixed results in existing research, this study examines how ChatGPT, a text-based generative AI tool powered by a large language model (LLM), is used in ELT from a Global Englishes (GE) perspective. Using the Design and Development Research approach, we tested three ChatGPT models: Basic (single-step prompts); Refined 1 (multi-step prompting); and Refined 2 (GE-oriented corpora with advanced prompt engineering). Thematic analysis showed that Refined Model 1 provided limited improvements over the Basic Model, while Refined Model 2 demonstrated significant gains, offering additional affordances in GE-informed evaluation and ELF communication, despite some limitations (e.g., defaulting to NES norms and lacking tailored GE feedback). The findings highlight the importance of using authentic data to enhance the contextual relevance of GenAI outputs for GE language teaching (GELT). Pedagogical implications include GenAI–teacher collaboration, teacher professional development, and educators’ agentive role in orchestrating diverse resources alongside GenAI.
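To illustrate the difference between single-step and multi-step prompting of the kind compared here, the sketch below chains two calls through the OpenAI Python SDK. The model name, prompts, and two-step chain are hypothetical stand-ins for the study’s design, not the authors’ actual materials or pipeline.

```python
# Hypothetical sketch of single-step vs. multi-step prompting (OpenAI SDK v1+).
# Prompts and model name are illustrative, not the study's actual materials.
from openai import OpenAI

client = OpenAI()        # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o-mini"    # stand-in model name

def ask(messages):
    resp = client.chat.completions.create(model=MODEL, messages=messages)
    return resp.choices[0].message.content

# Basic model: one single-step prompt.
basic = ask([{"role": "user",
              "content": "Give feedback on this learner sentence: 'She suggested me to go.'"}])

# Refined Model 1: multi-step prompting -- first elicit a Global Englishes
# framing, then feed that framing back in before requesting feedback.
frame = ask([{"role": "user",
              "content": "Briefly describe a Global Englishes view of 'correctness' in English."}])
refined = ask([
    {"role": "system", "content": f"Adopt this perspective when giving feedback: {frame}"},
    {"role": "user", "content": "Give feedback on: 'She suggested me to go.'"},
])
print(basic, refined, sep="\n---\n")
```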
The emergence of large language models (LLMs) provides an opportunity for AI to operate as a co-ideation partner during creative processes. However, designers currently lack a comprehensive methodology for engaging in co-ideation with LLMs, and there is no established framework describing the process of co-ideation between a designer and ChatGPT. This research therefore explored how LLMs can act as co-designers and influence the creative ideation processes of industrial designers, and whether a designer’s ideation performance could be improved by employing the proposed framework for co-ideation with a custom GPT. A survey was first conducted to detect how LLMs influenced the creative ideation processes of industrial designers and to understand the problems designers face when using ChatGPT to ideate. A framework based on mapping content to guide co-ideation between humans and a custom GPT (named Co-Ideator) was then proposed. Finally, a design case study, followed by a survey and an interview, was conducted to evaluate the ideation performance of the custom GPT and the framework against traditional ideation methods; the effect of the custom GPT on co-ideation was also compared with a condition in which no artificial intelligence (AI) was used. The findings indicated that co-ideation with the custom GPT produced ideas of greater novelty and quality than traditional ideation.
The nexus of artificial intelligence (AI) and memory is typically theorized as a ‘hybrid’ or ‘symbiosis’ between humans and machines. The dangers related to this nexus are subsequently imagined as tilting the power balance between its two components, such that humanity loses control over its perception of the past to the machines. In this article, I propose a new interpretation: AI, I posit, is not merely a non-human agency that changes mnemonic processes, but rather a window through which the past itself gains agency and extends into the present. This interpretation holds two advantages. First, it reveals the full scope of the AI–memory nexus. If AI is an interactive extension of the past, rather than a technology acting upon it, every application of it constitutes an act of memory. Second, rather than locating AI’s power along familiar axes – between humans and machines, or among competing social groups – it reveals a temporal axis of power: between the present and the past. In the article’s final section, I illustrate the utility of this approach by applying it to the legal system’s increasing dependence on machines, which, I claim, represents not just a technical but a mnemonic shift, where the present is increasingly falling under the dominion of the past – embodied by AI.
GenAI has significant potential to transform the design process, driving efficiency and innovation from ideation to testing. However, its integration into professional design workflows faces a gap: designers often lack control over outcomes due to inconsistent results, limited transparency, and unpredictability. This paper introduces a framework to foster human ownership in GenAI-assisted design. Developed through a mixed-methods approach—including a survey of 21 designers and a workshop with 12 experts from product design and architecture—the framework identifies strategies to enhance ownership. It organizes these strategies into source, interaction, and outcome, and maps them across four design phases: define, ideate, deliver, and test. This framework offers actionable insights for responsibly integrating GenAI tools in design practices.
Text-to-Image Generative AI (GenAI) platforms offer designers new opportunities for inspiration-seeking and concept generation, marking a significant shift from traditional visualisation approaches like sketching. This study investigates how designers work with text-to-image GenAI during inspiration-seeking and ideation, aiming to characterise designers’ behaviours through designer–GenAI interaction data. Analysis of 503 prompts by four designers engaging in a GenAI-supported design task identifies two distinct behaviours: exploratory, characterised by short, diverse prompts with low similarity; and narrowing, characterised by longer, high-similarity prompts used with detail-focused variation functions. The findings highlight the value of GenAI interaction data to reveal patterns in designers’ behaviours, offering insights into how these tools support designers and inform best practices.
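As a rough illustration of how prompt similarity can separate these two behaviours, the sketch below scores each prompt in a session against its predecessor using TF-IDF cosine similarity and labels low-similarity steps exploratory and high-similarity steps narrowing. The threshold, features, and example prompts are illustrative assumptions, not the study’s actual metrics.

```python
# Hypothetical sketch: labelling a prompt sequence as exploratory vs. narrowing
# from pairwise TF-IDF cosine similarity. The 0.5 threshold is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

prompts = [
    "a chair",
    "futuristic lamp in a forest",
    "sleek electric kettle, studio lighting",
    "sleek electric kettle, studio lighting, brushed steel",
    "sleek electric kettle, studio lighting, brushed steel, top view",
]

tfidf = TfidfVectorizer().fit_transform(prompts)
sims = cosine_similarity(tfidf)

# Compare each prompt with the one immediately before it in the session.
for i in range(1, len(prompts)):
    s = sims[i, i - 1]
    mode = "narrowing" if s > 0.5 else "exploratory"
    print(f"prompt {i}: similarity to previous = {s:.2f} -> {mode}")
```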
With the growing deployment of service robots, understanding how people perceive their human-likeness and capabilities in use contexts is crucial. Advances in generative AI offer the potential to create realistic, dynamic video representations of robots in motion. This study introduces an AI-assisted workflow for creating video representations of robots for evaluation studies. As a comparative study, it explores the effect of AI-generated videos on people's perceptions of robot designs in three service contexts. Nine video clips depicting robots in motion were created and presented in an online survey. Videos increased human-likeness perceptions for supermarket robots but, for restaurant and delivery robots, had the same effect as images. Perceptions of capabilities showed negligible differences between media types, and no significant differences in the effectiveness of communication were found.
In The Secret Life of Copyright, copyright law meets Black Lives Matter and #MeToo in a provocative examination of how our legal regime governing creative production unexpectedly perpetuates inequalities along racial, gender, and socioeconomic lines while undermining progress in the arts. Drawing on numerous case studies – Harvard’s slave daguerreotypes, celebrity sex tapes, famous Wall Street statues, beloved musicals, and dictator copyrights – the book argues that, despite their purported neutrality, key rules governing copyrights – from the authorship, derivative rights, and fair use doctrines to copyright’s First Amendment immunity – systematically disadvantage individuals from traditionally marginalized communities. Since laws regulating the use of creative content increasingly mediate participation and privilege in the digital world, The Secret Life of Copyright provides a template for a more robust copyright system that better addresses egalitarian concerns and serves the interests of creativity.
This chapter examines the transformative effects of generative AI (GenAI) on competition law, exploring how GenAI challenges traditional business models and antitrust regulations. The evolving digital economy, characterised by advances in deep learning and foundation models, presents unique regulatory challenges due to market power concentration and data control. This chapter analyses the approaches adopted by the European Union, United States, and United Kingdom to regulate the GenAI ecosystem, including recent legislation such as the EU Digital Markets Act, the AI Act, and the US Executive Order on AI. It also considers foundation models’ reliance on key resources, such as data, computing power, and human expertise, which shape competitive dynamics across the AI market. Challenges at different levels—including infrastructure, data, and applications—are investigated, with a focus on their implications for fair competition and market access. The chapter concludes by offering insights into the balance needed between fostering innovation and mitigating the risks of monopolisation, ensuring that GenAI contributes to a competitive and inclusive market environment.
Several criminal offences can originate from or culminate in the creation of content. Sexual abuse can be perpetrated by producing intimate material without the subject’s consent, while incitement to criminal activity can begin with a simple conversation. When the task of generating content is entrusted to artificial agents, it becomes necessary to delve into the associated risks posed by this technology. Generative AI changes criminal affordances because it simplifies access to harmful or dangerous content, amplifies the range of recipients, creates new kinds of harmful content, and can exploit cognitive vulnerabilities to manipulate user behaviour. Given this evolving landscape, the question that arises is whether criminal law should be involved in the policies aimed at fighting and preventing Generative AI-related harms. The bulk of criminal law scholarship to date would not criminalise AI harms, on the theory that AI lacks moral agency. However, when a serious harm occurs, responsibility needs to be distributed according to the guilt of the agents involved and, where guilt is lacking, it must fall away on account of their innocence. Legal systems need to start exploring whether and how guilt can be preserved when the actus reus is completely or partially delegated to Generative AI.
This chapter deals with the use of Large Language Models (LLMs) in the legal sector from a comparative law perspective. It explores their advantages and risks; the pertinent question of whether the deployment of LLMs by non-lawyers can be classified as an unauthorized practice of law in the US and Germany; what lawyers, law firms and legal departments need to consider when using LLMs under professional rules of conduct, especially the American Bar Association Model Rules of Professional Conduct and the Charter of Core Principles of the European Legal Profession of the Council of Bars and Law Societies of Europe; and, finally, how the recently published AI Act will affect the legal tech market, specifically the use of LLMs. A concluding section summarizes the main findings and points out open questions.