This chapter takes a distinctive material object of the modern stage, the homely table, as a way to place two very different productions into conversation: Forced Entertainment’s Table Top Shakespeare and Annie Dorsen’s Prometheus Firebringer. Although these two productions might trace the arc from the residual (telling a story at a table using small household items) to the emergent (a dialogue between an AI-generated reconstruction of a lost Aeschylus play and a narrative composed of citations), they also dramatize an increasing absorption of the human into the apparatus of performance, a possibly fearsome absorption traced through Dorsen’s work and touched on in a range of other contemporary performances, including Mona Pirnot’s I Love You So Much I Could Die.
This chapter begins by offering a precise definition of channel functions, which are vital to the firm, and explains the difficult challenge of achieving acceptable performance of work activities across all of the firm’s channels. It then analyses how new technologies can affect the processing and delivery of customer orders, and acknowledges the impact of brand positioning and value propositions on channel functions. Because superior performance of critical channel functions is vital to delighting targeted end-customers, this point receives a thorough explanation. The chapter concludes by discussing the role of supply chain management in the firm and the main steps to be taken in the order management cycle.
In recent years, the search for evidence of extraterrestrial life has focused mainly on four areas: meteorites, space probes, radio telescopes, and claims of extraterrestrial intelligence and civilization. Biochemical studies of meteorites have tried to trace fossilized microorganisms or organic molecules associated with living structures. Images and atmospheric data obtained from various planets by space probes have been used to assess the habitability of other celestial bodies in the solar system. Observations by radio telescopes, which receive the waves emitted by cosmic objects, have paved the way for estimating the habitability of heavenly bodies. Finally, claims of extraterrestrial intelligence and civilization have been reported repeatedly throughout history. All of this evidence points to the possibility of extraterrestrial life, but how close we are to confirming or disproving this hypothesis remains debatable. However, recent advances in artificial intelligence, particularly in machine learning, have significantly enhanced the ability to analyze complex astrobiological data. This technology optimizes the processing of meteoritic data, helps differentiate astronomical signals, and allows historical evidence to be reinterpreted, opening new frontiers in the search for extraterrestrial life. In this review, we present the above-mentioned evidence in detail to convey the current state of our extraterrestrial knowledge.
Chapter 10 predicts the “future” of chilling effects – which today looks darker and more dystopian than ever in light of the proliferation of new forms of artificial intelligence, machine learning, and automation technologies in society. The author introduces a new term, “superveillance”, to describe AI-driven systems of automated legal and social norm enforcement that are likely to cause mass societal chilling effects at an unprecedented scale. The author also argues that today’s chilling effects enable this more oppressive future, and proposes comprehensive law and public policy reforms to stop it.
States are reshaping the global digital economy to assert control over the artificial intelligence (AI) value chain. Operating outside multilateral institutions, they pursue measures such as export controls on advanced semiconductors, infrastructure partnerships, and bans on foreign digital platforms. This digital disintegration reflects an elite-centered response to the infrastructural power that private firms wield over critical AI inputs. A handful of companies operate beyond the reach of domestic regulation and multilateral oversight, controlling access to technologies that create vulnerabilities existing institutions struggle to contain. As a result, states have asserted strategic digital sovereignty: the exercise of authority over core digital infrastructure, often through selective alliances with firms and other governments. The outcome is an emergent form of AI governance in techno-blocs: coalitions that coordinate control over key inputs while excluding others. These arrangements challenge the liberal international order by replacing multilateral cooperation with strategic—and often illiberal—alignment within competing blocs.
This chapter explores bias and fairness in Swedish employment testing from legal, historical, and practical perspectives. Swedish labor laws, influenced by trade unions and the welfare state, emphasize non-discrimination under the Discrimination Act. The law prohibits bias based on sex, gender identity, ethnicity, religion, disability, sexual orientation, and age, and requires preventive action. It is enforced by the Equality Ombudsman and Labour Court. Although validity evidence is not explicitly required, selection decisions should be based on a job analysis. No proof of intent is required in discrimination claims, and the burden of proof is shared. Quotas are banned, but positive action is allowed for gender balance when qualifications are equal. Psychological test certification is voluntary in Sweden; the Psychological Association offers guidelines on validity, reliability, and fairness. However, these are not mandatory, and many employers develop their own policies. International standards offer best-practice guidance for fair assessments, including for emerging artificial intelligence tools.
The intelligible world of machines and predictive modelling is an omnipresent and almost inescapable phenomenon. It is an evolution in which human intelligence is being supported, supplemented or superseded by artificial intelligence (AI). Decisions once made by humans are now made by machines, which learn faster and more accurately through algorithmic calculation. Legal scholars have debated the role of AI as a decision-making mechanism in Australian criminal jurisdictions. This paper explores that proposition through predictive modelling of 101 bail decisions made in three criminal courts in the State of New South Wales (NSW), Australia. Based on nine predictor variables, the models proved statistically effective. The more accurate logistic regression model achieved 78% accuracy with a performance value of 0.845 (area under the curve; AUC), while the classifier model achieved 72.5% accuracy with a performance value of 0.702 (AUC). These results provide the groundwork for piloting AI-generated bail decisions in the NSW jurisdiction, and possibly in other Australian jurisdictions.
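The evaluation described above rests on standard binary-classification metrics. As a minimal sketch (with hypothetical labels and predicted probabilities, not the study’s data or code), accuracy and AUC for a bail-decision classifier can be computed as follows, where AUC is the probability that a randomly chosen granted case is ranked above a randomly chosen refused case:

```python
def accuracy(y_true, y_prob, threshold=0.5):
    """Fraction of cases where the thresholded prediction matches the label."""
    preds = [1 if p >= threshold else 0 for p in y_prob]
    return sum(p == t for p, t in zip(preds, y_true)) / len(y_true)

def auc(y_true, y_prob):
    """AUC: probability that a random positive case scores above a random
    negative case (ties count as 0.5)."""
    pos = [p for p, t in zip(y_prob, y_true) if t == 1]
    neg = [p for p, t in zip(y_prob, y_true) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted probabilities of bail being granted, with true labels.
y_true = [1, 1, 1, 0, 0, 1, 0, 0]
y_prob = [0.9, 0.8, 0.4, 0.3, 0.2, 0.7, 0.6, 0.1]
print(accuracy(y_true, y_prob))  # 0.75
print(auc(y_true, y_prob))       # 0.9375
```

In practice the paper’s models would be fit to the nine predictor variables first; the sketch only shows how the two reported performance figures are defined.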
Information is a key variable in International Relations, underpinning theories of foreign policy, inter-state cooperation, and civil and international conflict. Yet IR scholars have only begun to grapple with the consequences of recent shifts in the global information environment. We argue that information disorder—a media environment with low barriers to content creation, rapid spread of false or misleading material, and algorithmic amplification of sensational and fragmented narratives—will reshape the practice and study of International Relations. We identify three major implications of information disorder on international politics. First, information disorder distorts how citizens access and evaluate political information, creating effects that are particularly destabilizing for democracies. Second, it damages international cooperation by eroding shared focal points and increasing incentives for noncompliance. Finally, information disorder shifts patterns of conflict by intensifying societal cleavages, enabling foreign influence, and eroding democratic advantages in crisis bargaining. We conclude by outlining an agenda for future research.
This chapter explores bias and fairness in employment testing in Türkiye across governmental and private sectors. It distinguishes fairness – equal opportunity, transparency, and uniform outcomes – from bias, especially in relation to predictive validity. The chapter situates these issues within Türkiye’s cultural, ethnic, and socioeconomic landscape, examining how historical and regional factors shape perceptions and practices. Key legal and regulatory frameworks, such as Turkish Labor Law and constitutional mandates, are reviewed to highlight protections for equal treatment. It also evaluates bias detection methods, including differential item functioning, sensitivity reviews, and predictive bias analyses, and discusses challenges from emerging technologies such as the use of artificial intelligence in personnel selection. The chapter underscores the need for strong validity evidence and proactive strategies to promote fair and equitable hiring in Türkiye.
The human brain makes up just 2% of body mass but consumes closer to 20% of the body’s energy. Nonetheless, it is significantly more energy-efficient than most modern computers. Although these facts are well-known, models of cognitive capacities rarely account for metabolic factors. In this paper, we argue that metabolic considerations should be integrated into cognitive models. We distinguish two uses of metabolic considerations in modeling. First, metabolic considerations can be used to evaluate models. Evaluative metabolic considerations function as explanatory constraints. Metabolism limits which types of computation are possible in biological brains. Further, it structures and guides the flow of information in neural systems. Second, metabolic considerations can be used to generate new models. They provide: a starting point for inquiry into the relation between brain structure and information processing, a proof-of-concept that metabolic knowledge is relevant to cognitive modeling, and potential explanations of how a particular type of computation is implemented. Evaluative metabolic considerations allow researchers to prune and partition the space of possible models for a given cognitive capacity or neural system, while generative considerations populate that space with new models. Our account suggests cognitive models should be consistent with the brain’s metabolic limits, and modelers should assess how their models fit within these bounds. Our account offers fresh insights into the role of metabolism for cognitive models of mental effort, philosophical views of multiple realization and medium independence, and the comparison of biological and artificial computational systems.
Systematic reviews play a critical role in evidence-based research but are labor-intensive, especially during title and abstract screening. Compact large language models (LLMs) offer the potential to automate this process, balancing time and cost requirements against accuracy. The aim of this study is to assess the feasibility, accuracy, and workload reduction offered by three compact LLMs (GPT-4o mini, Llama 3.1 8B, and Gemma 2 9B) in screening titles and abstracts. Records were sourced from three previously published systematic reviews, and the LLMs were prompted to rate each record from 0 to 100 for inclusion using a structured prompt. Predefined rating thresholds of 25, 50, and 75 were used to compute performance metrics (balanced accuracy, sensitivity, specificity, positive and negative predictive value, and workload saving). Processing time and costs were recorded. Across the systematic reviews, the LLMs achieved high sensitivity (up to 100%) but low precision (below 10%) for records included at full text. Specificity and workload savings improved at higher thresholds, with the 50- and 75-rating thresholds offering the best trade-offs. GPT-4o mini, accessed via an application programming interface, was the fastest model (~40 minutes at most) and incurred usage costs ($0.14–$1.93 per review). Llama 3.1 8B and Gemma 2 9B were run locally, took longer (~4 hours at most), and were free to use. The LLMs were highly sensitive tools for the title/abstract screening process, and high specificity values were reached, allowing significant workload savings at reasonable cost and processing time. Conversely, we found them to be imprecise. However, high sensitivity and workload reduction are the key factors for their use in the title/abstract screening phase of systematic reviews.
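The threshold-based screening the study describes can be illustrated with a short sketch. Records rated at or above the threshold are kept for human review; the rest are auto-excluded, and the excluded share is the workload saving. The ratings and labels below are hypothetical, not the study’s records:

```python
def screening_metrics(ratings, labels, threshold):
    """ratings: LLM inclusion scores 0-100; labels: 1 = truly relevant record."""
    kept = [r >= threshold for r in ratings]
    tp = sum(k and l for k, l in zip(kept, labels))          # relevant, kept
    fn = sum((not k) and l for k, l in zip(kept, labels))    # relevant, missed
    tn = sum((not k) and (not l) for k, l in zip(kept, labels))
    fp = sum(k and (not l) for k, l in zip(kept, labels))
    return {
        "sensitivity": tp / (tp + fn),                # relevant records retained
        "specificity": tn / (tn + fp),                # irrelevant records excluded
        "workload_saving": (fn + tn) / len(ratings),  # share not screened by hand
    }

ratings = [95, 80, 60, 55, 40, 30, 20, 10, 5, 5]
labels  = [1,  1,  0,  1,  0,  0,  0,  0,  0, 0]
m = screening_metrics(ratings, labels, threshold=50)
# With this toy data: sensitivity 1.0, specificity 6/7, workload saving 0.6.
```

Raising the threshold trades sensitivity for specificity and workload saving, which is the trade-off the study evaluates at the 25, 50, and 75 cut-offs.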
Starting from the evolution of the protection of human rights on the internet, the first part of this chapter analyses the proposals for new digital human rights and the methodology of their creation in different forums such as the Council of Europe and European Union as well as related processes in the United Nations Human Rights Council. The second part focuses on the challenges related to the rapid developments in artificial intelligence, such as ChatGPT, for the protection of human rights and regulatory efforts by the Council of Europe, in particular its Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law adopted in 2024 and the Artificial Intelligence Act of the European Union dating from the same year. Both instruments are analysed for their potential to protect human and fundamental rights in particular through new digital human rights. The contribution finds possible complementarity between the two regulatory approaches. Giving several examples, it concludes that there is an ongoing process of the concretisation of new digital human rights, which are mainly but not exclusively based on existing human rights.
Digital services and artificial intelligence (AI) systems provide children with immense opportunities to communicate, learn, and play, but the use of tech platforms and AI may also pose risks to children’s rights. Rights that might be negatively affected include the right to privacy and data protection, freedom of thought, the right to freedom of expression, and the right to protection from violence and exploitation. Two recent European Union legislative instruments, the Digital Services Act (DSA) and the Artificial Intelligence Act (AIA), aim to regulate platforms and AI systems. This chapter investigates to what extent the protection and fulfilment of children’s rights is addressed in the DSA and the AIA. We analyse the proposals, scrutinise the legislative process, and assess how each instrument contributes to the effective realisation of children’s rights in the digital realm. We find that whereas the DSA holds great promise for advancing children’s rights, depending on actual implementation and enforcement, the potential of the AIA for successfully protecting and promoting their rights in an increasingly AI-driven world is less clear and certain.
Artificial Intelligence (AI) has the potential to revolutionize society, but realizing this potential requires more than technical effort. Developing effective AI systems involves balancing specialized knowledge within disciplines with the cross-disciplinary insights needed to address complex challenges. It also requires bridging fundamental research, which offers generalizable principles, with applied research, which ensures solutions are tailored to specific contexts. Crucially, it demands integrating expert perspectives with the lived experiences of communities, creating systems that are equitable and grounded in real-world needs.
Our research lab was established in 2020 as a collaboration between academia and public institutions to address these gaps. This article reflects on five years of the lab’s work, focusing on insights from studying the school choice algorithm in Amsterdam. School choice is a pressing issue in the city, and policymakers have adapted the well-known Deferred Acceptance algorithm to match students to schools. However, this adaptation led to inefficiencies, with students often placed in schools far down their preference list. This illustrated how a theoretically robust approach, even one that famously earned a Nobel Prize, can lose effectiveness when misaligned with local contexts.
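For context, the mechanism the Amsterdam adaptation builds on can be sketched in a few lines. This is the standard student-proposing Deferred Acceptance (Gale–Shapley) algorithm on toy preferences, not the city’s adapted version; ties and partial preference lists beyond “unmatched if exhausted” are not handled:

```python
def deferred_acceptance(student_prefs, school_prefs, capacity):
    """student_prefs: {student: [schools, best first]};
       school_prefs: {school: [students, best first]}; capacity: {school: seats}."""
    rank = {s: {st: i for i, st in enumerate(prefs)}
            for s, prefs in school_prefs.items()}
    next_choice = {st: 0 for st in student_prefs}  # index of next school to try
    held = {s: [] for s in school_prefs}           # tentatively admitted students
    free = list(student_prefs)
    while free:
        st = free.pop()
        if next_choice[st] >= len(student_prefs[st]):
            continue                               # exhausted list: stays unmatched
        school = student_prefs[st][next_choice[st]]
        next_choice[st] += 1
        held[school].append(st)
        # Each school keeps only its most-preferred proposers, up to capacity.
        held[school].sort(key=lambda x: rank[school][x])
        while len(held[school]) > capacity[school]:
            free.append(held[school].pop())        # reject the least-preferred
    return {s: sorted(admits) for s, admits in held.items()}

students = {"ana": ["A", "B"], "bo": ["A", "B"], "cal": ["A", "B"]}
schools = {"A": ["cal", "ana", "bo"], "B": ["ana", "bo", "cal"]}
match = deferred_acceptance(students, schools, {"A": 1, "B": 2})
# → {"A": ["cal"], "B": ["ana", "bo"]}
```

The theoretical guarantee (a stable matching that is optimal for the proposing side) holds for this idealized setting; the chapter’s point is that guarantees like this can erode once the mechanism is adapted to local constraints.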
We found that addressing this issue required integrating multiple perspectives: theoretical insights, practical considerations from community stakeholders, and interdisciplinary approaches combining quantitative and qualitative methods from AI, Economics, and Psychology. To articulate this, we propose a conceptual model that bridges three key dimensions in AI research: theory and application, science and society, and qualitative and quantitative inquiry. This project underscored a critical lesson: solutions rooted in a single perspective fail to address real-world complexities, and truly impactful research emerges when diverse approaches are synthesized.
We advocate for a shift in AI research that prioritizes flexibility and allows for fluidly navigating between our three proposed dimensions. Our experience suggests that such flexibility ensures AI research genuinely serves and uplifts society.
The digital transformation of Chinese companies offers a new frontier for organizational research. Widespread use of workplace platforms creates rich archives of unobtrusive data, providing continuous, real-time insights into organizational life that traditional surveys cannot capture. The central challenge for scholars is turning this data abundance into meaningful theory. This special issue highlights three studies that meet this challenge by using innovative methods to convert granular data into valuable knowledge. The papers employ digital-context experiments, real-time behavioral tracking, and machine-learning-assisted theory building to study phenomena from interpersonal dynamics to crisis productivity. Looking ahead, we explore the potential of unstructured multimodal data and new AI tools to make complex analysis more accessible. We conclude with a research agenda calling for methodological rigor, interdisciplinary collaboration, and a firm balance between technological innovation and theoretical depth.
Although people have been making decisions for many thousands of years, it was not until John von Neumann and Oskar Morgenstern wrote Theory of Games and Economic Behavior, and Herb Simon wrote of satisficing and bounded rationality, that researchers began to analyze and understand how people make decisions. The mid- and late twentieth century saw an expansion in what is known about the making of decisions, and more recently new areas within decision theory have come under scientific study. This final chapter is forward-looking and considers possible future directions for understanding human decision making and for the development of decision theory. Among these future directions are emotion, culture, artificial intelligence, and intuition itself.
Bridge the gap between theoretical concepts and their practical applications with this rigorous introduction to the mathematics underpinning data science. It covers essential topics in linear algebra, calculus and optimization, and probability and statistics, demonstrating their relevance in the context of data analysis. Key application topics include clustering, regression, classification, dimensionality reduction, network analysis, and neural networks. What sets this text apart is its focus on hands-on learning. Each chapter combines mathematical insights with practical examples, using Python to implement algorithms and solve problems. Self-assessment quizzes, warm-up exercises and theoretical problems foster both mathematical understanding and computational skills. Designed for advanced undergraduate students and beginning graduate students, this textbook serves as both an invitation to data science for mathematics majors and as a deeper excursion into mathematics for data science students.
This paper argues that interactions with artificial intelligence (AI) chatbots, such as ChatGPT, can mediate genuine mystical experiences. Building on the framework of mystical experiences developed by William James, I argue that interactions with AI chatbots can mediate mystical experiences in a way structurally comparable to how guided meditation produces them. I conclude by raising various concerns about the implementation of AI technologies in our religious lives, including their use as mediators for mystical experiences.