Introduction
Organisations, businesses and governments worldwide are increasingly making use of nudges to steer behaviour (Hallsworth, 2023). They have been applied successfully in various sectors and regions of the world, for instance to improve health and sustainability (Hummel and Maedche, 2019; Beshears and Kosowsky, 2020; Hubble and Varazzani, 2023). As an umbrella term, nudge is often used for a large variety of interventions. For instance, highlighting social norms, sending reminders, setting defaults and simplifying forms can all be considered nudge interventions (Sunstein et al., 2014). Accordingly, there have been various attempts to define the scope of nudging (Hansen, 2016; Congiu and Moscati, 2022) and to categorise different nudge types (e.g. Münscher et al., 2016). The current research contributes to this literature by developing a novel, transdisciplinary classification system called META BI (Mapping of Environment, Target group and Agent for Behavioural Interventions). META BI serves as a tool to understand and describe key characteristics of nudge interventions, including their mechanisms, and establishes guidelines for their categorisation.
The need for a comprehensive and transdisciplinary classification of nudges
Although there is no final consensus on what defines nudges (Hansen, 2016; Congiu and Moscati, 2022), we follow the seminal definition from Thaler and Sunstein (2008, p. 6) and consider nudges as ‘any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives. To count as a mere nudge, the intervention must be easy and cheap to avoid.’ Nudges are cost-effective in many cases (Benartzi et al., 2017), constitute a comparatively new policy tool that is not yet well understood (Sunstein, 2018; Leong and Howlett, 2022), and can be relatively easy and quick to implement. Other individual-level interventions (e.g. education, deliberative platforms, social marketing) tend to involve more extensive programmes and more interaction between behaviour change agents and targets. Moreover, they are often built on different theoretical assumptions (John et al., 2009; French, 2011; Hertwig and Grüne-Yanoff, 2017), complicating a shared classification. Therefore, the present study focuses on nudges and similar behavioural interventions such as debiasing (Morewedge et al., 2015). For better readability, we use the terms nudge and behavioural intervention interchangeably.
Generally, classification systems provide decision rules to categorically assign interventions to mutually exclusive and exhaustive groups (Doty and Glick, 1994). In Table 1, we describe some frequently used nudge classification systems. Acknowledging the complex nature of nudges, many such systems are multidimensional, meaning that nudges are assigned to several groups simultaneously (i.e. ‘faceted classifications’; Stavri and Michie, 2012). TIPPME, for instance, classifies interventions in a matrix-like structure according to the type of intervention and the intervention’s spatial focus. Without a clear scope of the objects to classify, classification systems risk becoming inconsistent or ambiguous. Therefore, the first of three principles that guided the development of META BI was the establishment of a clearly defined scope and consistent classification criteria.
Table 1. Overview of frequently used classifications for nudges

The diverse nature of nudges and the various disciplines (e.g. psychology, economics, political science) and sectors (e.g. industry, politics) involved in their research and application can make communicating about nudges complex and hinder shared understanding. Previous nudge classifications tended to focus on one or a small number of characteristics typically associated with specific disciplines, for instance, underlying cognitive mechanisms (psychology) or welfare effects (economics). However, such aspects are related and involve complex trade-offs, necessitating a detailed and rich understanding of nudges and their application contexts (Hallsworth, 2023). For instance, the Behaviour Change Intervention Ontology (Michie et al., 2021) maps relevant aspects of behavioural interventions, such as their outcome behaviours and context. However, its technical focus on behaviour change means it does not include aspects such as autonomy and the objectives of interventions, which frequently interest philosophers, policymakers and economists. To complicate things further, different communities use different terminologies to describe the same interventions, demonstrating a lack of common understanding. For example, an influential psychological distinction relies on individuals’ perceptions of nudges as pro-self or pro-social (Hagman et al., 2015), whereas the economics literature employs a similar but more objective distinction between interventions that target consequences for oneself (i.e. internalities; Allcott and Sunstein, 2015) or consequences for others (i.e. externalities; Oliver, 2018; Carlsson et al., 2021).
Therefore, we think researchers and practitioners will benefit from a clarifying classification system that integrates disconnected areas of knowledge and bridges different communities, acting as a shared reference (Osman et al., 2020a). Such a system may facilitate the transdisciplinary debate needed to leverage nudges for addressing complex societal challenges such as ill-health and climate change (Lang et al., 2012). Consequently, our second principle guiding the development of the classification is to acknowledge and integrate knowledge from diverse disciplines.
Recently, researchers suggested viewing behavioural interventions as part of adaptive systems, emphasising that interventions and the context in which they are deployed mutually influence each other (Schill et al., 2019; Hallsworth, 2023). That is, interventions are regarded as ‘events in systems’ (Hawe et al., 2009) rather than fixed solutions that have the same or similar effects across contexts (Bryan et al., 2021; Szaszi et al., 2022; Schmidt, 2024). However, very few classifications view interventions as configurations across system-level elements. The psychological mechanisms that nudge interventions rely on to change behaviour are a key aspect in that regard. Mechanisms are essential for understanding why interventions work (Grüne-Yanoff, 2016; Marchionni and Reijula, 2019) and how they interact with their context (Findley et al., 2021). For example, a nudge can activate different mechanisms and lead to different outcomes depending on who launches it (Tannenbaum et al., 2017). Yet, existing classifications and review studies predominantly focus on the format (e.g. salience of options) of interventions, or they group mechanisms into overly broad categories (e.g. Cane et al., 2012; Connell et al., 2019). As an example, classifications often distinguish mechanisms according to general psychological processes (e.g. attention, memory; Yoeli et al., 2017; Luo et al., 2023). A more detailed overview of mechanisms seems necessary for assessing the external validity of interventions and their suitability for a particular application. Therefore, the third principle for developing the classification system was to adopt a systems lens and acknowledge the importance of contexts and mechanisms.
Methodological approach
In the early stages of the project, both authors together considered ways of carrying out the project and decided on a stepwise development procedure, which was preregistered online (https://doi.org/10.17605/osf.io/duj8v). The steps involved in developing META BI are outlined in Figure 1. We began by drafting an initial version of META BI, drawing on our expert knowledge of relevant theories and evidence. This was followed by a structured Delphi process involving a panel of international nudge experts, who reviewed successive iterations of META BI. Delphi is a widely used research method for achieving expert consensus and was employed to systematically gather feedback and refine the classification system (Linstone and Turoff, 1975; Diamond et al., 2014). In addition, we incorporated feedback from practitioners who apply behavioural interventions, and assessed how effectively interventions could be coded using the mechanisms included in META BI. Overall, the development process relied heavily on expert input, including our own, to ensure that META BI was consistent with existing knowledge and met the needs of the community (Norris et al., 2021).

Figure 1. Development procedure for META BI showing methods (left) and outputs (right) per step.
All materials created during this research (e.g. surveys) and results (e.g. versions of META BI, codebooks, search outputs) can be viewed online in the accompanying data repository (https://doi.org/10.17605/osf.io/6yucj). The Ethical Review Board of the Cambridge Judge Business School gave a favourable assessment (23-12).
Step 1: initial development
For this step, we developed an initial classification system based on our knowledge of nudge theories and evidence. Such an integrative step building on previous work is a common starting point in classification development (e.g. Michie et al., 2013; Hollands et al., 2017). Following the third guiding principle, we defined the structure of the initial version of the classification system ex ante as encompassing five structural system-level elements: the agent launching an intervention, the intervention itself, the target group at which the intervention is aimed, the behaviour intended to change and the environment in which all other elements operate (originally named ‘context’). This structure was adapted from the Behaviour Change Intervention Ontology (Michie et al., 2021) and corresponds to aspects commonly integrated in reporting checklists for behaviour change interventions, such as TIDieR (Hoffmann et al., 2014) and CONSORT (Schulz et al., 2010). We then characterised those elements by defining characteristic dimensions and assigning them to each element (e.g. ‘preferences’ were assigned to the target group). The dimensions were identified in 31 relevant literature sources, many of them highly influential classifications of nudges (e.g. as indicated by citation counts). In line with principle two, the sources were selected to come from various academic disciplines. The complete mapping of the literature sources, dimensions and elements is available in the online data repository.
The resulting initial version of META BI displayed and described 16 dimensions, each characterising one of the five structural elements (e.g. ‘legitimacy’ of the agent).
Mechanisms (originally named ‘functions’, based on Hawe et al., 2004) were one dimension characterising behavioural interventions, in line with our third principle. To arrive at descriptions of mechanisms, we relied on an inductive approach, reviewing 188 descriptions of behavioural interventions from five structured attempts to organise behavioural interventions (Johnson et al., 2012; Münscher et al., 2016; Hollands et al., 2017; Cadario and Chandon, 2020; Jesse and Jannach, 2021) and four unstructured lists of the most common nudges (Dolan et al., 2012; Datta and Mullainathan, 2014; Sunstein, 2014, 2016). We generated definitions of underlying psychological mechanisms for these interventions in a spreadsheet (see online data repository) and added new definitions whenever interventions could not be mapped onto previously defined mechanisms. Interventions could rely on an unlimited number of mechanisms. For ease of comprehension, the resulting 15 mechanisms were grouped into five categories based on topical similarities.
The following steps served to develop and validate the initial version of META BI.
Step 2: rapid expert feedback
Researchers recommend piloting data-collection methods before conducting Delphi studies (Hasson et al., 2000). Therefore, this step served to obtain initial feedback on the classification system (v.1.0) to start developing it and to improve the methods used in the following step. Specifically, we conducted two interviews and one workshop with four participants, who together formed our small convenience sample (n = 6) of academics and practitioners from our networks. Interviews and the workshop were semi-structured, closely following the classification’s structure while inquiring about any missing aspects, coherence and accuracy. During the interviews and the workshop, we realised the limitations of eliciting feedback using a predetermined structure because it required experts to comment on aspects of the classification (e.g. dimensions, mechanisms) they were less familiar with, given their backgrounds. In contrast, the unstructured interviews used in Step 3.1 encouraged interviewees to take a more active role during the interview and provide more constructive and substantive feedback. This observation aligns with the distinction between interactional expertise, which here allowed interviewees to understand and converse without actively contributing new knowledge, and contributory expertise, the advanced understanding of key theories and methodologies to the extent of being able to use and apply them to contribute new knowledge independently (Collins and Evans, 2007). We believe that we tapped into the second form of expertise using an unstructured interview approach.
For the analysis of our pilot data, field notes summarising the feedback obtained during each interview and the workshop were analysed in Atlas.ti using thematic analysis (Braun and Clarke, 2006). Because of the small sample size, we used only 17 feedback codes in this step, relating to superficial changes (e.g. changing examples), simplifications (e.g. deleting superfluous dimensions) and obvious inconsistencies (e.g. overlapping descriptions of mechanisms), to update the classification system (v.2.0). The remaining feedback was included in the analysis in Step 3.1.
Step 3.1: first Delphi round
The Delphi method is a repeated feedback process commonly used in the social sciences to integrate expert views, develop new frameworks and reach agreement (Hasson et al., 2000; Brady, 2015). For the credibility and effectiveness of Delphi studies, the composition of expert panels is crucial (Hasson et al., 2000; Okoli and Pawlowski, 2004; Brady, 2015). Therefore, in line with the second guiding principle, we aimed for the experts to represent the breadth of relevant academic disciplines and various positions in the field. Interviews were chosen as the method for this step because they signal a personal approach and are well suited to obtaining data from busy experts.
We considered experts identified through four complementary approaches (see also Okoli and Pawlowski, 2004): (1) authors of the literature sources used to develop the initial version of META BI in Step 1; (2) authors of articles published in three field journals; (3) authors of articles identified through a systematic search of the Web of Science; and (4) advisory board members of two relevant scientific associations. To be eligible, experts were expected to have published work relating either to the classification of behavioural interventions or to general nudge reviews. From the 218 identified experts, we invited 45 academics with different disciplinary backgrounds and positions to participate in this study, plus one more expert who was recommended by an invitee. In total, 23 academics provided feedback during an unstructured interview, and five offered written feedback (n = 28; total response rate 61%). All participants received the classification system in advance and were prompted to reflect on any missing aspects, the coherence of the system and any changes to improve it. In addition, we presented and discussed the classification system at a scientific conference, where we received written, unstructured feedback from another four participants.
A list of all experts who took part in the development process and agreed for their names to be published is available in the Supplementary Information. This list evidences the participation of experts holding various positions in the field. Moreover, Table 2 shows that despite most experts having a background in psychology or economics, both disciplines strongly represented in the study of behavioural interventions, we successfully recruited a multidisciplinary sample.
Table 2. Disciplinary background of experts involved in the development of META BI until step 4

The analysis followed the same approach used in Step 2 (i.e. thematic analysis) and included feedback from Step 2 that had not previously been used to update the classification system. As a result, field notes from 30 interviews and two workshops were analysed and yielded 106 distinct feedback points (i.e. codes assigned during the analysis) used to update the classification (v.3.1). Many of those points were associated with the elements of the classification system (33 codes, 31%), the ‘mechanisms’ dimension (31 codes, 29%) and the general approach and conceptualisation (9 codes, 8%). We summarised those feedback points and considered them individually in updating the classification (see online data repository). Table 3 illustrates this process, showing some frequent feedback points and how we responded to them. Maintaining consistency and staying within the scope of the classification system were essential in this process, in line with our first principle. For instance, experts suggested describing the relations between the dimensions included (e.g. how the objectives of interventions are related to their underlying mechanisms). Yet, we decided against this suggestion because claims about such relations, in our view, required theorising and evidence beyond our scope. Moreover, experts pointed to ambiguities in what kinds of interventions were meant to be included, which led us to clarify the description of the intervention element, in line with the first principle.
Table 3. Exemplary codes during the first Delphi round and responses

Step 3.2: second Delphi round
The purpose of the second Delphi round was to give the experts who provided feedback during the first Delphi round (Step 3.1), as well as the two experts interviewed for the rapid expert feedback (Step 2), an opportunity to review the classification system (v.3.1) again and assess its development. To increase the response rate and reduce social desirability, we conducted an anonymous online survey. Of the 30 invited experts, 19 completed the survey (response rate 63%). Experts were asked to comment on and rate the classification’s (v.3.1) description and content, its comprehensiveness and the coherence of its structure. Answers were again analysed using thematic analysis, with descriptive statistics for participant ratings.
The analysis yielded 29 feedback codes used to further improve the classification system (v.3.1) (see online data repository). Nonetheless, the feedback was largely positive, with most experts finding META BI’s content meaningful and correct (79% indicating agreement), its structure (v.3.1) coherent (74%) and its overview of factors relevant for behavioural interventions comprehensive (79%), which suggests that principle one was applied successfully. Beyond the agreement rate, both the interquartile range and the standard deviation of the experts’ assessments indicated sufficient agreement to conclude consensus regarding these criteria (Giannarou and Zervas, 2014).
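To illustrate how such consensus criteria can be computed, the sketch below derives the agreement rate, interquartile range and standard deviation from a set of Likert-style expert ratings. The ratings and the agreement threshold are hypothetical, not the study’s data; cut-offs such as IQR ≤ 1 and SD < 1 are commonly cited rules of thumb rather than the specific criteria applied here.

```python
import statistics

def consensus_stats(ratings, agree_at=4):
    """Agreement rate (share of ratings >= agree_at), interquartile
    range and standard deviation for ratings on a 1-5 scale."""
    agreement = sum(r >= agree_at for r in ratings) / len(ratings)
    q1, _, q3 = statistics.quantiles(ratings, n=4)  # exclusive method
    iqr = q3 - q1
    sd = statistics.stdev(ratings)
    return agreement, iqr, sd

# Hypothetical ratings from 19 experts (5 = strongly agree)
ratings = [5, 4, 4, 5, 4, 3, 4, 5, 4, 4, 3, 4, 5, 4, 4, 4, 5, 4, 3]
agreement, iqr, sd = consensus_stats(ratings)
# Rules of thumb often used for consensus: IQR <= 1 and SD < 1
```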
Step 4: practitioner interviews
This step served to investigate the usefulness and comprehensiveness of the classification system (v.3.2) using semi-structured interviews with practitioners who develop and apply behavioural interventions. Engaging practitioners can help connect research to practice and generate ‘actionable knowledge’ (Argyris, 1993; Antonacopoulou, 2009). Because of limited resources, we did not use purposive sampling until saturation was reached, as initially preregistered, but instead recruited a small convenience sample of six practitioners from our networks and via social media. Nevertheless, the sample represented a diverse group of practitioners from different sectors (NGO, research, government, business), varying levels of policymaking (local, supranational) and employing different approaches (analytic, design-based).
The classification system (v.3.2) was shared with interviewees in advance, together with questions prompting reflection on its usefulness and comprehensiveness. Interview field notes were again analysed using thematic analysis, which yielded 56 distinct feedback points. Of these, 42 codes related to feedback that was used to update classification details (e.g. changing examples; see online data repository) and to add one dimension reflecting how interventions relate to other levers of change, given this dimension’s high practical relevance. Eight codes concerned the usefulness of the classification system, with practitioners indicating that it can be useful, for instance, to understand and explain what nudges are and to select interventions likely to succeed in specific target contexts. The remaining six codes indicated that practitioners found META BI clear and comprehensive.
Step 5: intercoder agreement
Assessing intercoder agreement served to investigate how effectively two independent coders could assign the mechanisms included in META BI (v.4.0) to a set of 65 behavioural interventions. These interventions were taken from a previous coding task for classification development (Münscher et al., 2016) and complemented by us with 10 additional interventions to ensure that all mechanisms were represented at least three times, based on the coding of one author. Coders were trained and instructed to code, for each intervention, an ordered list of mechanisms with no upper limit. Agreement between the two independent coders implies that mechanisms can be assigned consistently and reliably, in line with our first guiding principle. It does not mean, however, that interventions must rely on the coded mechanisms, because coding is interpretative.
Analysis of the coded interventions yielded a moderate fuzzy kappa (Kirilenko and Stepchenkova, 2016) of 0.49 as a measure of agreement. Fuzzy kappa was calculated because it allows assigning multiple labels (here, mechanisms) to one entity (here, interventions). Conventional kappa was 0.47 when considering only the main mechanism coded by each coder. This suggests that while mechanisms might overlap and interact for a single intervention, primary mechanisms can be assigned with moderate reliability. Complete intercoder agreement (i.e. identical codes assigned to an intervention by both coders) was achieved for 34% of the interventions. Reasons likely preventing higher intercoder agreement were the unobservable and speculative nature of mechanisms and ambiguities in the descriptions of interventions, which were short and focused on intervention formats (e.g. ‘Offering people to commit themselves to a goal online’). Given these explanations, we were satisfied with the observed intercoder agreement and changed the descriptions of mechanisms only marginally. Moreover, in applied contexts (e.g. when coding interventions for meta-analyses), more relevant information is likely to be available to coders.
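To illustrate how the conventional kappa for the main mechanism can be computed, the sketch below implements Cohen’s kappa for two coders assigning one label per intervention. The mechanism labels and codings are hypothetical examples, not drawn from META BI or the study data, and the sketch does not reproduce the fuzzy kappa of Kirilenko and Stepchenkova (2016) used for the multi-label case.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders assigning one label per item."""
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    labels = set(freq_a) | set(freq_b)
    # Chance agreement from each coder's marginal label frequencies
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical main-mechanism codes for six interventions
coder_1 = ['salience', 'social norm', 'default', 'salience', 'reminder', 'default']
coder_2 = ['salience', 'default', 'default', 'salience', 'reminder', 'salience']
kappa = cohens_kappa(coder_1, coder_2)  # about 0.52 here
```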
Step 6: refinement
This step involved reviewing the language and display of META BI (v.6.0), polishing the presentation and creating an interactive document. For this, the research team hired a language editor and a graphic designer to improve readability and overall usability. In the following, we briefly summarise META BI; the full detailed classification system is provided in the Supplementary Information.
META BI
META BI describes behavioural interventions across five system-level elements (see Figure 2), each characterised by three to five dimensions. In the following, we describe each element and each dimension included in META BI. In addition, we illustrate the classification of one intervention (i.e. an energy default based on Ebeling and Lotz, 2015) using META BI in Table 4.

Figure 2. Overview of META BI.
Table 4. Dimensions, classification questions and categories/labels relating to each dimension of the META BI classification system

Intervention
The classification system considers single discrete nudges, rather than repeated or interactive interventions. The mechanisms dimension refers to the immediate changes in mental states through which the intervention seeks to influence behaviour. META BI suggests 17 such mechanisms, listed in Table 5 and grouped into five categories based on topical similarity. Format refers to the appearance of the manipulated component of the intervention (i.e. the nudge). We do not suggest any rules or labels to code formats because there are many possible approaches beyond our scope (e.g. Hollands et al., 2017; Congiu and Moscati, 2020). Intrusiveness refers to the extent to which the intervention interferes with or disrupts people’s lives and goals, and whether targets can avoid and exit the intervention (Lemken et al., 2024). Personalisation refers to whether different members of the target group receive different versions of the same intervention (e.g. using different messengers) based on their characteristics (e.g. gender, postcode).
Table 5. Names and descriptions of mechanisms included in META BI

Agent
Agents are the organisations, institutions and individuals that define the behaviour to be changed, identify the target group and launch interventions. Objectives refer to the agent’s goal motivating the nudge. We differentiate between nudges aiming to improve the quality of decision-making (e.g. attention to one’s choice; Dold and Lewis, 2023) and nudges targeting internalities (i.e. consequences for oneself), externalities (i.e. consequences for others) and social or group characteristics (e.g. polarisation, groupthink). Legitimacy refers to the agent being reasonably allowed to deploy the nudge and promote the targeted behaviour. Reputation is a multifaceted construct that refers to the subjective perception of the agent by the target group (e.g. trust, authenticity and attractiveness) (Krijnen et al., 2017; Tannenbaum et al., 2017). Sameness refers to the relation between the agent and the target group, namely whether agents are identical to target groups (e.g. self-nudging; Reijula and Hertwig, 2022), equal (e.g. employees influencing peer behaviour) or different (e.g. government influencing citizen behaviour). Many of these dimensions can be assessed both subjectively and objectively, and assessments may vary across individuals.
Target group
The target group consists of those individuals whose behaviour the agent aims to influence. These individuals evaluate the agent, experience the intervention and show a behavioural response. Preferences refer to the target group’s evaluations of the target behaviour and the objective of a nudge. They can differ in direction (aligned vs unaligned with the goal of the nudge) and perceived importance. Moreover, they can be homogeneous (e.g. positive smell and positive taste), heterogeneous (e.g. negative smell but positive taste) or time-inconsistent (e.g. alcohol now, but no hangover tomorrow) (Sunstein, 2015a; de Ridder et al., 2021). Engagement refers to the target group’s involvement with the nudge, namely the extent to which the target group becomes aware of the intervention (e.g. its purpose, source, mechanism; Hansen and Jespersen, 2013) and whether the target group takes an active role in its development and/or delivery (Richardson and John, 2021). Autonomy is a multifaceted concept that describes the extent to which choice options are constrained (freedom of choice), how well the intended behavioural outcome aligns with the interests, preferences and desires of the target group (outcome autonomy), and to what extent the decision-making process in response to the nudge is fair and well-reasoned (process autonomy) (Engelen and Nys, 2020; Vugts et al., 2020). Lastly, ability is defined as the extent to which the target group has the means to process and act upon the nudge as planned (Baldwin, 2014; Howlett, 2018).
Behaviour
This element refers to the behaviours targeted by the nudge. Temporality refers to the behaviour’s relationship to time: we differentiate between one-off and frequently repeated behaviours, as well as between one-off and repeated behaviour change, that is, whether there is a specific period during which the behaviour is influenced (Chatterton & Wilson, Reference Chatterton and Wilson2014). Nature of behaviour change refers to the direction and type of change. Nudges can aim to avoid, reduce, maintain or intensify existing behaviours, or to instigate novel behaviours. Mental mode refers to the mental processes of the target group that bring about the desired behaviour change. We do not suggest rules and labels for coding mental modes because there are many different approaches and theoretical orientations that researchers can rely on to describe them (e.g. Gollwitzer et al., Reference Gollwitzer, Heckhausen and Steller1990; Kahneman, Reference Kahneman2011). Discreteness refers to the degree to which target behaviours are linked to other behaviours, for instance, through one’s identity, motivations or the context (Chatterton & Wilson, Reference Chatterton and Wilson2014). Collective behaviour refers to the extent to which the behaviour is influenced by others and linked to their behaviour.
Environment
The environment encompasses all other structural elements, influences them and is influenced by changes within them. It includes the cultural, political, social, physical, economic and technological aspects of the immediate choice situation as well as the wider context. Opportunities and affordances refer to the extent to which the environment supports and invites or blocks and limits the intended behaviour change (Schill et al., Reference Schill, Anderies, Lindahl, Folke, Polasky, Cárdenas, Crépin, Janssen, Norberg and Schlüter2019; Van Dessel et al., Reference Van Dessel, Boddez and Hughes2022; Schmidt, Reference Schmidt2024). Resources refer to the means that the agent uses to support the intended behaviour change beyond the nudge. These resources include information provided alongside the intervention, using authority to ban or permit options, financial sanctions or incentives, services and goods offered to the target group and symbols and signals of approval or disapproval sent alongside the intervention (John, Reference John2013; Howlett, Reference Howlett2018). Interplay refers to the relationship between nudges and structural/systemic levers of change (Brownstein et al., Reference Brownstein, Kelly and Madva2022; Chater and Loewenstein, Reference Chater and Loewenstein2023). Nudges can be neutral to such levers when they do not affect each other, they can be supportive when they reinforce or complement each other and they can be countervailing when they prevent or limit each other.
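To make the structure of the five elements and their dimensions concrete, a coded intervention can be pictured as a simple record. The sketch below is a hypothetical illustration only: the field names loosely follow the dimensions described above, and the example values are invented; it is not an official META BI schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a META BI coding record.
# Element and dimension names follow the text; values are illustrative.
@dataclass
class MetaBICoding:
    agent: dict                      # e.g. objectives, legitimacy, reputation, sameness
    target_group: dict               # e.g. preferences, engagement, autonomy, ability
    behaviour: dict                  # e.g. temporality, nature of change, discreteness
    environment: dict                # e.g. opportunities, resources, interplay
    mechanisms: list = field(default_factory=list)  # one of the 20 dimensions

# An invented coding of a default-enrolment nudge, for illustration.
default_enrolment = MetaBICoding(
    agent={"objectives": "internalities", "sameness": "different"},
    target_group={"preferences": "aligned", "engagement": "low"},
    behaviour={"temporality": "one-off", "nature": "instigate"},
    environment={"resources": "financial incentives"},
    mechanisms=["status quo bias"],
)
```

Representing interventions this way makes the configurational view explicit: two nudges are compared field by field rather than by a single type label.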
Discussion
We developed META BI (Mapping of Environment, Target group and Agent for Behavioural Interventions) as a transdisciplinary, context-rich classification system of nudge interventions. It is based on a stepwise development procedure: an initial version of the classification system, drawing on our expert knowledge, was subsequently developed further and validated in a Delphi exercise with an interdisciplinary group of academic experts who reviewed several versions of it. META BI is a tool for understanding and describing interventions across 20 dimensions, each relating to one of five system-level elements. For one of these dimensions, we also generated a list of 17 underlying mechanisms (see Table 5). The goal of META BI is to provide a common language for a fruitful exchange about behavioural interventions across disciplines and professions. It helps prevent an overemphasis on buzzword-driven behavioural interventions (Oliver, Reference Oliver2025) and potentially facilitates evidence synthesis and systematic reviews. In the following, we contrast META BI with previous classification systems, illustrate potential use cases and reflect on our methodological approach.
Previous classifications listed in Table 1 focus on at most four aspects of nudges at a time, often suggesting a limited number of clearly defined nudge types. In contrast, META BI ‘zooms out’ to integrate 20 dimensions across five system-level elements (e.g. behaviour, target group) used to classify nudges. It thus considers nudges as configurations across those dimensions, providing a syntax to describe and understand nudges in their context from various perspectives. The few classification systems capturing similar system-level elements (e.g. Ly et al., Reference Ly, Mažar, Zhao and Soman2013; Stenger and Schmidt, Reference Stenger and Schmidt2025) tend to be less comprehensive. Without a comprehensive system, however, those interested in nudges risk thinking about them in an overly simplistic way, focusing on a few salient attributes while not considering others (Hauser et al., Reference Hauser, Gino and Norton2018). Addressing this, META BI provides an organising device and coding scheme for its users to reduce the messy nature of real-world interventions to 20 key dimensions, shifting the focus from single true effects of specific interventions (‘Nudge type A produces result B’) to more contextualised assessments (e.g. ‘When C and D are present, a nudge relying on mechanism E produces result B’). Such a shift is particularly important when estimating the likely effects of interventions in implementation and scaling, that is, applying interventions in novel contexts (Pawson and Tilley, Reference Pawson and Tilley1997; Schill et al., Reference Schill, Anderies, Lindahl, Folke, Polasky, Cárdenas, Crépin, Janssen, Norberg and Schlüter2019; Soman, Reference Soman2024). To illustrate, the framework might explain why a planning intervention for tax payments successfully scaled even though the goal-setting component was excluded when scaling the intervention (goal-setting relates to the Mechanisms and Engagement dimensions in META BI): the tax context meant that fines and penalties ensured extrinsic motivation (Resources dimension), compensating for individually set goals (Robitaille et al., Reference Robitaille, House, Mažar and Soman2024). It is difficult to imagine how any of the previous classifications listed in Table 1 could inform scaling in a similar fashion.
Secondly, META BI adds several dimensions not included in previous classifications. For instance, the classification system’s ‘resources’ dimension distinguishes between interventions steering free choice (e.g. consumption choices) and interventions increasing adherence to pre-existing laws and regulations (e.g. tax payment). This way, META BI helps overcome the misconception that behavioural interventions are incompatible with traditional regulatory tools such as bans and sanctions (Sunstein, Reference Sunstein2018). In fact, such traditional instruments may be essential for the effectiveness of behavioural interventions, a notion supported by a recent review finding that financial incentives combined with nudges are highly effective in encouraging pro-environmental behaviour (Alt et al., Reference Alt, Bruns, DellaValle and Murauskaite-Bull2024). Many previous classification systems view nudges as standalone approaches, lacking conceptual links to other interventions with which nudges might form policy mixes (Howlett et al., Reference Howlett, Mukherjee and Woo2015; Mukhtarov, Reference Mukhtarov2024). Put differently, META BI acknowledges that traditional instruments can be re-analysed and re-interpreted from a behavioural perspective (see Schneider and Ingram, Reference Schneider and Ingram1990; John, Reference John2013; Howlett, Reference Howlett2018) and as part of the configurations that make up behavioural interventions.
Thirdly, META BI integrates knowledge and experience from different disciplinary and sectoral perspectives, leading to consensus. An example is the ‘objectives’ dimension, where the distinction between nudges targeting internalities or externalities, present in several classification systems from economics (Oliver, Reference Oliver2018; Carlsson et al., Reference Carlsson, Gravert, Johansson-Stenman and Kurz2021), is complemented by two novel categories: nudges targeting decision-making procedures irrespective of outcomes (Sunstein, Reference Sunstein2015b, Reference Sunstein2017; Dold and Lewis, Reference Dold and Lewis2023) and nudges aiming to change social and group characteristics (e.g. groupthink, polarisation; Dudley and Xie, Reference Dudley and Xie2022; Mattis et al., Reference Mattis, Groot Kormelink, Masur, Moeller and van Atteveldt2024). Thanks to the latter two categories, META BI can accommodate nudges that do not fall squarely into pre-defined economic categories. In addition, the classification system is genuinely transdisciplinary, with several dimensions that can be linked to specific disciplines (e.g. ‘mechanisms’ and psychology; ‘interplay’ and public policy).
Fourthly, META BI was developed through an empirically robust, stepwise procedure that incorporated several best practices to enhance the credibility and reliability of our findings. We began by pilot testing and refining our Delphi data collection method (Hasson et al., Reference Hasson, Keeney and McKenna2000). To ensure practical relevance, we actively involved practitioners experienced in applying behavioural interventions. Moreover, we assessed interrater agreement for the most concrete component of META BI (i.e. the mechanisms dimension), demonstrating that this aspect can be coded reliably and consistently by future users. This stepwise process of iterative refinement likely led to clearer definitions, more precise labels and well-defined boundaries for our classification system. Compared to many earlier classifications, which often lack empirical validation during development (see Table 1), META BI stands out for its methodological transparency and validation. That said, it is worth acknowledging that some widely used frameworks, such as EAST (Behavioural Insights Team, 2024), have gained practical legitimacy through extensive application in the field (e.g. Arboleda et al., Reference Arboleda, Jaramillo, Velez and Restrepo2024), despite not being developed through a formal empirical process. In sum, META BI is a comprehensive and context-sensitive classification system, shaped by interdisciplinary input and empirical testing.

Figure 3. Usage of META BI.
Usage of META BI
While META BI offers a novel theoretical perspective on nudges, it may also hold potential for practical innovation. As illustrated in Figure 3, the classification system can support the application of individual nudges, the comparison of nudges and learning from relevant nudge applications. First, META BI might enable users to think more systematically and comprehensively about the nudges they are applying. Its 20 dimensions can serve as a mental organising device, helping practitioners and researchers focus on the most relevant aspects of an intervention. By answering the classification questions outlined in Table 4, users can structure their thinking and avoid common pitfalls, such as overlooking contextual influences, negative spillovers or unintended backfiring effects (Meder et al., Reference Meder, Fleischhut and Osman2018; Osman et al., Reference Osman, McLachlan, Fenton, Neil, Löfstedt and Meder2020b). Second, META BI might be used to structure and search behavioural evidence. A repository of coded interventions could function similarly to the filters used when buying a new laptop: instead of filtering by screen size, processor speed and working memory, users could filter by mechanisms, agent objectives and target group preferences, for instance. This would allow for more targeted evidence retrieval and facilitate the identification of comparable interventions. For illustration, we have coded a set of well-known interventions in the Appendix. Third, META BI can support field mapping and evidence synthesis. By analysing a carefully selected and consistently coded set of interventions, researchers might better understand the current landscape of behavioural science. This is particularly valuable for literature reviews and meta-analyses.
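The laptop-filter analogy can be sketched as a simple query over a set of coded interventions. The snippet below is a hypothetical illustration: the repository entries, field names and the `filter_interventions` helper are invented for this sketch and are not part of META BI or any existing tool.

```python
# Hypothetical repository of interventions coded on a few META BI-style
# dimensions. Entries and field names are illustrative only.
repository = [
    {"name": "pension default", "mechanism": "status quo bias",
     "objective": "internalities", "preferences": "aligned"},
    {"name": "energy social norm letter", "mechanism": "social norms",
     "objective": "externalities", "preferences": "heterogeneous"},
    {"name": "tax reminder", "mechanism": "salience",
     "objective": "externalities", "preferences": "aligned"},
]

def filter_interventions(repo, **criteria):
    """Return interventions whose coded values match all given criteria."""
    return [entry for entry in repo
            if all(entry.get(dim) == value for dim, value in criteria.items())]

# Filter by agent objective, as one would filter laptops by screen size.
matches = filter_interventions(repository, objective="externalities")
print([m["name"] for m in matches])
# ['energy social norm letter', 'tax reminder']
```

Criteria can be combined freely (e.g. mechanism plus target group preferences), which is what would allow the more targeted evidence retrieval described above.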
META BI captures many of the key factors that influence external validity, such as mechanisms, the nature of the intervention, its target group and behavioural outcomes – factors identified by Findley et al. (Reference Findley, Kikuta and Denly2021) as essential for generalisability. As such, it may help explain the substantial heterogeneity often observed in review studies of nudge interventions. Yet, META BI does not explicitly code for time, the last factor identified by Findley and colleagues. We recommend that users with limited resources start with the first of the three use cases, as it is the least resource intensive.
Limitations
There are at least four potential limitations concerning the methods and results of this study. First, in our approach to developing the classification system, we relied on our subjective judgement. Specifically, the initial version of the classification was shaped by our understanding of the need for a classification and early design choices. For instance, we conceptualised the mechanisms of interventions on the individual level (e.g. conformity with social norms) rather than the social level (e.g. herd behaviour). Although the following steps served to further develop and validate META BI, a different initial version would likely have produced other results. In addition, in developing the classification system and integrating views from different disciplinary experts, we relied on our professional expertise and judgement – an approach known as ‘integration by leader’. This approach to transdisciplinary integration differs from more collaborative approaches, such as group deliberation and negotiation (Rossini and Porter, Reference Rossini and Porter1979). However, we considered the practical constraints of other integration modes, which often demand more from the participating experts (e.g. scheduling efforts, travel, resolving group conflicts), making it unlikely that similar levels of expert involvement could have been achieved. Consequently, META BI might be viewed as prescriptive, offering subjective recommendations, rather than as an objective description of dimensions and rules describing nudges. Yet, the agreement from academics and practitioners may lend credibility to this classification.
As a second limitation, our sample was mostly Western, educated, industrialised, rich and democratic (‘WEIRD’; Henrich et al., Reference Henrich, Heine and Norenzayan2010). This limitation likely stems from our reliance on sampling academic experts who are statistically more likely to exhibit those characteristics. Exploring and achieving agreement among those experts was deemed more important than generalisability to other samples. Consequently, a different classification may have emerged from a different panel.
Third, the classification system may be considered overly abstract and disconnected from individual disciplines, which is a common challenge for transdisciplinary research (Lang et al., Reference Lang, Wiek, Bergmann, Stauffacher, Martens, Moll, Swilling and Thomas2012). Evidence of this may be seen in the fact that users of META BI will need to operationalise the classification’s dimensions and agree on observable criteria before coding interventions (e.g. using objective or subjective criteria to assess the ‘objectives’ of an intervention). Yet, this also applies to several previous classifications (e.g. Baldwin, Reference Baldwin2014; Behavioural Insights Team, 2024).
Fourth, we caution readers against the risk of treating META BI’s classification codes as fixed or self-contained labels without considering underlying dynamics. It is important to recognise that the dimensions are unlikely to be entirely independent. For example, a shift in one dimension (e.g. the ‘reputation’ of the agent) may moderate others (e.g. the ‘mechanism’ of the intervention). These interdependencies are difficult to capture using the structure of META BI. Moreover, the relevance of specific dimensions may vary across different applications, complicating count-based comparisons of coded interventions. Two interventions may appear similar across most dimensions, yet differ significantly if the one divergent dimension holds greater contextual importance. However, this limitation reflects a broader challenge in classification systems – the tension between analytical clarity and the complex, often fluid nature of real-world phenomena (Medin and Ortony, Reference Medin, Ortony, Vosniadou and Ortony1989) – and is likely to apply in a similar fashion to previous classifications.
Conclusion
To sum up, META BI offers a conceptual synthesis between literature and application. It aims to avoid misunderstandings, improve implementation and support evidence synthesis by striking a balance between an exhaustive mapping of factors influencing nudges and overly simplistic descriptions. Researchers and practitioners can employ the classification system to investigate and make sense of specific behavioural interventions, ensuring that the most relevant aspects of a nudge are considered. We hope that it might help close the gap between highly detailed technical classifications ‘zooming in’ on specific aspects (e.g. intervention techniques; Michie et al., Reference Michie, Richardson, Johnston, Abraham, Francis, Hardeman, Eccles, Cane and Wood2013) and general behaviour-change typologies, in which nudges are one intervention type among many (e.g. behaviour change wheel; Michie et al., Reference Michie, Atkins and West2014).
Supplementary material
The supplementary material for this article can be found at https://doi.org/10.1017/bpp.2025.10015.
Acknowledgments
The authors are grateful to Aiswarya Sunil, Mariam Abdelnabi and Sorin Thode for their excellent research assistance, and to numerous conference and seminar participants for their helpful comments and discussions. We particularly thank the experts for the time they devoted to the Delphi interviews.
Funding statement
This work was supported by the Novo Nordisk Foundation (Grant number NNF21SA0069203).
Competing interests
The authors declare no competing interests.
Author contributions
M.D.: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Writing – original draft. L.A.R.: Conceptualization, Funding acquisition, Investigation, Methodology, Writing – review & editing.
Data availability statement
All data, analysis code and research materials including all versions of the classification are available in an online repository (https://doi.org/10.17605/osf.io/6yucj).