Impact Statement
We advocate for the adoption of Hybrid Digital Twinning (HDT) as a main enabler for transforming strategic decision-making and enhancing system resilience within the domain of infrastructure. In clarifying the modus operandi of DT technologies, this paper highlights the strengths and potential of digital twin technologies and aspires to lay the foundations for the development of next-generation digital twins for smart infrastructures. This study summarizes the insights gained from a round-table discussion on Decision Support for Infrastructural Asset Management, which was held as a joint initiative of the Future Resilient Systems (FRS) program at the Singapore-ETH Centre and the DESCARTES interdisciplinary excellence program at CNRS@CREATE.
1. Introduction
Engineering infrastructures form the backbone of our society. Under the mandate of Industry 4.0, the digital revolution has brought about a paradigm shift in how we design, produce, and interact with physical assets (Oztemel and Gursev, Reference Oztemel and Gursev2020). While digitization has been broadly adopted in the context of manufacturing and production technologies and the handling of industrial assets, it remains underutilized in large-scale built environments, such as infrastructures. Building Information Models (BIMs) dominate the field, primarily serving as static representations for the design and construction phases (Sacks et al., Reference Sacks, Girolami and Brilakis2020). At this scale, however, the concept of Digital Twinning (DT) has the potential not only to deliver information on the state of the system “as is,” but also to further inform decision support frameworks. These frameworks operate throughout the structural life cycle, namely from the stage of manufacturing/construction, to the stage of operation under standard as well as extreme loads and hazards, and finally to the decommissioning phase. To add value, DT representations should enable a closed-loop exchange between digital and physical assets. This involves extracting information garnered from operating physical systems (e.g., by means of monitoring) and distilling this information via the use of digital representations. This analysis can then be exploited to act on the physical asset, to protect critical infrastructure and guarantee its resilience (Argyroudis et al., Reference Argyroudis, Mitoulis, Chatzi, Baker, Brilakis, Gkoumas, Vousdoukas, Hynes, Carluccio, Keou, Frangopol and Linkov2022).
Infrastructure resilience is used here as the main criterion on which strategic decision-making can be based. It can be defined as the ability to anticipate, prepare for, and adapt to environmental changes, as well as cope with, respond to, and recover rapidly from extreme disruptions (Cimellaro et al., Reference Cimellaro, Renschler, Reinhorn and Arendt2016). Numerous studies in recent years have focused on infrastructure resilience under adverse environmental impacts and exposure to extreme events. These studies put forth frameworks for quantifying and enhancing resilience across scales, from components and individual assets, to interconnected networks (Ouyang et al., Reference Ouyang, Dueñas-Osorio and Min2012; Cimellaro et al., Reference Cimellaro, Renschler, Reinhorn and Arendt2016; Dhar and Khirfan, Reference Dhar and Khirfan2017; Koliou et al., Reference Koliou, van de Lindt, McAllister, Ellingwood, Dillard and Cutler2020; Blagojević et al., Reference Blagojević, Hefti, Henken, Didier and Stojadinović2023; Liang et al., Reference Liang, Blagojević, Xie and Stojadinović2023). This analysis is typically conducted in the pre-incident phase using simulated scenarios with stochastic deterioration/fragility and restoration models, without accounting for information that is gathered from the actual system over time. Our primary focus lies on decision-making in the context of the during-incident and post-incident phases, which usually require fast (sometimes even real-time) decision-making. The premise for such an investigation assumes the availability of data from infrastructural assets and systems. This is nowadays justified by the growing availability of information, which includes not just digitized logs with inspection information on structural systems, but also the increasing use of sensing technologies to monitor these systems on both a periodic (e.g., Non-Destructive Evaluation) and a continuous (e.g., Structural Health Monitoring) basis (Kamariotis et al., Reference Kamariotis, Chatzi, Straub, Dervilis, Goebel, Hughes, Lombaert, Papadimitriou, Papakonstantinou, Pozzi, Todd and Worden2024).
Currently, there is no integrated framework for quantifying and enhancing infrastructure resilience based on the fusion of such data within DT techniques. Hence, the objective of this paper is to
• clarify the current landscape in terms of available DT representations,
• define Hybrid Digital Twins (HDTs) as a class of DTs that is particularly suited for infrastructural assets, when viewed under the prism of cyber-physical systems,
• illustrate the potential application of HDTs in support of decision-making for performant and resilient infrastructures,
• and finally, highlight the associated challenges and opportunities in this respect.
2. Motivation for integrating HDTs in infrastructural management
DT refers to the creation of virtual representations of physical assets that integrate sensor data, system simulations, and analytics. This integration provides decision-makers with unprecedented, often real-time, insights into the condition and behavior of physical assets, where an asset refers to any physical object, system, or infrastructure holding economic value to an organization (e.g., buildings, bridges, wind energy structures). Unlike traditional periodic and reactive decision-making methods, the integration of DT introduces predictive analytics, informed by real-time and historical data collected from the physical asset in operation. This forecasting potential supports proactive management of operations, maintenance, and resilience against risks and hazards. A significant limitation of purely data-driven Digital Twin (DT) models is their lack of generalizability and interpretability. This issue often arises from the insufficiency of representative data, which can lead to overfitting and poor performance in unseen scenarios. Additionally, the absence of physical knowledge in these models can hinder the interpretation of model predictions and the ability to capture the intricate dynamics needed for accurate forecasts, limiting their effectiveness in decision support for infrastructure management.
Aiming for greater accuracy and effective decision-making, we use the term hybrid digital twinning (HDT) to refer to an advanced form of digital twinning that explicitly incorporates a physics-based model of the system (which can be numerical, analytical, or empirical) within a process that further feeds from data. The integration of physics-based models fundamentally distinguishes HDTs from purely data-driven DTs by enhancing predictive capabilities and ensuring physically grounded predictions. HDTs uniquely enable the generalization of predictions beyond the positions of sensor observations, facilitating virtual sensing of system responses in critical, unmonitored locations (Papatheou et al., Reference Papatheou, Tatsis, Battu, Agathos, Haywood-Alexander, Chatzi, Dervilis and Worden2023; Vettori et al., Reference Vettori, Gomes, Di Lorenzo, Peeters and Chatzi2023). This generalization capability is essential for managing assets under extreme and changing conditions, where reliance solely on available sensor data would be insufficient. By grounding predictions in physical principles, HDTs enhance interpretability, ensuring that predicted outcomes can be validated and trusted, which is particularly crucial for critical decision-making processes. Consequently, HDTs empower decision-makers to respond swiftly to disruptions and adapt to dynamic conditions by providing insights into both monitored and unmonitored parts of the system, supporting a proactive and transparent decision-making process. This transparency, essential for accountability and trust, is vital in sectors where decisions have significant economic and safety impacts.
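To make the virtual sensing notion above concrete, the following minimal sketch illustrates one common mechanism behind it: expanding sparse measurements to unmonitored locations through a physics-based modal basis. The setup is hypothetical; the random mode-shape matrix and sensor positions stand in for a calibrated structural model.

```python
import numpy as np

# Illustrative virtual sensing via modal expansion (a common HDT ingredient).
# Assumption: a linear structure whose mode shapes Phi come from a physics-based
# (e.g., finite element) model; sensors cover only a few degrees of freedom.

rng = np.random.default_rng(0)

n_dof, n_modes = 10, 3
Phi = rng.standard_normal((n_dof, n_modes))   # mode shapes (from the model)
q_true = np.array([1.0, -0.5, 0.25])          # true modal coordinates
u_true = Phi @ q_true                         # full-field response

measured = [0, 3, 7]                          # instrumented DOFs
unmeasured = [i for i in range(n_dof) if i not in measured]

y = u_true[measured] + 0.01 * rng.standard_normal(len(measured))  # noisy sensors

# Least-squares estimate of modal coordinates from sparse measurements,
# then expansion to the unmonitored DOFs ("virtual sensors").
q_hat, *_ = np.linalg.lstsq(Phi[measured, :], y, rcond=None)
u_virtual = Phi[unmeasured, :] @ q_hat

print(np.max(np.abs(u_virtual - u_true[unmeasured])))  # small reconstruction error
```

In an actual HDT, the basis would stem from a calibrated model of the asset, and the estimation step would typically be filtered over time rather than solved per snapshot.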
This integrated approach allows decision-makers to develop a scalable framework, which is adaptable across dimensions such as asset size and degrees of freedom, interdependency, and throughout the life cycle of assets. Adaptability refers to the capacity of the framework to remain effective whether applied to a single asset or when scaled up to encompass an entire network of assets, and its potential to be consistently implemented throughout all phases, from design and construction to operation and end-of-life management. During the manufacturing and construction phase, HDT technology facilitates the integration of real-time data and model-based predictive analytics, allowing for the optimization of design processes and the embedding of resilience measures tailored to anticipated operational challenges. As assets transition into the operational phase, HDT technology continuously learns operational strategies in real time to effectively manage emerging risks and minimize downtime, especially under extreme conditions. Finally, in the decommissioning phase, HDT technology provides a data-rich basis for executing cost-effective strategies by leveraging the comprehensive historical data accumulated over the operational lifetime of the asset. An HDT embodies a closed-loop, dynamic, possibly real-time, data-driven approach to asset management that not only accounts for complex interdependencies and curbs assessment uncertainty but also operates based on the current state of the system rather than its initial deployment conditions.
Despite the clear advantages of the use of DTs and, in particular, HDTs in the context of infrastructure management and resilience, their adoption has been slow in practice. This reluctance often stems from the diverse interpretations and lack of clarity surrounding the definition and applicability of a DT/HDT, as well as the relative lack of standards and protocols for formally framing the use of such tools. This statement paper aims to clarify the definition and potential use of HDTs within the domain of smart infrastructures, exploring the need to enhance their utility and maximize their uptake.
3. Hybrid digital twins—HDTs
The implementation of digital twins presents its own set of challenges. Data integration, modeling complexity, transparency, communication among agents, and ethical concerns relating to automated decision-making are significant challenges that must be addressed to ensure an actionable application. However, the first step is to propose a framework for cross-disciplinary understanding that sets the foundation for any future development.
3.1. Definition and interpretation of digital twins
The concept of the digital twin (DT) finds its roots in NASA’s Apollo XIII project, where digital simulators and a physical replica were connected to the real spacecraft, receiving information from it to update the operating condition of the replica and to propose mission rules based on its state, especially in critical conditions (Shafto et al., Reference Shafto, Conroy, Doyle, Glaessgen, Kemp, LeMoigne and Wang2010). As reported in that document, this was the case with the explosion of the oxygen tanks that damaged the engine during the mission, a situation in which the simulators helped to evaluate damage and devise solutions for informed crisis management.
With the surge of Industry 4.0, DTs became a go-to term in several fields; however, the definition of the term may still appear blurred and unclear (Wright and Davidson, Reference Wright and Davidson2020). Certain sources (Alam and Saddik, Reference Alam and Saddik2017; Hughes, Reference Hughes2018; Platenius-Mohr et al., Reference Platenius-Mohr, Malakuti, Grüner, Schmitt and Goldschmidt2020, to name a few) define DTs as models, simulators, or replicas of existing phenomena, that is, digital replicas of real assets. Although partially correct, this definition lacks an essential element, namely the interaction with the physical asset. More recent frameworks within the engineering context describe a DT as a process that defines a closed loop between the physical entity and the digital replica (AIAA, 2020; McClellan et al., Reference McClellan, Lorenzetti, Pavone and Farhat2022). This requires a digital workflow of information, parametrized models, diagnostic and prognostic algorithms, and control tools, often aggregated in a visualization layer, which generates value for the user and facilitates decisions.
The origin of the DT concept may be traced back to a presentation by Michael Grieves at the University of Michigan in 2002, which aimed to establish the so-called Product Lifecycle Management (PLM) framework (Grieves, Reference Grieves2002). However, the first known definition of the DT is considered to be the one published by NASA (Shafto et al., Reference Shafto, Conroy, Doyle, Glaessgen, Kemp, LeMoigne and Wang2010). In this definition, a DT is described as an integrated multiphysics, multiscale, probabilistic simulation that uses the best available physical models, sensor updates, fleet history, etc., to mirror the life of its flying twin and to recommend changes in mission profile to increase both the life span and the probability of mission success, already signifying the key aspect of two-way interaction between the physical and digital counterparts.
Following this spirit, similar descriptions have been assigned to DTs (Glaessgen and Stargel, Reference Glaessgen and Stargel2012; Saddik, Reference Saddik2018; Xu et al., Reference Xu, Sun, Liu and Zheng2019; Liu et al., Reference Liu, Fang, Dong and Xu2021; Kenett and Bortman, Reference Kenett and Bortman2022). The recent AIAA position paper (AIAA, 2020) defines a digital twin as:
A set of virtual information constructs that mimics the structure, context and behaviour of an individual/unique physical asset, or a group of physical assets, is dynamically updated with data from its physical twin throughout its life cycle and informs decisions that realise value.
We discern three main characteristics of a DT in the various definitions offered:
• A physical asset from which information is extracted, implying the presence of a monitoring system.
• A digital (virtual) representation of the physical element, represented by a model that captures the behavior of the physical counterpart. Here we distinguish four levels of description: component, asset, system, and process.
• A one- or two-way information flow process, depending on the application, that links the digital and physical counterparts to ensure continuous tracking of the behavior of the physical asset. This is used to update the status of the digital replica, offering valuable augmented information on the state of the system, and allows for acting on it with improved confidence margins. A one-way process is also called a “digital shadow” (Bergs et al., Reference Bergs, Gierlings, Auerbach, Klink, Schraknepper and Augspurger2021).
The Dynamic Data-Driven Application Systems (DDDAS) paradigm (Blasch et al., Reference Blasch, Seetharaman and Reinhardt2013) has been proposed as a framework for the dynamic update of simulators (models) with data obtained from sensor networks and monitoring devices. Although this framework focuses on the aspect of updating a digital mirror (essentially) of the operating physical system, the purpose of DTs extends beyond computational modeling and updating to include performance and condition assessment, analysis, and optimization of physical assets throughout their life cycle.
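As a minimal illustration of such a closed-loop update, the sketch below assimilates streaming sensor data into a digital state estimate using a standard Kalman filter. All matrices are illustrative placeholders rather than a specific infrastructure model.

```python
import numpy as np

# Minimal sketch of the digital-to-physical update loop: the digital state is
# propagated by a (here trivial) physics model and corrected by sensor data.

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # physics-based state transition
H = np.array([[1.0, 0.0]])               # only the first state is measured
Q = 1e-4 * np.eye(2)                     # process noise (model error)
R = np.array([[1e-2]])                   # measurement noise

x = np.zeros(2)                           # digital twin state estimate
P = np.eye(2)                             # estimate covariance

def assimilate(x, P, y):
    """One predict-correct cycle: model forecast, then sensor correction."""
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + (K @ (y - H @ x_pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

for y_t in [np.array([0.12]), np.array([0.25]), np.array([0.41])]:  # streamed data
    x, P = assimilate(x, P, y_t)
print(x)  # updated digital state, ready to inform decisions or actions
```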
In the life cycle of infrastructure systems, we can distinguish five main phases: design, construction, operation, maintenance, and decommissioning. Each of these phases can be coupled with digital twins, accompanying the evolution of the system and enhancing its management and optimization throughout its life cycle. Following Grieves and Vickers (Reference Grieves, Vickers, Kahlen, Flumerfelt and Alves2017), in this work, we define DTs using a classification in three essential categories (classes), according to the purpose served by the twin throughout the life cycle:
• the Digital Twin Prototype (DTP)
• the Digital Twin Instance (DTI)
• the Digital Twin Aggregate (DTA)
The first DT class we refer to here is the Digital Twin Prototype (DTP), which reflects a virtual representation of a physical object, encompassing the essential information sets needed to characterize and fabricate a physical counterpart (for instance, requirements, 3D models, lists of materials, processes, services, and disposal procedures). This class is typically used during the design phase and is closely associated with the features and goals of Building Information Modeling (BIM) (Definition, Reference Definition2014).
In the work of Grieves and Vickers (Reference Grieves, Vickers, Kahlen, Flumerfelt and Alves2017), a Digital Twin Instance (DTI) is described as a specific physical asset to which a digital counterpart remains linked throughout the life of that physical product. Here, we adopt the interpretation of McClellan et al. (Reference McClellan, Lorenzetti, Pavone and Farhat2022) in relation to the notion of an instance and define a DTI as the DT of an individual instance of the product, once it is manufactured and equipped with sensors that generate data. This implies that the DTI embodies the notion of information flow between the physical and digital counterparts.
The Digital Twin Aggregate (DTA) (Grieves and Vickers, Reference Grieves, Vickers, Kahlen, Flumerfelt and Alves2017; McClellan et al., Reference McClellan, Lorenzetti, Pavone and Farhat2022) is described as the aggregation and analysis of data from numerous DTIs, allowing for review and possible intervention regarding a set of assets. Essentially, it describes a computing construct that gathers and analyzes data from various DTIs to gain insights with respect to a broader range of physical products or processes. A DTA can aggregate instances ranging from different DTIs of components comprising an assembly, to multiple instances from similar systems whose collective behavior is aggregated. In the latter case, the DTA relates to the concept of learning from fleets or populations (Worden et al., Reference Worden, Bull, Gardner, Gosliga, Rogers, Cross, Papatheou, Lin and Dervilis2020), reflecting a more massive collection of data, which can enhance predictive and prognostic capabilities at the system level.
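The DTI/DTA relationship can be pictured as a simple data-structure hierarchy. The sketch below is a deliberately simplified, hypothetical rendering (class and field names are our own) of how a DTA might pool condition indicators across a fleet of DTIs.

```python
from dataclasses import dataclass, field
from statistics import mean

# Illustrative sketch of the DTI/DTA hierarchy as data structures.
# Names and the health-index update rule are hypothetical placeholders.

@dataclass
class DigitalTwinInstance:
    asset_id: str
    health_index: float                      # distilled condition indicator (0..1)
    sensor_log: list = field(default_factory=list)

    def update(self, reading: float) -> None:
        """Assimilate a new (already processed) condition reading."""
        self.sensor_log.append(reading)
        self.health_index = 0.9 * self.health_index + 0.1 * reading

@dataclass
class DigitalTwinAggregate:
    instances: list                          # the fleet of DTIs

    def fleet_health(self) -> float:
        """Aggregate view over many DTIs, e.g., for fleet-level prognosis."""
        return mean(dti.health_index for dti in self.instances)

    def flag_outliers(self, tol: float = 0.2) -> list:
        """Instances deviating from the fleet norm get prioritized attention."""
        ref = self.fleet_health()
        return [d.asset_id for d in self.instances if abs(d.health_index - ref) > tol]

fleet = DigitalTwinAggregate([DigitalTwinInstance("WT-01", 0.95),
                              DigitalTwinInstance("WT-02", 0.60)])
print(fleet.fleet_health(), fleet.flag_outliers())
```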
Each class of DTs will require different levels of depth, abstraction, and enrichment to properly accompany the original twin throughout various phases of the asset’s life cycle. Figure 1 illustrates these DT classes, delineating the systematic application of DTPs, DTIs, and DTAs across the various life cycle stages of physical assets. The figure employs the example use case of wind turbine operations: DTPs aid in the design phase by simulating and refining turbine structures. Multiple DTIs represent real-time operational units equipped with sensors, facilitating ongoing monitoring and immediate adjustments. The DTA synthesizes insights from individual DTIs to guide system-wide performance assessments and predictive maintenance strategies, enhancing overall operational efficiency and the longevity of the assets.

Figure 1. Life cycle integration of digital twin technologies for physical assets, using the example of wind farm management. DTPs aid in the design and decommissioning phases by simulating and optimizing turbine structures and the decommissioning process, while multiple DTIs represent real-time operational units equipped with sensors, facilitating ongoing monitoring and immediate adjustments. The DTA synthesizes insights from individual DTIs to guide system-wide performance assessments and predictive maintenance strategies, enhancing overall operational efficiency and longevity of the assets. DTIs and DTAs can evolve on a temporal scale depending on the frequency of data collection, where Real-Time Digital Twins (RTDTs) are DTs that are updated in a more frequent, real-time manner.
Reflecting the needs specific to each phase of the life cycle, this evolutionary spirit of DTs is illustrated in Figure 2. In these definitions, information flow is assumed to be available throughout the asset’s life. Models that do not continuously follow a physical asset are merely snapshots, not true DTs. In engineering, Real-Time Digital Twins (RTDTs) are digital representations updated online, in real or near real-time, as data become available.

Figure 2. Landscape of the DT paradigm. The HDT includes hybrid modeling to enrich simulations with aspects of physics and machine learning (ML) to accurately mimic the behavior of real systems. Such a construct offers higher interpretability. Finally, cognitive digital twin (CDT) would combine previous technologies with scene understanding and autonomous decision-making. As a result, the DT progressively increases in complexity and opportunities.
DTs are powered by the use of simulators/models that provide representations of complex systems, processes, or phenomena of interest. Currently, BIM representations seem to prevail in terms of adoption in practice, even though these largely comprise geometric representations and metadata repositories of built objects. This observation is primarily evidenced by insights gathered from industry roundtables, where experienced practitioners emphasized the robustness and integration capabilities of BIM in the construction and engineering sectors. Whereas BIMs, as mainly adopted today, are closer to what one would define as “as-designed geometric models,” effective DTs require more computational capabilities. Such capabilities can be obtained via the use of structural (finite element) models and well-established formulations such as fluid mechanics, transient dynamics, and degradation models. To make such models actionable within a twinning framework, it is necessary to deliver reliable, yet reduced-order, representations that can incorporate physics in a way that is manageable for the process at hand. Reduced Order Models (ROMs) significantly contribute by offering swift emulations of a monitored system at manageable computational expense (Frangos et al., Reference Frangos, Marzouk and Willcox2010; Chinesta et al., Reference Chinesta, Ladeveze and Cueto2011; Amsallem et al., Reference Amsallem, Zahr and Farhat2012; Farhat et al., Reference Farhat, Bos, Avery and Soize2018; Kapteyn et al., Reference Kapteyn, Knezevic and Willcox2020; Vlachas et al., Reference Vlachas, Tatsis, Agathos, Brink and Chatzi2021; Agathos et al., Reference Agathos, Tatsis, Vlachas and Chatzi2022, Reference Agathos, Vlachas, Garland and Chatzi2024; Idrissi et al., Reference Idrissi, Praud, Champaney, Chinesta and Meraghni2022). ROMs are mathematical representations of complex systems that aim to provide simplified but accurate predictions of system behavior. When incorporating physics principles, such ROMs are often referred to as intrusive (Chinesta and Cueto, Reference Chinesta and Cueto2014). Although there are nonintrusive, that is, purely data-driven, techniques that employ data from simulations or experiments to bypass physics (Ibáñez et al., Reference Ibáñez, Abisset-Chavanne, Ammar, González, Cueto, Huerta, Duval and Chinesta2018; Hernandez et al., Reference Hernandez, Badias, Gonzalez, Chinesta and Cueto2021), the imposition of physics biases is often desirable to ensure interpretability (Vlachas et al., Reference Vlachas, Najera-Flores, Martinez, Brink and Chatzi2012; Bacsa et al., Reference Bacsa, Lai, Liu, Todd and Chatzi2023; Liu et al., Reference Liu, Lai, Stoura, Bacsa and Chatzi2025).
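To illustrate the mechanics of an intrusive, projection-based ROM, the sketch below builds a proper orthogonal decomposition (POD) basis from precomputed snapshots and performs a Galerkin projection. The full-order operator is a trivial placeholder; in practice it would be a large finite element model.

```python
import numpy as np

# Minimal sketch of a projection-based ROM via POD. The full-order "model" is
# a diagonal placeholder operator standing in for a large finite element system.

rng = np.random.default_rng(1)
n, r = 200, 5                                    # full order vs reduced order

K = np.diag(np.linspace(1.0, 10.0, n))           # placeholder full-order operator
modes = rng.standard_normal((n, r))              # hidden low-dimensional load family
loads_train = modes @ rng.standard_normal((r, 40))
snapshots = np.linalg.solve(K, loads_train)      # precomputed solution snapshots

# POD basis: leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :r]                                     # reduced basis (n x r)

# Galerkin projection: reduced operator and load, cheap to solve online.
K_r = V.T @ K @ V
f = modes @ rng.standard_normal(r)               # new load from the same family
f_r = V.T @ f

u_rom = V @ np.linalg.solve(K_r, f_r)            # fast reduced solve, lifted back
u_fom = np.linalg.solve(K, f)                    # reference full-order solve
print(np.linalg.norm(u_rom - u_fom) / np.linalg.norm(u_fom))
```

The toy problem is constructed to be exactly low-rank, so the reduced solve is near-exact; for real systems, accuracy is governed by the decay of the snapshot singular values.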
Accompanying the real asset along its useful life requires the capacity for adaptation and re-engineering across its different phases, with flexible configurations that may have to respond to previously unseen conditions. In this regard, McClellan et al. (Reference McClellan, Lorenzetti, Pavone and Farhat2022) also highlight the role of current developments such as artificial intelligence (AI), machine learning (ML), deep learning (DL), and data analytics in bridging the gap between the simulation model, usually defined by known physics, and the real observed behavior, as a means to extend the capabilities of the original ROMs that reproduce the physics of the real asset.
AI-informed ROMs strongly depend on data quality and availability. To overcome this limitation, new techniques driven by physical knowledge may find patterns and reconstruct missing information. This entails embracing the smart data regime, which targets the right information, at the right moment, and in the right place. ML and DL can synergistically be combined with hybrid models, enhancing their explainability and predictive potential (Montáns et al., Reference Montáns, Chinesta, Gómez-Bombarelli and Kutz2019; Champaney et al., Reference Champaney, Chinesta and Cueto2022; Kenett, Reference Kenett2024). Such an instance has emerged in physics-enhanced or physics-informed modeling, which capitalizes on the fusion of physics principles, data, and ML, with this mixing assigning different weights to the mixed components, as explained in Haywood-Alexander et al. (Reference Haywood-Alexander, Liu, Bacsa, Lai and Chatzi2023). Physics-informed digital twins (PIDTs) are those digital twin representations that incorporate domain-specific knowledge of physics principles and laws, offering interpretable models that effectively capture the system’s inherent dynamics (Kapteyn and Willcox, Reference Kapteyn and Willcox2020; Liu et al., Reference Liu, Lai, Bacsa and Chatzi2025). While PIDTs require more development effort, they provide transparency and fidelity, making them well-suited for applications where understanding and certifiability are essential. The choice between these approaches depends on the specific requirements of the problem at hand, balancing predictive power with interpretability and reliability. Some versatile examples are those that employ known descriptions of the system, such as partial differential equations, or algorithms founded in known physical laws (Tatsis et al., Reference Tatsis, Agathos, Chatzi and Dertimanis2022; Vlachas et al., Reference Vlachas, Tatsis, Agathos, Brink, Quinn and Chatzi2022; Haywood-Alexander and Chatzi, Reference Haywood-Alexander and Chatzi2023; Zhang and Zhao, Reference Zhang and Zhao2023; Yang et al., Reference Yang, Kim, Hong, Yee, Maulik and Kang2024), such as those of thermodynamics (Hernandez et al., Reference Hernandez, Badias, Chinesta and Cueto2022; Cueto and Chinesta, Reference Cueto and Chinesta2023), and the preservation of physical quantities (Kirchdoerfer and Ortiz, Reference Kirchdoerfer and Ortiz2016; Bacsa et al., Reference Bacsa, Lai, Liu, Todd and Chatzi2023).
Under this premise, we refer to hybrid digital twins (HDTs) as twin constructs that create a more comprehensive and accurate representation of a system or process. Here, accuracy reflects the ability of the digital twin to remain aligned with real-world behavior, including in previously unseen contexts or in response to evolving loads and environments. As shown in Figure 2, HDTs integrate multiple modeling paradigms, combining physics-based (white-box) models that offer transparent insights into underlying physical mechanisms with data-driven ML (black-box) approaches that enhance predictive accuracy. The resulting grey-box models fuse interpretability with adaptability, enabling a richer and more robust digital representation of physical assets or systems. Specifically, HDTs may incorporate physics knowledge as a hard constraint (physics-guided or physics-encoded) by directly embedding differential equations within the neural network architecture, ensuring that predictions adhere to known physical laws. Alternatively, HDTs can treat physics knowledge as a soft constraint (physics-informed) by adding the residual of physics-based models to the loss function to guide the learning process or to refine the outputs of ML algorithms (Chinesta et al., Reference Chinesta, Cueto, Abisset-Chavanne, Duval and Khaldi2020; Haywood-Alexander et al., Reference Haywood-Alexander, Liu, Bacsa, Lai and Chatzi2023). This integration enhances both the explainability and transparency of the twins’ outputs, while improving their capacity to adapt to varying loads and environments (Wagg et al., Reference Wagg, Burr, Shepherd, Conti, Enzer and Niederer2025). Furthermore, hybrid modeling allows for interpretable diagnostics and generalization of the twin’s predictive ability, while maintaining computational efficiency. Purely physics-based models, while strong in interpretability, typically lack practical efficiency due to the slower computational speeds required for precise simulations. HDTs thus present a compelling advantage by combining the strengths of both physics-based models and data-driven approaches to deliver more reliable predictions and enable real-time monitoring and decision support across a wide range of applications (Wagg et al., Reference Wagg, Keith and Gardner2020).
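A minimal sketch of the soft-constraint (physics-informed) route is given below: the training loss mixes a data-misfit term with the residual of a known governing equation, here a damped oscillator. The surrogate and its parameters are purely illustrative, and finite differences stand in for the automatic differentiation a real implementation would use.

```python
import numpy as np

# Sketch of the "soft constraint" idea: the loss combines data misfit with the
# residual of the known physics  m*u'' + c*u' + k*u = 0. Everything is illustrative.

m, c, k = 1.0, 0.2, 4.0
t = np.linspace(0.0, 5.0, 200)

def surrogate(theta, t):
    """Toy trainable model: damped cosine with learnable amplitude/decay/frequency."""
    a, b, w = theta
    return a * np.exp(-b * t) * np.cos(w * t)

def loss(theta, t_data, u_data, lam=1.0):
    u = surrogate(theta, t)
    du = np.gradient(u, t)                    # finite differences stand in for autodiff
    ddu = np.gradient(du, t)
    physics = np.mean((m * ddu + c * du + k * u) ** 2)   # governing-equation residual
    data = np.mean((surrogate(theta, t_data) - u_data) ** 2)
    return data + lam * physics               # lam weights physics against data

# Sparse, noisy observations of the true response:
rng = np.random.default_rng(2)
t_data = np.array([0.0, 1.0, 2.5, 4.0])
u_data = np.exp(-0.1 * t_data) * np.cos(1.99 * t_data) + 0.01 * rng.standard_normal(4)

print(loss(np.array([1.0, 0.1, 1.99]), t_data, u_data))
```

In a hard-constraint (physics-encoded) variant, the oscillator dynamics would instead be built into the model architecture itself, so that no residual penalty is needed.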
In this paradigm, there is an incipient subclass of DTs that is expected to lead the next developments in the domain: the cognitive digital twin (CDT) (Abburu et al., Reference Abburu, Berre, Jacoby, Roman, Stojanović and Stojanovic2020; Unal et al., Reference Unal, Albayrak, Jomâa and Berre2022). Cognition refers to the set of abilities that encompass sensing, thinking, and reasoning (Bundy et al., Reference Bundy, Chater and Muggleton2023). While research applications that mimic cognition are still limited (the most common use case being large language models), the appropriate design of algorithms can lead to the integration of some of these abilities. The emerging concept of cognitive, or smart, digital twins (CDT) refers to systems that can interact with both physical and virtual environments to autonomously make smarter decisions based on context (Abburu et al., Reference Abburu, Berre, Jacoby, Roman, Stojanovic and Stojanovic2020; Zheng et al., Reference Zheng, Lu and Kiritsis2022). Although both HDTs and CDTs use ML to enrich themselves, HDTs tend to use data and ML to fill in gaps in the knowledge of the system. In contrast, CDTs use data for complex interpretation—also called perception (Moya et al., Reference Moya, Badías, González, Chinesta and Cueto2023)—reasoning (autonomously making decisions about their performance), automatic calibration for improved decision-making (Arcieri et al., Reference Arcieri, Wölfle and Chatzi2021), and interaction with the user. While one of the outcomes can be the enrichment of HDTs, we expect CDTs to more comprehensively capture the relationship between data and physics models. The expert in the loop complements the cognitive and interoperability requirements of CDTs (Niloofar et al., Reference Niloofar, Lazarova-Molnar, Omitaomu, Xu and Li2023). The incorporation of the human cognitive dimension within the digital twin paradigm leverages expert and experiential knowledge, serving as a crucial facilitator in understanding the underlying rationale of decisions and their appropriateness within a specific context. Consequently, the expert-in-the-loop paradigm underscores the significance of model explainability, a salient feature during various interaction phases within a CDT.
When extending prediction/estimation at the system level, DTs may require the incorporation of representations and simulations of interconnected systems or components (Heussen et al., Reference Heussen, Koch, Ulbig and Andersson2011; Ouyang, Reference Ouyang2014; Schluse et al., Reference Schluse, Priggemeyer, Atorf and Rossmann2018; Liang and Xie, Reference Liang and Xie2021). Such representations are defined as System-Level Models. For example, energy system network models (Heussen et al., Reference Heussen, Koch, Ulbig and Andersson2011; Ouyang et al., Reference Ouyang, Xu, Zhang and Huang2017) provide a detailed understanding of how energy flows through various components, helping to optimize energy consumption and identify potential inefficiencies.
AR (augmented reality), VR (virtual reality), and DT technology connect the physical and digital worlds (Badías et al., Reference Badías, Curtit, González, Alfaro, Chinesta and Cueto2019; Moya et al., Reference Moya, Badías, Alfaro, Chinesta and Cueto2022; Vettori et al., Reference Vettori, Gomes, Di Lorenzo, Peeters and Chatzi2023), enhancing user interfaces to improve understanding, collaboration, and decision-making in various fields (Michalik et al., Reference Michalik, Kohl and Kummert2022). Specifically, AR allows users to overlay digital information onto the real world, enhancing the ability to understand complex systems and processes in situ. VR, in contrast, creates a completely immersive simulation environment that is ideal for training scenarios, safety drills, and visualization of scenarios that are either dangerous or impractical to replicate in the real world. Together, AR and VR enhance DTs by improving visualization, interaction, and simulation capabilities, allowing stakeholders to analyze potential outcomes in a controlled virtual setting and facilitating more informed decision-making. This proactive approach transforms industry practices in forecasting, troubleshooting, and optimizing operations, further establishing digital twins as essential in digital transformation.
Virtual environments often use virtual sensing to simulate the behavior of sensors that exist in the real world. Although remote sensing facilitates the creation of accurate DTs of infrastructure systems (Dorafshan et al., Reference Dorafshan, Thomas and Maguire2018; Phillips and Narasimhan, Reference Phillips and Narasimhan2019; Bado et al., Reference Bado, Tonelli, Poli, Zonta and Casas2022; Kaartinen et al., Reference Kaartinen, Dunphy and Sadhu2022), there are still scenarios where it is impractical, expensive, or insufficient, such as load assessment and performance prediction for DTs of wind turbine blades (Vettori et al., Reference Vettori, DiLorenzo, Peeters and Chatzi2022). These virtual sensors generate data within a virtual environment, which can then be used to simulate realistic scenarios, test algorithms for sensor data processing and analysis, and perform dynamic adaptation within virtual environments.
3.2. Role of Internet of Things, real-time data analytics
The Internet of Things (IoT) involves sensor selection, deployment, acquisition, and connectivity. IoT represents not only the deployed sensing network, but also the purpose of connecting and transferring information. Most of the information comes in the form of time series or image-based representations, collected via appropriate compression schemes. IoT regimes often involve multiple and heterogeneous or multimodal data sources. Hence, DTs must be designed to flexibly handle diversified types of data input, which is usually addressed via data fusion. Even though some measurements (strains, pressure, temperature) can be directly correlated to quantities of interest, this is not true for other sources, which deliver indirect information (such as vibration-based measurements). Physically infused hybrid modeling is required to extract physical insights from such diverse and indirect data.
In this context, we revisit the previously introduced concept of RTDTs, which is based on real-time performance, reflecting a growing industry demand. It is important to properly define what real time implies in practice and to consider the appropriate time scale at which to assess the performance of the system and the required data flow rate. We define an RTDT as a digital twin that evolves synchronously with its physical counterpart, measuring and processing the changes that occur in the physical counterpart, correspondingly updating the virtual replica, and possibly implementing feedback (in the form of actions) to the physical asset, in an online fashion (Zipper and Diedrich, Reference Zipper and Diedrich2019). However, achieving perfectly synchronous, hard real-time response with minimal delays and high sampling rates can be inefficient, requiring excessive resources and infrastructure, and increasing the risks of overhead and latency. Thus, “real-time” performance in a DT varies depending on its purpose, ranging from immediate to periodic updates, influenced by data collection rates and the timing of related actions or decisions.
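The following sketch illustrates one pragmatic reading of this soft real-time notion: the twin consumes only the freshest measurement, drops stale backlog, and monitors its own synchronization latency against a use-case-specific budget. Function names and the budget value are hypothetical.

```python
import time
from collections import deque

# Sketch of a soft real-time twin update loop. `read_sensor` and `update_twin`
# are hypothetical placeholders for acquisition and assimilation steps.

BUDGET_S = 0.1                      # acceptable twin lag for this use case

def read_sensor(buffer: deque):
    """Return the newest (timestamp, value) and discard older backlog."""
    if not buffer:
        return None
    latest = buffer[-1]
    buffer.clear()                  # skip stale samples rather than fall behind
    return latest

def update_twin(state: float, value: float) -> float:
    """Placeholder assimilation step (e.g., a Kalman correction)."""
    return 0.8 * state + 0.2 * value

buffer = deque([(time.time(), 1.0)])
state = 0.0
sample = read_sensor(buffer)
if sample is not None:
    t_meas, value = sample
    state = update_twin(state, value)
    latency = time.time() - t_meas   # synchronization KPI, checked online
    if latency > BUDGET_S:
        print(f"twin lagging by {latency:.3f}s; degrade update rate or model fidelity")
```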
3.3. The smart data paradigm
The data collection process can pose challenges that call for a comprehensive framework for intelligent data collection, processing, and use. Table 1 summarizes the primary sources of data used in the construction of DTs. System loads and response data are critical because they provide real-time feedback on infrastructure performance and condition, forming the basis for operational digital twins; external environment data help in understanding how external factors influence infrastructure performance; historical and domain knowledge helps to identify patterns and trends that inform predictive maintenance and operational optimizations; geospatial and connectivity data are essential for simulating scenarios in digital twins and improving the accuracy of the interactions and dependencies modeled. Wireless technology plays a key role in data acquisition and communication for DTs; looking ahead, 6G networks are potential enablers in meeting synchronization, delay, and accuracy requirements (Bariah et al., Reference Bariah, Sari and Debbah2023).
Table 1. Summary of main sources of data for DTs

Early definitions of the incipient concept of smart data refer to the extraction of valuable information from Big Data to support decision-making (Iafrate, Reference Iafrate2014; Lenk et al., Reference Lenk, Bonorden, Hellmanns, Roedder and Jaehnichen2015). However, this terminology has evolved to refer to the formulation of data practices that focus on answering four questions, as detailed in Chinesta et al. (Reference Chinesta, Cueto, Abisset-Chavanne, Duval and Khaldi2020): (1) what data to collect, (2) where to deploy sensors to extract relevant information, (3) when and for how long to deploy the system, and (4) at what scale. As a result, the so-called smart data pipeline possesses some specific characteristics. A key trait is trustworthiness, ensuring reliability, accuracy, and credible sources through robust data collection, quality assurance, and adherence to governance standards (Bicevskis et al., Reference Bicevskis, Bicevska and Karnitis2017; Hong and Huang, Reference Hong and Huang2017; Kirchen et al., Reference Kirchen, Schütz, Folmer and Vogel-Heuser2017).
The smart data paradigm improves downstream tasks related to cognitive capabilities (advanced analysis, interpretation, and learning) (Abburu et al., Reference Abburu, Berre, Jacoby, Roman, Stojanovic and Stojanovic2020; Zheng et al., Reference Zheng, Lu and Kiritsis2022). A very intuitive classification of digital representations in relation to their function is offered in Wagg et al. (Reference Wagg, Worden, Barthorpe and Gardner2020). Using techniques such as AI, ML, and natural language processing, cognitive data systems understand and derive insights from complex data sets to reach a desired characteristic, namely, interpretability. This is a pivotal characteristic for hybrid modeling (Champaney et al., Reference Champaney, Chinesta and Cueto2022) and can typically be achieved through the appropriate exploitation of prior knowledge on the system and its behavior (Chinesta et al., Reference Chinesta, Cueto, Abisset-Chavanne, Duval and Khaldi2020). Akin to the concept of gathering meaningful data is the concept of active learning, which allows for targeting maximal information extraction based on minimal data (Settles, Reference Settles2009; Chabanet et al., Reference Chabanet, El-Haouzi and Thomas2022). Active learning makes use of human expertise (Khamesi et al., Reference Khamesi, Shin and Silvestri2020) or ML schemes, allowing selective guidance on labeling specific unlabeled samples, optimizing resource use, and integrating human insights into the learning process.
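As a minimal sketch of active learning in the smart data sense, the example below greedily selects the next measurement location where a Gaussian process surrogate is most uncertain; the one-dimensional response and all numbers are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Sketch of active learning for "where to measure next": query the candidate
# location with the largest predictive uncertainty. Setup is illustrative.

rng = np.random.default_rng(3)

def response(x):                     # hidden true system response (toy)
    return np.sin(3 * x)

X = rng.uniform(0, 3, (4, 1))        # initial sparse measurements
y = response(X).ravel()

candidates = np.linspace(0, 3, 200).reshape(-1, 1)
for _ in range(5):
    gp = GaussianProcessRegressor().fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(std)]          # most informative location
    X = np.vstack([X, x_next])                   # "deploy a sensor" there
    y = np.append(y, response(x_next).item())

print(X.ravel().round(2))            # measurement schedule grows greedily
```

In practice, the acquisition criterion could also weigh accessibility and cost of instrumenting a location, or defer to a human expert for labeling, as noted above.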
However, an important challenge is the fact that not all required data can be measured. Internal variables, such as energy and entropy, cannot be directly measured, and some variables, such as stress and damage, are difficult to access accurately. Observations are also partial in space and time, and it is important to understand where and when to measure, to optimize data collection efficiency and ensure data relevance (Bigoni et al., Reference Bigoni, Zhang and Hesthaven2020; Di Lorenzo et al., Reference Di Lorenzo, Champaney, Marzin, Farhat and Chinesta2023). Data completeness refers to the notion of ensuring the availability of all relevant information for informing the digital asset, to enhance the reliability and applicability of the model prediction. With the appropriate data collected and a proper understanding of the system, hidden patterns and information can be recovered (Schöbi and Chatzi, Reference Schöbi and Chatzi2016; Liang et al., Reference Liang, Liu, Zhao, Liu, Gu, Sun and Dong2020; Champaney et al., Reference Champaney, Amores, Garois, Irastorza-Valera, Ghnatios, Montáns, Cueto and Chinesta2022; Moya et al., Reference Moya, Badias, Gonzalez, Chinesta and Cueto2022; Bermejo-Barbanoj et al., Reference Bermejo-Barbanoj, Moya, Badías, Chinesta and Cueto2024; Liu et al., Reference Liu, Lai, Stoura, Bacsa and Chatzi2025). Data quality and observation stochasticity also need to be considered in the hybrid modeling paradigm (Vettori et al., Reference Vettori, Di Lorenzo, Peeters and Chatzi2024; Liu et al., Reference Liu, Lai, Stoura, Bacsa and Chatzi2025), to propagate and evaluate uncertainty in the prediction and assess its value and trustworthiness.
3.4. Hybrid digital twin assessment
Evaluation of the digital twin in both the design and operation phases is essential for its real-world application. Such assessment not only ensures the trustworthiness and usefulness of the twin but also suggests a path toward potential certification. For this purpose, Key Performance Indicators (KPIs) may be defined to correctly evaluate and verify the validity of the twin (Papacharalampopoulos et al., Reference Papacharalampopoulos, Giannoulis, Stavropoulos and Mourtzis2020; Yang et al., Reference Yang, Langley and Andrade2022). In this work, we propose five main categories in which to develop such indicators:
• Accuracy, reliability, and robustness: Indicators in this principal category must precisely assess how faithfully the DT corresponds to its real-world counterpart. Post-training, the predictive performance of HDT models can be appraised using established metrics, including accuracy, precision, recall, F1-score, and the confusion matrix (a minimal computation sketch is given at the end of this subsection). Such evaluations are crucial not only for confirming the validity of the approach during the design verification phase but also for ensuring the reliability and operational readiness of HDT designs and implementations in forecasting infrastructure failures and maintenance requirements. In this case, reliance on physics in the enrichment of the HDT can be crucial for achieving appropriate accuracy standards.
• Synchronization: As discussed in the previous section, RTDTs provide information that matches the correct time scale of the real twin system, so that decisions can be correctly assessed using the information of the virtual replica. Important KPIs here include synchronization latency, update frequency, and twin response time (Psarommatis and May, Reference Psarommatis and May2023).
• Scalability and flexibility: These terms pertain to the DT methodology and are independent of specific use cases. Assessing the flexibility of the DT is crucial for comparing various DT methodologies and for deriving significant insights through examining their flexibility (Psarommatis and May, Reference Psarommatis and May2023). This may also relate to the increasing complexity of infrastructure and its evolving behaviors. Hence, HDTs serve as a robust mechanism to meet flexibility requirements, aligning with the state of the real twin throughout its entire life cycle.
• Interoperability with other systems: Interoperability refers to the seamless cooperation and data exchange between different systems without manual intervention. For DT systems in infrastructure, it ensures effective functioning within a broad network of tools and technologies (Budiardjo and Migliori, Reference Budiardjo and Migliori2021; Klar et al., Reference Klar, Arvidsson and Angelakis2023).
• Cost effectiveness: These KPIs assess the financial impact of implementing and maintaining the DT against potential savings in operations and maintenance. They include costs for sensor networks, computing infrastructure, and software development, usually represented by metrics such as the return on investment (ROI) (Chauhan, Reference Chauhan2020; Bassey et al., Reference Bassey, Opoku-Boateng, Antwi and Ntiakoh2024).
Recent developments have placed a growing emphasis on sustainable objectives. These goals seek to align with sustainable development goals and the needs of people and territories, ensuring that progress and innovations promote enduring ecological and societal balance (González et al., Reference González Chávez, Bärring, Frantzén, Annepavar, Gopalakrishnan and Johansson2022).
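As referenced in the accuracy bullet above, the sketch below illustrates how two of the proposed KPI families might be computed in practice, using standard classification metrics for the twin's failure forecasts and simple statistics for synchronization latency; all numbers are invented for illustration.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Illustrative computation of two KPI families: predictive accuracy of the
# twin's failure forecasts, and synchronization latency statistics.

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 0])   # observed failures (hypothetical)
y_pred = np.array([0, 1, 1, 1, 0, 0, 0, 0])   # twin's forecasts (hypothetical)

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1       :", f1_score(y_true, y_pred))

# Synchronization KPI: lag between physical events and twin updates (seconds).
lags = np.array([0.04, 0.06, 0.05, 0.30, 0.05])
print("mean latency:", lags.mean(), "| 95th pct:", np.percentile(lags, 95))
```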
4. Applications in management and resilience of smart infrastructures
The information generated and transformed by HDTs is expected to support long-term decision-making through the life cycle of an asset. What needs to be further highlighted is that these assets are usually organized in an interdependent manner to supply a specific service or functionality. Hence, hybrid twin–enhanced knowledge on components should be assembled and transferred to the system level to enable informed and comprehensive decisions in support of infrastructure management and recovery from extremes, as mandated by the need for resilience.
The term resilience is commonly employed in infrastructure engineering to assess the capacity of a system to endure and bounce back from disturbances or disruptions (Bruneau et al., Reference Bruneau, Chang, Eguchi, Lee, D O’Rourke, Reinhorn, Shinozuka, Tierney, Wallace and Von Winterfeldt2003; Ouyang et al., Reference Ouyang, Dueñas-Osorio and Min2012; Labaka et al., Reference Labaka, Hernantes and Sarriegi2016). For better understanding and visualization, Figure 3 depicts a time-evolution resilience curve in terms of performance/functionality of an infrastructure system, under the impact of both long-term effects (e.g., climate change, aging, corrosion, fatigue, deflection) and short-term extreme events (e.g., earthquake, flood, high gusts) throughout its life cycle. Infrastructure resilience is commonly quantified using metrics and indicators (e.g., residual functionality, downtime, and recovery time) that can be computed from actual data or simulated based on corresponding resilience curves (Poulin and Kane, Reference Poulin and Kane2021). Notably, under normal circumstances, the loss of functionality in an infrastructure system is typically not significant, and it takes a long time for performance to degrade below the performance threshold. This is attributed to the low probability of multiple component failures occurring simultaneously within the same infrastructure system. However, the situation changes under extreme conditions, where several components become more likely to fail. Consequently, the functionality of the network may experience a sudden and unexpected reduction below the predefined target threshold during such extreme conditions (Mohammadi and Taylor, Reference Mohammadi and Taylor2021). This highlights the importance of considering and preparing for exceptional scenarios that could lead to simultaneous failures, ensuring the resilience of infrastructure systems under adverse circumstances (Francis and Bekera, Reference Francis and Bekera2014; Didier et al., Reference Didier, Broccardo, Esposito and Stojadinovic2018; Rehak et al., Reference Rehak, Senovsky and Slivkova2018; Fang and Sansavini, Reference Fang and Sansavini2019; Blagojević and Stojadinović, Reference Blagojević and Stojadinović2022; Arcieri et al., Reference Arcieri, Hoelzl, Schwery, Straub, Papakonstantinou and Chatzi2023).

Figure 3. Time-evolution resilience curve of component/asset/infrastructure network exposed to various environmental changes throughout their life cycle, with and without the monitoring system. Under long-term impacts of climate change, the performance degrades gradually: a red curve represents minimal maintenance leading to the lowest service life; a yellow curve signifies periodic maintenance misaligned with optimal timings, resulting in medium service life levels; a green curve indicates proactive maintenance based on health monitoring, which maximizes service life by predicting and addressing declines at critical thresholds. Under short-term impacts of extreme events, different strategies affect performance decline and recovery: a black curve represents typical scenarios; a red curve depicts poor repair sequencing that reduces efficiency; a yellow curve depicts optimized repairs for faster recovery; and a green curve shows how pre-disaster fortification minimizes damage and speeds up recovery.
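To make the resilience quantification concrete, the sketch below computes a common resilience index, the normalized integral of functionality Q(t) over the disruption-recovery window, together with a downtime metric, on a synthetic curve of the kind shown in Figure 3; the shape and all numbers are illustrative.

```python
import numpy as np

# Sketch of a standard resilience index: integrate normalized functionality Q(t)
# over the disruption-recovery window [t0, t1]. The curve below is synthetic.

t = np.linspace(0, 100, 1001)                 # time in days
Q = np.ones_like(t)                           # functionality in [0, 1]
Q[(t >= 20) & (t < 50)] = 0.4                 # sudden loss at the extreme event
recov = (t >= 50) & (t < 80)                  # linear recovery back to full service
Q[recov] = 0.4 + 0.6 * (t[recov] - 50) / 30

t0, t1 = 20, 80                               # event onset to full recovery
window = (t >= t0) & (t <= t1)
R = np.trapz(Q[window], t[window]) / (t1 - t0)   # R = 1.0 would mean no loss

downtime = np.sum(Q < 0.8) * (t[1] - t[0])    # days below a target threshold
print(f"resilience index R = {R:.2f}; downtime below target = {downtime:.1f} days")
```

Comparing such indices across the maintenance and repair strategies sketched in Figure 3 is one way the monitoring-informed (HDT-supported) curves can be valued against the unmonitored baseline.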
4.1. Benefits and status of DT-powered decision-making
DTs are becoming indispensable in the asset management process, offering substantial benefits in decision-making across various life cycle stages. Starting in the design phase, DTs facilitate rapid prototyping and testing, allowing for iterative refinement based on simulated outcomes rather than solely retrospective analyses. Traditional methods, often constrained by slower feedback loops and high costs of physical prototyping, are significantly outpaced by DT-enabled processes. As the project transitions into the construction phase, DTs seamlessly integrate real-time data from various sources, improving coordination across teams and technology systems. This integration helps predict and mitigate potential failures, reducing delays and associated costs (Medina and Hernandez, Reference Medina and Hernandez2025). During the operational and maintenance phase, conventional methods that depend solely on historical data, such as past performance logs, maintenance records, and component failure rates, can limit predictive capabilities, leading to suboptimal policies that may not anticipate future challenges. In contrast, DTs utilize AI to blend historical data with real-time operational data, improving predictive capabilities and enabling proactive policies by predicting failures before they happen, unlike traditional methods that react to problems as they occur. This predictive capacity not only reduces downtime but also extends the asset’s life expectancy. Additionally, limited data integration in conventional decision-making, involving disparate sources of information, hampers holistic decision-making, particularly for complex and interconnected infrastructure systems. This fragmentation increases the risk of infrastructure mismanagement and potential failures. On the contrary, DTs enable adaptation to evolving conditions, technological advancements, and infrastructure changes, enhancing system resilience. As assets approach the decommissioning phase, DTs contribute to sustainability by optimizing resource use and reducing emissions. They provide simulations that predict the environmental impacts of decommissioning processes, ensuring that the methods employed minimize waste and adhere to environmental standards.
To this end, the integration of hybrid digital twinning in decision-making motivates a paradigm shift by providing proactive, timely, and simulation-driven insights, promoting adaptability, and improving the overall understanding of the system compared to traditional decision-making frameworks (Makhoul et al., Reference Makhoul, Roohi, van de Lindt, Sousa, Santos, Argyroudis, Barbosa, Derras, Gardoni and Lee2024). This evolution is particularly significant in complex and dynamic environments, where a more responsive and accurate decision-making process is crucial. A so-called smart decision refers to a policy that informs the optimal sequence of actions that enhance resilience at the system level at minimal cost, dictating which actions to take, along with their timing and location, from a system-level perspective.
The growing recognition of the unparalleled efficacy of digital and hybrid twin models is manifesting in their escalating deployment within tangible infrastructure systems. As asset owners and managers increasingly acknowledge the transformative impact these models exert, there is a discernible trend toward incorporating DTs in diverse sectors of real-world infrastructure (Kuo et al., Reference Kuo, Pilati, Qu and Huang2021; Zhao et al., Reference Zhao, Feng, Chen and de Soto2022). This surge in adoption is a testament to the significant advantages these models confer in terms of predictive maintenance capabilities, operational efficiency improvement, smart city planning, and overall resilience improvement in the face of dynamic challenges. This trend is expected to continue and expand as DT technologies continue to evolve, offering innovative solutions to complex problems within the realm of infrastructure sustainability and emergency management.
4.2. Use cases
This section aims to elucidate the transformative impact of DT applications on strategic decision frameworks and the overall enhancement of infrastructure system resilience.
Predictive Maintenance stands as a primary use case for DT technology. Interactive digital representations allow for continuous monitoring, analysis, and intervention on infrastructure components. The integration of sensor and historical data with predictive models empowers decision-makers to optimize system performance and anticipate failure. This facilitates proactive scheduling of maintenance activities, minimizing risk and enhancing reliability and longevity. Recent representative case studies include:
• condition-based maintenance planning of a railway system based on the geometric measurement of track, recorded periodically by a mobile sensing system on the train (Arcieri et al., Reference Arcieri, Hoelzl, Schwery, Straub, Papakonstantinou and Chatzi2023);
• diagnostics and prognostics of wind turbine structural health based on time-series environmental measurements, vibration data (Bogoevska et al., Reference Bogoevska, Spiridonakos, Chatzi, Dumova-Jovanoska and Höffer2017), and supervisory control and data acquisition (SCADA) data (Schlechtingen et al., Reference Schlechtingen, Santos and Achiche2013; Urmeneta et al., Reference Urmeneta, Izquierdo and Leturiondo2023);
• fault diagnosis and condition-based maintenance of overhead power transmission lines utilizing the Cablewalker robotic system, consisting of a laser scanner, a stereo camera, or a magnetic scanner (Tajnsek et al., Reference Tajnsek, Pihler and Roser2011; Gitelman et al., Reference Gitelman, Kozhevnikov and Kaplin2020);
• predictive maintenance of manufacturing facilities by monitoring parameters from sensors embedded within equipment, such as real-time temperature, vibration, and lubricant condition of motors, bearings, and gearboxes (Olivotti et al., Reference Olivotti, Dreyer, Lebek and Breitner2019; Yu et al., Reference Yu, Dillon, Mostafa, Rahayu and Liu2019);
• autonomous flaw detection on bridges based on images collected through an inspection robot or unmanned aerial systems (Dorafshan et al., Reference Dorafshan, Thomas and Maguire2018; Galdelli et al., Reference Galdelli, D’Imperio, Marchello, Mancini, Scaccia, Sasso, Frontoni and Cannella2022);
• BIM-augmented models based on drone-imaged damage detection enhanced with AI (To et al., Reference To, Liu, Bin Muhammad Hairul, Davis, Lee, Hesse and Nguyen2021);
• temperature prediction from the building scale (BIM buildings) to the city scale (CityGML), taking into consideration major anthropogenic heat sources and wind fluid dynamics, through the Virtual Singapore digital twin (VSdt) (Gobeawan et al., Reference Gobeawan, Lin, Tandon, Yee, Khoo, Teo, Yi, Lim, Wong, Wise, Cheng, Liew, Huang, Li, Teo, Fekete and Poto2018; Ignatius et al., Reference Ignatius, Wong, Martin and Chen2019).
Operation Optimization constitutes a second major use case. In logistics and supply chain management, agent-based DT simulations can integrate real-time logistics data, external demand trends, and optimization algorithms, helping to streamline operations and optimize inventory (Park et al., Reference Park, Son and Noh2021). In smart manufacturing, real-time manufacturing data, historical performance metrics, and dynamic simulation models can be integrated into a deep reinforcement learning (DRL)-based digital model to identify bottlenecks and refine manufacturing practices, leading to increased efficiency and cost savings (Xia et al., Reference Xia, Sacco, Kirkpatrick, Saidy, Nguyen, Kircaliali and Harik2021). For building energy management, digital twin–based methods can use building sensor networks and heating/cooling data to optimize energy design, improve user satisfaction, and reduce energy costs (Bortolini et al., Reference Bortolini, Rodrigues, Alavi, Vecchia and Forcada2022). In traffic management, deep learning algorithms can exploit real-time traffic data and dynamic simulation models to optimize signal timings under disturbances and reduce congestion (Rasheed et al., Reference Rasheed, Yau and Low2020).
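As a toy illustration of the reinforcement-learning flavor of these optimization loops — not the cited authors' implementations — the sketch below applies tabular Q-learning to a two-phase traffic signal: the state is which approach is congested, the action is which phase to serve, and the reward penalizes serving the uncongested one. Environment, rewards, and hyperparameters are all illustrative assumptions.

```python
# Hypothetical tabular Q-learning sketch standing in for the DRL-based
# operation-optimization loops cited above. The toy environment is a
# two-phase traffic signal; everything here is an illustrative assumption.
import random

N_STATES, N_ACTIONS = 2, 2          # congested approach (0/1), phase served (0/1)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Reward +1 for serving the congested approach, -1 otherwise."""
    reward = 1.0 if action == state else -1.0
    next_state = random.randrange(N_STATES)   # congestion shifts randomly
    return reward, next_state

state = 0
for _ in range(5000):
    # Epsilon-greedy action selection.
    action = (random.randrange(N_ACTIONS) if random.random() < EPS
              else max(range(N_ACTIONS), key=lambda a: Q[state][a]))
    reward, nxt = step(state, action)
    # Standard Q-learning temporal-difference update.
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
    state = nxt

print("Learned policy:", [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)])
```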
Urban Planning undergoes a revolutionary transformation with the application of digital twin technology, particularly in the realm of Smart and Green City Development (Deng et al., Reference Deng, Zhang and Shen2021; Caprari et al., Reference Caprari, Castelli, Montuori, Camardelli and Malvezzi2022). DTs can be used to create virtual representations of entire cities by incorporating weather conditions, geospatial data, traffic flow simulations, building structure, and infrastructure models, to ensure a more sustainable and efficient urban environment. Recent representative case studies underscore the imperative of reevaluating urban planning in light of climate change repercussions (as observed in Dublin DT (White et al., Reference White, Zink, Codecá and Clarke2021)), evolving energy needs (exemplified by research in Cambridge DT (Nochta et al., Reference Nochta, Wan, Schooling and Parlikad2021)), biodiversity preservation initiatives (as evidenced in Singapore DT (Gobeawan et al., Reference Gobeawan, Lin, Tandon, Yee, Khoo, Teo, Yi, Lim, Wong, Wise, Cheng, Liew, Huang, Li, Teo, Fekete and Poto2018; Ignatius et al., Reference Ignatius, Wong, Martin and Chen2019)), governance frameworks (as analyzed in studies focused on Cambridge, Singapore, and Zurich DTs (Ignatius et al., Reference Ignatius, Wong, Martin and Chen2019; Schrotter and Hürzeler, Reference Schrotter and Hürzeler2020; Nochta et al., Reference Nochta, Wan, Schooling and Parlikad2021)), land allocation dynamics and social equity considerations (as exemplified in Herrenberg, Nigeria, and Zurich DTs (Dembski et al., Reference Dembski, Wössner, Letzgus, Ruddat and Yamu2020; Schrotter and Hürzeler, Reference Schrotter and Hürzeler2020; Enoguanbhor et al., Reference Enoguanbhor, Gollnow, Walker, Nielsen and Lakes2021)), and environmental quality assessments (as illustrated in Nigeria and Helsinki DTs (Enoguanbhor et al., Reference Enoguanbhor, Gollnow, Walker, Nielsen and Lakes2021; Hämäläinen, Reference Hämäläinen2021)). These studies advocate for urban planning strategies that prioritize flexibility, adaptability, and incremental adjustments to effectively address the multifaceted challenges facing modern cities.
Extreme event handling is receiving growing interest given the increasing frequency of extreme events (e.g., earthquakes, tornadoes, wildfires) experienced in recent years. DTs can play a crucial role in supporting decision-making by reducing the uncertainty of condition assessment and, in turn, improving the efficiency of emergency response (Makhoul et al., Reference Makhoul, Roohi, van de Lindt, Sousa, Santos, Argyroudis, Barbosa, Derras, Gardoni and Lee2024). Use cases include: HDTs (Dabrowski et al., Reference Dabrowski, Pagendam, Hilton, Sanderson, MacKinlay, Huston, Bolt and Kuhnert2023) that simulate and predict the spread of wildfires in real time by enhancing the physics-based fire characteristic model Spark (Miller et al., Reference Miller, Hilton, Sullivan and Prakash2015) with spatial, forcing, and weather information in a hybrid modeling structure, allowing decision-makers to efficiently plan evacuation routes, deploy firefighting resources strategically, and communicate timely warnings to the community (Zhong et al., Reference Zhong, Cheng, Kasoar and Arcucci2023); a deep reinforcement learning (DRL)-based decision framework for rational transportation management under hurricanes, based on the monitoring of weather information and traffic flow (Li and Wu, Reference Li and Wu2022); and a spatial–temporal graph deep learning model that uses heterogeneous community features (physics-based data and human-sensed data) to predict urban flooding in real time, improving risk mapping for better situational awareness and response strategies, verified using data from 2017 Hurricane Harvey in Harris County (Farahmand et al., Reference Farahmand, Xu and Mostafavi2023).
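The hybrid structure common to these examples can be distilled into a simple pattern: a fast physics-based predictor augmented by a data-driven correction learned from the residuals between physics predictions and observations. The sketch below illustrates this pattern on synthetic data; the "physics" law, the data, and the regression choice are all illustrative assumptions rather than any of the cited models.

```python
# Hypothetical hybrid-twin sketch: a cheap physics-based predictor is
# corrected by a data-driven model trained on the residuals between
# physics predictions and observations. All data here are synthetic.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
wind = rng.uniform(0, 10, 200)          # synthetic forcing input

def physics_model(w):
    """Simplified linear spread-rate law (stand-in for a model such as Spark)."""
    return 0.5 * w

# Synthetic "observed" spread rate: the physics law misses a quadratic term.
observed = 0.5 * wind + 0.05 * wind**2 + rng.normal(0, 0.2, wind.size)

# Learn the physics residual from data.
residual = observed - physics_model(wind)
features = np.column_stack([wind, wind**2])
corrector = Ridge(alpha=1.0).fit(features, residual)

def hybrid_model(w):
    """Physics prediction plus learned data-driven correction."""
    x = np.column_stack([w, w**2])
    return physics_model(w) + corrector.predict(x)

w_test = np.array([2.0, 8.0])
print("physics only:", physics_model(w_test))
print("hybrid      :", hybrid_model(w_test))
```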
The common thread across these applications is the ability of DTs to provide a dynamic and data-driven foundation for informed decision-making. In essence, the combination of robust data, advanced modeling, and diverse use cases exemplifies the multifaceted impact and potential of DTs to revolutionize decision-making processes.
5. Future outlook
5.1. Future goals
As distilled in the analysis, two main objectives have been identified for future DTs. First, it is imperative to develop future-proof systems that not only draw insights from previous experience but also anticipate and adapt to forthcoming changes. This capability would enable proactive adjustment and resilience in unpredictable circumstances. Additionally, DTs are expected to work on multiple cross-connected levels, including infrastructure components, assets, individual systems, and system of systems. These levels reflect a hierarchical and integrated approach, where DTs not only replicate individual components but also encompass broader systemic interactions and dependencies, providing a scalable framework for proactive adjustments and resilience.
To set the foundation for achieving these goals, it is necessary to first work on a common language; equally essential is implementing frameworks to effectively organize the vast array of metadata and model information. This is where knowledge bases emerge as an indispensable tool for housing vital information, insights, and models (Marykovskiy et al., Reference Marykovskiy, Clark, Day, Wiens, Henderson, Quick, Abdallah, Sempreviva, Calbimonte and Chatzi2024). By leveraging knowledge bases and establishing uniform data models and vocabularies, organizations can promote smooth communication and cooperation within digital twin ecosystems. This common language not only encourages standardization and coherence of models but also fosters cooperation among stakeholders from different fields and sectors. In essence, it sets a solid foundation for more efficient and flexible digital twin solutions that can address complex real-world problems. This idea could also be expanded by maintaining a high-fidelity repository of assets (BIM, visual platforms) across domains.
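As a loose illustration of what such a uniform data model and controlled vocabulary might look like at the code level — a hypothetical sketch, not an established standard or the cited knowledge-base work — consider the following minimal asset and model metadata records:

```python
# Hypothetical uniform metadata records with a controlled vocabulary that
# different digital-twin components could exchange. Field names, vocabulary
# terms, and identifiers are illustrative assumptions, not a standard.
from dataclasses import dataclass, field
from enum import Enum

class AssetType(Enum):          # controlled vocabulary for asset classes
    BRIDGE = "bridge"
    WIND_TURBINE = "wind_turbine"
    POWER_LINE = "power_line"

class ModelKind(Enum):          # controlled vocabulary for model classes
    PHYSICS_BASED = "physics_based"
    DATA_DRIVEN = "data_driven"
    HYBRID = "hybrid"

@dataclass
class ModelRecord:
    kind: ModelKind
    version: str
    inputs: list[str]           # sensor channels the model consumes
    outputs: list[str]          # quantities of interest it predicts

@dataclass
class AssetRecord:
    asset_id: str
    asset_type: AssetType
    location: tuple[float, float]                 # (lat, lon)
    models: list[ModelRecord] = field(default_factory=list)

# Example record for a (fictional) bridge asset with one hybrid model.
bridge = AssetRecord("SG-BR-042", AssetType.BRIDGE, (1.29, 103.85))
bridge.models.append(ModelRecord(ModelKind.HYBRID, "1.2.0",
                                 inputs=["strain", "acceleration"],
                                 outputs=["remaining_fatigue_life"]))
print(bridge)
```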
Next, it is necessary to create a basis for actionably implementing hybrid modeling techniques and intelligent algorithms within a DT framework that creates value for assets. To this end, DTs must evolve toward decision support, with a focus on analysis tasks such as independently analyzing data, evaluating scenarios, and recommending actions with or without direct human intervention. An interpretable and well-founded use of AI/ML algorithms and online data streams will allow DTs to independently evaluate the present condition of a system, forecast future outcomes, and recommend the best course of action to attain pre-established goals.
Finally, a key goal in the development of DTs for smart infrastructures is quantifying the return on investment (ROI). Modeling the long-term benefits of DTs involves assessing both tangible and intangible factors over an extended period. The ultimate aim is to optimize stakeholders’ strategies for DTs so as to consolidate their implementation, develop future opportunities, and create value. To this end, a number of approaches for quantifying the Value of Information have recently been put forth and serve as foundational work (Memarzadeh and Pozzi, Reference Memarzadeh and Pozzi2016; Kamariotis et al., Reference Kamariotis, Chatzi and Straub2022; Zhang et al., Reference Zhang, Qin, Lu, Thöns and Faber2022; Saifullah et al., Reference Saifullah, Andriotis and Papakonstantinou2023).
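To make the Value of Information (VoI) concept concrete, the following toy preposterior calculation — with assumed probabilities and costs, not drawn from the cited works — computes VoI as the drop in expected cost when a repair decision is made after an imperfect inspection rather than on the prior alone:

```python
# Toy preposterior VoI calculation: the value of an imperfect inspection
# for a repair-vs-do-nothing decision. All probabilities and costs are
# illustrative assumptions.
P_DAMAGE = 0.2                 # prior probability the asset is damaged
C_REPAIR = 1.0                 # cost of repairing
C_FAIL = 10.0                  # cost incurred if damage is left unrepaired
P_DETECT = 0.9                 # P(positive inspection | damaged)
P_FALSE = 0.1                  # P(positive inspection | undamaged)

def expected_cost(p):
    """Cost of the optimal action for damage probability p: repair or do nothing."""
    return min(C_REPAIR, p * C_FAIL)

# Without information: decide on the prior alone.
cost_prior = expected_cost(P_DAMAGE)

# With information: Bayes-update on each inspection outcome, then average
# the optimal cost over the outcome probabilities.
p_pos = P_DETECT * P_DAMAGE + P_FALSE * (1 - P_DAMAGE)
p_dmg_given_pos = P_DETECT * P_DAMAGE / p_pos
p_dmg_given_neg = (1 - P_DETECT) * P_DAMAGE / (1 - p_pos)
cost_post = (p_pos * expected_cost(p_dmg_given_pos)
             + (1 - p_pos) * expected_cost(p_dmg_given_neg))

print(f"VoI = {cost_prior - cost_post:.3f}")   # positive: the inspection adds value
```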
5.2. Challenges
Driven by industrial demands on technological readiness and maturity, formal frameworks for the exploitation of DTs are emerging. Nevertheless, challenges persist in rendering DTs practical for real-world applications, as discussed below.
Adaptation to changing climates. Climate-related data, such as future weather patterns and extreme events, often involve uncertainties and may be incomplete. Inaccurate or insufficient data can compromise the reliability of digital twin predictions. Moreover, the amount and rate of data produced by sensors and IoT devices can exceed current infrastructure capabilities (Mashaly, Reference Mashaly2021), necessitating scalable strategies to handle and analyze the data flow and to reduce response latency; one such strategy is sketched below.
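One hypothetical scalable strategy for such high-rate streams is edge aggregation: summarize raw samples in a sliding window locally and forward only compact features to the twin. The sketch below, with assumed window size and features, illustrates the idea:

```python
# Hypothetical edge-aggregation sketch: high-rate sensor samples are
# summarized in a sliding window, and only the compact summary is
# transmitted to the twin. Window size and features are illustrative.
from collections import deque
import math

class EdgeAggregator:
    def __init__(self, window=1000):
        self.buf = deque(maxlen=window)   # sliding window of recent samples

    def push(self, sample: float):
        self.buf.append(sample)

    def summary(self) -> dict:
        """Compact window features forwarded in place of raw samples."""
        n = len(self.buf)
        mean = sum(self.buf) / n
        rms = math.sqrt(sum(x * x for x in self.buf) / n)
        peak = max(abs(x) for x in self.buf)
        return {"n": n, "mean": mean, "rms": rms, "peak": peak}

agg = EdgeAggregator(window=1000)
for i in range(10000):                    # simulate a high-rate accelerometer burst
    agg.push(math.sin(0.01 * i))
print(agg.summary())                      # only this summary is transmitted
```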
Open data exchange. Challenges in open data exchange include ambiguous data ownership, data privacy concerns (Wang et al., Reference Wang, Su, Guo, Dai, Luan and Liu2023), and variability in data quality and consistency, leading to potential disputes and limiting the availability of relevant data for digital twin systems.
Security and trustworthiness of algorithms/data. Data may be corrupted, tampered with, or manipulated; algorithms used in DTs may exhibit bias and may not undergo thorough validation; and the explainability of AI models is often limited. All of these issues can lead to inaccurate representations and flawed decision-making outcomes (Amerirad et al., Reference Amerirad, Cattaneo, Kenett and Luciano2023).
Standardization and certification of DT. Current digital twin standards, including the IFC and ISO series (ISO/TR 24464:2020; ISO 23247:2021; ISO 19650-1:2018; ISO 37100:2016; ISO/IEC AWI 30173; ISO/IEC AWI 30172), IEEE series (IEEE SA P2806.1; IEEE SA P3144), IEC series (IEC 61850:2024; IEC 62832:2020), and ITU series (ITU-T Y.3090, Interoperability framework of digital twin systems in smart cities and communities), encounter limitations hindering their widespread adoption and effectiveness. One notable challenge is the lack of comprehensive coverage across industries and application domains, leading to interoperability issues. Additionally, the rapid evolution of digital twin technologies outpaces standard development, resulting in outdated guidance for emerging use cases. Achieving consensus among stakeholders and allocating resources for compliance also pose significant challenges, especially for smaller organizations or those with legacy systems (Bicevskis et al., Reference Bicevskis, Bicevska and Karnitis2017; Hong and Huang, Reference Hong and Huang2017; Kirchen et al., Reference Kirchen, Schütz, Folmer and Vogel-Heuser2017; Burns et al., Reference Burns, Cosgrove and Doyle2019).
Dealing with false positives/responsibility for decisions. In a legal context, the attribution of responsibility becomes crucial, as stakeholders may question accountability for adverse effects resulting from false positives or erroneous decisions. This challenge is exacerbated by the evolving nature of digital twin technologies, making it essential to navigate legal frameworks that may not have caught up with the rapid pace of advancement.
Human element/ethics to alleviate dangers from automation. Balancing the advantages of automation with ethical considerations, such as fairness, accountability, and transparency, is essential to prevent dangers stemming from unchecked automation. A robust framework is needed to integrate human expertise and ethical guidelines into automated decision-making in DTs, mitigating risks and building trust. In addition, training users to understand and work with the twin is crucial for the appropriate interpretation and use of its information.
Addressing these challenges requires collaborative efforts from stakeholders across industries, involving policymakers, standards organizations, technology providers, and end-users, to develop frameworks, standards, and best practices that promote the responsible and effective use of DTs for decision-making in a rapidly evolving technological landscape.
5.3. Opportunities
Recent perspective papers have highlighted the limitations of current digital twin tools in urban planning, particularly their focus on short-term goals versus the long-term horizon of city planning policies (Batty, Reference Batty2024; Bettencourt, Reference Bettencourt2024). They note issues such as staticity, limited aggregation capacity, and a primary focus on visualization. Emphasizing the need for improvement, they advocate for modeling multilevel, multidomain, and multi-spatiotemporal-scale networks to better capture interactions and the dynamic nature of urban environments facing various stressors. Furthermore, these papers underscore the importance of robust verification, validation, and uncertainty quantification methods to enhance the reliability and accuracy of digital twin models. In addition, Mohammadi and Taylor (Reference Mohammadi and Taylor2021) discuss the importance of utilizing Smart City DTs for disaster decision-making in cities facing various stressors. They emphasize the integration of fast and slow modes in decision-making processes and highlight the need for capturing, predicting, and adapting to urban dynamics at varying paces to effectively manage disaster-related mortality and economic losses.
The ongoing standardization of DTs presents numerous opportunities for industries and stakeholders. Standardized frameworks and protocols facilitate seamless interoperability and integration, fostering collaboration and innovation while reducing implementation costs and risks through clear guidelines and best practices. In addition, standardized data formats and communication protocols enhance data quality, consistency, and security, building trust and confidence.
Finally, the demand for open platforms that integrate existing technologies is growing in the fast-changing technological landscape (Robles et al., Reference Robles, Martín and Díaz2023). These platforms are designed to facilitate the integration of various data sources, sensors, devices, and applications within a smart city environment. Platforms like iTwinJS (Bentley Systems, Incorporated) and OpenTwins (Robles et al., Reference Robles, Martín and Díaz2023) exemplify the pivotal role of openness in fostering collaboration, innovation, and interoperability within the digital realm. Another example is the Digital Twin Platform (DTCC Platform), developed at the Digital Twin Cities Centre, which incorporates the DTCC Builder (Logg et al., Reference Logg, Naserentin and Wästberg2023; Somanath et al., Reference Somanath, Naserentin, Eleftheriou, Sjölie, Wästberg and Logg2023) together with modeling, simulation, and visualization components. An example implementation of the platform is that for the city of Gothenburg (Gonzalez-Caceres et al., Reference Gonzalez-Caceres, Hunger, Forssén, Somanath, Mark, Naserentin, Bohlin, Logg, Wästberg, Komisarczyk, Edelvik and Hollberg2024).
Automation may, in a positive sense, result in the replacement of human labor. Although human expertise remains pivotal in the digital twin cycle, the proposed technologies can intervene to automate fast decision-making in crucial scenarios and improve the efficiency, safety, and well-being of potential human users.
DTs must be built to empower the human, not the machine. The exploitation of AR, VR, or virtual spaces (metaverse) as facilitators can democratize access to information and insights, enabling a broader audience, including stakeholders with varying levels of technical expertise, to interact with and understand complex systems and data. This fosters cross-functional collaboration, accelerates decision-making processes, and improves the overall effectiveness of digital twin initiatives.
6. Conclusion
This statement paper aims to set the foundations for the development of next-generation DTs and their application to smart infrastructures. We have identified challenges in data acquisition and simulation that could be addressed through so-called smart paradigms. The smart use of data enhances data collection and processing efficiency by selecting what, when, where, and at what scale to measure, thereby avoiding problems derived from big data. This, combined with analytics enriched with physics, improves the interpretation and quality of the results. Additionally, hybrid modeling provides an effective strategy for integrating diverse modeling methodologies, including physics-based and data-driven approaches, thereby improving precision, adaptability, and effectiveness in simulating complex real-world systems.
Our analysis highlights the need to unify languages to improve communication among platforms and stakeholders handling various types of data. Furthermore, we advocate for exploring the integration of elements and agents within the digital twin framework to fully account for operational interactions and connections at different levels. Lastly, we recommend further investigation into the development of a smart digital twin framework to facilitate automation and intelligent decision-making processes that would enhance reactions to unpredictable, and possibly critical, new scenarios.
We advocate for a paradigm shift from traditional decision-making practices in infrastructure management towards more proactive, data-driven approaches. We propose developing digital twin–enabled decision-making frameworks throughout the project’s life cycle and discuss advanced applications including autonomous management, predictive maintenance, adaptive behavior, and resilience enhancement. Furthermore, we outline the future outlook for augmenting such digital twin–enabled decision-making frameworks by applying expert-guided paradigms, forming system-level perspectives, and considering unexpected extreme events, to make more informed and comprehensive decisions in support of infrastructure resilience.
Acknowledgments
This position paper has been developed as part of a roundtable session on the theme of Digital Twinning and Decision Support for Asset Management. The roundtable was held in the context of joint collaboration between the Future Resilient Systems (FRS) of the Singapore-ETH Centre and the DESCARTES interdisciplinary program of excellence by CNRS@CREATE. All involved sector stakeholders, including TÜV SÜD, ARUP, MEINHARDT, CETIM-Matcor, NAVAL Group, Ministry of National Development (MND), Land Transport Authority (LTA), and GOVTECH, are acknowledged for their participation and active feedback.
Author contribution
Conceptualization: H.L; B.M; F.C; E.C. Methodology: H.L; B.M; F.C; E.C. Project administration: F.C; E.C; D.B; J.J. Data curation: H.L; B.M; F.C; E.C. Resources: E.S; A.W; X.Z; F.C; E.C. Data visualization: H.L; B.M; E.C. Writing original draft: H.L; B.M. Supervision: F.C; E.C. Writing – review/editing: H.L; B.M; E.S; A.W; D.B; J.J; X.Z; F.C; E.C. All authors approved the final submitted draft.
Competing interests
None.
Data availability statement
No data were produced or used in the research presented in this manuscript.
Funding statement
The research was conducted at the Singapore-ETH Centre, established collaboratively between ETH Zurich and the National Research Foundation Singapore, and at CNRS@CREATE through the DESCARTES program; both research programs are supported by the National Research Foundation, Prime Minister’s Office, Singapore, under its Campus for Research Excellence and Technological Enterprise (CREATE) programme. E. Chatzi would also like to acknowledge the support of the InBlanc project, titled “INdustrialisation of Building Lifecycle data Accumulation, Numeracy and Capitalisation,” funded under the Horizon Europe programme with the Grant Agreement ID 101147225. B. Moya acknowledges support from the French government, managed by the National Research Agency (ANR), under the CPJ ITTI.
Ethical standards
The research meets all ethical guidelines, including adherence to the legal requirements of the study country.