
Interactive machine learning framework enabling affordable and accurate prototyping for supporting decision-making

Published online by Cambridge University Press:  27 August 2025

Qiyu Li*
Affiliation:
Texas A&M University, USA
Daniel McAdams
Affiliation:
Texas A&M University, USA

Abstract:

This study proposes an ML-based interactive framework for early-stage design, addressing the challenge where physical prototypes are accurate but costly, and virtual prototypes are affordable but less reliable. The NN-based human-in-the-loop framework integrates pre-training and fine-tuning techniques to reduce reliance on extensive physical prototyping while maintaining model accuracy. Using projectile motion as an example, the framework demonstrates its ability to guide design by iteratively updating models based on limited experimental data and human expertise. The results highlight the framework’s effectiveness in achieving performance comparable to models trained on larger datasets, offering a cost-effective solution for creating accurate design models.

Information

Type
Article
Creative Commons
CC BY-NC-ND
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
© The Author(s) 2025

1. Introduction

Engineering design is a complex process involving a series of decisions and trade-offs (Renzi et al., 2017). Decisions made during the early stages significantly impact the entire design process. Consequently, designers dedicate substantial effort in the early design stages to facilitate better decision-making and minimize future costs in terms of time, money, and other resources. Prototyping is a crucial step in supporting decision-making, typically positioned between the concept generation and design verification stages (Christopher Lewis et al., 2014). Prototyping involves creating representations of a design or concept for purposes such as learning, analysis, refinement, and communication. Prototypes, which can be either physical or virtual, embody these representations prior to the final product (Christopher Lewis et al., 2014; Snider et al., 2022).

Physical prototypes are tangible artifacts created at varying levels of fidelity, with partial or complete functionality (Christopher Lewis et al., 2014). In contrast, virtual prototypes represent products in non-physical forms, such as simulations or mathematical models. Both physical and virtual prototypes have distinct advantages and limitations depending on the context. Figure 1 illustrates a simple tradeoff between cost and accuracy that guides prototype selection. The green and blue areas represent tradeoff regions that support more confident decisions on the design and prototyping process: the green area corresponds to conditions where physical prototypes are the better choice based on expense and ability to inform a design decision, while the blue area represents the opposite. However, there are situations where neither physical nor virtual prototypes provide a clear logical path forward in the engineering decision-making process, because the tradeoff between affordability and accuracy is unclear. In Figure 1, "difficult problem" marks the region where choosing between a virtual or physical prototype is unclear.

Figure 1. Prototype selection matrix (adapted from Ulrich and Eppinger (2016))

In this paper, we focus on developing design support tools to address challenges highlighted by the red circle in Figure 1, where physical prototypes offer high fidelity but are expensive, and virtual prototypes are affordable but may lack the required accuracy for effective decision-making. We propose an interactive machine learning (ML) framework that integrates multiple information sources to enhance engineering decision-making. By combining limited physical prototypes with simplified virtual models, the framework reduces costs while obtaining high-fidelity data. A key aspect is leveraging engineering intuition, which informs both the prototyping approach and final design decisions. By synergizing engineers, models, and prototypes, the framework improves the design process and outcomes. Using ML techniques such as pre-training and fine-tuning, it minimizes reliance on large datasets, reducing prototyping costs while maintaining decision accuracy.

A simplified design problem is developed in this preliminary study to evaluate the feasibility of the proposed framework, highlighting its potential to standardize and streamline the design process by introducing a novel perspective. The chosen illustrative example treats the projectile motion of a ball as a design problem, aiming to achieve a target trajectory. The objective of the study is to demonstrate how designers can iteratively combine limited physical prototypes with low-fidelity virtual prototypes, guiding the ML model step by step toward a relatively accurate solution for decision-making. In principle, any design problem would suffice, so a highly simplified one is selected.

This paper begins by reviewing related work on engineering design, focusing on prototyping, physics-integrated ML, and human-in-the-loop ML. It then details a projectile motion experiment as a step-by-step application of the proposed framework. The results and discussion analyze the experiment's outcomes, demonstrating the framework's effectiveness. Finally, the paper concludes by summarizing the key findings and their contribution to early-stage design processes.

2. Literature review

Engineering design (ED) is a structured process aimed at solving technical problems, often with defined requirements and constraints, leading to the creation of new products, processes, and systems (Pahl et al., 2007). Tasks within the ED process, including design optimization (Regenwetter et al., 2022), design space exploration, and 'what if' analysis (Pan et al., 2022), rely heavily on model-supported engineering analysis. Models, whether physical or non-physical, serve as indispensable tools, enabling engineers to predict the behavior and performance of candidate system designs and facilitating the comparison and evaluation of alternatives (Hazelrigg, 1999).

Prototyping can be classified as physical or virtual. Physical prototypes are tangible representations used to explore form, test performance, and validate virtual models, with fidelity ranging from simple foam models to high-precision systems near production (Kent et al., 2021). While they offer valuable insights into user interaction and functionality (Liker & Pereira, 2018), their fabrication is costly, time-consuming, and inflexible (Snider et al., 2022). Virtual prototypes are digital mock-ups (computer simulations and/or analytical models) of physical products that can be analyzed, tested, and presented in order to serve the principal purposes of prototyping in the product development process (Christopher Lewis et al., 2014). Virtual prototypes offer a higher degree of flexibility via quick and reconfigurable manipulation of form or parameters, allowing multiple versions to be created and tested rapidly compared to physical prototypes (Snider et al., 2022). However, virtual prototypes have limitations in emulating physical properties like scale and tactile feel and often require steep learning curves for effective use (Kent et al., 2021). Moreover, even fairly simple design solutions contain physics of interest that are difficult, if not impossible, to capture well with virtual prototypes. Many studies of prototyping exist (Ege et al., 2024; Liker & Pereira, 2018), but few specific approaches have been developed for our "target problem" as illustrated in the lower left of Figure 1.

A key limitation of applying ML in engineering design is the scarcity of design data (Yüksel et al., 2023). Prototyping generates decision-informing data, but creating large datasets with physical prototypes is costly and often impractical. Integrating physics knowledge into ML models offers a promising solution by embedding known physical laws, allowing models to compensate for data limitations and improve accuracy with fewer data points. Physics can be integrated into ML algorithms through customized model architectures (Muralidhar et al., 2020), pre-training to guide neural networks (Jia et al., 2021), and other techniques (Gallup et al., 2023; Willard et al., 2022). In this study, we integrate pre-training into the framework to reduce the demand for experimental data. Pre-training, used for decades, serves as a bridge between source and target tasks. One widely explored pre-training approach is feature transfer, which operates on the assumption that source and target tasks can share model parameters or prior distributions of hyperparameters (Han et al., 2021). To adjust model parameters for target tasks, fine-tuning is a good strategy (Shen et al., 2021). It can be viewed as a mechanism to correct for mismatches between the source data and the population of target data (Church et al., 2021).

In addition, humans participate in various stages of the ML process, and the decisions they make can influence learning efficiency and effectiveness (Holzinger et al., 2019). Research on collaboration between humans and machines is often referred to as human-machine collaboration, human-in-the-loop machine learning, etc. In this study, we adopt the definition and categorization of human-in-the-loop ML by Mosqueira-Rey et al. (2023), as illustrated in Figure 2. Our focus is specifically on learning with humans, which can be divided into three categories based on who controls the process. Interactive machine learning (IML) actively incorporates human input, assigning roles such as domain experts, non-expert users, and data scientists to tasks where human insight is most effective (Mosqueira-Rey et al., 2023). Through an incremental approach, the model is iteratively updated based on user feedback, enabling continuous adaptation. This structured human-machine collaboration allows IML to refine ML systems dynamically and efficiently (Liu et al., 2023).

3. Research method

In this section, we use projectile motion as a case study to present how engineers can apply the framework. The design goal is to achieve desired flight trajectories of a ball by adjusting the launch angle and velocity. Additionally, a flowchart of the proposed framework is explained in detail.

3.1. Methods introduction

To address the challenge mentioned in the introduction, where physical prototypes are accurate but expensive and cheap virtual prototypes are less accurate, an ML-based interactive framework is proposed. In this study, a feedforward neural network (FNN) is applied within the framework for modeling, but it can be replaced by other ML methods if needed.

As depicted in Figure 3, the framework incorporates human-in-the-loop model training to achieve an accurate virtual prototype while minimizing costs. The human in this framework is expected to be an engineer with expertise in design, approximate models, decision-making under uncertainty, etc. The process begins with Step 1, where engineers apply their expertise to identify the appropriate physics for generating data to pre-train the model. This expertise combines an understanding of physical principles with practical design knowledge, ensuring that the preliminary model is grounded in simplified physics tailored to the specific engineering problem. The pre-trained model transitions to Step 3 as the base model for fine-tuning, establishing a foundation for iterative refinement. In Step 2, engineers use their expertise to design and conduct experiments with physical prototypes, focusing on collecting critical data points that maximize the model’s improvement while minimizing the need for extensive experimentation. This process forms part of the iterative loop, highlighted by the orange lines, where the engineer designs experiments, collects data through physical prototypes, and updates the model. In Step 3, the pre-trained model is fine-tuned using the newly collected data. The updated model generates predictions, which are evaluated by the engineers in the judging phase of the loop. Based on these evaluations, engineers decide whether additional data collection is required or whether the model is sufficiently reliable to support design decisions. This iterative loop ensures that the model is continuously refined until it achieves the desired level of accuracy. Once the model reaches this stage, it can be used repeatedly to guide final design decisions effectively.

Figure 3. Proposed framework

As shown in Figure 3, the framework consists of foundational flows (blue lines) and iterative processes (orange lines). Blue lines represent structural elements, such as applying engineering expertise in pre-training or transitioning to the final model, while orange lines indicate dynamic interactions where engineers refine experiments and update models to balance cost and accuracy. By integrating engineering expertise with ML, the framework enables flexible, cost-effective virtual prototyping, reducing reliance on extensive physical tests. Neural networks streamline early-stage design by efficiently exploring a broader design space, with iterative fine-tuning ensuring accuracy and adaptability for reliable design outcomes.
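The orange iterative loop can be sketched in code. Below is a minimal, hypothetical skeleton in Python; the names `fine_tune`, `pick_next`, and `judge` are our own placeholders for the model update, the engineer's experiment design, and the engineer's judgment, not part of any implementation in the paper:

```python
def interactive_loop(model, fine_tune, run_prototype, pick_next, judge, max_iter=10):
    """Human-in-the-loop refinement (orange loop in Figure 3): the engineer picks
    an experiment, a physical prototype produces data, the model is fine-tuned,
    and the engineer judges whether the model is reliable enough to stop."""
    data = []                                         # experimental data collected so far
    for _ in range(max_iter):
        design = pick_next(model, data)               # engineer designs the next experiment
        data.append((design, run_prototype(design)))  # Step 2: physical prototype test
        model = fine_tune(model, data)                # Step 3: update the model
        if judge(model, data):                        # engineer evaluates predictions
            break                                     # model deemed reliable for decisions
    return model, data
```

Passing the engineer's choices in as callables keeps the loop structure separate from the problem-specific modeling and judging steps.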

Pre-training and fine-tuning are commonly used operations in transfer learning: the pre-training step allows the model to learn general features or behaviors, and fine-tuning then adapts the pre-trained model to a smaller, high-fidelity dataset, making the model more accurate for a specific application. The FNN used in the example consists of one input layer, one output layer, and two hidden layers, each with 32 neurons. A layer-freezing learning strategy is integrated into the framework: the layers of the FNN trained on the pre-training dataset are fully frozen, meaning their weights remain unchanged during further training. For fine-tuning, three additional layers are added to the FNN, as illustrated in Figure 4, and these layers are trained iteratively using experimental data.

Figure 4. Learning strategy (blue represents the freezing layer and orange represents the trainable layer)
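To make the freeze-and-fine-tune idea concrete, the following is a minimal pure-Python sketch. It is not the 32-neuron FNN of Figure 4: the frozen pre-trained layers are stood in for by a fixed physics-based function, and the added trainable layers by a small linear head fitted with gradient descent. The launch speed, learning rate, and "measurement" values are illustrative assumptions:

```python
import math

V0, G = 20.0, 9.81  # assumed launch speed (m/s) and gravitational acceleration

def frozen_base(angle_deg):
    # Stand-in for the frozen pre-trained layers: the ideal-physics
    # distance prediction (Model A style, no drag).
    return V0 ** 2 * math.sin(2 * math.radians(angle_deg)) / G

def fine_tune_head(points, lr=0.05, epochs=2000):
    # Trainable "added layers": a linear head y = a*u + b on the normalized
    # frozen feature u = base(x) / (V0^2 / G), fitted to the few experimental
    # points by gradient descent. The frozen base is never modified.
    scale = V0 ** 2 / G
    a, b = scale, 0.0            # start from the pre-trained behavior: y ~ base(x)
    for _ in range(epochs):
        for x, y in points:
            u = frozen_base(x) / scale
            err = a * u + b - y  # prediction error on this experimental point
            a -= lr * err * u    # gradient step on the head parameters only
            b -= lr * err
    return lambda x: a * (frozen_base(x) / scale) + b

# Two hypothetical "physical prototype" measurements (drag shortens the flight)
model = fine_tune_head([(15.0, 18.0), (48.0, 30.0)])
```

After fine-tuning, the head interpolates the experimental points while the frozen physics provides the overall shape between them, which is the behavior the framework relies on.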

3.2. Projectile motion

To assess the performance of the proposed NN-based interactive framework, we employ projectile motion for our study. Projectile motion is a classical dynamics problem that describes the motion of an object projected at a certain angle and velocity. In an ideal scenario without air resistance, the analytical solution, a well-known parabolic trajectory, is depicted in Figure 5 (Chudinov, 2011). This model is referred to as Model A. In Figure 5, $v_0$ and $\theta_0$ represent the initial velocity and launch angle, respectively. $H_a$ denotes the maximum height, and $L_a$ indicates the horizontal distance between the initial point and the endpoint of the trajectory. All other parameters can be calculated using the equations provided below (Equations 1 and 2), where $g$ represents gravitational acceleration.

(1) $$H_a = {{v_0^2 \sin^2 \theta_0} \over {2g}}$$
(2) $$L_a = {{v_0^2 \sin 2\theta_0} \over g}$$
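Equations 1 and 2 translate directly into code. A small helper, with an example launch speed assumed for illustration:

```python
import math

def model_a(v0, theta_deg, g=9.81):
    """Ideal projectile motion without air drag (Equations 1 and 2)."""
    th = math.radians(theta_deg)
    h = (v0 * math.sin(th)) ** 2 / (2 * g)  # maximum height H_a
    l = v0 ** 2 * math.sin(2 * th) / g      # horizontal distance L_a
    return h, l

# e.g. v0 = 20 m/s launched at 45 degrees
h, l = model_a(20.0, 45.0)
```

As expected for the ideal model, the horizontal distance is maximized at a 45-degree launch angle.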

The model represented in Equations 1 and 2 assumes that drag from the air is negligible and will not impact the design decision. For the illustrative example here, we treat this simplified model as both an initial back-of-the-envelope calculation and the highest-fidelity model the designer can construct without building expensive experimental models. For this example, we introduce another, more accurate model that includes air resistance, referred to as Model B (Chudinov, 2011). In comparison to Model A, Model B more closely mimics real-world conditions and is characterized by its higher complexity. For the purposes of this illustrative study, we consider Model B as the experimental physical prototype that provides high-fidelity data, but only for a limited number of design cases. Here, we present the specific details of Model B for completeness; it is important to note that we are using Model B as our surrogate for a physical prototype in the framework introduced in Figure 3. The trajectory of Model B, defined by the initial conditions $v_0$ and $\theta_0$, is illustrated in Figure 5. Furthermore, we can calculate the maximum height $H_b$ and the horizontal distance $L_b$ using Equation 3 and Equation 4, respectively. In these calculations, $g$ represents gravitational acceleration, $\rho$ is air density, $c$ is the drag coefficient, $s$ is the cross-sectional area, and $m$ is the mass of the projectile.

(3) $$H_b = {{v_0^2 \sin^2 \theta_0} \over {g(2 + kv_0^2 \sin \theta_0)}}$$
(4) $$L_b = 2v_0 \cos \theta_0 \sqrt{{{2H_b} \over {g + gkv_0 \left(\sin \theta_0 + \cos^2 \theta_0 \ln \tan \left({{\theta_0} \over 2} + {\pi \over 4}\right)\right)}}}$$

where $k = {{\rho cs} \over {2mg}}$
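Equations 3 and 4 can likewise be computed directly. In the sketch below, the physical constants (air density, drag coefficient, cross-sectional area, mass) are illustrative assumptions, not values from the paper:

```python
import math

def model_b(v0, theta_deg, m, rho=1.225, c=0.47, s=0.01, g=9.81):
    """Approximate projectile motion with quadratic air drag (Equations 3 and 4),
    following Chudinov's analytic approximation. The default air density, drag
    coefficient, and cross-sectional area are illustrative assumptions."""
    th = math.radians(theta_deg)
    k = rho * c * s / (2 * m * g)                      # drag parameter
    h = v0 ** 2 * math.sin(th) ** 2 / (g * (2 + k * v0 ** 2 * math.sin(th)))
    denom = g + g * k * v0 * (math.sin(th)
            + math.cos(th) ** 2 * math.log(math.tan(th / 2 + math.pi / 4)))
    l = 2 * v0 * math.cos(th) * math.sqrt(2 * h / denom)
    return h, l
```

A quick sanity check: as the mass grows (so the drag parameter $k \to 0$), the Model B formulas reduce to the ideal Model A results, while a light projectile flies measurably shorter.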

Figure 5 below illustrates the different trajectories the designer would explore using the design calculation and the physical prototype. The right side of the figure shows that the trajectory produced by the physical prototype differs from that of the design model (Model A) and would produce a different design output (distance traveled) for the same design input (angle).

Figure 5. Projectile motion (left: without air drag, right: with air drag)

3.3. Metrics

To evaluate the quality and accuracy of the models generated by the framework for design support, we use mean squared error (MSE) as the criterion (Equation 5). MSE provides a quantitative measure of how well each model captures the real-world behavior of projectiles under gravitational forces and air resistance. In Equation 5, $y_{true}$ represents the data from the physical prototype (Model B). A smaller MSE indicates higher model accuracy and therefore better support for design decisions.

(5) $$MSE = {{\sum_{i=1}^{n} (y_{true,i} - y_{predict,i})^2} \over n}$$

In addition to MSE, we introduce the concept of similarity to evaluate design quality. The similarity of horizontal distance (SHD), defined by Equation 6, measures the difference between the predicted horizontal distance, $HD_{predict}$, and the true horizontal distance, $HD_{true}$, calculated by Model B. The design objective is to create a system capable of achieving some target horizontal distance using specific design parameters (launch angle and velocity). A higher SHD indicates closer alignment between the predicted and true horizontal distances, which we take as indicating a better design for this illustrative example.

(6) $$SHD = \left(1 - {{|HD_{true} - HD_{predict}|} \over {HD_{true}}}\right) \times 100\% $$
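Both metrics are straightforward to implement; a minimal sketch:

```python
def mse(y_true, y_pred):
    """Mean squared error (Equation 5)."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def shd(hd_true, hd_pred):
    """Similarity of horizontal distance in percent (Equation 6)."""
    return (1 - abs(hd_true - hd_pred) / hd_true) * 100

# e.g. a predicted distance of 38 m against a true distance of 40 m gives an SHD of about 95%
score = shd(40.0, 38.0)
```

Note that SHD is asymmetric: it normalizes by the true distance, so the same absolute error counts more heavily for shorter target distances.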

4. Example and results

This section presents the two experiments and their results, followed by analysis and discussion. The design objective of the two experiments is a system capable of achieving any desired distance by adjusting the launch angle and velocity. Here, velocity is fixed, and only the launch angle is considered as the design parameter. A model predicting distance from launch angle supports the design process, where higher accuracy leads to better decisions and outcomes. Each launch angle and its corresponding distance form a tuple, which serves as a data point; the launch angle is the input of the NN and the distance is its output. The data points for building models are generated from Model A and Model B. Results show that the NN-based interactive framework performs well, reducing data requirements while maintaining high accuracy. Compared to an FNN trained on a balanced dataset, the framework effectively captures the relationship between launch angle, velocity, and trajectory, guiding precise design outcomes.

4.1. First experiment

The first experiment examines the impact of incorporating human-in-the-loop interaction for dynamic model updates. The data generated from Model A are used for pre-training. The condition is explicitly set that the designer must achieve the design objective mentioned above using limited real-world data generated from Model B. In the first experiment, the designer is allowed to select only two data points, representing two design configurations and their flight performance.

Table 1 outlines the differences in modeling steps for the methods with and without human-in-the-loop interaction, resulting in different dataset choices. In each plot, the horizontal axis represents the launch angle, and the vertical axis represents the flight distance. For the proposed method, the designer chooses the second point based on the first data point. Steps 2 and 3 form the first iteration to update the pre-trained model, producing an intermediate model. As shown in Figure 6 (right), the intermediate model exhibits significant deviations around the 45-degree region. While air drag generally reduces distance for the same launch angle, the reduction of around 25% in flight distance observed in the intermediate model (black) compared to the pre-trained model (orange) is unexpected. This significant reduction means that there are errors around the mid-angle range. Therefore, any data point with an angle of around 45 degrees is a good choice to reduce the error. As an example, 48 degrees is chosen as the second data point here. Steps 4 and 5 then form the second iteration, refining the model to produce the final version of the proposed method.

In contrast, without human-in-the-loop interaction, the designer independently selects two reasonable data points for physical prototype experiments. If the first data point is fixed at 15 degrees for better comparison with the proposed method, one option could align with the proposed method by selecting 15 degrees in the lower angle range and 48 degrees in the mid-range. Alternatively, the designer might choose 15 degrees and 75 degrees, relying on symmetry around the 45-degree angle, which is also a reasonable choice. Without the human in the loop, there is no intermediate model for the designer to access and use to inform the next experiment design; the designer only obtains one final model.

Table 1. Proposed method steps for projectile motion

The models trained on different datasets are presented in Figure 6. For the proposed method, the intermediate model is shown on the right, and the final model is in the center. In the case without human-in-the-loop interaction, the final model shows two possible outcomes. For Option 1, the performance matches the proposed method, while for Option 2, the final model is shown on the left. In this case, the central result is desired, where the final model closely aligns with Model B, representing the ground truth in this example. Without human-in-the-loop interaction, there is a chance the designer might choose Option 2, resulting in a poor final model even with two data points. Human-in-the-loop interaction ensures that data points are selected more effectively, as in Option 1, leading to a better final model.

Figure 6. Model performance with different datasets

In addition to the qualitative results in Figure 6, Table 2 presents the quantitative findings. The final model from the proposed framework and the model without human interaction (Option 2) are compared, with the corresponding SHD values listed. The final model guides system design, where the input is the launch angle (design parameter) and the output is the distance. More accurate predicted distances, reflected in higher SHD, make the model more effective in supporting design decisions to determine the optimal launch angle required to achieve a specific target distance. To ensure a comprehensive evaluation across various scenarios, we examine three distinct launch angles: Angle 1 (20 degrees), representing the upward phase where distance increases with angle; Angle 2 (45 degrees), corresponding to the maximum theoretical distance without air drag; and Angle 3 (65 degrees), representing the downward phase.

Table 2. SHD for models with different angles in the first experiment

4.2. Second experiment

The second experiment is designed to demonstrate how the proposed framework reduces the demand for data. The reference model is trained on a relatively sufficient, balanced dataset of 30 data points generated from Model B. Under the proposed method, the first iteration begins with a pre-trained model fine-tuned using a single data point at a 15-degree angle, and the intermediate model is evaluated as in Step 4 of the first experiment. The designer then selects an additional data point to update the model in a second iteration. This iterative process can be repeated until the model has been trained on up to 30 data points, matching the dataset of the reference model. These repeated iterations allow us to evaluate how many loops the proposed framework requires to achieve performance comparable to the baseline model.

Using the proposed framework, three data points, at angles of 15, 48, and 80 degrees, are selected to fine-tune the pre-trained model, which then achieves performance similar to the reference model. Table 3 compares the MSE and average SHD of the two models. Average SHD is calculated as the mean of the SHD values for the three angles examined in the first experiment.

Table 3. Comparison of the two models

4.3. Discussion

The results of the first experiment highlight the limitations of a method without human-in-the-loop interaction. Without the human-in-the-loop structure, engineers may select the same points as the proposed framework, or data points at 15 and 75 degrees based on symmetry (Option 2). As shown in Figure 6 (left), fine-tuning the pre-trained model using only these two points performs poorly around the 45-degree region, where the black line representing the final model deviates significantly from the blue line, which represents real-world conditions. In contrast, in the proposed framework the designer uses the intermediate model predictions, shown as the black line in Figure 6 (right), to guide data collection decisions. After fine-tuning the pre-trained model on the first data point (15 degrees), the intermediate model predictions reveal unexpectedly low values around the 45-degree launch angle compared to Model A, as shown in Figure 6 (right). While Model A overestimates relative to the physical prototype data (Model B) because it ignores air drag, the intermediate model indicates a discrepancy in this critical region. Based on this observation, the designer selects 48 degrees as the second data point to improve the model's accuracy, particularly in the mid-range launch angles. This improvement is evident in Figure 6 (center), where the final model aligns much more closely with the physical prototype data.

Additionally, Table 2 presents the quantitative results for the two methods (with and without human-in-the-loop interaction). A higher SHD indicates a model that better matches the physical prototype data, enabling more reliable design decisions for achieving the desired horizontal distance by adjusting angles. The proposed framework demonstrates slightly better accuracy at all angles compared to the method without human interaction, highlighting the value of iterative human input in guiding the process. In the second experiment, Table 3 compares the performance of an FNN trained on a sufficient dataset with an FNN trained on only three data points using the proposed framework. The FNN with sufficient data performs slightly better in terms of MSE, but the average SHD of the proposed framework is very close. The results indicate that the proposed method provides information comparable to an FNN trained on a sufficient dataset, as the model predicts distances with reasonable accuracy across different design parameters (launch angles). Notably, the proposed framework achieves comparable performance using only three data points, whereas the standard FNN requires significantly more (30 data points here), effectively reducing the demand for extensive datasets.

These experiments demonstrate the effectiveness of the proposed framework in reducing the number of costly physical prototypes. The study confirms the feasibility of using this approach to address real-world design problems. By guiding initial designs with reliable models based on limited physical experiments and low-fidelity virtual prototypes, the framework provides a cost-effective and accurate solution for prototyping in early-stage design.

5. Conclusion

In this study, we introduce a novel framework for the early design stage that facilitates interaction between humans and a neural network (NN). The NN model informs the human, enabling them to decide whether additional physical prototyping is necessary, and of what kind, or whether the model can be trusted for decision-making. If further physical prototyping is required, the NN model is updated with the new data points. Additionally, the framework integrates pre-training and fine-tuning techniques, allowing humans to transfer their professional knowledge to the model before real-world data are introduced. The proposed interactive framework with these techniques reduces the need for high-fidelity data. The framework provides a feasible solution for scenarios where physical prototypes are accurate but expensive, and virtual prototypes are affordable but less accurate. By combining limited physical prototypes with human guidance, pre-training, and fine-tuning techniques, the framework achieves a balance between cost and accuracy.

To demonstrate the framework’s application, we use projectile motion as an example, illustrating how designers can follow its steps. Furthermore, we show through simple experiments that the framework requires fewer data points to achieve relatively high accuracy. The integration of pre-training and fine-tuning techniques lowers the data demand, while human decision-making within the iterative loop further enhances model performance.

Future work will focus on collecting empirical data from real-world projectile experiments to validate the framework. Additionally, we will develop more complex models that incorporate features like object shape, expand output factors, and examine how human decisions in the training loop impact model performance over more iterations. Lastly, we aim to improve the integration of physics and expert design knowledge to generate data that better supports the design process before extensive simulations and experiments.

Acknowledgement

This work is partially supported by the Uniformed Services University of the Health Sciences (DOD) under cooperative agreement number HU00012120093. The views expressed in this material are solely those of the authors and do not necessarily reflect the opinions, findings, conclusions, or recommendations of the sponsors.

References

Alomari, O., Jaradat, E., Aloqali, A., Habashneh, W., & Jaradat, O. (2022). Solution for projectile motion in two dimensions with nonlinear air resistance using Laplace decomposition method. Computational Geosciences, 126. https://doi.org/10.28919/jmcs/7127
Amershi, S., Cakmak, M., Knox, W. B., & Kulesza, T. (2014). Power to the People: The Role of Humans in Interactive Machine Learning. AI Magazine, 35(4), 105–120. https://doi.org/10.1609/aimag.v35i4.2513
Christopher Lewis, H., Matthew, G. G., Brock, D., Bradley Adam, C., Richard, H. C., & Daniel, D. J. (2014, June 15). Virtual or Physical Prototypes? Development and Testing of a Prototyping Planning Tool. https://peer.asee.org/23294. https://doi.org/10.18260/1-2-23294
Chudinov, P. (2011). Approximate analytical investigation of projectile motion in a medium with quadratic drag force. International Journal of Sports Science and Engineering, 5.
Chudinov, P., Eltyshev, V., & Barykin, Y. (2019). Highly precise analytic solutions for classic problem of projectile motion in the air. Journal of Physics: Conference Series, 1287(1), 012032. https://doi.org/10.1088/1742-6596/1287/1/012032
Church, K. W., Chen, Z., & Ma, Y. (2021). Emerging trends: A gentle introduction to fine-tuning. Natural Language Engineering, 27(6), 763–778. https://doi.org/10.1017/S1351324921000322
Ege, D. N., Goudswaard, M., Gopsill, J., Steinert, M., & Hicks, B. (2024). What, how and when should I prototype? An empirical study of design team prototyping practices at the IDEA challenge hackathon. Design Science, 10, e22, Article e22. https://doi.org/10.1017/dsj.2024.16
Gallup, E., Gallup, T., & Powell, K. (2023). Physics-guided neural networks with engineering domain knowledge for hybrid process modeling. Computers & Chemical Engineering, 170, 108111. https://doi.org/10.1016/j.compchemeng.2022.108111
Han, X., Zhang, Z., Ding, N., Gu, Y., Liu, X., Huo, Y., Qiu, J., Yao, Y., Zhang, A., Zhang, L., Han, W., Huang, M., Jin, Q., Lan, Y., Liu, Y., Liu, Z., Lu, Z., Qiu, X., Song, R., … Zhu, J. (2021). Pre-trained models: Past, present and future. AI Open, 2, 225–250. https://doi.org/10.1016/j.aiopen.2021.08.002
Hazelrigg, G. (1999). On the role and use of mathematical models in engineering design.
Holzinger, A., Plass, M., Kickmeier-Rust, M., Holzinger, K., Crişan, G. C., Pintea, C.-M., & Palade, V. (2019). Interactive machine learning: experimental evidence for the human in the algorithmic loop. Applied Intelligence, 49(7), 2401–2414. https://doi.org/10.1007/s10489-018-1361-5
Jia, X., Willard, J., Karpatne, A., Read, J. S., Zwart, J. A., Steinbach, M., & Kumar, V. (2021). Physics-Guided Machine Learning for Scientific Discovery: An Application in Simulating Lake Temperature Profiles. ACM/IMS Transactions on Data Science, 2(3), Article 20. https://doi.org/10.1145/3447814
Kent, L., Snider, C., Gopsill, J., & Hicks, B. (2021). Mixed reality in design prototyping: A systematic review. Design Studies, 77, 101046. https://doi.org/10.1016/j.destud.2021.101046
Liker, J. K., & Pereira, R. M. (2018). Virtual and Physical Prototyping Practices: Finding the Right Fidelity Starts With Understanding the Product. IEEE Engineering Management Review, 46(4), 71–85. https://doi.org/10.1109/EMR.2018.2873792
Liu, J., Li, D., Shan, W., & Liu, S. (2023). Continual learning classification method with human-in-the-loop based on the artificial immune system. Engineering Applications of Artificial Intelligence, 126, 106803. https://doi.org/10.1016/j.engappai.2023.106803
Mosqueira-Rey, E., Hernández-Pereira, E., Alonso-Ríos, D., Bobes-Bascarán, J., & Fernández-Leal, Á. (2023). Human-in-the-loop machine learning: a state of the art. Artificial Intelligence Review, 56(4), 3005–3054. https://doi.org/10.1007/s10462-022-10246-w
Muralidhar, N., Bu, J., Cao, Z., He, L., Ramakrishnan, N., Tafti, D., & Karpatne, A. (2020). PhyNet: Physics Guided Neural Networks for Particle Drag Force Prediction in Assembly. In Proceedings of the 2020 SIAM International Conference on Data Mining (SDM) (pp. 559–567). https://doi.org/10.1137/1.9781611976236.63
Pahl, G., Beitz, W., Feldhusen, J., & Grote, K.-H. (2007). Engineering Design: A Systematic Approach. https://doi.org/10.1007/978-1-84628-319-2
Pan, I., Mason, L. R., & Matar, O. K. (2022). Data-centric Engineering: integrating simulation, machine learning and statistics. Challenges and opportunities. Chemical Engineering Science, 249, 117271. https://doi.org/10.1016/j.ces.2021.117271
Regenwetter, L., Nobari, A. H., & Ahmed, F. (2022). Deep Generative Models in Engineering Design: A Review. Journal of Mechanical Design, 144(7). https://doi.org/10.1115/1.4053859
Renzi, C., Leali, F., & Di Angelo, L. (2017). A review on decision-making methods in engineering design for the automotive industry. Journal of Engineering Design, 28(2), 118–143. https://doi.org/10.1080/09544828.2016.1274720
Shen, Z., Liu, Z., Qin, J., Savvides, M., & Cheng, K.-T. (2021). Partial Is Better Than All: Revisiting Fine-tuning Strategy for Few-shot Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11), 9594–9602. https://doi.org/10.1609/aaai.v35i11.17155
Snider, C., Kent, L., Goudswaard, M., & Hicks, B. (2022). Integrated Physical-Digital Workflow in Prototyping – Inspirations from the Digital Twin. Proceedings of the Design Society, 2, 1767–1776. https://doi.org/10.1017/pds.2022.179
Ulrich, K. T., & Eppinger, S. D. (2016). Product design and development. McGraw-Hill.
Willard, J., Jia, X., Xu, S., Steinbach, M., & Kumar, V. (2022). Integrating Scientific Knowledge with Machine Learning for Engineering and Environmental Systems. ACM Computing Surveys, 55(4), Article 66. https://doi.org/10.1145/3514228
Yüksel, N., Börklü, H. R., Sezer, H. K., & Canyurt, O. E. (2023). Review of artificial intelligence applications in engineering design perspective. Engineering Applications of Artificial Intelligence, 118, 105697. https://doi.org/10.1016/j.engappai.2022.105697
Figure 1. Prototyping selection matrix (adapted from Ulrich and Eppinger (2016))

Figure 2. Human-in-the-loop machine learning (adapted from Mosqueira-Rey et al. (2023))

Figure 3. Proposed framework

Figure 4. Learning strategy (blue represents frozen layers and orange represents trainable layers)

Figure 5. Projectile motion (left: without air drag, right: with air drag)

Table 1. Proposed method steps for projectile motion

Figure 6. Model performance with different datasets

Table 2. SHD for models with different angles in the first experiment

Table 3. Comparison of the two models