
Maritime Head-up Display (mHUD): a safety-enhancing navigational tool for ship bridges and remote operation centres

Published online by Cambridge University Press:  27 August 2025

Felix-Marcel Petermann*
Affiliation:
Faculty of Architecture and Design, Department of Design, Norwegian University of Science and Technology, Trondheim, Norway
Ole Andreas Alsos
Affiliation:
Faculty of Architecture and Design, Department of Design, Norwegian University of Science and Technology, Trondheim, Norway
Eleftherios Papachristos
Affiliation:
Faculty of Architecture and Design, Department of Design, Norwegian University of Science and Technology, Trondheim, Norway
Clas Olaf Steibru Andersen
Affiliation:
Faculty of Information Technology and Electrical Engineering, Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway
Andreas Nygard Madsen
Affiliation:
Faculty of Engineering, Department of Ocean Operations and Civil Engineering, Norwegian University of Science and Technology, Ålesund, Norway
*
Corresponding author: Felix-Marcel Petermann; Email: felix.m.petermann@ntnu.no

Abstract

Maritime navigation in low visibility presents a significant challenge, jeopardising seafarers’ situational awareness and escalating collision risks. This study introduces a maritime head-up display (mHUD) to address this issue. The mHUD, a 2-m diameter aluminium ring with dual rows of LEDs, enhances visibility for autonomous ships in adverse conditions on ship bridges and remote operating centres (ROCs). Displaying various modes such as shallow waters, land, lighthouses, beacons, buoys and maritime traffic, the mHUD was evaluated in a ship bridge simulator by 12 navigation students. Results revealed that the mHUD substantially improved situational awareness, proving more efficient and effective than navigating without it in poor visibility conditions. Participants found the mHUD easy to learn and expressed willingness to use it in real-world situations. The study highlights the mHUD’s potential to enhance situational awareness on ship bridges and ROCs for autonomous ships, while suggesting potential enhancements to increase usability and user satisfaction.

Information

Type
Research Article
Creative Commons
CC BY-NC-ND 4.0
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (https://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided that no alterations are made and the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use and/or adaptation of the article.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of The Royal Institute of Navigation

1. Introduction

Ensuring safety in maritime navigation is crucial, given the far-reaching consequences of accidents, including loss of life, environmental damage and disruptions to economic activities. Understanding why an accident occurred is essential for developing new tools, strategies and measures to enhance safety. As the maritime sector shifts towards autonomous operations (Psarros, Reference Psarros2018), this question must encompass both conventional and autonomous ships. A human crew mainly operates conventional ships, while autonomous ships vary in their degree of human involvement. The International Maritime Organization (2021) identified four autonomy levels. At the first level, the crew on board operates a ship with automated processes and decision support. At the second level, the ship, with a crew on board, is remotely monitored and controlled from a different location. At the third level, the ship is remotely controlled and monitored, but without a crew on board. At the final level, the vessel operates fully autonomously, making decisions and determining actions. Similar to Rødseth et al. (Reference Rødseth, Nordahl and Hoem2018), the definition covers the autonomy level (i.e. operator-controlled, automatic, partly autonomous, constrained autonomous and fully autonomous) and the level of human involvement (i.e. continuously manned, periodically unmanned and fully unmanned). Moreover, monitoring can range from no shore control centre (SCC) to remotely controlled (RC) and supervisory control (SC). In both conventional operations and those involving fully autonomous, remotely supervised or remotely controlled vessels, the human remains an essential factor, accounting for 71% of major accidents (Grech et al., Reference Grech, Horberry and Smith2002). Several contributing factors are highlighted below. Technical failures can pose significant risks in maritime navigation, including ship equipment failures and malfunctions in navigation systems. Environmental factors such as narrow waterways and unpredictable weather conditions like heavy rain, fog, snowfall and low light at night can significantly impact visibility (Glomsvoll and Bonenberg, Reference Glomsvoll and Bonenberg2017; Kalagher et al., Reference Kalagher, de Voogt and Boulter2021). These environmental and technical factors are closely associated with several operational factors that contribute to accidents.

  1. Lack of situational awareness: situational awareness refers to understanding and accurately predicting one’s immediate environment and circumstances (Endsley, Reference Endsley1995). Insufficient situational awareness can manifest itself as poor lookout, delayed response and misinterpreting situations (Jaram et al., Reference Jaram, Vidan, Vukša and Pavić2021; Psarros, Reference Psarros2018).

  2. Inattention, fatigue and poor concentration (Jurkovič et al., Reference Jurkovič, Kalina, Turčan and Gardlo2017).

  3. Lack of communication: ineffective communication between crew members, vessels and shore-based authorities (Norway and Norway, Reference Norway and Norway2019; Jaram et al., Reference Jaram, Vidan, Vukša and Pavić2021).

  4. Poor training and procedures: inadequate training and failure to adhere to standard guidelines (Norway and Norway, Reference Norway and Norway2019; Jaram et al., Reference Jaram, Vidan, Vukša and Pavić2021).

  5. Non-compliance with safety rules: ignoring safety regulations, disregarding navigational parameters and making navigation errors (Kirilova, Reference Kirilova2020).

To illustrate how the significance of these factors can vary, we present two real-world examples.

The first case involves the difference in visibility between a ship’s bridge and CCTV cameras streamed to a shore-based Remote Operation Centre (ROC) during daytime and nighttime. Figure 1 depicts the port of Ålesund (Norway) in daylight, where a lighthouse and a navigational beacon are visible. During nighttime, however, the lights from the city in the background gradually obscure the navigational aids, potentially affecting the operator’s situational awareness. The same effect occurs when the CCTV footage is streamed to the onshore ROC.

Figure 1. Navigational lights on Ålesund port, compared during the day and night with light background noise (illustration by Norvald Kjerstad (Kjerstad, Reference Kjerstad2024)).

The second case involves the collision between the Norwegian frigate KNM Helge Ingstad and the tanker Sola TS in 2018. The frigate was sailing northbound without transmitting AIS data, while crew members were undergoing training on optical positioning. The tanker had a pilot on board and had departed from the Sture oil terminal, still with deck lights switched on for ongoing preparation work. Multiple errors, including erroneous situational awareness, miscommunication, lack of teamwork and insufficient training among the crew, contributed to the collision. The crew on the frigate mistook the tanker’s deck lights for static lights at the oil terminal, while the staff at the Vessel Traffic Centre (VTS) failed to properly track the frigate and the tanker as they commenced their journeys. Both ships also neglected to pay attention to radar information. The crew on the tanker relied heavily on the pilot’s work, and although the crew on the frigate discussed the appearance of lights, they failed to explore the issue further (Norway and Norway, Reference Norway and Norway2019). Figure 2 shows how the situation might have looked from Helge Ingstad.

Figure 2. A screenshot of a video recording on the bridge of HNoMS Roald Amundsen on the observation voyage before the accident shows an approximate situation and how it appears later that night (Norway and Norway, Reference Norway and Norway2019).

These cases underscore the importance of maintaining situational awareness, effective communication and proper training in maritime navigation, and motivate research on alternative navigation aids. A significant challenge in this domain is navigating in low-visibility conditions, which impairs seafarers’ situational awareness and escalates collision risks. To mitigate these risks, vessels are equipped with a suite of technologies. Traditionally, navigators rely on screen-based tools, such as radar and Electronic Chart Display and Information Systems (ECDISs), to operate in poor visibility. More advanced systems may include thermal cameras or LiDAR to detect objects not visible to the naked eye. However, each of these systems presents information on separate screens, requiring the operator to divide their attention, mentally integrate disparate data sources and interpret complex visual information, which can increase cognitive load and the potential for error.

To address the challenge of data integration and improve intuitive understanding, modern approaches are trending towards the use of information augmentation. Augmented Reality (AR) has emerged as a promising technology to address the challenges of low visibility by overlaying critical digital information directly onto an operator’s view of the real world. A review of current research reveals several ways AR is being applied in maritime navigation, using different augmentation techniques.

Within the range of AR technologies, this study focuses on a Maritime Head-up Display (mHUD). The decision to develop and evaluate an mHUD stems from a notable gap in the existing research. Most current AR solutions require the use of additional screens or augmented reality glasses (HMDs). However, the usability of wearing such devices for extended periods, particularly in the low-light conditions common on a ship’s bridge, requires further investigation.

The mHUD was conceived as a non-wearable alternative designed to complement, rather than replace, existing systems such as radar and ECDIS. Its primary goal is to minimise the ‘head-down time’ of operators by fusing critical data from various sensors and presenting it in a single, intuitive and head-up format. By displaying relevant navigational artefacts and filtering out distracting light noise from land, the mHUD aims to reduce the operator’s cognitive load and enhance situational awareness. This study, therefore, evaluates the potential of the mHUD as a novel navigational aid for ship bridges and remote operation centres, particularly under challenging visibility conditions.

1.1. Study aim

This study assesses the potential benefits and feasibility of implementing the maritime Head-up Display (mHUD) technology in marine navigation, in accordance with ISO 9241 standards (ISO, 2019). The focus is on assessing the efficiency, effectiveness and user satisfaction of the mHUD concept. The study aims to investigate its impact on reducing error rates and improving operational efficiency, thereby preventing collisions, groundings and close calls. Furthermore, it seeks to assess the mHUD’s performance under different visibility conditions, encompassing favourable and unfavourable scenarios. Additionally, the study aims to explore the subjective operator experience, usability and perceptions of the mHUD, including its user-friendly nature, intuitiveness, and comprehension of displayed patterns and information. It also investigates whether the mHUD can alleviate emotional stress levels during navigation and enhance operators’ perceptions of both objective and subjective safety. The study further examines the role of individual factors, such as operator experience and their willingness to embrace new technology, specifically the adoption of navigational technology like the mHUD. Lastly, the feasibility of implementing the mHUD technology in ship bridges and control rooms will be evaluated, taking into account the physical and operational constraints inherent in these environments. By addressing this, we aim to contribute to advancing maritime navigational safety and offer potential solutions to the challenges posed by operational factors, situational awareness limitations and poor visibility conditions.

In the following sections of this paper, we will describe the process of developing the mHUD, starting from the initial concept creation and progressing through the evaluation phase. We will discuss the design considerations, usability testing and the overall impact on maritime safety. The findings from this study can provide valuable insights for future developments in naval navigation systems, supporting the shift from the ship bridge to a remote-controlled setting while enhancing safety and mitigating the risks of accidents.

2. Maritime head-up display design

The design phase of the project was divided into the following phases, as shown in Figure 3.

  1. Domain Expert Interviews: this phase involved engaging experts from the maritime industry, design, and research and development domains. The primary goal of these interviews was to gather insights that would inform the initial design concepts. Six domain experts were individually presented with the idea and asked to provide feedback on its feasibility, identify user needs and challenges, and suggest potential features. Each interview lasted approximately 15 min and included an introduction to the concept, followed by a structured discussion for feedback and additional ideas. The input gathered during these interviews was systematically recorded and categorised into relevant topics. To prioritise the functions and features of the prototype, the MoSCoW model (Vestola, Reference Vestola2010) was applied, categorising features into ‘must have’, ‘should have’, ‘could have’ or ‘won’t have (this time)’. This approach enabled a clear identification of the essential aspects of the concept.

  2. Concept Ideation: various concept sketches were created during this phase to visualise the initial design and its intended functionalities. These sketches served as a foundation for further discussions with domain experts, providing a more tangible understanding of the envisioned prototype.

  3. Initial Prototype Development: an initial prototype was constructed using LED strips with varying LED densities per metre, controlled by an Arduino. This prototype was presented during individual workshop sessions, where experts were invited to review both the concept sketches and the prototype. Feedback obtained in these sessions was instrumental in refining the design.

  4. Iterative Development Process: based on the feedback from the workshop sessions, the prototype underwent iterative development. Each iteration integrated the insights and suggestions from experts, ultimately leading to the construction of the first study artefact. This artefact served as a tangible realisation of the concept and laid the foundation for subsequent phases of the project.

Figure 3. Overview of the design and development process of the mHUD prototype.

2.1. Study artefact

An aluminium ring with a diameter of 2 m was constructed to create the prototype. The ring served as the physical structure for the concept, providing a circular shape that facilitated the plotting of angular distances. This circular design was chosen because it aligns with other navigational equipment, such as radar and compass, using circular visualisations. Additionally, the circular shape ensures that the distance from the centre to the edges remains consistent in all positions along the edge (Figure 4).

Figure 4. In the design phase, two forms were considered for the mHUD, a circular or rectangular shape. Comparing the distance from the centre to the border of the two forms (representing the mHUD) reveals a consistent distance from the centre in the ring-shaped design and varying lengths to the borderline in the rectangular form.

The aluminium ring was equipped with two rows of programmable LEDs. The first row consisted of 360 LEDs, each representing one degree of the circle. The second row contained 720 LEDs, with each LED representing 0.5 degrees of angular resolution. The LEDs were affixed to the aluminium ring, allowing for precise control and visualisation of angular information. Five 5 V/DC 18 A 90 W AC/DC power supplies were used to power the LEDs. A USB serial connection was established between a computer and an Arduino microcontroller to control the LEDs. This setup enabled the computer to send commands to the Arduino, which in turn controlled the illumination of the LEDs, creating dynamic visual displays (Figure 5).
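The core of the LED setup described above is a mapping from a bearing to a position on each ring. The following sketch illustrates this with a simple frame buffer; the function names are ours, and the assignment of the 360-LED and 720-LED counts to particular rows is an assumption for illustration, not the prototype's documented layout.

```javascript
// Sketch (not the authors' implementation): a frame buffer for the two LED
// rows and a helper mapping a bearing in degrees to an LED index.
// Row resolutions follow the prototype: 360 LEDs (1°) and 720 LEDs (0.5°).
const ROWS = { upper: 360, lower: 720 };

// One [r, g, b] triple per LED, all off initially.
function makeFrame() {
  return {
    upper: Array.from({ length: ROWS.upper }, () => [0, 0, 0]),
    lower: Array.from({ length: ROWS.lower }, () => [0, 0, 0]),
  };
}

// Map a bearing (degrees) to the nearest LED index in the given row.
function bearingToIndex(row, bearingDeg) {
  const ledsPerDegree = ROWS[row] / 360;
  const norm = ((bearingDeg % 360) + 360) % 360; // normalise to 0–360°
  return Math.round(norm * ledsPerDegree) % ROWS[row];
}

// Light the LED corresponding to an object at a given bearing.
function setBearing(frame, row, bearingDeg, rgb) {
  frame[row][bearingToIndex(row, bearingDeg)] = rgb;
  return frame;
}
```

A frame built this way could then be serialised and sent over the USB serial link for the Arduino to render.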

Figure 5. mHUD (lights highlighted with red frame) installed in the ship bridge simulator over the officer on watch.

The size of the prototype was determined based on two key considerations. First, it needed to be compact and adaptable to fit within various control rooms and ship bridges. This ensured that the prototype could be easily integrated into existing environments. Second, it was essential to ensure user comfort during prolonged usage. Drawing from the work of Grandjean and Kroemer (Reference Grandjean and Kroemer1997), which suggests that the distance between the screen and the table edge should be between 400 and 1150 mm, the prototype’s size was adjusted accordingly. This measurement was evaluated and refined within the context of the Remote Operations Centre (ROC) and ship simulator, ensuring that the chosen distance was suitable for the environment and comfortable for extended viewing. The initial prototype was not explicitly guided by additional ergonomic considerations, as the study was exploratory in nature.

To provide an optimal viewing experience for the user, the ring was positioned around the seating location of the officer on watch, ensuring that they would always have the visualisation centred within their field of view. This placement further enhanced the usability and effectiveness of the prototype in supporting navigation tasks.

2.2. Data visualisation on the mHUD

The mHUD can display any object with a known position that is relevant to the ship’s navigation or operation. This includes other vessels, buoys, navigation poles, beacons, lights, lighthouses, shallow waters and land (Figure 6a–g). Figure 6a–d illustrates the representation of ships on the mHUD. In contrast, Figure 6e depicts the visualisation of navigational lights (buoys), and Figure 6f showcases lighthouses. The mHUD aims to provide a mental model that aligns with users’ previous experiences and knowledge of navigational lights (Zhang and Wang, Reference Zhang and Wang2005; Aehnelt et al., Reference Aehnelt, Peter and Müsebeck2012). In the initial concept, objects were displayed based on their angular position relative to the ship, with no explicit consideration for distance. The primary objective was to evaluate the comprehensibility of the concept and the clarity of the light patterns. During the initial design phase, obstruction handling was not explicitly addressed. The system currently prioritises the nearest object when overlaps occur. Conceptually, the mHUD translates the three-dimensional maritime environment into a two-dimensional representation through an azimuthal projection. The primary spatial data point that is directly mapped is the bearing of an object, which corresponds to a specific LED’s position on the 360-degree ring.

Figure 6. Visualisation of possible use cases where the mHUD can be used: (a) vessel from port side; (b) the vessel from the starboard side; (c) vessel from the stern side; (d) vessel from the bow side; (e) two buoys; (f) lighthouse indicating navigating in a green sector; and (g) showing shallow water and land in a narrow water passage.

To re-introduce a third dimension, a basic form of vertical information was encoded by using two rows of LEDs. The upper row is reserved for objects with vertical elevation, such as the top lanterns of another vessel, while the lower row represents objects on the waterline. This creates a simplified spatial mapping where horizontal position on the ring indicates bearing, and the vertical row indicates a rudimentary elevation. However, the crucial element of distance (depth) was not explicitly encoded in the initial prototype, as the primary goal was to first test the comprehensibility of the core concept.

In developing the mHUD, adherence to Nielsen’s heuristics for design (Nielsen, Reference Nielsen1995) was a priority. According to these heuristics, the design should correspond to the real world. This principle influenced the decision to display different objects on the mHUD, as Figure 6 depicts.

The mHUD visualises several object types essential for navigation, as summarised in Table 1. For this study, the focus was on ships, buoys, lighthouses and lateral markers. Objects were displayed with distinct colours and light characteristics to align with real-world objects, e.g. blinking patterns for buoys. The mHUD uses two lines of LEDs to represent objects at different vertical levels. The upper line displays objects with multiple lights, e.g. the top lantern of a vessel or a buoy that uses two lights, while the lower line represents objects on the waterline.

Table 1. Object types, their visual representations and characteristics

2.2.1. Ships

Ships are equipped with various navigation lights that serve different purposes and indicate different modes or aspects of the vessel. The arrangement of lights on a ship follows a spatial order, with a green light placed on the starboard (right) side, a red light on the port (left) side and one (or two, depending on the size) white master headlight shining forward on the vessel’s centre line. A white stern light also shines aft (Crosbie, Reference Crosbie2006; Ratcliffe and Lyon, Reference Ratcliffe and Lyon1986; Baevsky and Berseneva, Reference Baevsky and Berseneva2008). Larger and specialised vessels may have additional lights. However, a streamlined representation was employed to simplify the concept and avoid information overload on the mHUD. The mHUD displays three lights representing a vessel: red, green and white. Figure 6a–d demonstrates how the mHUD represents a ship based on its travel direction. When a ship moves from port to starboard or vice versa, the red or green LED simulates an after-glow with a shorter or longer trail, visually conveying the vessel’s speed relative to the observing ship.
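The after-glow behaviour can be sketched as follows. The scaling of trail length with crossing speed and the linear intensity fade are illustrative assumptions; the paper does not specify the exact mapping used in the prototype.

```javascript
// Sketch (assumed mapping, not the authors' code): an after-glow trail for a
// crossing vessel. The trail extends behind the direction of apparent motion,
// fading linearly; its length grows with the relative crossing speed.
function shipTrail(bearingDeg, crossingSpeedKnots, movingToStarboard) {
  const trailDeg = Math.min(10, Math.round(crossingSpeedKnots)); // assumed scale, capped
  const step = movingToStarboard ? -1 : 1; // the trail lags behind the motion
  const leds = [];
  for (let i = 0; i <= trailDeg; i++) {
    const bearing = (((bearingDeg + step * i) % 360) + 360) % 360;
    const intensity = 1 - i / (trailDeg + 1); // linear fade towards the tail
    leds.push({ bearing, intensity });
  }
  return leds;
}
```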

2.2.2. Aids to Navigation

Aids to Navigation (AtoN) are crucial in assisting vessels with safe navigation. These include lighthouses, beacons and buoys, each serving a specific purpose and located in different areas. To accurately represent these AtoN installations, the mHUD incorporates their distinctive light characteristics, i.e. the light colour, flashing pattern and period. Figure 6e, f visually depicts how the mHUD showcases these characteristics.
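Reproducing a light characteristic on the ring amounts to evaluating an on/off pattern against elapsed time. A minimal sketch, assuming a pattern is given as alternating on/off durations (names and the example characteristic are ours):

```javascript
// Sketch: evaluate a repeating light characteristic at time t, so an mHUD LED
// blinks like the real aid. A pattern is a list of alternating on/off
// durations in seconds; the sum of the list is the period.
function isLightOn(pattern, tSeconds) {
  const period = pattern.reduce((a, b) => a + b, 0);
  let phase = ((tSeconds % period) + period) % period;
  for (let i = 0; i < pattern.length; i++) {
    if (phase < pattern[i]) return i % 2 === 0; // even slots are 'on' phases
    phase -= pattern[i];
  }
  return false;
}

// Illustrative example: a flashing light with a 3 s period and 0.5 s flashes.
const fl3s = [0.5, 2.5];
```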

2.2.3. Shallow waters

When navigating near coastlines or in areas with exposed islands, it is essential to be aware of shallow waters and the surrounding land. The mHUD addresses this by visualising water depths and elevations (Figure 6g). Nautical charts use various representations to depict water depths based on their intended purpose. Typically, darker blue shades represent shallow waters, while lighter shades indicate deeper depths. Because water depths change with tidal effects, the mHUD always displays the minimum available depth. To ensure safe navigation, the mHUD displays water depths of up to 35 m in various shades of blue. Elevations, however, are displayed in a yellow shade.
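The depth shading described above can be sketched as a simple colour ramp. The specific RGB values and the convention that negative depth denotes land are our assumptions for illustration; only the chart convention (darker blue = shallower, 35 m display limit, yellow for elevations) comes from the text.

```javascript
// Sketch (assumed colours): map a water depth to an RGB shade following the
// convention above – darker blue for shallow water, lighter blue for deeper
// water, clamped at the 35 m display limit; land/elevations shown in yellow.
const MAX_DEPTH_M = 35;

function depthToColor(depthMetres) {
  if (depthMetres < 0) return [255, 220, 0]; // negative depth = land (yellow)
  const t = Math.min(depthMetres, MAX_DEPTH_M) / MAX_DEPTH_M; // 0 shallow .. 1 deep
  // Blue channel stays saturated; raising red/green with depth lightens the shade.
  return [Math.round(40 + 160 * t), Math.round(60 + 160 * t), 255];
}
```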

2.2.4. Other objects

In addition to the objects mentioned in this section, the mHUD can also be used in various scenarios, such as showing the last known or predicted man overboard position (MOB scenario). Alternatively, if the mHUD is installed on the service ship of remotely operated or autonomous surface, underwater or aerial vehicles, the positions of these vehicles relative to the service ship can be displayed on the mHUD.

2.3. Data-to-mHUD

The mHUD integrates multiple data sources to provide comprehensive situational awareness. Table 2 summarises the types of inputs processed by the system and their roles in the visualisation. These sources provide relevant data, which is processed in a JavaScript application developed for this purpose (Figure 7). The program collects entity positions from the various data sources mentioned above, synchronises them based on the system timestamps and maps them onto the mHUD. Some of the data are extrapolated, for example, the AIS data, which is transmitted at different frequencies depending on the ship type. The mHUD display is updated every 500 ms; if no new data points have arrived, positions are extrapolated from the previous data points, heading and speed. According to standards (Commission, Reference Commission2018, Reference Commission2017; Union, Reference Union2014), Class A ships should transmit their AIS data every 2–10 s, Class B ships every 30 s, and ships moving at less than 2 knots should update their AIS position every 3 min. The study used data from the simulator to simulate ship positions and directions. The software must be configured with the number of LED strips, the number of LEDs per strip and the angles covered by each strip. The program collects all positions and filters them based on their distance from the ship, removing entities behind the ship or too far away to be relevant. Additional filters can be implemented, for example, to remove entities behind land masses.
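The extrapolation step between AIS fixes can be sketched as dead reckoning from the last known position, heading and speed. This is our reconstruction, not the authors' code; a flat-earth approximation is assumed, which is adequate for the few seconds between fixes.

```javascript
// Sketch (assumed implementation): dead-reckoning extrapolation of a vessel's
// position between AIS fixes, using the last position, heading and speed.
const EARTH_RADIUS_M = 6371000;

function extrapolate(latDeg, lngDeg, headingDeg, speedKnots, dtSeconds) {
  const dist = speedKnots * 0.514444 * dtSeconds; // knots -> m/s, times elapsed time
  const h = (headingDeg * Math.PI) / 180;
  // Flat-earth approximation: valid for the short intervals between AIS fixes.
  const dLat = (dist * Math.cos(h)) / EARTH_RADIUS_M;
  const dLng = (dist * Math.sin(h)) / (EARTH_RADIUS_M * Math.cos((latDeg * Math.PI) / 180));
  return {
    lat: latDeg + (dLat * 180) / Math.PI,
    lng: lngDeg + (dLng * 180) / Math.PI,
  };
}
```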

Table 2. Inputs processed by the mHUD

Figure 7. The program transfers the position of the objects to the mHUD. On the left side, the position of each object within the range is visible; in the middle, the map displays the objects and the ship. The software snapshot shows a smaller frame of the scenario displayed in Figure 8 and marked for better comprehensibility; on the top, the LEDs are shown to verify the information displayed on the mHUD for debugging purposes (highlighted with a red frame).

(1) The bearing β between the ship and the target is calculated as β = atan2(x, y) (converted to degrees), where ‘lats’ and ‘lngs’ represent the latitude and the longitude of the vessel and ‘latt’ and ‘lngt’ those of the target:

(1) $${x = {\rm{sin}}\left( {lngt - lngs} \right)\;{\rm{*}}\;{\rm{cos}}\left( {latt} \right)}$$
(2) $${y = {\rm{cos}}\left( {lats} \right)\;{\rm{*}}\;{\rm{sin}}\left( {latt} \right) - {\rm{sin}}\left( {lats} \right)\;{\rm{*}}\;{\rm{cos}}\left( {latt} \right)\;{\rm{*}}\;{\rm{cos}}\left( {lngt - lngs} \right)}$$

(2) Remove the ship heading, and normalise the value between 0 and 360 degrees:

(3) $${\beta R = \left( {\beta - h + 360} \right)\;{\rm{mod}}\;360}$$

(3) This calculation provides the relative bearing of the target entity with respect to the ship’s heading.
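The three steps above can be expressed directly in JavaScript, the language of the mHUD application (a sketch; the function names are ours, and coordinates are taken in degrees):

```javascript
// Steps (1)-(3): true bearing from ship to target, then relative bearing.
const toRad = (d) => (d * Math.PI) / 180;
const toDeg = (r) => (r * 180) / Math.PI;

// Equations (1) and (2): components of the initial great-circle bearing.
function bearingTo(lats, lngs, latt, lngt) {
  const x = Math.sin(toRad(lngt - lngs)) * Math.cos(toRad(latt));
  const y =
    Math.cos(toRad(lats)) * Math.sin(toRad(latt)) -
    Math.sin(toRad(lats)) * Math.cos(toRad(latt)) * Math.cos(toRad(lngt - lngs));
  return (toDeg(Math.atan2(x, y)) + 360) % 360; // true bearing, 0–360°
}

// Equation (3): remove the ship's heading to obtain the relative bearing.
function relativeBearing(bearingDeg, headingDeg) {
  return (bearingDeg - headingDeg + 360) % 360;
}
```

The relative bearing returned here is what the application maps onto an LED index on the 360-degree ring.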

3. Method

3.1. Study procedure

To test the prototype, we conducted user tests in a ship simulator, which allowed us to closely replicate the intended use context while maintaining a reproducible setting and minimising participant risk (Pieroni et al., Reference Pieroni, Lantieri, Imine and Simone2016). The participants were welcomed and introduced to the study’s procedure, the data to be collected and the tasks they needed to fulfil, and were given a short explanation of the mHUD. All participants provided their informed consent prior to the commencement of the study and were informed that they could withdraw at any time. Each participant then completed a form that collected information about their experience, training and openness to new technology. After this introductory interview, the participants were equipped with an Empatica E4 wristband (Empatica, 2023) to collect their stress response through heart rate (HR) and galvanic skin response (GSR). Then, the participants were guided into the Kongsberg K-SIM NAVIGATION ship bridge simulator (Digital, Reference Digital2023) (Figure 5). They were seated in the officer’s chair, with a view of the instruments, ECDIS, the view outside the ship’s bridge windows and the mHUD installed with its origin above the officer’s chair. The radar was deactivated during the studies to enforce visual navigation.

We asked the participants to keep a lookout during the vessel’s journey. The ship navigated on autopilot along a predefined route, and participants were instructed not to interact with the vessel’s controls. They were given a list of 13 lighthouses, beacons, buoys and ships to identify during the journey (Figure 8). During the trip, the visibility changed between two conditions: good visibility or poor visibility. During some parts of the journey, participants navigated the ship with the mHUD turned on and in other parts without it. The combination of visibility and mHUD usage changed every 3 min (Figure 9), following a Latin square design (Winer et al., Reference Winer, Brown and Michels1971) that counterbalanced the four conditions (good/poor visibility with/without mHUD) across participants. This approach was chosen to minimise potential learning effects and biases related to the sequence of conditions.
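A simple way to generate such counterbalanced orders is a cyclic Latin square, sketched below. The condition labels are ours, and a cyclic square is only one of several Latin square constructions the study may have used.

```javascript
// Sketch: a cyclic Latin square over the four experimental conditions.
// Row i gives participant i's condition order; each condition appears once
// per row and once per position across any block of four participants.
const CONDITIONS = ['mHUD+good', 'noHUD+good', 'mHUD+poor', 'noHUD+poor'];

function latinSquareRow(participantIndex) {
  const n = CONDITIONS.length;
  return CONDITIONS.map((_, j) => CONDITIONS[(participantIndex + j) % n]);
}
```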

Figure 8. The path each participant followed during the journey with the 13 ships and aids to navigation they should identify.

Figure 9. Each participant encountered four conditions during the study on different journey passages: (a) navigation with mHUD and good visibility; (b) navigation without mHUD and good visibility; (c) navigation with mHUD and poor visibility; and (d) navigation without mHUD and poor visibility.

Each navigation scenario lasted 3 min, after which the simulation was paused and the participant completed the National Aeronautics and Space Administration Task Load Index (NASA-TLX) (Hart and Staveland, Reference Hart and Staveland1988), a subjective rating of six items measuring self-reported cognitive load. Collecting the ratings immediately after each block ensured that feedback was gathered while the experience was fresh, preventing confusion between consecutive scenarios; this is aligned with recommendations for minimising cognitive interference in workload assessments and with the tool’s standard application guidelines. After 12 min (four blocks of 3 min each), the participants had completed the simulation, having experienced all four combinations of mHUD on/off and good/poor visibility.

Immediately after the simulation study, the participants attended a post-session interview. In this session, we first asked them to rate the mHUD on the System Usability Scale (SUS) (Brooke, Reference Brooke1986), followed by a semi-structured interview to assess their perceptions of the user experience. The purpose of the SUS is to provide a quick and reliable measure of the perceived usability of a system, which can be used to identify areas for improvement and to compare the usability of different systems or versions. The SUS is a validated and reliable instrument that has been used in various applications, including virtual environments, educational games, clinical decision support systems and e-commerce websites (Pedroli et al., Reference Pedroli, Greci, Colombo, Serino, Cipresso, Arlati, Mondellini, Boilini, Giussani, Goulene, Agostoni, Sacco, Stramba-Badiale, Riva and Gaggioli2018; Marzuki et al., Reference Marzuki, Yaacob and Yaacob2018).

3.2. Participants

The participants for this study were recruited through the vocational school in Møre og Romsdal County Municipality (Fagskolen) located in Ålesund, Norway. All participants (N = 12) were male, with an average age of 24 (SD = 5). They were enrolled as navigational students in either their first or second year of study and had prior experience working onboard a ship. Half of the participants were not certified deck officers; the remaining participants held certifications such as AB, D5L or Class 3.

On average, the participants had approximately six years of experience (SD = 5) serving on various types of ships, including fishing vessels, service vessels or car ferries. They also had an average of 70 h of experience in a ship simulator (SD = 21). During their time on board, their primary responsibility was to act as lookouts, which involved observing the surroundings for potential hazards such as other ships, shallow waters and rocks. Additionally, the participants were responsible for monitoring the ship’s position, lights and radio communication, reporting to the officer on watch, and providing support, particularly in poor visibility conditions.

Given the nature of their duties, the participants frequently encountered poor visibility conditions while sailing. Half of the participants had received training in navigating under poor visibility conditions, which included using, interpreting, and adjusting radar and other tools, such as ECDIS, as well as simulator training in various conditions and real-world vessel operations while maintaining a lookout. The participants were highly familiar with the navigational tools they were trained in, indicating their confidence and proficiency in using them during their duties.

4. Collected measures and data preparation

Multiple measures were collected to evaluate the mHUD, each addressing specific study aims. The data preparation and analysis procedures are described in this section. Table 3 summarises these assessments and their alignment with the study objectives.

Table 3. Study aims and assessment methods

4.1. Pre-session interview

The responses obtained from the pre-session interviews underwent a descriptive analysis. This analysis aimed to gain insights into the participants’ background, experience, openness to new technology, training and experience navigating under poor visibility conditions. This analysis provided valuable information about the participants’ characteristics and factors that may influence their perception and performance during the user tests.

4.2. Task Load Index (NASA TLX)

The NASA Task Load Index (TLX) is a widely used instrument that assesses workload across six dimensions: mental demand, physical demand, temporal demand, performance, effort and frustration (Hart and Staveland, Reference Hart and Staveland1988). In this study, a paper-based version of the NASA TLX was used, where participants were provided with a 12 cm line divided into 20 sections. They were instructed to rate each item along this line, indicating their perceived workload level, ranging from low to high.

The raw (unweighted) scores from each item were summed for each participant for the workload analysis. This yielded an individual workload score. The arithmetic mean of all participants’ workload scores was calculated to determine the overall workload. This calculation provided a comprehensive measure of the average workload experienced by participants.
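The raw-TLX scoring procedure described above can be sketched as follows. This is an illustrative sketch, not the authors' analysis script; the function names and the 0–20 item scale (matching the paper form's 12 cm line divided into 20 sections) are our assumptions.

```python
def raw_tlx_workload(ratings):
    """Raw (unweighted) NASA-TLX: sum the six subscale ratings.

    `ratings` holds one value per dimension (mental, physical,
    temporal, performance, effort, frustration), here assumed to be
    on a 0-20 scale matching the paper form used in the study.
    """
    if len(ratings) != 6:
        raise ValueError("NASA-TLX has exactly six dimensions")
    return sum(ratings)


def overall_workload(per_participant_scores):
    """Arithmetic mean of the individual workload scores."""
    return sum(per_participant_scores) / len(per_participant_scores)
```

For example, a participant rating the six items as 10, 5, 8, 7, 12 and 6 would receive an individual workload score of 48, and the overall workload is simply the mean of such scores across participants.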

The overall workload was further analysed in terms of different visibility conditions and mHUD state settings. By examining workload variations across these conditions, we gained insights into the impact of visibility and mHUD usage on the perceived workload levels.

4.3. Heart rate analysis (stress index)

The heart rate measurements were obtained using the photoplethysmography (PPG) sensor of the Empatica E4 wristband (Empatica, 2023). The PPG sensor provided the heart rate data as RR intervals, representing the time (in milliseconds) between successive R peaks.

Data cleaning was performed to ensure the accuracy of the heart rate data. Cleaning involved interpolating across inconsistent RR values; interpolation was applied to at most 10% of the data points in any recording.

For each participant and each 3-min section of the study, Baevsky’s stress index (SI) (Baevsky and Berseneva, Reference Baevsky and Berseneva2008) was calculated using the Kubios HRV software (Tarvainen et al., Reference Tarvainen, Niskanen, Lipponen, Ranta-Aho and Karjalainen2014). This index provided insights into the participants’ stress levels during different study segments and, together with the cleaned heart rate data, offered an objective view of the physiological responses experienced by the participants.
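As an illustration of this step, the following is a minimal sketch of Baevsky's stress index computed from RR intervals, under the common formulation SI = AMo / (2 · Mo · MxDMn). The 50 ms histogram bin width is the conventional choice; the actual Kubios implementation may differ in details (for instance, some tools report the square root of the index), so this is a sketch of the concept, not a reproduction of the software used in the study.

```python
def baevsky_stress_index(rr_ms, bin_ms=50):
    """Baevsky's stress index from RR intervals (milliseconds).

    SI = AMo / (2 * Mo * MxDMn), where Mo is the mode of the RR
    histogram (seconds, taken as the modal bin centre), AMo is the
    share of intervals (%) falling in the modal bin, and MxDMn is
    the RR range (seconds). 50 ms bins are the conventional choice.
    """
    counts = {}
    for rr in rr_ms:
        b = int(rr // bin_ms)
        counts[b] = counts.get(b, 0) + 1
    modal_bin, modal_count = max(counts.items(), key=lambda kv: kv[1])
    mo = (modal_bin * bin_ms + bin_ms / 2) / 1000.0   # modal bin centre, s
    amo = 100.0 * modal_count / len(rr_ms)            # % in modal bin
    mxdmn = (max(rr_ms) - min(rr_ms)) / 1000.0        # RR range, s
    return amo / (2.0 * mo * mxdmn)
```

Higher values indicate a more regular, sympathetically driven heart rhythm, i.e. higher physiological stress.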

4.4. Time-on-Task analysis

During the study, participants were asked to record the exact ship’s clock time when they first observed each of the 13 objects. These observations could be made either through the window of the ship bridge simulator or with the help of the mHUD display. A specific origin time stamp was established as the reference point for detecting each object.

To assess the accuracy of their observations, the time recorded by the participants was compared with the origin time. The difference between each object’s origin and recorded time was then calculated. This analysis enabled us to assess the participants’ ability to perceive and respond accurately to the objects, both with the aid of the mHUD and through direct visual observation from the ship bridge simulator window. While not a direct measurement of head-down time, this analysis serves as an indirect indicator of efficiency gains by quantifying the time required to perceive and identify objects with and without the mHUD.
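The delay computation described above can be sketched as follows. The object names and the representation of ship's-clock times as seconds are illustrative assumptions; a missed object is recorded as having no observation.

```python
def detection_delays(origin_times, recorded_times):
    """Delay (in seconds) between an object becoming detectable
    (its origin time stamp) and the participant logging it.

    Returns one entry per object; None marks a missed object.
    Times are assumed to be ship's-clock times in seconds.
    """
    delays = {}
    for obj, origin in origin_times.items():
        recorded = recorded_times.get(obj)
        delays[obj] = None if recorded is None else recorded - origin
    return delays
```

For example, an object detectable from t = 120 s and logged at t = 185 s yields a delay of 65 s, and objects never logged are counted as missed.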

4.5. System Usability Scale (SUS)

The System Usability Scale (SUS) responses of the participants were collected online and analysed using the method outlined by Brooke (Reference Brooke1986). To provide a more comprehensive evaluation, the SUS score was supplemented with the qualitative data gathered from the semi-structured post-session interviews, as suggested by Harper and Dorton (Reference Harper and Dorton2021). Combining quantitative and qualitative insights provided a more nuanced understanding of the system’s usability and the participants’ experiences. Integrating the SUS score and interview responses enriches the overall assessment and provides valuable context for interpreting the usability findings.
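The standard SUS scoring referenced above (odd-numbered items contribute their rating minus 1, even-numbered items contribute 5 minus their rating, and the sum is scaled by 2.5) can be sketched as:

```python
def sus_score(responses):
    """SUS score (0-100) from ten 1-5 Likert responses, following
    Brooke's scoring: odd items contribute (r - 1), even items
    contribute (5 - r); the total is multiplied by 2.5."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly ten items")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

A respondent answering entirely neutrally (all 3s) scores 50, which is why SUS scores are interpreted against benchmarks rather than as percentages.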

4.6. Post-session interview

The post-session interview followed a semi-structured format, allowing participants to share their experiences with the mHUD. They were encouraged to discuss the aspects and features they found promising, areas that could be improved and their thoughts on the potential implementation of the mHUD on a ship bridge or in an ROC for autonomous or remotely controlled ships. The participants’ responses were recorded using a voice recorder. Subsequently, the interviews were transcribed and translated for analysis. An affinity diagram of the interviews was created by two of the authors; this process involved the following steps.

  1. The users’ statements were transferred onto sticky notes on a board in the digital collaboration platform Miro.

  2. A set of initial codes was created to categorise the statements.

  3. The authors familiarised themselves with the content of the statements.

  4. Each author independently assigned relevant codes as tags to the sticky notes on the board.

  5. Each author sorted the sticky notes according to the assigned tags.

  6. The authors engaged in discussions to reach a consensus on the sorting of the sticky notes.

  7. Concluding topics were identified for the codes through collaborative discussions.

Subsequently, the themes derived from the analysis were revisited, with a focus on identifying constructs that transcended individual themes. These identified constructs were then given appropriate names, defined and coded in the data to complement the themes.

5. Results

5.1. Willingness to use new technologies in navigation

The participants involved in the study demonstrated a reasonable level of familiarity with the existing tools for vessel navigation, rating it at a mean of 6.75 (SD = 1.2) on a 10-point Likert scale, which indicates moderate knowledge; only one-third of the participants expressed high confidence in their answers. Measuring participants’ willingness to use new technologies was crucial for evaluating the mHUD concept, as one of the study objectives was to assess user openness to adopting innovative navigational tools. The high reported willingness (mean = 8.41, SD = 1.97) indicates positive acceptance potential for the mHUD and aligns with the study’s aim to explore individual factors influencing adoption.

5.2. Faster object recognition

Comparing the time users took to identify given objects under different study conditions, the results revealed that without the mHUD and in poor visibility conditions, participants required an average of 86 s (SD = 73) to identify navigational artefacts or ships. Similarly, without the mHUD but in good visibility conditions, participants took an average of 87 s (SD = 65). With the mHUD, the time decreased to 70 s (SD = 43) in poor visibility conditions and 62 s (SD = 44) in good visibility conditions. On average, participants required 76 s (SD = 35) to identify an object. Notably, regardless of weather conditions, participants were, on average, 20.8 s faster in recognising objects when using the mHUD. However, these results should be interpreted cautiously, as they did not reach statistical significance due to the high variation in measurements and the relatively small sample size. When considering visibility conditions, the trend remained consistent: object recognition was 16.3 s faster in poor visibility and 25.6 s faster in good weather conditions (Table 4).

Table 4. The average time (in seconds) the participants needed to identify objects

5.3. Better object recognition

During the studies, it was observed that not all participants could identify all objects. In good visibility conditions, participants missed, on average, six objects with the mHUD and three without it; in poor visibility conditions, the number of missed objects increased to 9 with the mHUD and 17 without. The difference in object identification between using the mHUD and not using it was more pronounced in poor visibility than in good visibility conditions. On average, participants missed nine objects (SD = 6) (Table 5).

Table 5. The cumulated numbers of objects the participants have missed or seen in each condition

5.4. Self-reported cognitive load

Participants reported a slightly higher cognitive load while using the mHUD (M = 9.09, SD = 3.6) compared with no mHUD (M = 8.5, SD = 3.6) on the NASA-TLX scale (Figure 10). These results suggest that the added visual and cognitive input from the mHUD may initially increase mental demand. However, participants also noted that this increased load was mitigated as they became more familiar with the system. This trend aligns with previous studies on introducing new navigational aids (Hart and Staveland, Reference Hart and Staveland1988).

Figure 10. General workload the participants reported after each condition (Figure 9).

5.5. Physiological stress measurements

In contrast, physiological data collected using the Empatica E4 wristband showed reduced stress levels with the mHUD (M = 14.1, SD = 6.7) compared with no mHUD (M = 15.2, SD = 7.7). These measures reflect a potential calming effect of the mHUD during tasks, particularly under challenging visibility conditions. Physiological measurements offer an objective perspective on cognitive load, which complements but cannot be directly compared with subjective measures due to differences in underlying constructs (Larmuseau et al., Reference Larmuseau, Vanneste, Desmet and Depaepe2019; Ayres et al., Reference Ayres, Lee, Paas and van Merriënboer2021).

5.6. Acceptable usability of mHUD

The mHUD received a System Usability Scale (SUS) score of 64 (Figure 11), which falls at the lower end of the ‘acceptable’ range based on established benchmarks (Brooke, Reference Brooke1986). The SUS scale translates numerical scores into qualitative descriptors, ranging from ‘unacceptable’ to ‘excellent’; a score of 64 corresponds to marginally acceptable usability, a promising outcome for an early prototype. While there is room for improvement, this score suggests that the mHUD design is on track to meet user needs and usability standards.

Figure 11. The reported SUS from all participants was transformed into comprehensible grading.

5.7. Results of post-session interviews

The analysis of the interviews resulted in 126 codes, which were assigned to 24 initial categories. After sorting the cards, five higher-level topics were identified: ‘User Experience’, ‘Safety and Trust’, ‘Improvement and Feature Requests’, ‘Situational Awareness’ and ‘Context’. These topics are summarised in the remainder of this section.

5.7.1. User experience

The users reported a positive experience with the mHUD, acknowledging its potential and expressing comfort with its incorporation into ship bridges or control rooms. However, the concept was associated with initial confusion and a steep learning curve. The participants needed to get used to the novel idea of data representation in the form of light dots rather than explicit information. Users liked the idea and believed they would have performed better if they were more familiar with the mHUD. The participants highly rated the learnability and comprehensibility of the mHUD.

The mHUD provided precise information that accurately represented the real-world state. It helped users efficiently detect objects, mainly ships, which could be found and identified faster with the mHUD than without. The mHUD also worked well in combination with the ECDIS. Users mentioned the importance of customisation, such as adjusting the range for displaying objects, filtering displayed objects, and the ability to toggle the mHUD on and off. Certain functionalities were remarkably satisfying, such as the flexibility to move around the bridge while still identifying lights and using the mHUD to confirm visually identified objects. Overall, users welcomed the mHUD as a fixed installation on a ship bridge.

5.7.2. Safety and trust

The mHUD’s potential to enhance visibility and performance was recognised, especially in poor visibility scenarios. It provided valuable assistance in identifying and differentiating between lights, contributing to navigation safety. Several users reported feeling a greater sense of safety with the mHUD, particularly in high-stress situations.

5.7.3. Improvement and features

During the post-session interviews, participants provided valuable feedback on areas that could enhance the mHUD’s usability and comprehensibility. From the analysis of their responses, six main themes for improvement were identified: (1) light characteristics; (2) distance to objects; (3) light intensity; (4) personalisation settings; (5) placement and (6) additional functions. This section summarises these overarching topics.

Light Characteristics: the mHUD uses light patterns commonly found in maritime environments, including red, green and white lights. However, some participants expressed concerns that certain representations were too similar, making them difficult to distinguish from one another. They emphasised the need for additional characteristics beyond light colour, such as distinct blinking patterns or shapes, to improve differentiation. Furthermore, participants highlighted the challenge of representing non-illuminated navigation objects. They suggested introducing a unique colour, such as violet, which is less commonly used in maritime contexts, to indicate these objects clearly.

Distance to Objects: participants noted that displaying too many objects simultaneously could clutter the system, leading to information overload and diminishing the mHUD’s effectiveness. They proposed several potential solutions to address this issue.

  • Adjustable Light Intensity: gradually increase the light intensity as objects get closer, starting with dim lights for distant objects.

  • Variable LED Allocation: represent distant objects with fewer LEDs and closer objects with more LEDs, making nearby objects appear larger.

  • Dynamic Filtering: reduce displayed objects by showing only those in the sailing direction, on the intended trajectory, or within dynamic ranges (e.g. between 0.5 and 3 nautical miles).

These suggestions aim to streamline the display while preserving the relevance of the presented information.
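The three participant suggestions above could be combined into a single rendering rule. The following sketch is purely illustrative: the range band, intensity curve and LED counts are our assumptions, not properties of the evaluated prototype.

```python
def led_rendering(objects, max_range_nm=3.0, min_range_nm=0.5):
    """Illustrative combination of the participants' suggestions:
    dynamic filtering to a range band, dimmer lights for distant
    objects, and more LEDs for closer objects.

    `objects` is a list of (bearing_deg, distance_nm) tuples;
    returns (bearing_deg, intensity 0-1, number_of_leds) tuples.
    """
    rendered = []
    for bearing, dist in objects:
        if not (min_range_nm <= dist <= max_range_nm):
            continue  # dynamic filtering: outside the display band
        closeness = 1.0 - (dist - min_range_nm) / (max_range_nm - min_range_nm)
        intensity = 0.2 + 0.8 * closeness      # dim when far, bright when near
        width_leds = 1 + round(2 * closeness)  # 1-3 LEDs depending on distance
        rendered.append((bearing, round(intensity, 2), width_leds))
    return rendered
```

Under these assumed parameters, an object at 0.5 nautical miles is rendered at full intensity across three LEDs, one at the 3-nautical-mile edge as a single dim dot, and anything outside the band is suppressed.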

Light Intensity: participants had differing preferences regarding the brightness of the lights. Some suggested that the lights could be brighter to enhance visibility, particularly during daytime operations. Conversely, others expressed concerns about excessive brightness at night, which could cause glare or discomfort. To address these varying needs, participants recommended allowing users to manually adjust the brightness of the lights, enabling them to tailor the settings to different conditions and personal preferences.

Personal Settings: participants emphasised the importance of having the ability to switch the mHUD on or off based on the situation or their needs. This functionality would allow users to deactivate the tool when it becomes distracting or if it is displaying incorrect information. However, some participants noted the potential risk of forgetting to turn it back on in critical situations, which could result in losing the tool’s benefits during challenging conditions.

Participants also suggested additional features to enhance usability and customisation. For instance, they proposed displaying additional information, such as the names of navigation objects or ships currently shown, to make the mHUD more informative. Another recommendation was the ability to select objects on the electronic chart, which would then be highlighted on the mHUD for easier identification.

Users expressed a desire to personalise the display to suit their preferences. These suggestions included adjusting the distance for showing objects, filtering specific object types (e.g. displaying only ships or only lighthouses) and customising the colour coding of objects to align with individual colour profiles. These features aim to reduce information overload and improve situational relevance.

Placement: participants discussed various placement options for the mHUD. While some participants appreciated the current design, where the lights are arranged in a ring above the captain’s seat, others suggested alternative placements to enhance usability. Many participants believed that connecting the mHUD’s information directly to the outside environment would be easier if the LEDs were positioned above the ship bridge windows or projected onto the windows.

Opinions were divided regarding the 360° light arrangement. Some participants valued this design, particularly for covering blind spots such as overtaking vessels, which could be critical in future operations. However, the majority suggested that a range of 70°–90° to the left and right would be sufficient for most navigation tasks, as it aligns with the primary field of view of the operator.

Additional Functionalities: participants proposed several additional functionalities to make the mHUD more versatile and tailored to their needs. A common suggestion was the integration of alarms to enhance safety and situational awareness. For example, the mHUD could direct the user’s attention to potential dangers, such as collision warnings, or indicate equipment on the ship requiring immediate attention.

Another frequently mentioned feature was the ability to toggle between different display settings, complementing the personal settings for quick and accessible customisation. Participants appreciated the potential of multiple LED rows, which could allow for enhanced information representation. For instance:

  • the top row is helpful for the display of the top lanterns of ships for easier identification;

  • another row below the centre LED could indicate dangers or objects below the water surface;

  • multiple rows could also make closer objects appear larger, enhancing depth perception.

These functionalities were seen as valuable enhancements that could improve the mHUD’s flexibility and usability across various operational scenarios.

5.7.4. Situation awareness

There were differing opinions regarding the need for a 360° display. Some suggested a range of 70°–90° on each side, while others found a complete view helpful to cover dead spots at the stern of the ship.

A general emphasis from the participants was that the mHUD was a helpful aid for identifying dangerous situations. All participants noted that they would not have seen all ships when looking out of the ship bridge window, both in good and bad visibility conditions, but could detect them after recognising them on the mHUD and visually confirming their presence. The mHUD effectively filtered out light noise from land, displaying only relevant navigational artefacts and increasing the officer’s situation awareness. It was found particularly beneficial in poor visibility conditions, allowing for improved visibility of navigational lights. Users considered it a valuable tool during foggy conditions and other low-visibility settings, such as nighttime, when fatigue and attentiveness are significant factors. One participant mentioned confusion when searching for lights that should have been visible but were not due to poor visibility.

5.7.5. Context

The users highlighted the usefulness of the mHUD in various navigation scenarios and preferred its presence on a ship bridge. They also considered it a valuable tool in ROCs or shore control centres (SCCs), although some participants wondered whether it could be installed on smaller vessels due to the limited space available. While the tests were conducted in a ship bridge simulator, many findings are relevant to ROCs due to shared operational challenges. For instance, participants highlighted the mHUD’s potential to enhance understanding of the environment, particularly in poor visibility conditions. The ability to display relevant navigational information without requiring direct visual observation supports its application in ROCs, where operators rely heavily on remote sensing tools. Additionally, participants’ suggestions for integrating the mHUD into graphical interfaces align with ROC requirements, ensuring seamless usability in remote settings.

6. Discussion

This section discusses the study’s findings in relation to the results presented, followed by the implications of the mHUD concept in the broader context of navigational assistance technologies. The discussion is organised under the same headings as the results for consistency and clarity. Additionally, the challenges identified by participants are critically examined to guide future iterations of the mHUD.

6.1. Willingness to use new technologies in navigation

The participants demonstrated a high willingness to adopt new technologies, with a mean score of 8.41 (SD = 1.97). This result aligns with the study objective of evaluating user acceptance of the mHUD concept, as outlined in Section 1.1. While no significant differences were found between experience levels and openness, the positive reception suggests that, regardless of experience, operators are receptive to integrating new tools into their workflows. However, user willingness should not be viewed in isolation; instead, it must be supported by addressing usability challenges and ensuring compatibility with existing systems.

6.1.1. Faster and better object recognition

The mHUD’s ability to improve object recognition speed and accuracy was one of the primary findings. Participants consistently identified objects faster with the mHUD, even under poor visibility conditions. Although the differences were not statistically significant, the trend is promising and suggests that the mHUD can enhance navigational efficiency by reducing the cognitive effort required for visual detection.

However, the study highlighted concerns about information overload when too many objects are displayed. Participants noted that cluttered displays could reduce the effectiveness of the mHUD, particularly in complex navigational scenarios. Future iterations should incorporate features such as dynamic object filtering or adjustable display ranges to address these concerns. These improvements would help operators focus on the most relevant information, reducing distractions and supporting situational awareness.

6.2. Cognitive load and stress

Self-reported cognitive load, as measured by the NASA-TLX (Hart and Staveland, Reference Hart and Staveland1988), was slightly higher with the mHUD (M = 9.09, SD = 3.6) compared with no mHUD (M = 8.5, SD = 3.6). This increase is likely due to the initial learning curve and the novelty of the system. In contrast, physiological measurements, including heart rate data analysed using Baevsky’s stress index (Baevsky and Berseneva, Reference Baevsky and Berseneva2008), suggested reduced stress levels when using the mHUD. This divergence indicates that while the mHUD may initially increase perceived workload, it could have long-term benefits in reducing operator stress.

This finding highlights the importance of striking a balance between learnability and usability in early prototypes. User training and iterative design improvements, such as simplifying the interface and offering customisation options, are critical to addressing the reported cognitive load and ensuring the mHUD’s effectiveness over extended use.

6.3. Acceptable usability

The System Usability Scale (SUS) (Brooke, Reference Brooke1986) score of 64 indicates marginally acceptable usability for an early prototype. While this score highlights room for improvement, it reflects the participants’ general comfort with the concept. Importantly, participants identified several areas for refinement, such as better distance visualisation, customisable settings and integration with existing navigational systems. Addressing these user suggestions will be essential for enhancing the mHUD’s usability in subsequent iterations.

6.4. Feedback on design and features

Participants provided constructive feedback on the mHUD’s design and functionality. Key areas for improvement include the following.

Placement: while some users appreciated the current ring-shaped design, others preferred integrating the mHUD into the ship bridge windows or projecting it onto screens. These suggestions should guide future design adaptations and be examined in subsequent investigations with refined prototypes.

Personalisation: participants emphasised the need for customisable features, such as adjustable brightness, dynamic object filtering and tailored colour profiles. Incorporating these options will enhance the mHUD’s adaptability to individual preferences, but may also cause confusion, especially if multiple users access the tool at the same time.

Additional Functionalities: participants also recommended alarm systems to highlight critical navigation events (e.g. collision warnings) and multi-row LED configurations for better depth representation. These features can enhance the mHUD’s value as a navigational aid, but should be introduced carefully to avoid overloading the system.

6.5. The mHUD in the context of navigational assistance

The mHUD offers a novel approach to navigational assistance by providing a non-wearable, continuous display of critical information.

  • On Ship Bridges: the mHUD complements existing tools, such as ECDIS and radar, by providing real-time, unobstructed visual cues in the operator’s field of view. Its ability to filter out irrelevant light noise and highlight relevant objects makes it a valuable addition, particularly in poor visibility conditions. Unlike wearable AR glasses, the mHUD is designed for sustained use over long periods, making it well-suited for the continuous monitoring tasks required on a ship’s bridge.

  • In Remote Operations Centres (ROCs): where operators are physically detached from the vessel, the mHUD can serve as a crucial secondary visual channel to enhance situational awareness. While operators in ROCs rely on a suite of digital interfaces, the mHUD can provide an at-a-glance overview of the immediate navigational situation, reducing the cognitive load associated with interpreting complex screen-based information. Its potential for seamless integration with existing graphical user interfaces further strengthens its applicability in this setting.

Future iterations of the mHUD will need to address the challenge of information overload through features such as dynamic filtering and adjustable display ranges. By doing so, the mHUD has the potential to become a valuable tool for enhancing navigational safety and efficiency in both traditional and remote maritime operations.

6.6. Envisioned mHUD implementation for ROCs

A central claim of this paper is the mHUD’s applicability to both ship bridges and Remote Operation Centres (ROCs). The operational context of a ROC is fundamentally different, as it relies on mediated vision (i.e. camera feeds) rather than a direct view. This necessitates a specific adaptation of the mHUD concept. We envision and are prototyping two primary implementation pathways for the ROC environment.

  • The Physical–Digital Hybrid: this concept uses multiple large screens to create an extended, panoramic field of view from the vessel’s cameras. A physical version of the mHUD would be mounted directly onto the bezel of this screen array, with its LED lights precisely aligned with the bearings shown in the video feeds (Figure 12). The purpose of this hybrid system is twofold. First, it provides a persistent, ‘at-a-glance’ overlay of critical objects without cluttering the video itself. Second, and more importantly, it offers crucial system redundancy. In scenarios where video footage is degraded (e.g. due to weather, low bandwidth) or fails momentarily, the operator can still maintain full spatial awareness of surrounding traffic and hazards via the mHUD’s light dots, which operate on a separate data feed.

    Figure 12. Concept of ROC setup with multiple screens and the mHUD represented by an LED matrix.

  • The Integrated Digital Twin: this is a purely software-based implementation. A dedicated strip at the top of the main operational screen (displaying video feeds or electronic charts) would be reserved for a digital version of the mHUD (Figure 13). This digital strip would use the same core analogy as the physical version: displaying illuminated dots at the correct bearing to represent ships, buoys and other hazards. This approach maintains the head-up principle of keeping critical alerts within the operator’s primary field of view and leverages the mHUD’s benefit of presenting simplified, low-cognitive-load information.

    Figure 13. Rendering of an example of the digital mHUD integrated in video feed.

While the medium of display changes between these concepts (physical LEDs versus on-screen pixels), the core principle remains the same: to provide a continuous, simplified and bearing-based representation of the navigational environment that reduces the cognitive load of interpreting complex visual data. These envisioned implementations form the basis of our future work in empirically validating the mHUD concept within ROC-specific simulators.
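The shared bearing-based principle of both concepts can be sketched as a simple mapping from an object's bearing to a position on the LED ring or digital strip. The resolution (one LED per degree) and the heading-relative reference frame are illustrative assumptions, not the prototype's specification.

```python
def bearing_to_led(bearing_deg, heading_deg, n_leds=360):
    """Map an object's true bearing to an LED index on a 360-degree
    ring (or a pixel column on the digital strip), relative to the
    ship's heading so that dead ahead maps to index 0.

    `n_leds` sets the angular resolution; 360 means one LED per
    degree, an assumption for illustration.
    """
    relative = (bearing_deg - heading_deg) % 360.0
    return int(relative / 360.0 * n_leds) % n_leds
```

With such a mapping, the physical hybrid and the digital twin differ only in how the computed index is rendered: as an LED on the screen bezel or as pixels in a reserved strip of the operational display.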

6.7. Limitations of the work

The study had methodological limitations, including a small sample size, participants with limited experience and the lack of a direct measure of error rates. Future research should address these limitations by recruiting larger and more experienced participant groups and by explicitly measuring error rates to strengthen the statistical validity of the findings. In addition, while the study highlighted the potential benefits of the mHUD, it did not explore the practical aspects of implementing the technology on ship bridges and in control rooms. Engagement with domain experts and technology providers will be crucial to assess feasibility and to address the practical challenges of integrating the mHUD into existing maritime systems. Nevertheless, participants expressed comfort with incorporating the mHUD into these environments, and their positive feedback suggests that it could find a place in their daily work.

6.8. Ongoing and planned development

Building on the feedback from the study, several steps have already been initiated.

  • Redesigning the mHUD: a second version of the mHUD is being developed using an LED matrix instead of LED strips. This allows for the display of light dots in different sizes to represent the distance of objects and potential overlapping objects more accurately.

  • Exploring New Applications: investigations are underway to identify various applications of the mHUD technology beyond the initial scope.

  • Commercialisation Strategy: efforts are being made to develop a commercialisation strategy to transform the mHUD into a market-ready product.

  • Extended Study: plans for an extended study with the new prototype are in place, including a larger sample size and trials conducted on actual vessels.
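As a sketch of how the redesigned LED matrix could encode distance through dot size, the following maps nearer objects to larger dots; the maximum range, radius limits and matrix height used here are illustrative assumptions for demonstration, not design decisions from the study:

```python
def dot_radius(distance_m: float,
               max_range_m: float = 5_000.0,
               min_radius: int = 1,
               max_radius: int = 4) -> int:
    """Scale dot radius inversely with distance: near objects get large dots."""
    d = min(max(distance_m, 0.0), max_range_m)
    frac = 1.0 - d / max_range_m          # 1.0 at own ship, 0.0 at max range
    return min_radius + round(frac * (max_radius - min_radius))


def render_column(matrix_height: int, radius: int) -> list:
    """Return one vertical slice of the LED matrix with `radius` LEDs lit
    around the middle row (1 = on, 0 = off)."""
    centre = matrix_height // 2
    return [1 if abs(row - centre) < radius else 0
            for row in range(matrix_height)]
```

Under this scheme, a rapidly growing dot gives the operator an immediate cue that a target is closing, without requiring any text or numbers.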

7. Conclusion

This study demonstrates the potential of the mHUD as a tool for improving visibility in poor conditions on ship bridges and ROCs for autonomous ships. While the study provided encouraging results, several features and functionalities are still missing or require further refinement in the current iteration of the mHUD:

  • dynamic object filtering;

  • distance representation of objects;

  • customisation options for features such as adjusting light intensity or colours;

  • integration with alarms.

In addition to the refinements suggested by the participants, several challenges remain before the concept can be implemented on a real ship bridge, including the following.

  • Integration with the ship’s systems via the pilot plug or by interfacing with the NMEA 2000 network to receive the required information, or, alternatively, developing a stand-alone sensor rig serving the same purpose.

  • Developing new solutions for mounting the mHUD in different positions, e.g. above the ship bridge windows, including ergonomic considerations.

  • Implementing user and expert feedback.

  • Exploring the practical aspects of mHUD integration in real-world maritime environments, including compatibility with existing systems, physical constraints and cost considerations.

  • Running longitudinal trials on actual vessels and in ROCs to evaluate the mHUD’s performance over extended periods and in diverse operational scenarios.
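Whatever the data source (pilot plug, NMEA 2000 network or a stand-alone sensor rig), the mHUD ultimately needs a bearing and a distance for every target. At the short ranges the display covers, both can be approximated from position reports with a simple flat-earth projection. The sketch below illustrates that geometry only; it is not the protocol integration itself:

```python
import math


def bearing_and_distance(own_lat: float, own_lon: float,
                         tgt_lat: float, tgt_lon: float):
    """Approximate the true bearing (degrees) and distance (metres) to a
    target from latitude/longitude pairs, using a local flat-earth
    projection that is adequate at short navigational ranges."""
    r_earth = 6_371_000.0  # mean Earth radius in metres
    mean_lat = math.radians((own_lat + tgt_lat) / 2.0)
    # East and north offsets of the target relative to the own ship.
    dx = math.radians(tgt_lon - own_lon) * math.cos(mean_lat) * r_earth
    dy = math.radians(tgt_lat - own_lat) * r_earth
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0
    return bearing, math.hypot(dx, dy)
```

For example, a target 0.01° of latitude due north of the own ship yields a bearing of 0° and a distance of roughly 1.1 km.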

The study provides a solid foundation for further exploration of the mHUD’s potential. While the current iteration shows promise, iterative research and development are necessary to address the identified limitations and realise the concept’s full potential. With continued refinement and validation, the mHUD could become a valuable addition to maritime navigation systems, enhancing safety and efficiency in both traditional and autonomous shipping contexts.

Acknowledgements

We would like to thank the navigation students at the vocational school in Møre og Romsdal County Municipality, Ålesund, Norway, for participating in this study. Special thanks go to the NTNU Shore Control Lab (SCL) team and NTNU Ålesund for their collaboration and provision of the ship simulator. Their expertise and resources greatly facilitated our study and experimentation, allowing us to conduct meaningful evaluations of the mHUD. We acknowledge and appreciate the collective effort of all those involved; their contributions have significantly advanced the progress and outcomes of this research project.

Funding statement

The study was funded by the Norwegian Research Council through the SFI Autoship Project (grant number 309230), the MIDAS project (grant number 332921) and the European Commission through the Lashfire project (EU grant number 814975). The study was approved by SIKT under reference number 314460. The Technology Transfer Organisation (TTO) of the Norwegian University of Science and Technology has supported the filing of the patent for the concept and functionality of the mHUD at the Norwegian Industrial Property Office, application number 20231332 and PCT/NO2024/050246.

References

Accident Investigation Board Norway (2019). Part One Report on the Collision on 8 November 2018 Between the Frigate HNoMS Helge Ingstad and the Oil Tanker Sola TS Outside the Sture Terminal in the Hjeltefjord in Hordaland County. Accident Investigation Board Norway.
Aehnelt, M., Peter, C. and Müsebeck, P. (2012). A Discussion of Using Mental Models in Assistive Environments. Proceedings of the 5th International Conference on Pervasive Technologies Related to Assistive Environments, 1–5. doi:10.1145/2413097.2413145
Ayres, P., Lee, J.Y., Paas, F. and van Merriënboer, J.J.G. (2021). The Validity of Physiological Measures to Identify Differences in Intrinsic Cognitive Load. Frontiers in Psychology, 12, 1–16. doi:10.3389/fpsyg.2021.702538
Baevsky, R. and Berseneva, A. (2008). Methodical Recommendations: Use Kardivar System for Determination of the Stress Level and Estimation of the Body Adaptability. Standards of Measurements and Physiological Interpretation.
Bandara, D., Woodward, M., Chin, C. and Jiang, D. (2020). Augmented Reality Lights for Compromised Visibility Navigation. Journal of Marine Science and Engineering, 8(12), 1014. doi:10.3390/jmse8121014
Brooke, J. (1986). System Usability Scale (SUS): A Quick-and-Dirty Method of System Evaluation User Information. Reading, UK: Digital Equipment Co Ltd, 43, 1–7.
Charissis, V. and Papanastasiou, S. (2010). Human–Machine Collaboration Through Vehicle Head Up Display Interface. Cognition, Technology & Work, 12(1), 41–50. doi:10.1007/s10111-008-0117-0
Crosbie, J.W. (2006). Lookout Versus Lights: Some Sidelights on the Dark History of Navigation Lights. The Journal of Navigation, 59(1), 1–7. doi:10.1017/S0373463305003607
Empatica (2023). E4 Wristband User's Manual. https://empatica.app.box.com/v/E4-User-Manual. Accessed 06 January 2023.
Endsley, M.R. (1995). A Taxonomy of Situation Awareness Errors. Human Factors in Aviation Operations, 3(2), 287–292.
Glomsvoll, O. and Bonenberg, L.K. (2017). GNSS Jamming Resilience for Close to Shore Navigation in the Northern Sea. The Journal of Navigation, 70(1), 33–48. doi:10.1017/S0373463316000473
Grandjean, E. and Kroemer, K.H. (1997). Fitting the Task to the Human: A Textbook of Occupational Ergonomics. CRC Press.
Grech, M.R., Horberry, T. and Smith, A. (2002). Human Error in Maritime Operations: Analyses of Accident Reports Using the Leximancer Tool. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 46, Sage Publications, Los Angeles, CA, 1718–1721. doi:10.1177/154193120204601906
Harper, S.B. and Dorton, S.L. (2021). A Pilot Study on Extending the SUS Survey: Early Results. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 65, SAGE Publications, Los Angeles, CA, 447–451. doi:10.1177/1071181321651162
Hart, S.G. and Staveland, L.E. (1988). Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research. Advances in Psychology, 52, 139–183. Elsevier. doi:10.1016/S0166-4115(08)62386-9
Herritt, R. and Brooks, S. (2022). AR Visualization for Coastal Water Navigation. Electronic Imaging, 34, 1–12. doi:10.2352/EI.2022.34.1.VDA-408
International Electrotechnical Commission (2017). IEC 62287-1 and 62287-2: Maritime Navigation and Radiocommunication Equipment and Systems - Class B Shipborne Equipment of the Universal Automatic Identification System (AIS) - Operational and Performance Requirements, Methods of Test and Required Test Results.
International Electrotechnical Commission (2018). IEC 61993-2: Maritime Navigation and Radiocommunication Equipment and Systems - Automatic Identification Systems (AIS) - Part 2: Class A Shipborne Equipment of the Universal Automatic Identification System (AIS) - Operational and Performance Requirements, Methods of Test and Required Test Results. Accessed 29 May 2024.
International Maritime Organization (2021). Outcome of the Regulatory Scoping Exercise for the Use of Maritime Autonomous Surface Ships (MASS). MSC.1/Circ.1638. Approved at the 103rd Session of the Maritime Safety Committee (5 to 14 May 2021).
International Telecommunication Union (2014). ITU-R M.1371-5: Technical Characteristics for an Automatic Identification System Using Time Division Multiple Access in the VHF Maritime Mobile Frequency Band. Accessed 29 May 2024.
ISO (2019). Ergonomics of Human-System Interaction - Part 210: Human-Centred Design for Interactive Systems. https://www.iso.org/standard/77520.html.
Jaram, H., Vidan, P., Vukša, S. and Pavić, I. (2021). Situational Awareness - Key Safety Factor for the Officer of the Watch. Pedagogy, 93, 225–240.
Jurkovič, M., Kalina, T., Turčan, R. and Gardlo, B. (2017). Proposal of an Enhanced Safety System on Board of the Inland Vessel. MATEC Web of Conferences, Vol. 134, EDP Sciences, 00022. doi:10.1051/matecconf/201713400022
Kalagher, H., de Voogt, A. and Boulter, C. (2021). Situational Awareness and General Aviation Accidents. Aviation Psychology and Applied Human Factors, 18(7), 2343.
Kirilova, M. (2020). Prospects of Development of Unmanned Ships in Russian Federation. Vestnik of Astrakhan State Technical University, 2020, 16–22. doi:10.24143/2073-1574-2020-3-16-22
Kjerstad, N. (2024). Navigasjon. 5th Edition. Fagbokforlaget.
Kongsberg Digital (2023). Ship's Bridge Simulator/Navigation Simulator. https://kongsbergdigital.com/products/k-sim/k-sim-navigation/. Accessed 06 January 2023.
Laera, F., Fiorentino, M., Evangelista, A., Boccaccio, A., Manghisi, V.M., Gabbard, J., Gattullo, M., Uva, A.E. and Foglia, M.M. (2021). Augmented Reality for Maritime Navigation Data Visualisation: A Systematic Review, Issues and Perspectives. The Journal of Navigation, 74(5), 1073–1090. doi:10.1017/S0373463321000412
Larmuseau, C., Vanneste, P., Desmet, P. and Depaepe, F. (2019). Combining Physiological Data and Subjective Measurements to Investigate Cognitive Load During Complex Learning. Frontline Learning Research, 7, 58–75. doi:10.14786/flr.v7i2.403
Leite, B.G., Sinohara, H.T., Maruyama, N. and Tannuri, E.A. (2022). Maritime Navigational Assistance by Visual Augmentation. The Journal of Navigation, 75(1), 57–75. doi:10.1017/S0373463321000795
Marzuki, M.F.M., Yaacob, N.A. and Yaacob, N.M. (2018). Translation, Cross-Cultural Adaptation, and Validation of the Malay Version of the System Usability Scale Questionnaire for the Assessment of Mobile Apps. JMIR Human Factors, 5(2), e10308. doi:10.2196/10308
Nielsen, J. (1995). How to Conduct a Heuristic Evaluation. Nielsen Norman Group, 1(1), 8.
Pedroli, E., Greci, L., Colombo, D., Serino, S., Cipresso, P., Arlati, S., Mondellini, M., Boilini, L., Giussani, V., Goulene, K., Agostoni, M., Sacco, M., Stramba-Badiale, M., Riva, G. and Gaggioli, A. (2018). Characteristics, Usability, and Users Experience of a System Combining Cognitive and Physical Therapy in a Virtual Environment: Positive Bike. Sensors, 18(7), 2343. doi:10.3390/s18072343
Pieroni, A., Lantieri, C., Imine, H. and Simone, A. (2016). Light Vehicle Model for Dynamic Car Simulator. Transport, 31(2), 242–249. doi:10.3846/16484142.2016.1193051
Psarros, G.A. (2018). Fuzzy Logic System Interference in Ship Accidents. Human Factors and Ergonomics in Manufacturing & Service Industries, 28(6), 372–382. doi:10.1002/hfm.20747
Ratcliffe, S. and Lyon, P. (1986). Indication of Effective Speed and Deceleration to Vessels in Port Approaches. The Journal of Navigation, 39(1), 66–74. doi:10.1017/S0373463300014235
Rødseth, Ø.J., Nordahl, H. and Hoem, Å. (2018). Characterization of Autonomy in Merchant Ships. 2018 OCEANS - MTS/IEEE Kobe Techno-Oceans (OTO), IEEE, 1–7.
Tarvainen, M.P., Niskanen, J.-P., Lipponen, J.A., Ranta-Aho, P.O. and Karjalainen, P.A. (2014). Kubios HRV - Heart Rate Variability Analysis Software. Computer Methods and Programs in Biomedicine, 113(1), 210–220. doi:10.1016/j.cmpb.2013.07.024
Vestola, M. (2010). A Comparison of Nine Basic Techniques for Requirements Prioritization. Helsinki University of Technology, 1–8.
Winer, B.J., Brown, D.R., Michels, K.M., et al. (1971). Statistical Principles in Experimental Design, Vol. 2. McGraw-Hill, New York.
Zhang, Y. and Wang, P. (2005). Measuring Mental Models: Rationales and Instruments. Proceedings of the American Society for Information Science and Technology, 42(1). doi:10.1002/meet.14504201270
Figure 1. Navigational lights on Ålesund port, compared during the day and night with light background noise (illustration by Norvald Kjerstad (Kjerstad, 2024)).

Figure 2. A screenshot of a video recording on the bridge of HNoMS Roald Amundsen on the observation voyage before the accident, showing an approximate view of the situation as it appeared later that night (Accident Investigation Board Norway, 2019).

Figure 3. Overview of the design and development process of the mHUD prototype.

Figure 4. In the design phase, two forms were considered for the mHUD: a circular and a rectangular shape. Comparing the distance from the centre to the border of the two forms (representing the mHUD) reveals a constant distance in the ring-shaped design and varying distances to the borderline in the rectangular form.

Figure 5. The mHUD (lights highlighted with a red frame) installed in the ship bridge simulator above the officer on watch.

Figure 6. Visualisation of possible use cases for the mHUD: (a) a vessel from the port side; (b) a vessel from the starboard side; (c) a vessel from the stern; (d) a vessel from the bow; (e) two buoys; (f) a lighthouse indicating navigation in a green sector; and (g) shallow water and land in a narrow passage.

Table 1. Object types, their visual representations and characteristics.

Table 2. Inputs processed by the mHUD.

Figure 7. The program transfers the positions of the objects to the mHUD. On the left, the position of each object within range is visible; in the middle, the map displays the objects and the ship. The software snapshot shows a smaller frame of the scenario displayed in Figure 8, marked for better comprehensibility; at the top, the LEDs are shown to verify the information displayed on the mHUD for debugging purposes (highlighted with a red frame).

Figure 8. The path each participant followed during the journey, with the 13 ships and aids to navigation they were asked to identify.

Figure 9. Each participant encountered four conditions during the study on different passages of the journey: (a) navigation with mHUD and good visibility; (b) navigation without mHUD and good visibility; (c) navigation with mHUD and poor visibility; and (d) navigation without mHUD and poor visibility.

Table 3. Study aims and assessment methods.

Table 4. The average time (in seconds) the participants needed to identify objects.

Table 5. The cumulative numbers of objects the participants missed or saw in each condition.

Figure 10. General workload reported by the participants after each condition (Figure 9).

Figure 11. The reported SUS scores from all participants, transformed into a comprehensible grading.

Figure 12. Concept of ROC setup with multiple screens and the mHUD represented by an LED matrix.

Figure 13. Rendering of an example of the digital mHUD integrated in a video feed.