Skip to main content Accessibility help
×
Hostname: page-component-54dcc4c588-64p75 Total loading time: 0 Render date: 2025-10-02T20:23:45.256Z Has data issue: false hasContentIssue false

6 - AI-supported Search Interaction for Enhancing Users’ Understanding

Published online by Cambridge University Press:  19 September 2025

Dan Wu
Affiliation:
Wuhan University, China
Shaobo Liang
Affiliation:
Wuhan University, China

Summary

This chapter mainly investigates the role of Artificial Intelligence (AI) in augmenting search interactions to enhance users’ understanding across various domains. The chapter begins by examining the current limitations of traditional search interfaces in meeting diverse user needs and cognitive capacities. It then discusses how AI-driven enhancements can revolutionize search experiences by providing tailored, contextually relevant information and facilitating intuitive interactions. Through case studies and empirical analysis, the effectiveness of AI-supported search interaction in improving users’ understanding is evaluated in different scenarios. This chapter contributes to the literature on AI and human–computer interaction by highlighting the transformative potential of AI in optimizing search experiences for users, leading to enhanced comprehension and decision-making. It concludes with implications for research and practice, emphasizing the importance of human-centered design principles in developing AI-driven search systems.

Information

Type
Chapter
Information
Publisher: Cambridge University Press
Print publication year: 2025

6 AI-supported Search Interaction for Enhancing Users’ Understanding

6.1 Introduction

In the era of information explosion, the ability to efficiently access, process, and comprehend vast amounts of data has become crucial for individuals and organizations alike. Traditional search engines, while having made significant strides in information retrieval, often fall short when handling complex queries that require nuanced understanding and contextualization. Users frequently find themselves engaged in multiple iterations of search queries, refining and adjusting their approach to obtain the desired information. This process not only consumes valuable time but also increases cognitive load, potentially hampering effective decision-making and knowledge acquisition (Na & Lee, Reference Na and Lee2016).

The advent of generative Artificial Intelligence (AI) presents a promising solution to address these limitations. By leveraging advanced natural language processing and contextual understanding capabilities, generative AI has the potential to revolutionize search interactions, offering personalized and contextually relevant content in response to user queries (Ali et al., Reference Ali, Naeem and Bhatti2020). This technological leap forward could significantly reduce users’ cognitive burden and enhance their efficiency in information retrieval and comprehension. Recent studies have highlighted the transformative potential of AI in search systems. For instance, Yue and Peng (Reference Yue and Peng2021) demonstrated that AI-enhanced search interfaces could significantly improve search efficiency and effectiveness in enterprise settings. Similarly, Vuong et al. (Reference Vuong, Saastamoinen, Jacucci and Ruotsalo2019) found that users reported higher satisfaction when interacting with conversational search systems, which incorporate elements of AI-supported interaction.

However, despite these promising indications, there remains a critical need for comprehensive research to validate the effectiveness of generative AI in real-world search scenarios and to understand its impact on user behavior and cognition. Questions persist regarding the extent to which AI-supported search interactions can truly enhance users’ understanding, particularly when dealing with multifaceted or specialized information needs (Huggins-Manley et al., Reference Huggins-Manley, Booth and D’Mello2022).

To address this research gap, it is crucial to consider theoretical frameworks that can provide insight into user behavior and information seeking processes. Two particularly relevant theories are the Information Gap Theory and the Uses and Gratifications Theory (UGT). The Information Gap Theory, proposed by Loewenstein (Reference Loewenstein1994), suggests that, when individuals become aware of a gap in their knowledge, they are motivated to seek information to fill this gap. In the context of AI-supported search interactions, generative AI has the potential to more efficiently identify and address these information gaps by providing contextually relevant and personalized information, thereby enhancing user understanding and satisfaction (Ullah & Khusro, Reference Ullah and Khusro2020).

On the other hand, the UGT originally developed in media studies posits that individuals actively choose and use media to satisfy specific needs or goals (Katz et al., Reference Katz, Blumler and Gurevitch1973; Falgoust et al., Reference Falgoust, Winterlind, Moon, Parker, Zinzow and Madathil2022). When applied to information seeking behavior, this theory can help explain why users might prefer AI-supported search systems over traditional search engines. If AI-supported systems can better satisfy users’ information needs and provide a more gratifying search experience, users may be more inclined to adopt and continue using these systems (Sundar & Limperos, Reference Sundar and Limperos2013; Hsu et al., Reference Hsu, Lin and Miao2020). For example, a user engaged in a highly technical search for scientific articles or making purchasing decisions can leverage AI-supported systems to aggregate and summarize multiple sources of information in a more coherent and comprehensible format, thus increasing satisfaction.

The primary objective of this research is to evaluate the impact of AI-supported search interactions on enhancing user understanding, particularly in high-complexity contexts. This chapter will focus on testing two research questions:

RQ1: Whether AI-supported search systems can fill knowledge gaps effectively, thereby improving users’ understanding of complex information.

RQ2: Whether users can perceive AI-supported search systems as more useful and easier to use than traditional search engines, and thus are more likely to prefer these systems for future search tasks.

By pursuing these objectives, we seek to contribute valuable insights to both the theoretical understanding and practical application of AI-enhanced search interactions. Our findings could inform the design and optimization of future search systems, potentially leading to more efficient and user-centric information retrieval tools. Through this chapter, we aim to shed light on the transformative potential of AI in search interactions and its role in enhancing users’ understanding in an increasingly complex information landscape.

6.2 Literature Review

6.2.1 Applications of Generative AI in Information Retrieval

The integration of generative AI into information retrieval systems has marked a significant advancement in addressing complex information needs. Unlike traditional keyword-based search engines, generative AI models, particularly large language models (LLMs) such as GPT-4, have demonstrated remarkable capabilities in understanding context and generating human-like responses (Zhao et al., Reference Zhao, Liu, Ren and Wen2024). Recent studies have shown that generative AI can significantly enhance search efficiency and user understanding in various ways. Firstly, these models excel at query understanding and expansion. Zamani et al. (Reference Zamani, Dumais, Craswell, Bennett and Lueck2020) demonstrated that AI-powered systems could generate clarifying questions, helping users refine their queries and obtain more precise results. This capability is particularly valuable when users have vague or complex information needs that are difficult to articulate in a single query.

Moreover, generative AI has shown promise in producing comprehensive and contextually relevant summaries of search results. Hao and Cukurova (Reference Hao and Cukurova2023) found that AI-generated summaries could significantly reduce the time users spend sifting through multiple documents, thereby enhancing information absorption and decision-making processes. This is especially beneficial in scenarios requiring the synthesis of information from multiple sources, such as academic research or business intelligence gathering.

The ability of generative AI to engage in multi-turn conversations has also revolutionized the search process. Meng et al. (Reference Meng, Aliannejadi and de Rijke2023) observed that conversational search systems powered by LLMs could maintain context across multiple queries, allowing for more natural and in-depth exploration of topics. This conversational approach mimics human-to-human interaction, potentially leading to a more intuitive and satisfying search experience.

However, it is crucial to note that the integration of generative AI in search systems is not without challenges. Issues such as hallucination (generating plausible but incorrect information) and bias have been identified as significant concerns (Bender et al., Reference Bender, Gebru, McMillan-Major and Shmitchell2021). These challenges underscore the need for a careful system design and the importance of maintaining human oversight in AI-supported search interactions.

6.2.2 Information Gap Theory in AI-supported Search

The Information Gap Theory, as proposed by Loewenstein (Reference Loewenstein1994), provides a valuable framework for understanding user motivation in information-seeking behaviors. This theory posits that when individuals become aware of a gap between what they know and what they want to know, they experience a feeling of deprivation, which motivates them to seek information to close this gap.

In the context of AI-supported search, generative AI has the potential to identify and address these information gaps more efficiently. Jiang et al. (Reference Jiang, Liu, Liu, Lim, Tan and Gu2023) found that AI-powered search systems could infer users’ knowledge states and information needs more accurately than traditional systems. By doing so, these systems can provide more targeted and relevant information, effectively bridging the user’s knowledge gap.

Moreover, the ability of generative AI to provide explanations and background information alongside search results can significantly enhance user understanding. Yiannakoulias (Reference Yiannakoulias2024) demonstrated that, when AI systems automatically supplied contextual information, users reported higher levels of topic comprehension and satisfaction. This automatic provision of supplementary information aligns well with the Information Gap Theory, as it proactively addresses potential knowledge gaps that the user may not have initially recognized.

However, it is important to consider potential drawbacks. De Cremer and Kasparov (Reference De Cremer and Kasparov2022) raised concerns about the risk of over-reliance on AI-generated information, which could potentially narrow users’ exploration of diverse viewpoints. This highlights the need for AI-supported search systems to balance efficient information provision with encouraging critical thinking and diverse information seeking.

6.2.3 Uses and Gratifications Theory in AI-supported Systems

The Uses and Gratifications Theory (UGT), originally developed in media studies, has found new relevance in the context of AI-supported information systems. This theory suggests that individuals actively choose and use media to satisfy specific needs or goals (Katz et al., Reference Katz, Blumler and Gurevitch1973). When applied to AI-supported search systems, UGT can provide insights into user adoption and continued use of these technologies.

Recent studies have identified several gratifications that users seek from AI-supported search systems. Chang et al. (Reference Chang, Lee, Wong and Jeong2022) found that, in addition to traditional information-seeking gratifications, users of AI-powered systems reported high levels of “interaction gratification” – the satisfaction derived from engaging with an intelligent system. This suggests that the conversational nature of many AI-supported search interfaces may be intrinsically rewarding for users.

Furthermore, the personalization capabilities of AI systems align well with the UGT framework. Gao and Liu (Reference Gao and Liu2023) demonstrated that AI-powered recommendation systems in search interfaces could significantly enhance user satisfaction by providing tailored results. This personalization gratifies users’ needs for efficiency and relevance, potentially increasing their likelihood of continued system use. The simplified user interfaces often associated with AI-supported systems also play a role in user gratification. Choi and Drumwright al. (Reference Choi and Drumwright2021) found that users reported higher ease-of-use satisfaction with voice-activated AI search assistants compared to traditional text-based interfaces. This suggests that AI systems’ ability to understand natural language queries and provide concise, relevant responses may be particularly gratifying for users seeking quick and effortless information retrieval.

However, it is crucial to consider potential negative gratifications as well. Privacy concerns and the fear of reduced control over information access have been identified as factors that may deter some users from fully embracing AI-supported search systems (Choung et al., Reference Choung, David and Ling2024). These findings highlight the need for transparent and user-centric design in AI-supported search interfaces to maximize positive gratifications while minimizing potential drawbacks.

In conclusion, the integration of generative AI in information retrieval systems presents significant opportunities for enhancing user understanding and satisfaction. By leveraging the insights from Information Gap Theory and Uses and Gratifications Theory, developers can create AI-supported search systems that not only efficiently bridge knowledge gaps but also provide a gratifying user experience. However, careful consideration must be given to potential challenges, including issues of over-reliance, bias, and privacy concerns. Future research should focus on addressing these challenges while further exploring the potential of AI to revolutionize the search experience.

6.3 Research Design

This chapter employs a mixed-methods approach to investigate the effectiveness of AI-supported search interactions in enhancing users’ understanding. Our research design combines quantitative measurements with qualitative insights to provide a comprehensive analysis of user behavior, performance, and perceptions.

6.3.1 Experimental Procedure

The experiment will be conducted in three phases:

  1. (1) Pre-experiment: Participants will complete a simple search task to familiarize themselves with both the traditional search system and the AI-supported search system (New Bing). This phase serves to minimize learning effects and ensure participants are comfortable with both interfaces.

  2. (2) Formal experiment: Participants will be randomly assigned to either the control group (using traditional search) or the experimental group (using AI-supported search). Both groups will complete identical search tasks, with their behavioral data being recorded throughout the process. This between-subjects design allows for a direct comparison of the two search systems while minimizing carry-over effects (Hornbæk & Oulasvirta, Reference Hornbæk and Oulasvirta2017).

  3. (3) Post-experimental: Upon completion of the tasks, participants will fill out a questionnaire to provide feedback on their perceptions and experiences with the search systems. This phase is crucial for gathering data related to the Uses and Gratifications Theory (Sundar & Limperos, Reference Sundar and Limperos2013).

6.3.2 Experiment Tasks

The search tasks are designed to highlight the differences between generative AI and traditional search systems in terms of user understanding and efficiency. Each task is based on real-world scenarios and covers different information needs. These tasks will cover academic research, consumer decision-making, and professional information retrieval. The tasks and corresponding details are presented in the Table 6.1.

Table 6.1Experimental tasks design
TaskTask DescriptionTask GoalsTime Limit
Academic SearchSearch for and summarize key points from academic papers about “The Impact of Climate Change on the Global Economy.” Find and summarize at least 3 sources.Test whether AI can facilitate faster and more accurate retrieval of complex information, helping users form a better understanding of academic material.30 minutes
Shopping DecisionSearch for information comparing smartphone models, user reviews, and prices. Recommend the best phone based on your findings.Evaluate whether AI can assist in integrating data from multiple sources (reviews, prices) to help users make quicker and more informed purchasing decisions.15 minutes
Information AccessSearch for and summarize recent developments in quantum computing technology. Provide a brief overview and recommend an influential article.Assess how effectively AI can assist users in retrieving and summarizing information in a technical, specialized field.10 minutes

The time limits for each task are carefully calibrated to balance the need for thorough exploration with the realities of typical search behaviors (O’Brien et al., Reference O’Brien, Arguello and Capra2020).

6.3.3 User Feedback and Questionnaire Design

The post-task questionnaire is designed to capture users’ perceptions and experiences. It includes the following dimensions, each using a 5-point Likert scale, as shown in Table 6.2.

Table 6.2Questionnaire design
ObjectsMeasurementConstructs
Perceived Usefulness“Using the AI-supported search system has significantly improved my information retrieval efficiency.”
“The generative AI search system provided useful information to address my query.”
“The AI-supported search system enabled me to complete tasks more quickly.”
Venkatesh and Davis (Reference Venkatesh and Davis2000)
Perceived Ease of Use“I find the AI-supported search system very easy to use.”
“I can effortlessly find the information I need through the generative AI.”
“I find the interface of the generative AI search system intuitive to navigate.”
Venkatesh and Bala (Reference Venkatesh and Bala2008)
Information Understanding Assessment“Using the generative AI search system has significantly improved my understanding of the issue.”
“The AI-supported search helped me better digest and organize information.”
“I obtained more background information through the generative AI compared to traditional search engines.”
Savolainen (Reference Savolainen2013)
Intention to Use in the Future“I will prioritize using the generative AI search system for future search tasks.”
“Based on my experience, I am inclined to recommend the generative AI search system to others.”
“In the future, using the AI-supported search system will become my default practice.”
Venkatesh et al. (Reference Venkatesh, Morris, Davis and Davis2003)

Perceived usefulness is a key component of the Technology Acceptance Model (TAM), designed to measure the extent to which a user believes a particular technology enhances their performance in tasks or work-related activities (Venkatesh & Davis, Reference Venkatesh and Davis2000). Perceived ease of use assesses the level of difficulty a user experiences when using a particular technology, which is also based on the TAM model (Venkatesh & Bala, Reference Venkatesh and Bala2008). Information understanding assessment evaluates the depth of a user’s understanding of tasks or information after using the system, often associated with cognitive load and knowledge acquisition (Savolainen, Reference Savolainen2013). Intention to use in the future measures a user’s willingness to continue using the AI-supported search system in the future, often linked to user experience and perceived usefulness (Venkatesh et al., Reference Venkatesh, Morris, Davis and Davis2003).

6.3.4 Behavioral Data Collection

To obtain comprehensive data, we tracked and recorded user behavior through the following methods.

  1. (1) Screen Recording: We used screen recording software to capture users’ complete search behavior during the experiment.

  2. (2) Click Path Analysis: We recorded the number of webpages users click on from entering the search query to selecting results, observing their search behavior patterns.

  3. (3) Dwell Time: We tracked the time users spent on each search result page or generated answer, analyzing whether they read and understood the content in depth.

  4. (4) Query Modifications: We recorded the number of times users modified or expanded their search queries, reflecting the difficulties encountered during the search process and information that needs clarification.

  5. (5) Output Quality: We assessed the quality of users’ final task outputs through expert scoring or comparison with standard answers.

6.3.5 Participant Characteristics

Fifty participants were selected to ensure diversity in terms of age, gender, and professional background, allowing for generalizable results. Table 6.3 shows a simulated distribution of participants.

Table 6.3Participant characteristics
AgeParticipantsGender (M/F)Profession
18–24157/8University Students
25–342010/10Early Career Professionals
35–441510/5Mid-Level Professionals

6.4 Findings

6.4.1 Behavioral Results Analysis

Click Path and Task Completion Time Analysis

In the context of information retrieval systems, click path refers to the sequence of clicks users make to navigate through search results, while task completion time measures the total duration users take to complete a task. Analyzing these two variables provides critical insights into user behavior, especially in comparing the effectiveness of traditional search engines versus AI-supported systems (New Bing).

In the research design, participants interacted with both traditional search engines and the AI-enhanced New Bing system. Based on the collected data, we observed the trends illustrated in Figure 6.1.

A dual-bar graph of the click path analysis plots average clicks for traditional search and AI enhanced search. See long description.

Figure 6.1 Click path analysis

Figure 6.1Long description

The graph plots bars for average clicks for traditional search and average clicks for AI enhanced search. The values are 8.2 and 4.1 for the academic search task, 5.6 and 3.4 for the shopping decision task, and 7.4 and 3.8 for the information retrieval task, respectively.

From the data in Figure 6.1, it is evident that users in the AI-enhanced search group required fewer clicks to reach relevant information across all tasks. This reduction in click path length can be attributed to the contextual understanding of user queries by AI-powered systems, which are able to present more relevant and tailored results earlier in the search process. Natural language processing (NLP) plays a crucial role in interpreting user intent, which reduces the need for multiple refinements or excessive exploration of irrelevant links.

Task Completion Time Analysis

Task completion time serves as a critical metric to assess the efficiency of search systems. It measures how quickly users are able to complete their tasks, which reflects the usability and effectiveness of the system. Results show that AI-enhanced search engines, like New Bing, reduce the time required for users to find relevant information by providing higher quality results upfront. Figure 6.2 summarizes the average task completion times for both groups.

A dual-bar graph of the task completion time plots the average time for traditional search and AI enhanced search. See long description.

Figure 6.2 Task completion time

Figure 6.2Long description

The graph plots bars for average time for traditional search and average time for AI enhanced search. The values in minutes are 23 and 15 for the academic search task, 13 and 9 for the shopping decision task, and 19 and 11 for the information retrieval task, respectively.

These results clearly demonstrate that users completed tasks significantly faster when using the AI-enhanced system. For example, the academic search task showed a 35 percent reduction in completion time, highlighting the potential of AI-supported systems to expedite information retrieval, particularly for complex, knowledge-based queries.

This time reduction can be attributed to the contextual understanding provided by AI-generated summaries, which allow users to engage with highly relevant information more quickly without the need for excessive query modifications.

The combination of shorter click paths and reduced task completion times indicates that AI-supported search systems are more efficient in delivering relevant content. This efficiency not only enhances user satisfaction but also aligns with the Uses and Gratifications Theory, as users’ cognitive and operational needs are met more effectively (Cheng & Jiang, Reference Cheng and Jiang2020). Furthermore, this aligns with Information Gap Theory, where the system effectively fills users’ knowledge gaps more quickly than traditional systems.

Query Count Analysis

Query count serves as a critical indicator of how effective a search system is at delivering relevant results. Higher query counts may indicate that users need to continuously refine or modify their queries because the system does not fully understand their intent, while lower query counts suggest that the system is able to meet user needs with fewer iterations.

Figure 6.3 shows that participants using the AI-enhanced search system required significantly fewer queries to retrieve relevant information across all tasks. For example, in the academic search task, the average query count for the AI group was less than half of that for the traditional search group (2.1 vs. 5.3). This indicates that the AI system, powered by advanced natural language processing (NLP) and contextual understanding, is more effective at delivering relevant results based on the user’s initial query, reducing the need for query reformulation (Pinzolits, Reference Pinzolits2024).

A dual-bar graph of the query count analysis plots the average query count for traditional search and AI enhanced search. See long description.

Figure 6.3 Query count analysis

Figure 6.3Long description

The graph plots bars for average query count for traditional search and average query count for AI enhanced search. The values are 5.3 and 2.1 for the academic search task, 3.8 and 1.6 for the shopping decision task, and 4.9 and 2.3 for the information retrieval task, respectively.

This reduction in query count demonstrates the efficiency of the AI system in understanding the user’s intent and providing tailored results. Generative AI systems can interpret ambiguous queries, generate responses based on context, and offer multiple options from different angles, thereby minimizing the need for users to engage in trial-and-error behavior typical of traditional search engines (Bender et al., Reference Bender, Gebru, McMillan-Major and Shmitchell2021).

Time on Page Analysis

Time on page provides a measure of user engagement with the content of search results. Higher time on page typically suggests that users are finding the content relevant and useful enough to spend more time reading or interacting with it. However, an excessively long time on page can also indicate that users are struggling to interpret or process the information.

Figure 6.4 shows that participants spent significantly less time on individual result pages when using the AI-enhanced system. This is particularly evident in the academic search task, where the average time on page dropped from 7.5 minutes in the traditional system to 5.1 minutes in the AI-enhanced system.

A dual-bar graph of the time on the page plots the average time on the page for traditional search and AI enhanced search. See long description.

Figure 6.4 Time on page

Figure 6.4Long description

The graph plots bars for average time on page for traditional search and average time on page for AI enhanced search. The values in minutes are 7.5 and 5.1 for the academic search task, 4.2 and 3.3 for the shopping decision task, and 6.8 and 4.5 for the information retrieval task, respectively.

This reduction in time on page for the AI group suggests that users were able to find more relevant information faster, without needing to sift through irrelevant or redundant content. AI-generated summaries and contextually relevant results provided by New Bing allowed users to absorb key information quickly, thereby reducing the time spent on each page (Gerlich, Reference Gerlich2023). Furthermore, the AI system’s ability to synthesize information from multiple sources likely contributed to this reduction in time, as users could easily digest the core findings without having to navigate through multiple lengthy documents.

6.4.2 Perception Data Analysis

In this section, we analyze the perceptual data collected from users regarding their experience with AI-enhanced search systems and traditional search engines. The analysis focuses on three key dimensions: perceived usefulness, perceived ease of use, and future usage intention. In addition to frequency statistics, independent samples t-tests were conducted to examine whether the differences between the two systems were statistically significant. The analysis was based on post-task questionnaire responses, using a 5-point Likert scale (1 = Strongly Disagree, 5 = Strongly Agree).

Perceived Usefulness Analysis

Perceived usefulness measures how effectively users believe a system helps them achieve their search objectives. Participants rated their perception of both systems on a Likert scale ranging from 1 (not useful at all) to 5 (very useful). Table 6.4 presents the average perceived usefulness scores for each task.

Table 6.4User’s perceived usefulness
GroupMean (Traditional Search)Mean (AI-enhanced Search)t-valuep-value
Academic Search Task3.64.7−6.23< 0.001
Shopping Decision Task3.94.8−5.88< 0.001
Information Retrieval Task3.74.6−6.05< 0.001

For the academic search task, the t-value of −6.23 with a p-value of < 0.001 indicates a statistically significant difference in perceived usefulness between the two systems. Users found the AI-enhanced search system to be substantially more useful in helping them retrieve complex academic content.

Similar statistically significant results were observed for the shopping decision and information retrieval tasks, with t-values of −5.88 and −6.05, respectively, and p-values all below 0.001. This suggests that participants consistently rated AI-enhanced systems as more useful across various contexts.

These findings highlight the superior ability of AI-enhanced systems to deliver contextually relevant and precise results, improving task completion efficiency and the overall user experience (Gerlich, Reference Gerlich2023).

Perceived Ease of Use Analysis

Perceived ease of use evaluates how intuitive and user-friendly the system interface is when performing search tasks. The analysis of perceived ease of use also shows a significant difference favoring the AI-supported system.

Table 6.5User’s perceived ease of use
GroupMean (Traditional Search)Mean (AI-enhanced Search)t-valuep-value
Academic Search Task3.54.8−7.15< 0.001
Shopping Decision Task4.04.7−4.93< 0.001
Information Retrieval Task3.64.5−6.31< 0.001

In the academic search task, the t-value of −7.15 and p-value of < 0.001 demonstrate that the AI-enhanced search system was perceived as significantly easier to use. This likely results from the natural language processing (NLP) capabilities of AI systems, which allow users to engage in multi-turn interactions and refine queries contextually, reducing cognitive load (Bender et al., Reference Bender, Gebru, McMillan-Major and Shmitchell2021).

The same pattern was found in the shopping decision and information retrieval tasks, where AI-enhanced systems consistently outperformed traditional systems in terms of ease of use, with significant p-values below 0.001.

Future Usage Intention Analysis

Future usage intention assesses users’ likelihood of adopting the AI-enhanced search system for future tasks, based on their overall experience. The analysis of future use intention reveals the most pronounced difference between the two systems.

Table 6.6User’s perceived usefulness
GroupMean (Traditional Search)Mean (AI-enhanced Search)t-valuep-value
Academic Search Task3.74.9−6.84< 0.001
Shopping Decision Task4.14.8−4.75< 0.001
Information Retrieval Task3.84.7−5.90< 0.001

For the academic search task, the t-value of −6.84 and p-value < 0.001 indicate that users are significantly more likely to adopt AI-enhanced systems for future academic searches. The AI system’s ability to synthesize and present complex information efficiently contributes to this high future usage intention (Hao & Cukurova, Reference Hao and Cukurova2023).

The same trend is observed in the shopping decision and information retrieval tasks, where users displayed a strong preference for AI-enhanced systems, as evidenced by statistically significant differences (p < 0.001) in future usage intention.

These results align with the Technology Acceptance Model (TAM), which posits that higher perceived usefulness and ease of use lead to greater user adoption of technology (Venkatesh et al., Reference Venkatesh, Morris, Davis and Davis2003). As AI systems continue to evolve, users are likely to rely more on these tools for tasks that require synthesizing large amounts of information.

6.5 Discussion and Conclusion

This chapter has shed light on the transformative potential of AI-supported search interactions in reshaping users’ information seeking behavior and perceptions. Our findings reveal a significant shift in search efficiency, effectiveness, and user satisfaction when comparing AI-supported systems to traditional search methods.

The dramatic reduction in query modifications across all task types – from academic research to consumer decision-making – suggests that AI-supported search is adept at interpreting user intent and providing relevant results from the outset. This efficiency gain is particularly pronounced in complex, information-dense tasks, where the AI’s ability to understand context and nuance proves most valuable. The observed increase in dwell time on relevant pages further supports this notion, indicating that users are spending less time reformulating queries and more time engaging with pertinent information.

These behavioral changes have profound implications for our understanding of information seeking processes. Traditional models that emphasize iterative query refinement may need to be revisited in light of the more streamlined search process enabled by AI support.

The consistently positive user perceptions of AI-supported search across dimensions of usefulness, ease of use, and future use intention are particularly striking. These findings strongly support the Technology Acceptance Model and indicate a high likelihood of user adoption and continued use of AI-supported search systems. The marked preference for AI-supported search in future use intention suggests that these systems are not merely efficient, but also provide a more satisfying and gratifying search experience, aligning with the Uses and Gratifications Theory.

From a practical standpoint, these findings underscore the importance of integrating AI capabilities into search systems across various domains. Developers and organizations should prioritize features that reduce the need for query reformulations and support more efficient information retrieval. However, the varying benefits observed across different task types highlight the need for task-specific optimizations. Interfaces should be designed to capitalize on the increased dwell time, perhaps by providing more in-depth content summaries or related information to support deeper engagement with the material.

While these results are promising, they also open up new avenues for research. The long-term effects of AI-supported search on information seeking behavior and learning outcomes remain to be explored. How different user groups interact with and benefit from these systems is another crucial area for investigation. Moreover, while increased dwell time suggests deeper engagement, future studies should directly assess the quality and depth of information processing and comprehension.

As AI-supported search systems become more prevalent, it is also imperative to consider their ethical implications. Issues of bias in AI algorithms, privacy concerns, and the potential for creating information bubbles need to be thoroughly examined and addressed.

In conclusion, this chapter provides compelling evidence for the effectiveness and user acceptance of AI-supported search interactions. By enhancing search efficiency, promoting deeper engagement with content, and garnering positive user perceptions, these systems have the potential to significantly transform information-seeking behavior.

References

Ali, M. Y., Naeem, S. B., & Bhatti, R. (2020). Artificial Intelligence Tools and Perspectives of University Librarians: An Overview. Business Information Review, 37(3), 116124.10.1177/0266382120952016CrossRefGoogle Scholar
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623).Google Scholar
Chang, Y., Lee, S., Wong, S. F., & Jeong, S. P. (2022). AI-powered Learning Application Use and Gratification: An Integrative Model. Information Technology & People, 35(7), 21152139.10.1108/ITP-09-2020-0632CrossRefGoogle Scholar
Cheng, Y., & Jiang, H. (2020). How Do AI-driven Chatbots Impact User Experience? Examining Gratifications, Perceived Privacy Risk, Satisfaction, Loyalty, and Continued Use. Journal of Broadcasting & Electronic Media, 64(4), 592614.10.1080/08838151.2020.1834296CrossRefGoogle Scholar
Choi, T. R., & Drumwright, M. E. (2021). “OK, Google, Why Do I Use You?” Motivations, Post-consumption Evaluations, and Perceptions of Voice AI Assistants. Telematics and Informatics, 62, 101628.10.1016/j.tele.2021.101628CrossRefGoogle Scholar
Choung, H., David, P., & Ling, T. W. (2024). Acceptance of AI-powered Technology in Surveillance Scenarios: Role of Trust, Security, and Privacy Perceptions. Security, and Privacy Perceptions. http://dx.doi.org/10.2139/ssrn.4724446CrossRefGoogle Scholar
De Cremer, D., & Kasparov, G. (2022). The Ethics of Technology Innovation: A Double-edged Sword? AI and Ethics, 2(3), 533537.10.1007/s43681-021-00103-xCrossRefGoogle Scholar
Falgoust, G., Winterlind, E., Moon, P., Parker, A., Zinzow, H., & Madathil, K. C. (2022). Applying the Uses and Gratifications Theory to Identify Motivational Factors behind Young Adult’s Participation in Viral Social Media Challenges on TikTok. Human Factors in Healthcare, 2, 100014.10.1016/j.hfh.2022.100014CrossRefGoogle Scholar
Gao, Y., & Liu, H. (2023). Artificial Intelligence-enabled Personalization in Interactive Marketing: A Customer Journey Perspective. Journal of Research in Interactive Marketing, 17(5), 663680.10.1108/JRIM-01-2022-0023CrossRefGoogle Scholar
Gerlich, M. (2023). Perceptions and Acceptance of Artificial Intelligence: A Multi-dimensional Study. Social Sciences, 12(9), 502.10.3390/socsci12090502CrossRefGoogle Scholar
Hao, X., & Cukurova, M. (2023, June). Exploring the Effects of “AI-Generated” Discussion Summaries on Learners’ Engagement in Online Discussions. In International Conference on Artificial Intelligence in Education (pp. 155161). Springer Nature Switzerland.Google Scholar
Hornbæk, K., & Oulasvirta, A. (2017, May). What Is Interaction? Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 5040–5052).10.1145/3025453.3025765CrossRefGoogle Scholar
Hsu, C. L., Lin, J. C. C., & Miao, Y. F. (2020). Why Are People Loyal to Live Stream Channels? The Perspectives of Uses and Gratifications and Nedia Richness Theories. Cyberpsychology, Behavior, and Social Networking, 23(5), 351356.10.1089/cyber.2019.0547CrossRefGoogle Scholar
Huggins-Manley, A. C., Booth, B. M., & D’Mello, S. K. (2022). Toward Argument-based Fairness with an Application to AI-enhanced Educational Assessments. Journal of Educational Measurement, 59(3), 362388.10.1111/jedm.12334CrossRefGoogle Scholar
Jiang, N., Liu, X., Liu, H., Lim, E. T. K., Tan, C. W., & Gu, J. (2023). Beyond AI-powered Context-aware Services: The Role of Human–AI Collaboration. Industrial Management & Data Systems, 123(11), 27712802.10.1108/IMDS-03-2022-0152CrossRefGoogle Scholar
Katz, E., Blumler, J. G., & Gurevitch, M. (1973). Uses and Gratifications Research. The Public Opinion Quarterly, 37(4), 509523.10.1086/268109CrossRefGoogle Scholar
Loewenstein, G. (1994). The Psychology of Curiosity: A Review and Reinterpretation. Psychological Bulletin, 116(1), 7598.10.1037/0033-2909.116.1.75CrossRefGoogle Scholar
Meng, C., Aliannejadi, M., & de Rijke, M. (2023, October). System Initiative Prediction for Multi-turn Conversational Information Seeking. Proceedings of the 32nd ACM International Conference on Information and Knowledge Management (pp. 1807–1817).10.1145/3583780.3615070CrossRefGoogle Scholar
Na, K., & Lee, J. (2016). When Two Heads Are Better Than One: Query Behavior, Cognitive Load, Search Time, and Task Type in Pairs versus Individuals. Aslib Journal of Information Management, 68(5), 545565.10.1108/AJIM-04-2015-0057CrossRefGoogle Scholar
O’Brien, H. L., Arguello, J., & Capra, R. (2020). An Empirical Study of Interest, Task Complexity, and Search Behaviour on User Engagement. Information Processing & Management, 57(3), 102226.10.1016/j.ipm.2020.102226CrossRefGoogle Scholar
Pinzolits, R. (2024). AI in Academia: An Overview of Selected Tools and Their Areas of Application. MAP Education and Humanities, 4, 3750.10.53880/2744-2373.2023.4.37CrossRefGoogle Scholar
Savolainen, R. (2013). Approaching the Motivators for Information Seeking: The Viewpoint of Attribution Theories. Library & Information Science Research, 35(1), 6368.10.1016/j.lisr.2012.07.004CrossRefGoogle Scholar
Sundar, S. S., & Limperos, A. M. (2013). Uses and Grats 2.0: New Gratifications for New Media. Journal of Broadcasting & Electronic Media, 57(4), 504525.10.1080/08838151.2013.845827CrossRefGoogle Scholar
Ullah, I., & Khusro, S. (2020). On the Search Behaviour of Users in the Context of Interactive Social Book Search. Behaviour & Information Technology, 39(4), 443462.10.1080/0144929X.2019.1599069CrossRefGoogle Scholar
Venkatesh, V., & Bala, H. (2008). Technology Acceptance Model 3 and a Research Agenda on Interventions. Decision Sciences, 39(2), 273315.10.1111/j.1540-5915.2008.00192.xCrossRefGoogle Scholar
Venkatesh, V., & Davis, F. D. (2000). A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies. Management Science, 46(2), 186204.10.1287/mnsc.46.2.186.11926CrossRefGoogle Scholar
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User Acceptance of Information Technology: Toward a Unified View. MIS Quarterly, 27(3), 425478.10.2307/30036540CrossRefGoogle Scholar
Vuong, T., Saastamoinen, M., Jacucci, G., & Ruotsalo, T. (2019). Understanding User Behavior in Naturalistic Information Search Tasks. Journal of the Association for Information Science and Technology, 70(11), 12481261.10.1002/asi.24201CrossRefGoogle Scholar
Yiannakoulias, N. (2024). Spatial Intelligence and Contextual Relevance in AI-driven Health Information Retrieval. Applied Geography, 171, 103392.10.1016/j.apgeog.2024.103392CrossRefGoogle Scholar
Yue, G., & Peng, S. (2021, June). Application of Artificial Intelligence in the Academic Search Engine. International Conference on Applications and Techniques in Cyber Security and Intelligence (pp. 611616). Springer International Publishing.Google Scholar
Zamani, H., Dumais, S., Craswell, N., Bennett, P., & Lueck, G. (2020, April). Generating Clarifying Questions for Information Retrieval. WWW ’20: Proceedings of the Web Conference 2020 (pp. 418–428). https://dl.acm.org/doi/abs/10.1145/3366423.3380126CrossRefGoogle Scholar
Zhao, W. X., Liu, J., Ren, R., & Wen, J. R. (2024). Dense Text Retrieval Based on Pretrained Language Models: A Survey. ACM Transactions on Information Systems, 42(4), 160.10.1145/3637870CrossRefGoogle Scholar
Figure 0

Table 6.1 Experimental tasks design

Figure 1

Table 6.2 Questionnaire design

Figure 2

Table 6.3 Participant characteristics

Figure 3

Figure 6.1 Click path analysisFigure 6.1 long description.

Figure 4

Figure 6.2 Task completion timeFigure 6.2 long description.

Figure 5

Figure 6.3 Query count analysisFigure 6.3 long description.

Figure 6

Figure 6.4 Time on pageFigure 6.4 long description.

Figure 7

Table 6.4 User’s perceived usefulness

Figure 8

Table 6.5 User’s perceived ease of use

Figure 9

Table 6.6 User’s perceived usefulness

Accessibility standard: Unknown

Accessibility compliance for the HTML of this book is currently unknown and may be updated in the future.

Save book to Kindle

To save this book to your Kindle, first ensure no-reply@cambridge.org is added to your Approved Personal Document E-mail List under your Personal Document Settings on the Manage Your Content and Devices page of your Amazon account. Then enter the ‘name’ part of your Kindle email address below. Find out more about saving to your Kindle.

Note you can select to save to either the @free.kindle.com or @kindle.com variations. ‘@free.kindle.com’ emails are free but can only be saved to your device when it is connected to wi-fi. ‘@kindle.com’ emails can be delivered even when you are not connected to wi-fi, but note that service fees apply.

Find out more about the Kindle Personal Document Service.

Available formats
×

Save book to Dropbox

To save content items to your account, please confirm that you agree to abide by our usage policies. If this is the first time you use this feature, you will be asked to authorise Cambridge Core to connect with your account. Find out more about saving content to Dropbox.

Available formats
×

Save book to Google Drive

To save content items to your account, please confirm that you agree to abide by our usage policies. If this is the first time you use this feature, you will be asked to authorise Cambridge Core to connect with your account. Find out more about saving content to Google Drive.

Available formats
×