
Cautious optimism: AI and the research methods playbook

Published online by Cambridge University Press:  11 September 2025

Alison Mackey*
Affiliation:
Department of Linguistics, Georgetown University, Washington, DC, USA

Type: Conclusion

Copyright: © The Author(s), 2025. Published by Cambridge University Press.

Many of the authors in this volume on research methodology, like those in other recent publications in applied linguistics, have taken a wide variety of positions on the use of artificial intelligence (AI). Here and elsewhere, researchers are expressing concerns about the quality and ethics of AI-driven or AI-assisted research in applied linguistics. It seems inevitable that important aspects of our work will rapidly evolve and adapt. What follows is a short review of how AI is already influencing applied linguistics research methods, a few reflections for discussion, and some cautious optimism about what may lie ahead.

AI and the questions we ask

Articulating how a particular study identifies and addresses a unique and pressing need in the field has always been central to successful research and publication. Integrating AI into the process of generating research questions might help researchers achieve two critical goals in research design. First, AI can assist in brainstorming potential questions by identifying gaps in the literature through its powerful review capabilities. Second, AI can contribute to the development of higher-quality research questions; when models are effectively trained on what constitutes a strong research question, their output can support researchers – particularly novice ones – in crafting and refining questions that are answerable (i.e., can be directly addressed through appropriate data and analysis), feasible (i.e., sufficiently narrow in scope for a single study), and useful (i.e., meaningfully address real-world issues).

That said, caution is warranted. Researchers focused on replication and meta-analytic work have long expressed concern that the field’s emphasis on novelty and exploration can overshadow the essential work of confirmation and refinement. AI’s ability to generate increasingly narrow and specialized research questions may reinforce this trend, as researchers seek unique angles to contribute original findings. Nevertheless, with careful attention to the ongoing need for replication and synthesis, the judicious use of AI in question generation and refinement can greatly accelerate and streamline this process – likely leading to stronger questions and, ultimately, better studies.

AI and the tools we develop

The rise of open science practices – particularly the alignment and open availability of research tools and data across studies and research teams – has been a major advance in applied linguistics over the past decade and a half. Resources like the Instruments for Research into Language Studies (IRIS) repository (https://www.iris-database.org/) provide key advantages: (a) enabling researchers to replicate previous studies more easily, (b) eliminating the need to reinvent the wheel for each new project, and (c) facilitating robust comparisons and meaningful generalizations across research.

AI is now being used not only to identify and retrieve relevant instruments more efficiently but also to design or adapt them based on researcher input – rapidly generating new versions tailored to different languages, contexts, or research goals. Like IRIS, AI can offer references and summaries of studies that have used similar tools, further supporting evidence-based instrument selection and design. As with any new tool, however, caution is essential. AI-generated instruments must be carefully reviewed to ensure they accurately reflect the source materials and maintain fidelity to the constructs being measured, and original developers must be given proper citation and credit. An additional concern is that the widespread use of AI in instrument creation raises the risk of standardization to the point of stagnation, with researchers continually relying on the same familiar formats simply because they are readily available. With thoughtful design, critical oversight, and appropriate training, though, AI has undeniable potential to speed up and streamline both the process and the products of research tool development.

AI and the learners we study

A major development at the intersection of AI and research methodology involves the ways we model and simulate language learners and their interactions. For researchers working in interactional contexts, the use of AI as a simulated interlocutor offers practical and conceptual advantages. Recent advances mean that AI systems are becoming increasingly adept at compensating for variability in natural speech, including accents, dialects, and unplanned language use. As AI interlocutors improve, they are being integrated into language learning platforms – such as Duolingo Max (https://www.duolingo.com/help/what-is-duolingo-max) – and are beginning to facilitate realistic, spontaneous exchanges with learners, tailored to individual learning opportunities. These advances point towards a possible methodological future in which AI interlocutors may be used as embedded participants in research designs. This would provide us with tools that are not only cost-effective but also scalable and replicable. It also opens doors for studying interaction in ways that are independent of participant availability, scheduling, and instructional constraints.

Beyond simulating interlocutors, researchers are beginning to ask questions about the feasibility of simulating learner behavior itself. Could we one day model second language learning trajectories in the same way meteorologists model storms or epidemiologists model disease progression? While current AI systems cannot simulate learner behavior with high precision, progress in modeling L2 production, comprehension, and individual differences is accelerating. The potential for researchers to test instructional hypotheses in advance – using AI to simulate outcomes across different teaching scenarios, learner profiles, and environments – might fundamentally change how we design, pilot, and validate research. Simulations like this might be particularly useful in contexts where ethical concerns or practical limitations constrain experimentation with live participants.

At the same time, though, we need to be careful. Simulation systems, if over-relied on, risk reinforcing narrow views of what constitutes a typical or successful learner. There are legitimate concerns about bias in training data, about overly deterministic models of learning that overlook human agency, and about the ethical implications of designing research based on projected rather than observed behavior. Nevertheless, as we learn to balance innovation with scrutiny, AI simulation seems likely to become a transformative tool for studying learners in new and generative ways.

For many of us, AI platforms and large language models are now standard components in our research toolkits. It is inevitable that some of our most established methods will evolve in the not-too-distant future. The most balanced and forward-looking course of action would seem to be to cautiously embrace AI, to remain open to how it may reshape our approaches to research methodology, and to look to journals like ARAL to publish work that tells us more. The next issue, which is the first under the superb new editorship of Andrea Révész, does a magnificent job of exploring all these issues, and more, in “Artificial intelligence in applied linguistics: Applications, promises, and challenges.”

Acknowledgments

I would like to acknowledge the various critical contributions of (in alphabetical order) ChatGPT, Erin Fell, and David Yarowsky to debating, drafting, completely rewriting, and editing the content for this short piece.