
Developmental reviewing: Is it really good for science?

Published online by Cambridge University Press:  01 December 2025

Tammy D. Allen*, University of South Florida, Tampa, FL, USA
Kimberly A. French, Colorado State University, Fort Collins, CO, USA
Derek R. Avery, University of Houston, Houston, TX, USA
Eden B. King, Rice University, Houston, TX, USA
Brenton M. Wiernik, Independent Researcher, Tampa, USA
Corresponding author: Tammy D. Allen; Email: tallen@usf.edu

Abstract

Peer review is part of the bedrock of science. In recent years the focus of peer review has shifted toward developmental reviewing, an approach intended to focus on the author’s growth and development. Yet, does the focus on developing the author have unintended consequences for the development of science? In this paper, we critique the developmental approach to peer review and contrast it with the constructive approach, which focuses on improvement of the research. We suggest the developmental approach, although with laudable aims, has also produced unintended consequences that negatively impact authors’ experiences as well as the quality and meaningfulness of the science published. We identify problems and discuss potential solutions that can strengthen peer review and contribute to science for a smarter workplace.

Information

Type
Focal Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Society for Industrial and Organizational Psychology

We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller, but we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong.

--Richard Horton, former editor-in-chief of The Lancet (Horton, 2000)

Overview

Concerns about journal peer review (the act of knowledgeable colleagues assessing scholarship prior to publication) are nothing new. The above quote dates to 2000. Everyone has heard of and shared horror stories about bad experiences with reviewers (and action editors). Moreover, articles articulating problems with the review process have been published in journals for years (e.g., Ellwanger & Chies, 2020; Tennant et al., 2017). However, most appreciate the important intended role that peer review plays in the scientific process to the extent that it helps ensure the quality and integrity of published research.

The current system of double-blind peer review (i.e., the identity of both the author and the reviewer are hidden) used within the field of industrial-organizational (I-O) psychology has been in place for decades. Within I-O, most of the attention on peer review in recent years has focused on ways to ensure our science is more open and robust and on the elimination of questionable research practices (QRPs) (e.g., Banks et al., 2016, 2019; Butler et al., 2017; Grand et al., 2018; Kepes et al., 2022). For example, authors are increasingly asked to include their data and analysis code so that reviewers can check for anomalies. Some journals have innovated to include multiple types of submissions, such as results-masked review (i.e., reviewers evaluate a manuscript without being shown the results and discussion) and registered reports (i.e., study design, methods, and proposed analyses are submitted for review prior to data collection) (e.g., Journal of Business and Psychology, Journal of Vocational Behavior). Although more evidence is needed to demonstrate which of these practices yield improvements to the science, and to what extent, most agree that efforts to reduce fraud and improve transparency are beneficial to the integrity of science.

Over the last decade or so, another trend has taken hold that has received less attention and scrutiny: the ethos that reviewers be “developmental” in their reviews (e.g., reviewer guidance for Journal of Applied Psychology, Academy of Management Review). Although well intentioned and with several desirable elements, we suggest that there are potential harmful effects on science associated with the developmental review approach. In this focal article we consider how this approach has the potential to modify what we research and publish as I-O psychologists and how it may impair perceptions of justice among authors. In doing so, we identify several concerns and offer potential solutions, including a renewed focus on the research (constructive) rather than on the author (developmental), in the sections that follow. We preface these comments by noting that the views expressed in this focal article do not represent those of the current journal. Nor do our views reflect upon or represent the journals with which the authorship team serves in roles as authors, reviewers, or action editors. We also acknowledge that many of the concerns we raise are applicable to peer review in general and not isolated to the developmental approach to peer review. However, we contend that the developmental approach primes a different mindset among reviewers that serves to exacerbate many of the concerns that exist and therefore merits scrutiny.

Developmental Reviewing

The term developmental reviewing means different things to different journal editors and reviewers (e.g., Saunders, 2005; Ragins, 2018). Most notably, Ragins (2015) clarified and shifted the definition of developmental reviewing from reviewing that “focus[es] on the work” toward reviews that also offer “learning and growth opportunities for the author, the reviewer, and our field” (p. 2). Her editorial further elaborates that “we develop the work by developing the author” (p. 1), and that developmental review can “help authors discover the gems in their work, gain new insights, find their voice and contribution, … and develop their capacity and willingness to contribute to and engage in the field” (p. 2). We suggest that the focus, approach, and tone associated with developmental peer review as defined by Ragins (2015) differ from a constructive approach, which focuses more explicitly on providing critiques and suggestions for the manuscript (e.g., Feldman, 2004). We outline these differences in Table 1.

Table 1. Differences Between Constructive and Developmental Peer Review

We address what we believe are the implications of the shift to developmental reviewing. Few would argue with the notion that reviewers should be kind and constructive in their reviews. Moreover, there should be no room in the review process for hostility. However, does the focus on developing the author (rather than on developing the research) have unintended consequences for the development of science? Does it result in improvements in the science? In the sections below, we discuss how developmental reviewing may undermine author expertise; reward A while hoping for B; draw attention away from accuracy, validity, and fraud; serve to maintain the status quo; and increase reviewer burden.

Undermining author expertise

The developmental perspective assumes that authors are junior to the field and should be mentored by the reviewers. Reviewers are elevated to the status of mentor and sometimes seemingly assume the role of senior coauthor. In the spirit of “developing” the authors, the reviewer may feel empowered to treat them as more junior collaborators, requesting and even insisting on consequential changes to the study framing or analysis based on the reviewer’s own subjective perspective.

Consider the experience recounted by Roberson et al. (2024) in their commentary on publishing a DEI article—a research topic in which the authors have exceptionally deep expertise. In the decision letter, their action editor noted that they were “blessed by two experienced, thoughtful, and developmental [italics added] reviewers who studied your manuscript very carefully” (p. 261). Roberson et al. (2024) noted that the suggestions were instructional in slant and that the tenor of the feedback was that the “reviewers not only had a better command of the diversity literature than the authors but that the authors lack a knowledge of scholarly writing and therefore must be offered didactic instructions on how to craft a manuscript” (p. 263). The editor appears to convey a belief that it is the reviewers’ role to instruct the authors on how to think about the topics they are researching.

Unfortunately, this is not an isolated incident. During the preparation of this manuscript, one of the authors received a six-page, single-spaced review on a revision. The letter included many detailed suggestions for how to write individual words, sentences, and sections, and for how to approach revisions in a way that aligned with the reviewer’s own work. These detailed comments were peppered with well-meaning positive commentary that took on a mentoring tone, assuming the author needed to be taught how to write and conduct research. In fact, the Academy of Management checklist (Academy of Management, 2024) includes detailed reviewer guidelines (including videos) on developmental reviewing seemingly entrenched in a similar perspective. Take for example the following advice:

Try to make your revisions developmental. The goal is to develop authors as well as evaluate their work. Losing someone who might subsequently contribute greatly to management learning and education research, but was dissuaded by a caustic or overly critical review process as they are beginning to learn to conduct research [emphasis added] in this area is not ideal.

Note the mindset that the author is new to the research process. As a result of the expectation that reviewers provide extensive feedback, focus on suggestions, and inform the authors how the reviewer might approach the research, detailed and long reviews become “this is how I would have done this research” rather than engagement with the research as conducted. At its worst, this practice can result in “a manuscript that its author may not have intended to write, expressing in someone else’s language thoughts the author may not have intended to convey, under a title the author may not have selected” (Bedeian, 1996, p. 315).

Aligned with this experience, in a study of authors of articles published in Academy of Management Journal and Academy of Management Review, more than one third of the participants reported that they were treated like an inferior by an editor or by a reviewer, and over half felt that an editor regarded the reviewer’s knowledge of the participant’s manuscript as more important than their own (Bedeian, 2004). We do not intend to suggest that the proponents of developmental reviewing are in favor of reviewer ghostwriting or that they espouse condescending attitudes toward authors (see Lepak, 2009; Ragins, 2015). Nor do we imply that experienced scholars cannot learn from the review process. However, the developmental approach mindset encourages reviewers to adopt a “let me teach you” stance that assumes primacy of the reviewers’ experience and skills over that of the manuscript authors. This stance is especially questionable when considering that manuscripts today are typically authored by a team of scholars who possess varied skill sets and an accumulation of experience. In a study concerning the ethics of peer review, the two most common problems reported were reviewer incompetence (61.8%) and reviewer bias (50.5%) (Resnik et al., 2008). The elephant in the room is that the authorship team could be (and often is) far more versed in the content area of the research than the reviewers.

A mismatch between the help provided and the help/KSAOs (knowledge, skills, abilities, and other characteristics) needed is further problematic when factoring in the powerful position of the reviewer (Bedeian, 2004; also recognized in Ragins, 2015). The reviewer-as-mentor model inherently places more weight, value, and power in the reviewer’s words and expertise relative to those of the authors. This weight may in fact be inappropriate and at worst can damage the quality of the work and/or diminish the authors’ voice (Bedeian, 2004). As we know from the “unhelpful help” literature (Dalal & Sheng, 2019), this mismatch between the help/KSAOs provided and the help/KSAOs needed can have negative effects on author motivation and attitudes and be perceived as condescending, the opposite of the encouraging and motivating goals of developmental reviewing (Ragins, 2015).

Rewarding A, while hoping for B

We suggest developmental reviewing contributes to a misalignment between the objectives of science and reviewer motivations. Developmental reviewing centers attention on developing authors rather than on developing the paper through identification of high-quality research that makes a meaningful contribution to knowledge. For example, “best reviewer” awards are determined based on criteria such as timeliness, number of reviews completed, and thoroughness of the reviews. That is, rewards are not based on the behavior that we should most desire: accurately identifying research that advances the field (Kerr, 1995).

Rater motivation theory suggests raters will be motivated to rate accurately if rewards exist for accurate ratings and if the possibility of receiving rewards is correlated with rating accuracy (Murphy & Cleveland, 1995). No such rewards exist in developmental peer review. There is no standard for accuracy. Moreover, viewing the reviewer as a benign mentor of the research perhaps makes us more likely to forget that reviewer guidance can be idiosyncratic, and that not all guidance is in the service of good science. Further, we contend that little has been done to acknowledge and grapple with bias and inaccuracy as part of the review process, which should be central to the objective of publishing meaningful science.

In addition to no standard for accuracy, there is no accountability for (in)accuracy among reviewers. As assumed developmental experts, reviewers are not explicitly asked to justify or substantiate requests. Editors may rate reviewers, but these ratings are not accessible even to reviewers themselves, and ratings may be based on inadequate criteria. Editors may also provide feedback to reviewers, but, to our knowledge, this is rarely done in practice. As is well documented in the performance appraisal literature, lack of justification and accountability reduces accuracy (Levy & Williams, 2004; Mero & Motowidlo, 1995). Without required justification or feedback on review accuracy, reviewers are not held responsible and may be unaware of the degree to which their assessments are accurate.

Due to the focus on developmental reviewing, reviewers (and editors) may expect lengthy, critical reviews for all papers, even submissions of outstanding quality, motivating reviewers to extend the length of their reviews. Evaluating an outstanding paper as such with minimal commentary may be construed as “nondevelopmental” by an action editor who is looking for more comments as a signal of thoughtfulness and material for a decision. “Developmental” inherently assumes change is necessary. Reviewers who at first blush have minimal comments might be especially critical (nitpicky) to ensure they meet “developmental” review expectations and are perceived as conscientious reviewers.

Inattention to accuracy, validity, and fraud

Related to the misaligned incentives above, developmental reviewing also has the potential to center reviewer attention on developing authors rather than on ensuring that the results in the paper are accurately computed and reported, interpreted validly, and, at a minimum, free from fraud and misrepresentation. As the epigraph to this article alludes, public beliefs about peer review center on its purported role in ensuring that published results are accurate, valid, and free of fraud or misrepresentation. For people outside of academic settings, “peer reviewed” is often regarded as akin to a golden seal of approval indicating that the contained work has been “vetted” and deemed correct. Despite this perception, cases of fraud, misspecified analyses, inaccurate reporting of results, and other sources of invalidity are not uncommon (e.g., Engber, 2024; LeBreton et al., 2009; Lee, 2024). For example, a review of 25 years of papers in leading I-O psychology and management journals that employed confirmatory factor analysis, one of our field’s most popular statistical methods, found that results tables include mathematically impossible values with alarming frequency (Credé & Harms, 2015). In the authors’ experience, reviewers rarely appear to spend much, if any, time checking results for accuracy or even plausibility. Even when data and code are supplied by authors, it is unclear how often reviewers actually open these files, review them, or verify their correctness.
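To illustrate how little effort such checking can require, consider the following Python sketch. It is a hypothetical example of ours, not a tool any journal currently mandates: it screens a reported correlation matrix for values that are mathematically impossible, a simpler cousin of the impossible CFA values documented by Credé and Harms (2015).

import numpy as np

def check_correlation_matrix(r):
    """Flag ways a reported correlation matrix can be mathematically impossible."""
    r = np.asarray(r, dtype=float)
    problems = []
    if not np.allclose(r, r.T):
        problems.append("matrix is not symmetric")
    if not np.allclose(np.diag(r), 1.0):
        problems.append("diagonal entries are not all 1.0")
    if np.any(np.abs(r) > 1.0):
        problems.append("entries fall outside [-1, 1]")
    # A valid correlation matrix can have no negative eigenvalues.
    if np.linalg.eigvalsh(r).min() < -1e-8:
        problems.append("matrix is not positive semi-definite")
    return problems

# These three pairwise correlations cannot jointly occur in any dataset.
reported = [[1.0, 0.9, 0.9],
            [0.9, 1.0, -0.9],
            [0.9, -0.9, 1.0]]
print(check_correlation_matrix(reported))  # ['matrix is not positive semi-definite']

A screen of this kind runs in seconds, yet current reviewer guidance rarely asks for anything like it.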

We believe that the lack of attention to ensuring basic validity, accuracy, and truthfulness in science is driven in part by an overemphasis on developmental reviewing and a lack of review standards specifically focused on basic verification of reported results. For example, review guidelines or checklists rarely include instructions to validate results reported in text, tables, or figures. Many reviewers appear to accept reported numbers in papers at face value without checking whether they are accurate, or even possible. Indeed, three of the four criteria that comprise the reviewer checklist for Academy of Management Journal are dedicated to innovation, novelty, and contribution, whereas only one of the four criteria speaks to evaluation of methodological rigor (internal, construct, statistical conclusion validity). Emphasizing criteria such as novelty or storytelling in an environment in which the opinions of reviewers are given preeminence in their role as “developers” amplifies the subjective elements of the review process.

Maintaining the status quo

As I-O psychologists, we are instructed early in our training that overemphasizing conformity can prove perilous for innovation. Unfortunately, some of the unhealthier aspects of the current approach to developmental reviewing are almost certainly preventing the evolution of our science. The process of becoming reviewers, editorial board members, and editors is largely one of socialization, wherein certain values are identified, recognized, promoted, and rewarded. For more objective elements, this often produces better research. For instance, explicitly identifying QRPs as unethical and socializing them as such facilitates peer reviews aimed at minimizing the occurrence of these behaviors in papers published in our discipline. In this case, socializing reviewers toward conformity produces a desired outcome (Banks et al., 2016).

Unfortunately, however, this process is equally applicable to more subjective elements, thereby resulting in far less desirable outcomes—especially when reviewers are expected to develop authors to think as they do. For example, reviewers frequently recommend particular analytic methods (such as mediation analysis) that are popular, even when they are not appropriate for the study under review. As a more pernicious example, white racial superiority was the generally accepted scientific standard in the Journal of Applied Psychology in the early 20th century (and was likely treated as such by reviewers), but there has been considerable softening of this stance over time, and papers are now routinely published that reflect a greater openness to beliefs of racial equality (Roberson et al., 2017). This naturally occurring evolution of thought is slowed or outright thwarted if the default assumption in the peer-review process is that the gatekeepers (reviewers) are inherently correct in their perspectives and authors are inherently wrong. We may be slowing progress unnecessarily by promoting the belief that reviewers know best and are responsible for compelling authors to embody their beliefs about what to examine and how to execute and describe these scientific investigations. In such instances, inertia becomes entrenched, the status quo remains unchanged, and progress is halted.

Increasing reviewer burden

Reviewer burden is a well-known problem throughout the sciences (Ellwanger & Chies, 2020). The developmental approach asks reviewers to not only identify concerns but also to “take the next step in focusing the authors and helping them move forward with their work” (Ragins, 2017, p. 573). Research from the mentoring literature shows that providing developmental guidance to colleagues is time consuming, requiring ample interaction and clear communication between the mentor and the protégé (Eby et al., 2013). Similarly, Ragins (2017) states that developmental reviewing means having a “dialogue” or “conversation” with the authors, which requires time and mental investment. Performance appraisal research and practice have long noted the difficulty associated with the dual roles of developmental coach and evaluator (Meyer et al., 1965; Murphy, 2008), yet the developmental approach expects this of journal reviewers.

Certainly, asking reviewers to engage in this developmental dialogue requires more of their time and other resources. As we noted above, it is likely that reviewers are compelled to write longer and longer reviews in the spirit of being developmental, increasing the burden on both reviewers and responding authors. We suggest that developmental reviewing is further extending conversations across review iterations (third, fourth, and fifth revisions) based on the artifice of developing the authors. Yet there is no evidence that the time and energy spent on developmental (as opposed to constructive) dialogue is in fact worthwhile or that it results in scientific gain. We question whether reviewers find that being developmental improves the experience and meaningfulness of their work (as suggested by quotes in Ragins, 2017). We also note that with the open science movement and the inclusion of online supplemental materials, there is even more material for reviewers to review, further adding to reviewer (and author) workload.

Finally, as a field we have seemingly forgotten about guidelines that encourage reviewers to be brief (typical recommendations are 1.5–2 pages, although Academy of Management recommends 2–4 pages).

From developmental to constructive reviewing

The call for developmental reviewing emerged with admirable aims: to make the peer-review process friendly, positive, and encouraging, and to increase learning and knowledge for all (Lepak, 2009; Ragins, 2015). We argue these well-intentioned directives have perhaps gone too far, unevaluated and unchecked. Above, we detailed concerns associated with the focus of developmental reviewing. In the following section we provide ideas for how the review process may be improved. Note that these ideas are presented as a basis for exploration. Although not all may be universally desirable or feasible (indeed, there are varying opinions among the authorship team), and there may be incompatibilities and contradictions in some of these suggestions, they are intended to serve as thought experiments to stimulate discussion and reflection on review practices. Additionally, our suggestions aim to curb undesirable aspects of developmental reviewing or refocus toward a constructive reviewing approach; yet, these same suggestions are broadly applicable to resolving other issues in the review process (e.g., power imbalance, fairness). Table 2 provides a summary.

Table 2. Summary of Suggestions

Redistribute power in peer review

Under the current system, the individual who has the most to gain or lose from the process (the author) has limited input or agency. The power imbalance in the review process has tipped too far toward reviewers, chipping away at author agency over their own ideas. Such concerns were voiced decades ago (see Bedeian, 2004) and are even explicitly warned against in Ragins’s (2015) editorial defining developmental reviewing. However, they appear to have gone unheeded. In an effort to increase author agency, we suggest consideration of the following ideas:

  • Involve authors in the evaluation of reviewers. Although the practice at some journals is for editors to rate reviewers, authors are typically not provided with the opportunity to evaluate the extent to which the reviewer provided fair, accurate, and helpful feedback. Just as we do with ratings of other types of service providers, such as restaurants, hotels, and ride share drivers, provide authors with the opportunity to rate the reviews received. Make the overall average rating of each journal’s reviewers available to authors.

  • Consider innovations that make the peer-review process more procedurally just for authors. Authors should be entitled to a rigorous review process in which the reviewer has the competencies required to understand the complexities and nuances of the work being evaluated (Bedeian, 2008). Moreover, clear appeal and rebuttal processes should be outlined with each decision letter. Each journal should have transparent appeals criteria and a process that can be found on the journal website. Develop an author bill of rights (e.g., that the peers who review have the expertise to do so; Clair, 2015).

  • Empower authors at the time of submission to decline having graduate students involved in the review of their paper. Notably, this change would still enable graduate students to serve as reviewers while empowering authors to opt out for their particular paper. Currently, authors do not know whether the person who reviewed their paper is a second-year graduate student or a person with decades of experience on the topic of their research. We are aware of journals that regularly have doctoral students act as the reviewers of record and those that encourage junior scholars to serve as co-reviewers. We have an obligation as a community of scholars to develop and train those who are inexperienced to become high-quality reviewers, but given the career stakes for those submitting work for journal peer review, we encourage additional means of training. This could include workshops at conferences. Doctoral training consortia tend to focus on publishing but, to our knowledge, provide little discussion or guidance on conducting peer review.

  • Allow authors to choose the type of review they would like to receive (e.g., constructive versus developmental). Reactions to developmental reviewing may differ depending on experience and career stage. Journals could define a default level for reviewer comments (e.g., pitched to a graduate student, an early career scholar, or an expert).

  • More vigilance is needed on the part of the editor to avoid overreliance on reviewers and to guard against their own motivations and biases that may infiltrate publication decision-making. Editors can play a role in curbing overzealous developmental reviewers who assert subjective views and demands onto the review process that remove author agency. One way to ensure this is to formally restrict reviewer involvement. For instance, reviewers could see the original manuscript but, after the first round of review, be privy only to the response letter(s). This would lessen the load on reviewers in subsequent rounds of revision, minimize their impact on the process, and place the emphasis on decision-making back on the editor, where it belongs.

  • Given the lack of agreement and subjectivity in reviews, reviewers should not have to be 100% satisfied before an editor decides to accept a paper. The goal of peer review as a process should not be to satisfy the reviewers; the goal should be to address fundamental issues related to the quality of the science. These are not the same thing.

  • Just as authorship teams often include individuals with different skill sets, recruit reviewers for specific tasks in the review process. Whereas one or two reviewers might be instructed to focus broadly on the paper, other reviewers could be instructed to specifically evaluate the quality of the methods (with specific expertise sought for critical study aspects, such as systematic reviews) and the accuracy of the results, including rerunning analyses to detect errors and fraud (one simple example of such a check is sketched below). Although statistical reviewers are common in some fields, such as medicine, such roles are rare in organizational science journals. In one author’s personal experience, a recommendation that reviewer teams for systematic reviews and meta-analyses include a librarian with expertise in information retrieval was met with skepticism and a dismissal that “such a librarian could not comment on the theory or rest of the paper.” Asking reviewers to focus on specific aspects of the paper could reduce the workload for individual reviewers and help to ensure that reviewers focus on aspects of the paper for which they have expertise.
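To make the idea of a dedicated accuracy reviewer concrete, the sketch below implements the GRIM test (Brown & Heathers, 2017), a published technique that asks whether a reported mean of integer survey responses is arithmetically possible given the sample size. The function name and the numbers are our own invented illustration.

def grim_consistent(mean_reported: float, n: int, decimals: int = 2) -> bool:
    """GRIM test: a mean of n single-item integer responses must equal k / n for
    some integer k; check whether the closest such value rounds to the reported mean."""
    k = round(mean_reported * n)  # the achievable response sum nearest the reported mean
    return round(k / n, decimals) == round(mean_reported, decimals)

print(grim_consistent(3.25, 28))  # True: 91 / 28 = 3.25 exactly
print(grim_consistent(3.27, 28))  # False: no integer sum of 28 responses yields 3.27

Checks like this require no access to raw data, only the descriptive statistics already reported in the manuscript.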

Increase accountability and transparency

The peer-review system operates under the premise that reviewers are motivated to produce accurate and objective reviews. We need to grapple with the fact that reviewer evaluations may be motivated by a variety of factors and that reviewer feedback does not always improve scientific quality. The peer-review process could be altered to increase transparency and accountability among reviewers and editors. To meet this need, we propose consideration of several possible strategies:

  • Just as there has been a movement toward greater transparency in research, we advocate for greater transparency and accountability associated with the review process. This may include more open practices such as publishing the name of the action editor on each published article. Reviewer comments could be open and part of the record. That is, publish reviewer comments, editor decision letter(s), and author responses as a supplemental file that accompanies each published article.

  • Maintain and publish reviewer-related statistics. Just as journals should be transparent about rejection rates and time to decision, metrics such as average length of reviews (i.e., 2 pages versus 5 pages), average author ratings of reviews, and potentially journal inter-reviewer reliability could be tabulated and made available to potential authors. We also echo suggestions made by Avery et al. (2022) to publish decision statistics by keywords, geographic location, and so forth to provide more accountability and transparency to authors. Internal to the journal, data should be readily available on each reviewer’s recommendation tendencies; some reviewers are at baseline more lenient or harsh than others (a minimal illustration of such a tendency index follows this list). This type of information can be used to help inform action editors’ reviewer selection and manuscript decision-making.

  • Reinforce journal guidelines at regular intervals. Regular communication with editorial board members is rare. Many journals have annual meetings in which statistics and information about journal operations are shared and suggestions for reviewers may be conveyed. Similar information may be shared through periodic email communications. However, corralling reviewers to review with journal guidelines in mind (e.g., reviews should average 2 pages and 10 points rather than 5 pages and 25 points), to refrain from suggesting QRPs, and to focus on the issues the editor-in-chief wishes reviewers to focus on (e.g., empirical contributions are welcome without strong theoretical motivations) requires continual communication and reinforcement (e.g., through feedback).

  • Criteria used to select members of the editorial board of a journal could be included on the journal website. What minimum standards are required? Does the journal permit graduate students to serve as reviewers? We concur with Schoen (2020), who noted that we need to “have serious conversations about what it means to be a peer” (pp. 41–42). As I-O psychologists, we would recommend that any organization be explicit as to the minimum knowledge, skills, and abilities one needs to qualify for a job. The same should be true for reviewers who are tasked with gatekeeping our science. In addition, journal action editors should be prepared to justify their selection of reviewers for a manuscript. Given the importance of reviewer selection to editorial boards and in manuscript assignments, it is shocking that there is so little work evaluating and innovating this aspect of the peer-review process.

  • Action editors should take into account the expertise level of the reviewer and apply firm limits on how much reviewer input is weighed in the decision process. Reviewers who have general expertise can provide a useful perspective, but their input should be appropriately calibrated. And ultimately, action editors need to take responsibility for evaluating the manuscript themselves, setting revision requirements, and making acceptance decisions. Reviewers should be consulted for input, not positioned as arbiters, and action editors should be held accountable for the evaluation process, accuracy, and validity of the articles they accept and reject.
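As a minimal illustration of the recommendation-tendency statistics suggested above, the short Python sketch below computes a per-reviewer leniency index from hypothetical records; the reviewer identifiers, the 1–4 recommendation coding, and the data are all invented for illustration.

from collections import defaultdict
from statistics import mean

# Hypothetical records: (reviewer_id, recommendation), coded 1 = reject ... 4 = accept.
reviews = [
    ("r1", 1), ("r1", 2), ("r1", 1),
    ("r2", 3), ("r2", 4), ("r2", 3),
    ("r3", 2), ("r3", 3),
]

by_reviewer = defaultdict(list)
for reviewer, recommendation in reviews:
    by_reviewer[reviewer].append(recommendation)

grand_mean = mean(recommendation for _, recommendation in reviews)
for reviewer, recs in sorted(by_reviewer.items()):
    # Leniency index: the reviewer's mean recommendation relative to the journal pool.
    print(f"{reviewer}: n={len(recs)}, mean={mean(recs):.2f}, leniency={mean(recs) - grand_mean:+.2f}")

Even this crude index would let an action editor see at a glance that one reviewer’s “reject” carries different information than another’s.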

Use our science to improve the review process

Just as studies are being conducted to evaluate different open science practices that may improve our science (e.g., preregistration; cf. Devezer et al., 2021), we advocate for modeling, experiments, and research on the practice of peer review. We can use our science on performance appraisal, motivation, and selection to examine factors such as bias in review and reviewer training, reviewer motivation, development of criteria for evaluating reviewers, and reviewer selection to improve peer review. Biomedical fields have conducted and evaluated the impact of interventions intended to improve the quality of peer review for publications (see Bruce et al., 2016, for a review). Notably, they found that training workshops, mentoring (peer reviewers discussing their review with a senior peer reviewer), and checklists did not impact peer review quality. Open peer review and the addition of a statistical peer reviewer were found to increase quality. To build evidence-based changes to the peer-review process, we propose:

  • Develop programs of research focused on ways to improve the accuracy of the peer-review process. This could include experimental studies that compare different reviewer instructions (e.g., take a developmental approach versus a constructive approach) or studies that compare different reviewer comment formats. Another idea may be to compare the reviews of humans versus AI for perceived accuracy and fairness, with raters blind to how the reviews were generated.

  • Developmental reviewing is founded on ideals of creating encouraging, helpful systems that retain talent and ideas. Research that studies the implications of peer-review practices for reviewer attitudes (e.g., occupational commitment, turnover intentions) and self-concept (e.g., occupational self-efficacy) would help to meet this aim. For example, how do the reactions of authors differ when reviewers are instructed to be developmental versus constructive in their reviews and remarks? How accurate and fair do authors perceive each of the two conditions to be? Such research would necessarily need to take intersectional personal and professional identities into consideration.

  • Create clear guidelines for reviewers that differentiate between methodological or statistical errors and subjective assessments. Editors, too, need to recognize the elements of reviewer comments that are subjective (e.g., contribution, novelty, compelling writing, specific design or analytic choices) versus those that involve rigor or clarity (e.g., internal, external, construct, and statistical validity; lack of consistency or precision). Consider new formats for reviewer comments and decision letters that recognize and bring to the surface this distinction.

  • Develop a better way to document and quantify reviewer expertise. Currently, journal administrative systems consist of lists of reviewers who have self-reported their content areas of expertise. There is no standard other than self-report for documenting expertise. More objective ways of identifying the competencies of each reviewer should be developed and implemented. For example, instead of relying on author self-reported keywords, perhaps manuscripts could be submitted to AI systems through which the key areas of expertise needed to review the manuscript are extracted. Such systems could better guide the selection of reviewers.

  • Develop novel, effective tools to select reviewers. The size of journal editorial teams has grown considerably. For example, as referenced in Allen et al. (2014), in 2003 the Journal of Applied Psychology had seven action editors and 83 editorial review board members; in 2009 there were 11 action editors and 192 board members. The 2024 numbers are 16 action editors and 300 editorial board members. In 2003, Personnel Psychology had two action editors and 47 board members, and in 2011 the numbers grew to five action editors and 91 board members. The 2024 numbers are seven action editors and 192 board members. Certainly, there is a need for an increased supply of editorial decision-makers and reviewers to meet the increase in the number of journal submissions. However, the larger pool of editors and board members also creates the risk that editors will be less familiar with reviewers’ competencies and expertise, creating more error in matching reviewers to papers. We also question whether the editorial boards of today represent the same degree of scholarly expertise as the reviewer pool of a decade ago, and whether editorial practices have adjusted to the changing reviewer landscape. More evidence-based ways are needed that go beyond the simple systems currently in place for selecting reviewers to ensure each manuscript receives a quality review.

  • Explore how AI can be used to improve the peer-review process. Discussions are already being had as to how AI may be incorporated into the review process (see, for example, Sabet et al., 2023). Could I-O psychology be at the forefront of finding ways to implement AI to improve peer review and reduce human burden? For example, could AI be used to match reviewers to papers? Publishers could develop their own algorithms (based on AI or other approaches) to preserve research integrity and detect inconsistencies. AI agents might be trained and deployed for specific purposes, such as detection of numeric inconsistencies and mismatches between methods and results sections, recommendations for clarifying text narratives, and detection of image and other data manipulation (one such numeric check is sketched below). Given known biases in AI and limited capabilities to understand context and perform real critical evaluation, we do not recommend that AI agents be used as reviewers in their own right, but there are specific tasks where AI models can usefully complement human reviewers or perform tasks that human reviewers find difficult. Critically, any use of AI should be subjected to rigorous testing prior to operational use.
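As one concrete example of a narrow, testable numeric check, the sketch below recomputes the two-tailed p-value implied by a reported t statistic and its degrees of freedom and flags mismatches, in the spirit of the statcheck tool used in psychology. The function name, tolerance, and example values are our own illustrative choices.

from scipy import stats

def t_and_p_consistent(t: float, df: int, p_reported: float, tol: float = 0.005) -> bool:
    """Recompute the two-tailed p-value for a reported t statistic and compare
    it with the p-value reported in the manuscript."""
    p_recomputed = 2 * stats.t.sf(abs(t), df)
    return abs(p_recomputed - p_reported) <= tol

print(t_and_p_consistent(2.10, 48, 0.04))  # True: "t(48) = 2.10, p = .04" checks out
print(t_and_p_consistent(1.20, 48, 0.03))  # False: the reported p does not match

A tool of this kind does not evaluate theory or contribution; it simply verifies that the reported numbers are internally consistent, which is precisely the sort of task reviewers rarely perform.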

Reduce waste in peer review

We need to make a concerted effort to reduce waste in terms of resources such as time and effort. As a field, we are expending enormous time and effort reviewing, responding, re-reviewing, re-responding, and so on. This cycle may be repeated at multiple journals prior to publication. Response letters run longer than the articles themselves, the same piece of work is reviewed again and again, and second and third studies are conducted to fulfill the idiosyncratic whims of Journal X’s reviewers and editor, only for the paper to be rejected. This need not be the way we ensure high-quality science. The current system is breaking, and system-level change is needed. We propose the following review policies and processes:

  • Make all final decisions at the second round for most articles until there is evidence that doing so is detrimental to the science. This does not mean that further revisions cannot be made beyond two rounds, but authors should know that their article is accepted for publication prior to putting in further time, effort, and often financial expense to make third, fourth, and subsequent revisions.

  • Develop a “review commons” based on partnerships among journals that agree to consider papers without repeating peer review (upon approval of the author). Some publishers (e.g., Springer) have implemented a journal transfer desk that aids in resubmitting a paper to another journal within their system, but the editorial process begins anew (https://www.springernature.com/gp/authors/transferdesk). Initiatives such as Review Commons (https://www.reviewcommons.org) and Peer Community In (PCI; https://peercommunityin.org), which separate peer review from specific journals or publishers, also have value in streamlining the review process and helping it to focus on practices that advance science.

Recalibrate what is valued in the peer review process

Although not solely a function of the developmental peer review approach, overall, we believe the current state of the review process has pulled I-O psychology away from our core values as a field. For I-O psychological science, this means conducting research designed to address applied problems, intervention studies, and collaborations with practitioners and communities. Over a decade ago, Ryan and Ford (2010) stated that there had been a shift in the center of gravity of I-O psychology. Based on bibliometric analysis, Allen (2015) remarked that our science was increasingly contributing less to psychology and more to management and business. Evidence suggests that shift continues. The reflections of past editors-in-chief of Personnel Psychology that appeared in the 75th anniversary issue (2023) are instructive regarding the shift that has occurred (Erdogan et al., 2023). Paul Sackett, who served as editor from 1984 to 1990, noted the journal had a distinct focus and that criteria for publication included “immediacy of implications for personnel practice” (p. 2). Ann Marie Ryan, who served as editor from 2002 to 2007, reiterated that the journal was known for its focus on applied work, stating, “When people bemoan the relevance and usefulness of academic journal publications, I point back to when we specifically encouraged and prioritized publishing work that was on pressing applied problems and of high practical utility” (p. 6). Mike Campion, who served as editor from 1990 to 1997, lamented that, “We no longer have a preferred publication outlet for accumulating the important and well-conducted research in practice. This is a problem for an applied science, and it widens the unnecessary science–practitioner divide” (p. 4). There is a need for I-O psychologists to recalibrate and recapture our values as a larger field. What actions can we each take as authors, reviewers, and editors to strengthen our rigor as well as to return I-O psychology to its roots as an applied science that addresses meaningful issues that pertain to work and workers? It is up to each of us to change the narrative so that rigorous, programmatic, practical research is not devalued relative to novel, interesting, bold research. To this end, we suggest:

  • Revisit and codify intended audiences for published work. How do we recenter I-O psychology values in journals that publish our work?

  • Dedicate space within top outlets to currently undervalued contributions (direct and independent replications, papers that focus on practical contributions). Solicit articles with this emphasis and assign editors/reviewers who will appreciate these aims.

  • Focus the review process more on potential for societal impact and less on storytelling.

Conclusion

Our goal for this paper was to discuss ways to move the peer-review process from didactic, subjective, and potentially harmful to helpful, objective, and respectful. Although we believe in the spirit of the intent of developmental reviewing, as is often the case when changes occur to a system, we also believe that there have been downstream unintended consequences that are detrimental to I-O psychological science. As noted by Bedeian (2003), editors and reviewers hold considerable power over the “intellectual vitality and future development” (p. 332) of a discipline as well as the careers of individual scholars. We can each take actions as authors, reviewers, and editors to strengthen scientific rigor as well as to produce science that makes a difference. Our hope is that this focal article spurs commentaries that provide actionable suggestions and motivation for positive change.

References

Academy of Management (2024). Reviewer resources: Directions for new reviewers. https://aom.org/research/publishing-with-aom/reviewer-resources
Allen, T. D. (2015). Connections past and present: Bringing our scientific influence into focus. The Industrial-Organizational Psychologist, 52(3), 123–133.
Allen, T. D., Eby, L. T., Weiss, H. M., & French, K. A. (2014). Industrial–organizational psychology’s chicken little syndrome. Industrial and Organizational Psychology, 7(3), 304–311. https://doi.org/10.1111/iops.12152
Avery, D. R., D. K., B., Dumas, T. L., George, E., Joshi, A., Loyd, D. L., van Knippenberg, D., Wang, M., & Xu, H. (2022). Racial biases in the publication process: Exploring expressions and solutions. Journal of Management, 48(1), 7–16. https://doi.org/10.1177/01492063211030561
Banks, G. C., Field, J. G., Oswald, F. L., O’Boyle, E. H., Landis, R. S., Rupp, D. E., & Rogelberg, S. G. (2019). Answers to 18 questions about open science practices. Journal of Business and Psychology, 34(3), 257–270. https://doi.org/10.1007/s10869-018-9547-8
Banks, G. C., Rogelberg, S. G., Woznyj, H. M., Landis, R. S., & Rupp, D. E. (2016). Evidence on questionable research practices: The good, the bad, and the ugly. Journal of Business and Psychology, 31, 323–338.
Bedeian, A. G. (1996). Thoughts on the making and remaking of the management discipline. Journal of Management Inquiry, 5(4), 311–318. https://doi.org/10.1177/105649269654003
Bedeian, A. G. (2003). The manuscript review process: The proper roles of authors, referees, and editors. Journal of Management Inquiry, 12(4), 331–338. https://doi.org/10.1177/1056492603258974
Bedeian, A. G. (2004). Peer review and the social construction of knowledge in the management discipline. Academy of Management Learning & Education, 3(2), 198–216.
Bedeian, A. G. (2008). Balancing authorial voice and editorial omniscience: The “It’s my paper and I’ll say what I want to”/“Ghostwriters in the sky” minuet. In Baruch, Y., Konrad, A., Aguinis, H., & Starbuck, W. H. (Eds.), Opening the black box of editorship (pp. 134–142). Palgrave Macmillan. https://doi.org/10.1057/9780230582590_14
Bruce, R., Chauvin, A., Trinquart, L., Ravaud, P., & Boutron, I. (2016). Impact of interventions to improve the quality of peer review of biomedical journals: A systematic review and meta-analysis. BMC Medicine, 14, 85. https://doi.org/10.1186/s12916-016-0631-5
Butler, N., Delaney, H., & Spoelstra, S. (2017). The gray zone: Questionable research practices in the business school. Academy of Management Learning & Education, 16(1), 94–109.
Clair, J. A. (2015). Toward a bill of rights for manuscript submitters. Academy of Management Learning & Education, 14(1), 111–131. https://doi.org/10.5465/amle.2013.0371
Credé, M., & Harms, P. D. (2015). 25 years of higher-order confirmatory factor analysis in the organizational sciences: A critical review and development of reporting recommendations. Journal of Organizational Behavior, 36(6), 845–872. https://doi.org/10.1002/job.2008
Dalal, R. S., & Sheng, Z. (2019). When is helping behavior unhelpful? A conceptual analysis and research agenda. Journal of Vocational Behavior, 110, 272–285. https://doi.org/10.1016/j.jvb.2018.11.009
Devezer, B., Navarro, D. J., Vandekerckhove, J., & Buzbas, E. O. (2021). The case for formal methodology in scientific reform. Royal Society Open Science, 8, 200805. https://doi.org/10.1098/rsos.200805
Eby, L. T. d. T., Allen, T. D., Hoffman, B. J., Baranik, L. E., Sauer, J. B., Baldwin, S., Morrison, M. A., Kinkade, K. M., Maher, C. P., Curtis, S., & Evans, S. C. (2013). An interdisciplinary meta-analysis of the potential antecedents, correlates, and consequences of protégé perceptions of mentoring. Psychological Bulletin, 139(2), 441–476.
Ellwanger, J. H., & Chies, J. A. B. (2020). We need to talk about peer-review—Experienced reviewers are not endangered species, but they need motivation. Journal of Clinical Epidemiology, 125, 201–205. https://doi.org/10.1016/j.jclinepi.2020.02.001
Engber, D. (2024). The business school scandal that just keeps getting bigger. The Atlantic. https://www.theatlantic.com/magazine/archive/2025/01/business-school-fraud-research/680669/
Erdogan, B., Kraimer, M., & Bell, B. (2023). Editorial: Celebrating 75 years of Personnel Psychology. Personnel Psychology, 76, 363–374. https://doi.org/10.1111/peps.12590
Feldman, D. C. (2004). Being a developmental reviewer: Easier said than done. Journal of Management, 30(2), 161–164. https://doi.org/10.1016/j.jm.2003.09.002
Grand, J. A., Rogelberg, S. G., Allen, T. D., Landis, R. S., Reynolds, D. H., Scott, J. C., Tonidandel, S., & Truxillo, D. M. (2018). A systems-based approach to fostering robust science in industrial-organizational psychology. Industrial and Organizational Psychology, 11(1), 4–42. https://doi.org/10.1017/iop.2017.55
Horton, R. (2000). Genetically modified food: Consternation, confusion, and crack-up. Medical Journal of Australia, 172(4), 148–149. https://doi.org/10.5694/j.1326-5377.2000.tb125533.x
Kepes, S., Keener, S. K., McDaniel, M. A., & Hartman, N. S. (2022). Questionable research practices among researchers in the most research-productive management programs. Journal of Organizational Behavior, 43(7), 1190–1208.
Kerr, S. (1995). On the folly of rewarding A, while hoping for B. Academy of Management Journal, 18, 769–783. https://doi.org/10.2307/255378
LeBreton, J. M., Wu, J., & Bing, M. N. (2009). The truth(s) on testing for mediation in the social and organizational sciences. In Lance, C. E., & Vandenberg, R. J. (Eds.), Statistical and methodological myths and urban legends: Doctrine, verity and fable in the organizational and social sciences (pp. 107–141). Routledge/Taylor & Francis Group.
Lee, S. (2024). This study was hailed as a win for science reform. Now it is being retracted. Chronicle of Higher Education. https://www.chronicle.com/article/this-study-was-hailed-as-a-win-for-science-reform-now-its-being-retracted
Lepak, D. (2009). Editor’s comments: What is good reviewing? Academy of Management Review, 34(3), 375–381. https://doi.org/10.5465/amr.2009.40631320
Levy, P. E., & Williams, J. R. (2004). The social context of performance appraisal: A review and framework for the future. Journal of Management, 30(6), 881–905. https://doi.org/10.1016/j.jm.2004.06.005
McNamara, G., & Schleicher, D. J. (2024). What constitutes a contribution at JOM? Journal of Management, 50(5), 1495–1501. https://doi.org/10.1177/01492063241238701
Mero, N. P., & Motowidlo, S. J. (1995). Effects of rater accountability on the accuracy and the favorability of performance ratings. Journal of Applied Psychology, 80(4), 517–524. https://doi.org/10.1037/0021-9010.80.4.517
Meyer, H. H., Kay, E., & French, J. R. P. (1965). Split roles in performance appraisal. Harvard Business Review, 43, 123–129.
Murphy, K. R. (2008). Explaining the weak relationship between job performance and ratings of job performance. Industrial and Organizational Psychology, 1(2), 148–160. https://doi.org/10.1111/j.1754-9434.2008.00030.x
Murphy, K. R., & Cleveland, J. N. (1995). Understanding performance appraisal: Social, organizational, and goal-based perspectives. Sage Publications, Inc.
Ragins, B. R. (2015). Editor’s comments: Developing our authors. Academy of Management Review, 40(1), 1–8.
Ragins, B. R. (2017). Editor’s comments: Raising the bar for developmental reviewing. Academy of Management Review, 42(4), 573–576.
Ragins, B. R. (2018). From boxing to dancing: Creating a developmental editorial culture. Journal of Management Inquiry, 27(2), 158–163. https://doi.org/10.1177/1056492617726273
Resnik, D. B., Gutierrez-Ford, C., & Peddada, S. (2008). Perceptions of ethical problems with scientific journal peer review: An exploratory study. Science and Engineering Ethics, 14, 305–310.
Roberson, Q., Avery, D. R., & Leigh, A. (2024). How woke was the symposium on woke organizations? An insider perspective. Academy of Management Perspectives, 38(2), 260–266. https://doi.org/10.5465/amp.2023.0459
Roberson, Q., Ryan, A. M., & Ragins, B. R. (2017). The evolution and future of diversity at work. Journal of Applied Psychology, 102(3), 483–499. https://doi.org/10.1037/apl0000161
Ryan, A. M., & Ford, J. K. (2010). Organizational psychology and the tipping point of professional identity. Industrial and Organizational Psychology, 3(3), 241–258.
Sabet, C. J., Bajaj, S. S., Stanford, F. C., & Celi, L. A. (2023). Equity in scientific publishing: Can artificial intelligence transform the peer review process? Mayo Clinic Proceedings: Digital Health, 1(4), 596–600. https://www.mcpdigitalhealth.org/action/showPdf?pii=S2949-7612%2823%2900087-1
Saunders, C. (2005). Editor’s comments: From the trenches: Thoughts on developmental reviewing. MIS Quarterly, iii–xii.
Schoen, J. L. (2020). Lack of expertise means it is not a peer review. Industrial and Organizational Psychology, 13(1), 41–44. https://doi.org/10.1017/iop.2020.4
Tennant, J. P., Dugan, J. M., Graziotin, D., Jacques, D. C., Waldner, F., Mietchen, D., Elkhatib, Y., Collister, L. B., Pikas, C. K., Crick, T., Masuzzo, P., Caravaggi, A., Berg, D. R., Niemeyer, K. E., Ross-Hellauer, T., Mannheimer, S., Rigling, L., & Colomb, J. (2017). A multi-disciplinary perspective on emergent and future innovations in peer review. F1000Research, 6, 1151. https://doi.org/10.12688/f1000research.12037.3