
Reviewing and the State of the Discipline

Published online by Cambridge University Press: 12 March 2025

Paul A. Djupe
Affiliation:
Denison University, USA
Brooklyn Evann Walker
Affiliation:
Hutchinson Community College, USA

Abstract

By many accounts, the state of reviewing is in dire straits. Editors cannot get people to respond to review requests, much less to say yes and complete the review on time. In previous work (Djupe 2015; Djupe, Smith, and Sokhey 2022) conducted before the COVID-19 pandemic, reviewing was heavily concentrated in a core set of reviewers, reviewing increased with age and rank, and political scientists stood by the value of peer reviewing for themselves, the discipline, and the research. Is any of that still true in the post-pandemic period? This article analyzes a Summer 2024 survey of 637 political scientists in comparison with 2013 data and finds an evident decline in reviewing post-pandemic from those who historically review the most. This pattern likely reflects broader movements, especially toward diversification, in the discipline and in higher education.

This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of American Political Science Association

Peer review is essential to what we do as political scientists—it is one fundamental process that sets us apart from political commentators. However, effective peer review depends on convincing other academics to provide it; anecdotal reports on Twitter/X are not favorable.[1] This is not the first time someone has “Chicken-Littled” the peer-review enterprise (Borer 1997; Gelbach 2013), and it will not be the last. However, it is worth cataloging the extent to which global trends have a parallel in political science and how the state of peer review has changed since Djupe’s (2015) article in PS: Political Science & Politics.

This study draws on those earlier data collected in the late fall of 2013 and data from a recently completed survey in May 2024. The data allow us to assess whether reviewing amounts, acceptance rates, attitudes, and beliefs have changed during that decade. From the perspective of these survey data, peer reviewing is subsiding somewhat among those historically most likely to review (i.e., full professors). This movement seems to be related to efforts to diversify the discipline and a change in how reviewing is counted. Although ultimately the wider distribution of peer reviewing may be a good thing, the transition is likely causing editors anxiety and may mean that manuscripts linger longer under review.


THE STATE OF PEER REVIEW IN ACADEMIA AND POLITICAL SCIENCE

In most commentary, if peer review is not exactly broken, it is a system under severe strain (Petrescu and Krishen 2022). The problems are straightforward. With an ever-expanding number of publications and submissions from all over the world, editors cannot keep up. They cannot find reviewers, cannot get potential reviewers to respond, cannot convince them to say yes, and then sometimes cannot get committed reviewers to submit their reviews. Reviewers are said to be suffering fatigue from too many requests, especially after the COVID-19 pandemic (Flaherty 2022).

The largest study of peer reviewing to date was conducted in 2018 by Publons (2018)—the first web service to track reviewing (it has since been acquired by Web of Science). From a large survey combined with data from Publons and other services, the report found a growing rate of refusal: the number of invitations needed to yield one reviewer grew from 1.9 in 2013 to 2.4 in 2017. During that four-year period, the number of review requests grew from 25 million to slightly more than 40 million. The rate of accepting review requests also decreased by 10 percentage points in those four years—from about 55% to about 45%.
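These two statistics are mechanically linked. If each invitation is accepted independently at a constant rate p, then securing one reviewer takes 1/p invitations in expectation. A minimal sketch of that arithmetic (ours, not the report’s; the reported ratios of 1.9 and 2.4 run slightly higher than 1/p, plausibly because some invitations simply go unanswered):

def invitations_per_reviewer(acceptance_rate: float) -> float:
    # With invitations accepted independently at rate p, securing one
    # reviewer takes 1/p invitations in expectation (geometric distribution).
    return 1.0 / acceptance_rate

for year, rate in [(2013, 0.55), (2017, 0.45)]:
    print(f"{year}: ~{invitations_per_reviewer(rate):.1f} invitations per secured reviewer")
# 2013: ~1.8 invitations per secured reviewer
# 2017: ~2.2 invitations per secured reviewer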

If editors are struggling to find peer reviewers—and 75% of editors state that they are (Publons 2018)—then it makes sense that they may search beyond a manuscript’s core subfield, leaving potential reviewers to decline requests that are outside of their expertise. Moreover, the number of journals has grown (e.g., from the Multidisciplinary Digital Publishing Institute [MDPI]), which can multiply requests to review and potentially drive up refusals depending on reviewer perceptions of these journals.[2] At the same time, researchers are busy and reviewing simply is not at the top of their to-do list.[3]

In political science, it is not obvious what is occurring, in part because editors disclose so little about the peer-review process. Some note in their editorial reports a reluctance by reviewers, and some report only the number of invitations and a distribution of outcomes. Only a few editors report the critical statistic: the average number of invitations needed per secured reviewer. If the global average has been moving toward 2.5 invitations for each secured reviewer, the range of estimates provided by political science editors largely aligns with it. The latest American Journal of Political Science (AJPS) report (Dolan and Lawless 2024) appears to indicate an almost 1:1 ratio, although other journals do not share that experience. The American Political Science Review (APSR) (Tripp and Dion 2023) reported a relatively stable 2:1 ratio across its past four editorial teams. However, most journals are not AJPS and APSR. From an informal survey of editors, one veteran editor indicated needing to send 2.5 invitations per reviewer, the same as another subfield journal. Another general-field journal editor indicated a 2:1 ratio. In 2015, when one author edited a subfield journal, that ratio was 2:1, but more recent reports from that journal indicate that the ratio has since increased to 2.5:1. Moreover, a few particularly stubborn manuscripts required a herculean effort to secure two reviewers, with almost 20 invitations.[4] Although not systematic across the discipline, this evidence suggests that political science is no different from academia broadly considered and that the workload of most editors is being stretched.

What we know about individual reviewing rates in political science is outdated. The latest report is from Djupe, Smith, and Sokhey (2022) who, using 2017 data, affirmed the distribution of reviewing reported by Djupe (2015), who used 2013 data. The top of the ranks—R1 full professors—reported reviewing about 8 to 8.5 manuscripts a year. Outside of PhD-granting universities, the average was 2.5 reviews a year. The sample averages were close to four—one each quarter. As Djupe (2015) concluded, “Only 10% of this sample is doing one review a month or more. It is possible that political scientists believe even this workload is too high, but from this look, reports of reviewing fatigue are coming from a highly selective set of faculty.”

Is this still true? Several major developments in the world and in political science may have affected reviewing rates. Perhaps the most obvious is the COVID-19 pandemic. Although some editors reported a considerable degree of stress and a greater workload, journals mostly reported business as usual (Stockemer and Reidy 2024). If anything, Stockemer and Reidy reported that 2020 witnessed a boost in submissions before returning to more expected levels. They also noted reports of increased reviewer fatigue, although other researchers noted expanded voluntarism to cope with the increasing number of submissions (Layman et al. 2024) or simply no change (Lewis and Tepe 2024). From this perspective, we do not expect much change from the “before times.”

During a somewhat longer time span, academic disciplines including political science were reckoning with the #MeToo Movement and the slow pace of diversification. This reckoning accelerated in Summer 2020 in the aftermath of the murder of George Floyd. The nationwide engagement with race had several ramifications for higher education, including the adoption of DEI goals, job searches that highlighted the politics of race and adjacent research areas, and concern that outlets be open to research—especially about race and gender—once dismissed as subfield concerns tangential to the discipline.

We believe that these developments had an impact on peer-reviewing practices. Editors changed during this period with, for instance, all-women teams at APSR and AJPS. As a result, they may have approached a different set of reviewers than in the past. Other editors acknowledged that they have sought a more diverse reviewer pool in recent years as well. Moreover, engaging new research questions may necessitate finding reviewers beyond the usual suspects. The discipline is diversifying (Djupe, Smith, and Sokhey 2022), which means that seeking women and racial-minority reviewers almost necessarily entails a shift from full professors toward assistant professors.

DATA

The data are from two sources—surveys of political scientists in 2013 and 2024. Both surveys obtained informed consent in the first question and were deemed exempt from Institutional Review Board review. In 2013, Djupe worked with the American Political Science Association (APSA) to survey[5] about 3,000 randomly sampled members with a PhD in October 2013. After three reminders (the survey was open for a month), 823 respondents began the survey and 607 finished it, resulting in a completion rate of 22.3% (not counting the 275 emails that bounced). The respondents reflected the discipline’s gender balance and proportion in PhD programs but were not reflective in terms of race or rank—too many whites and too many tenured professors.
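The completion-rate arithmetic above nets bounced emails out of the base. A minimal sketch with the figures reported in this paragraph (ours, not from the replication materials):

invited = 3000      # approximate number of randomly sampled APSA members
bounced = 275       # undeliverable emails, excluded from the base
completed = 607     # respondents who finished the survey

completion_rate = completed / (invited - bounced)
print(f"{completion_rate:.1%}")  # 22.3%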

The 2024 data gathering took a different tack, following Djupe, Smith, and Sokhey’s (2022) approach of starting with a sample of half of APSA-member departments and then taking a census of faculty members in those departments. The list included 4,025 emails (although about 3% failed or bounced). In total, almost a quarter clicked through to start the survey: 865 respondents provided partial responses and 637 completed nearly the entire survey. This distribution was close to the APSA proportion of women (36% versus 39% in August 2022) but was too white (82% versus 71% of American APSA members), had too many full professors (47%), and had too many respondents working in PhD programs (66%). We also contacted a range of editors for their perspective on peer reviewing in recent years; several responded with experiences that were mostly in accord with our data analysis (for data access, see Djupe and Walker 2025).

Both survey invitations mentioned that the topic of peer review would be covered, so there was potential for a reporting bias; however, we focus on change over time, which to some extent should ameliorate that potential selection problem. Moreover, such a bias would not explain why reviewing rates appear to be so low.

HAS THE REVIEWING DISTRIBUTION CHANGED?

The remainder of our analysis examines comparable subgroups by rank and by whether respondents work in PhD programs, but this section begins with a quick examination of the distribution of reviewing in the past year in each sample. Figure 1 shows those distributions for 2013 (black outlines) and 2024 (blue boxes), capping the highest bin at 20+ reviews. Two findings stand out for further investigation. The low end appears to have risen, and the top end appears to have shrunk. That is, the 2024 sample shows more faculty members completing a modest number of reviews compared to 2013.

Figure 1 Distribution of Reviewing by Political Scientists, 2013 and 2024

Source: 2013 and 2024 political science surveys.

Another way to understand this distribution is in the cumulative summary of the respondents’ reviewing. Although there were many fewer faculty members out to the right of the distribution, they did many more reviews than the multitudes on the left. In both years, those doing 10+ reviews a year produced as many total reviews as those who did fewer than 10 a year. In 2013, that meant 13% of the sample did half of the reviewing; by 2024, it took 18% of the sample to do so. This may be good news—more scholars are doing 10+ reviews a year. However, they are concentrated in the lower end of that category (i.e., 10–13) and the very top end is compressed. It now takes more people to produce the same number of reviews.
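The concentration statistic described here is straightforward to compute. The sketch below is illustrative rather than the authors’ replication code, and the column name reviews_past_year is a hypothetical placeholder for the survey item:

import pandas as pd

def top_reviewer_share(df: pd.DataFrame, threshold: int = 10) -> tuple[float, float]:
    # Share of respondents at or above the threshold, and the share of all
    # reported reviews that this heavy-reviewing group contributes.
    heavy = df["reviews_past_year"] >= threshold
    share_of_people = heavy.mean()
    share_of_reviews = df.loc[heavy, "reviews_past_year"].sum() / df["reviews_past_year"].sum()
    return share_of_people, share_of_reviews

# Made-up data mirroring the long right tail described in the text:
df = pd.DataFrame({"reviews_past_year": [0, 1, 2, 2, 3, 4, 5, 6, 11, 14, 22]})
people, reviews = top_reviewer_share(df)
print(f"{people:.0%} of respondents contribute {reviews:.0%} of reviews")
# 27% of respondents contribute 67% of reviews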

It is not surprising to see this relationship institutionalized (or “formalized,” in the parlance of Douglass North) in both years. In 2024, 58% of those who did 10+ reviews a year served on an editorial board compared to 30% of those who did fewer than 10. In 2013, it was 68% of the 10+ review group who served on an editorial board compared to 30% of those who did fewer than 10 reviews. This may be evidence of editors avoiding a “gatekeeping cabal,” as Political Research Quarterly editor Tony Smith said in an email. Stated another way, editors appear to want to distribute the reviewing process more widely, which several editors explicitly affirmed. However, this also must entail greater effort expended to secure reviewers.

More pointed evidence about the widening distribution of peer reviewing is shown in figure 2. In non-PhD programs, rank does not differentiate reviewer loads—all ranks averaged about three reviews in the past year, with perhaps modest gains by assistant and associate professors across the decade. This may reflect a conscious decision by editors as well as increased willingness among reviewers. As one veteran editor commented by email, “Most of our referees are not at the R1s, in fact. But they are publishing scholars.” In PhD programs, 2013 revealed a difference in reviewing by rank, but those gaps have closed such that faculty members in all ranks now average approximately six reviews a year. This represents a significant decrease of about 2.5 reviews among full professors and a more modest decrease of about one review among associate professors. Although this decline is a problem for editors because it is concentrated in the most visible portions of the field, shifting away from full professors also enables the inclusion of more diverse viewpoints in the review process.

Figure 2 Distribution of Reviewing by Rank and Program, 2013 and 2024

Source: 2013 and 2024 political science surveys.

THE NATURE OF THE SHIFT

There likely are numerous reasons for these apparent declines. They may reflect “I’ve put in my time”-ism—that is, an unwillingness to continue contributing to public goods. They may represent taking to heart the quality-of-life discussion that was so prevalent during the pandemic as a response to the exhaustion many scholars experienced. Some faculty groups, however, are clearly moving in the other direction and reviewing more. One thing this decline does not reflect is a weakening belief in peer review: 93% of respondents in 2013 and 95% in 2024 agreed that “I believe in the value of peer reviewing.” There is evidence that can bear on these notions in the form of the review uptake rate: what proportion of review invitations do political scientists accept?[6]
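A minimal sketch of the 2024 uptake-rate measure described in footnote 6 (our reconstruction rather than the authors’ code; the column names are hypothetical placeholders): divide reviews completed by invitations received, leaving the rate undefined for respondents who received no invitations.

import numpy as np
import pandas as pd

def uptake_rate(reviews_done: pd.Series, invitations: pd.Series) -> pd.Series:
    # Completed reviews divided by invitations received; undefined when no
    # invitations arrived, capped at 1.0 to absorb noisy self-reports.
    return (reviews_done / invitations.replace(0, np.nan)).clip(upper=1.0)

df = pd.DataFrame({"reviews_done": [3, 5, 0], "invitations": [4, 10, 0]})
print(uptake_rate(df["reviews_done"], df["invitations"]).tolist())
# [0.75, 0.5, nan]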

Figure 3 illustrates this proportion for 2013 and 2024 (left panel), revealing that the uptake rate declined considerably as the number of invitations increased, but only in 2024. In other data, this pattern represents reviewer fatigue (Vesper 2018). In 2013, there was no relationship between invitations and agreeing to review—faculty members continued to say yes when asked. In 2024, uptake rates decreased with invitations in a way that greatly reduced the high end of reviewership. The right panel of figure 3 shows whether this rate changed across years by rank, and the results are revealing. Full professors showed no decline, which must mean that they were being invited to review less often because their average number of reviews declined. Assistant professors showed the greatest decline in the uptake rate, indicating that they were being asked to review more often.[7]

Figure 3 How the Review Uptake Rate Has Shifted by Year, Rank, and Review Requests

Source: 2013 and 2024 political science surveys.

However, one important datapoint cuts against the fatigue explanation, at least outside of PhD programs. About half of faculty outside of PhD programs (not including full professors) stated that they would like to review more often. That also was true of assistant professors in PhD programs. Only a quarter or less of other respondents stated that they would like to review more.

We suspect that these patterns reflect two important developments during the past decade: efforts to (1) diversify the reviewer pool, and (2) institutionalize peer reviewing. The first development is confirmed dramatically by comparing the distribution of review requests by year and race. In 2013, whites received four more requests to review than nonwhites (10 versus 6). By 2024, that gap was reduced to almost zero (8.07 versus 7.98). By gender, the gap was small and in favor of men in 2013 (9.4 versus 9.1), but it had reversed by 2024 (8 for men versus 8.4 for women).[8] These shifts help us to make sense of the increase in requests of junior scholars because that is where diversification is happening most quickly.[9] Although diversification is necessary and important, we might be wary of asking junior scholars to take on greater service burdens.

The second development that may bear on reviewing patterns is the institutionalization of peer review. The discipline has changed significantly in this respect since 2013, when it was still the “Wild West.” Since then, Publons came online to track reviewing and then was absorbed into Web of Science. Reviewing now can be a part of an ORCID “permanent record.” Editors, however, are skeptical of the efficacy of this development. As one editor stated: “I think it has had zero impact.” Another veteran editor, long in the trenches, observed: “No one rewards refereeing, and getting an ‘attaboy’ from a fancy citations engine isn’t going to change that at all.” In fact, some editors reward reviewing by providing awards or writing letters to department chairs, but we agree that it is difficult to leverage a participation certificate into a promotion.

However, we suspect that this institutionalization of reviewing has made a difference on the margins. Counting peer reviews sends the signal that it is expected of everyone and is not a special distinction that reflects demonstrated expertise. Peer reviewing currently might be viewed as more like jury duty than being parade grand marshal. If this is true, then we should observe a decline in the perception that being asked to review reflects professional stature, and an increase in the view that reviewing does and should count as service. Moreover, if reviewing is distributed more widely, there may be more mismatches in expertise—that is, more faculty members who believe that they are unqualified to review particular manuscripts.

This evidence, shown in figure 4, largely aligns with this view about the effects of institutionalizing reviewership. There is an uptick among all groups in feeling unqualified to review manuscripts. The fact that full professors shifted the most suggests that the research is shifting along with the reviewer pool. The results also demonstrate a reduced emphasis on reviewing as a badge of honor: there is less agreement with the statement that “being asked to review is a measure of professional stature” for all but full professors, who still maintain that notion. The view that peer review is a way to gatekeep research is also declining dramatically, even among full professors. This suggests a depersonalization of peer review, consistent with a broader distribution of reviewing and a declining consideration of reviewing as a mark of prestige.

Figure 4 How Select Beliefs and Attitudes about Reviewing Have Shifted from 2013 to 2024 by Rank and PhD Program

Source: 2013 and 2024 political science surveys.

There is a greater realization of reviewing as a service expected of everyone that should count, with those in PhD programs increasing their agreement such that they now align with those outside of PhD programs. It is interesting to note that more faculty across the board, but especially in PhD programs, perceive that peer review does count as service (40% now agree, 38% disagree). Peer review is viewed by most in the discipline as something everyone should do as a matter of course, and it appears to be viewed as less special, less a mark of distinction. Together, we believe this evidence is consistent with declines in the top end of reviewership and with more faculty doing some reviews as peer reviewing is institutionalized.


Other norms could impinge on reviewing as well. These questions were asked only in the 2024 survey; mean agreement is shown in figure 5 by rank and being in a PhD program. There was modest agreement with the “golden rule” that scholars should supply as many reviews as they trigger through their own submissions (Lauderdale 2014; see Djupe, Smith, and Sokhey 2022, 221–24, for an assessment of this in practice). Only full professors outside of PhD programs leaned toward disagreement, but agreement overall was anemic at 38% (37% neither agreed nor disagreed). Most respondents disagreed with the idea of refusing to review for journals that they do not submit to, although assistant professors outside of PhD programs leaned slightly more toward agreement. Moreover, there was some resentment toward the substantial profits that academic publishers make, which we all contribute to with our free reviewing labor; however, the most well paid among us (i.e., full professors) are less bothered by it. All of these norms, except the last, tend to support continued reviewing; none provides a viable alternative explanation for the decline in reviewing among full professors and the modest increases among junior faculty members.

Figure 5 Support for Reviewing Norms by Rank and PhD Program, 2024

Source: 2024 political science survey.

THE WAY FORWARD

Chicken Little might not be entirely correct—the “sky” of peer reviewing in political science is not falling, but clouds are forming. Overall, faculty are contributing fewer reviews now than they did a decade ago, and editors appear to face modestly lower review acceptance rates that seem to align with global rates. However, this decline is not evenly distributed. It is concentrated among full professors at PhD-granting institutions. We suspect that two main factors are at play. First, editors are casting a wider net in their search for reviewers, out of necessity to find a sufficient number of reviewers as well as a desire to diversify the reviewer pool. Second, peer reviewing is becoming more institutionalized—respondents report that it does and should count for tenure and promotion, and Web of Science provides a supportive framework for verification of service activities. As peer review has become distributed and institutionalized, it has also lost its veneer of prestige.

We argue that the wider distribution of peer reviewing is normatively important. A broader reviewing pool likely translates into more diverse ideas shaping the published record. It also means hearing from members of groups that have long been sidelined, including women, people of color, and less-experienced scholars. Moreover, the institutionalization of peer reviewing is a positive development. A robust body of peer-reviewed scholarship is a public good with wide benefits for scholars, students, journalists, practitioners, and the public. However, peer-reviewed work does not just appear by magic—it requires the service of peer reviewers.

If we are correct that peer review is losing its ability to signal “expert volunteer,” then editors should expect that finding a sufficient number of qualified reviewers will continue to be a challenge. To combat this situation, we advocate for leaning into institutionalization. Pressing the belief that each manuscript submission should be followed by two to three peer reviews should raise the rate of peer reviewing. This norm of reciprocity should be inculcated across the field in graduate schools, conferences, and publications such as PS. However, editors also can play an important role through journal policies. One journal once reminded authors that consistently declining its review requests would preclude manuscript consideration. A combination of norms and policies can contribute to the sustainability of the valuable peer-review process in political science.


ACKNOWLEDGMENTS

We appreciate the busy journal editors who took the time to respond to our questions about their experiences. We also are grateful to many of our colleagues who responded to our survey. The reviewers provided thoughtful feedback that improved the article. Our thanks also to Zach Broeren and Bear Djupe for their research assistance.

DATA AVAILABILITY STATEMENT

Research documentation and data that support the findings of this study are openly available at the PS: Political Science & Politics Harvard Dataverse at https://doi.org/10.7910/DVN/MGFWJE.

CONFLICTS OF INTEREST

The authors declare that there are no ethical issues or conflicts of interest in this research.

Footnotes

2. We thank Reviewer 1 for these observations about MDPI journals.

3. We should be cognizant, however, that a ratio of two invitations to one acceptance does not necessarily mean that at least one reviewer refused. In many cases, potential reviewers do not respond, which may be a form of pocket veto, but it also could mean that the email address is no longer valid. That distinction may explain why acceptance rates in survey data are much higher than what journals report.

4. Reviewer 3—an admitted editor—told the same sad story. The struggle is real.

5. Per reviewer request, the 2024 survey is available at http://pauldjupe.com/wp-content/uploads/2024/11/2024_Political_Scientist_Survey.pdf.

6. In 2013, the question asked, “Do you often agree to peer review? Please estimate the percentage of requests to review that you agree to complete for journals.” In 2024, the measure divided the number of reviews completed by the respondent’s estimate of the number of requests, which was asked this way: “Whether or not you said yes, how many times, if any, were you invited to engage in peer review for academic journals in the past year?”

7. Moreover, the two datasets bear this out: assistant professors reported receiving 1.5 more review requests (6.2 in 2013 versus 7.6 in 2024), whereas associate and full professors received fewer—about two fewer for associates (9.4 versus 7.5) and about five fewer for fulls (13.7 versus 8.9). Also, a similar decline among assistant professors registers as a greater rate of decline because they receive fewer requests.

8. The editors of European Political Science found that men were being invited to review at much higher rates (Stockemer et al. 2020). However, they compared the number of women to men (4/10 in invitations and 3/10 in completed reviews) rather than the average number completed by women and men. If there is parity in completed reviews, then our sample also suggests that about 40% of reviews come from women because they comprise about 40% of the sample. In the data analyzed herein, we found no gender gaps, either overall or within ranks.

9. Assistant professors are near gender parity (47% women) compared to 33% women among full and associate professors in this sample. Assistant professors also had a greater proportion of nonwhites (30%) than associate professors (15%) and full professors (12%) in the 2024 sample.

REFERENCES

Borer, Douglas A. 1997. “The Ugly Process of Journal Submissions: A Call for Reform.” PS: Political Science & Politics 30 (3): 558–60.
Djupe, Paul A. 2015. “Peer Reviewing in Political Science: New Survey Results.” PS: Political Science & Politics 48 (2): 346–51.
Djupe, Paul A., and Walker, Brooklyn. 2025. “Replication Data for ‘Reviewing and the State of the Discipline.’” PS: Political Science & Politics. DOI: 10.7910/DVN/MGFWJE.
Djupe, Paul A., Smith, Amy Erica, and Sokhey, Anand E. 2022. The Knowledge Polity: Teaching and Research in the Social Sciences. New York: Oxford University Press.
Dolan, Kathleen, and Lawless, Jennifer L. 2024. “Annual Report to the Executive Council of the Midwest Political Science Association March 2024.” https://ajps.org/wp-content/uploads/2024/04/FINAL-COPY-AJPS-editorial-report-March-2024.pdf.
Flaherty, Colleen. 2022. “The Peer-Review Crisis.” Inside Higher Ed, June 13. www.insidehighered.com/news/2022/06/13/peer-review-crisis-creates-problems-journals-and-scholars.
Gelbach, Scott. 2013. “A Modest Proposal to Improve the Peer-Review Process.” https://themonkeycage.org/2013/08/27/a-modest-proposal-to-improve-the-peer-review-process. Accessed December 18, 2013.
Lauderdale, Benjamin. 2014. “Journal Review Debt.” https://benjaminlauderdale.net/blog/archives/2014/04/18/journal-review-debt. Accessed June 9, 2021.
Layman, Geoffrey C., Allen, Levi G., James, Kirk R. G., Marsh, Wayde Z. C., and Radcliff, Benjamin. 2024. “The Pandemic and Political Behavior: Staying the Course.” PS: Political Science & Politics 57 (3): 414–19. DOI: 10.1017/S1049096523000884.
Lewis, Andrew R., and Tepe, Sultan. 2024. “Politics and Religion’s Resilience During the COVID-19 Pandemic.” PS: Political Science & Politics 57 (3): 408–13. DOI: 10.1017/S1049096523001087.
Petrescu, Maria, and Krishen, Anjala S. 2022. “The Evolving Crisis of the Peer-Review Process.” Journal of Marketing Analytics 10:185–86.
Stockemer, Daniel, and Reidy, Theresa. 2024. “Introduction: Pandemic and Post-Pandemic Publication Patterns in Political Science.” PS: Political Science & Politics 57 (3): 403–7. DOI: 10.1017/S1049096523001051.
Tripp, Aili Mari, and Dion, Michelle L. 2023. “American Political Science Review Editorial Report: Executive Summary (Fall 2022).” Political Science Today 3 (1): 24–28.
Vesper, Inga. 2018. “Peer Reviewers Unmasked: Largest Global Survey Reveals Trends.” Nature, September 7. www.nature.com/articles/d41586-018-06602-y.