
Hazards of the (Over-)Standardization of Academic Legal Works

Published online by Cambridge University Press:  22 April 2025


Abstract

To compare and classify objects on the basis of quantitative indicators is nothing new. Bereft of direct subjectivity, a quantified method of assessing quality may be a useful tool for avoiding or reducing bias and other human frailties in assessment. However, the effort to assess the quality of academic output with quantifiable metrics alone is a relatively new but increasingly common phenomenon. This article argues that while some higher education rankings can serve laudable purposes, such as sending signals to aspiring students and other stakeholders and giving institutions an opportunity for introspection, ranking research outputs on quantifiable metrics alone is an inherently hazardous task. Rankings may be used as an indicium of quality; however, they cannot and should not be the sole proxy for assessing the quality of scholarly outputs. The article further argues that university-wide “one size fits all” metrics should show more deference to the norms of individual disciplines, such as law.

Copyright © The Author(s), 2025. Published by International Association of Law Libraries

Introduction

There is a near ubiquity of quantification.Footnote 1 With the increasing availability of various rankings of higher education institutions (HEIs),Footnote 2 individual disciplines,Footnote 3 and academic outputs,Footnote 4 academic administrators who rely on quantification as a proxy for quality are, in one way or another, nudging academics to play by certain “rules of the game” so that their institutions feature positively in these rankings. The fervor of academic administrators in ensuring that their HEIs fare well in the rankings is understandable, as rankings may directly affect student enrollment, funding from public or other external bodies, and many other tangible outcomes. The rankings prepared by different institutions follow a wide array of metrics, and the concomitant pressure exerted on academics (for obvious reasons, particularly, but not exclusively, younger faculty without tenure) to tailor their research to the demands of HEI administrators is, at times, creating unintended consequences for academia and for academics’ research outputs. Even a senior academic with tenure would find it difficult to be oblivious to the “ranking game” and not factor it in when setting a research agenda. The trend at the institutional level is similar: a few of the world’s most elite universities may decide to stay out of the “ranking game,” but replicating that choice is not so easy for the rest.Footnote 5

This article seeks to highlight some concerns that mechanical adherence to the pressure emanating from the “ranking game” may create. The article does not claim that quantitative evaluation plays no role whatsoever in measuring the quality of academic scholarly outputs; indeed, it accepts that quantitative evaluation may play a role in assessing the quality of research outputs. It merely argues that the current trend of excessive reliance on seemingly objective quantitative metrics may not always be truly objective. More importantly, even beyond the question of objectivity, there is a more fundamental concern about the adverse impacts of overreliance on quantitative metrics.

The article proceeds as follows: the first part discusses the motives for and benefits of quantitative assessment of scholarly outputs; the next part demonstrates how quantitative assessments may sometimes be counterproductive; the following part shows how this hazard may be more pronounced in the Global South than in the Global North; and the last part concludes. While this article focuses on authors and HEIs, quantified ranking metrics may also exert pressure on journal editors, nudging them toward “safe choices” and making them lukewarm about embracing well-written works by lesser-known authors or on esoteric topics, irrespective of academic merit.Footnote 6

The Advantages of Quantitative Assessment Metrics

Quantitative measures can reduce the scope of discretion, which can in turn reduce the possibility of bias. As one academic explains, quantification can help dispel the “odious ‘old boy network’ where appointments were determined according to who you knew and who supported you (my Baron is more powerful than your Baron) and away from subjective judgments of quality towards some objective methodology in the interest of fairness and academic excellence.”Footnote 7 As already indicated, this article does not assert that no form of quantified assessment of academic outputs, such as journals or the articles appearing in them, is objectively possible. All rankings worth their salt should be based on some form of scientific criteria; a ranking that is not based on a sound and objectively assessable methodology will, in the longer term, fail to garner wide acceptance. The ranking of HEIs may serve some useful purposes. Rankings may give HEIs scope for introspection regarding their academic staff’s performance and an opportunity to work on their weaknesses. They may help prospective students compare the perceived standings of various HEIs, or of individual disciplines across HEIs. Taken in proper perspective and dosage, they should also give academic institutions an incentive to engage in healthy competition and strive to perform better. Rankings may also help the world beyond academia obtain information about HEIs. The “ranking game” may even push some teaching-intensive universities to take research and publication more seriously.Footnote 8

The Limits and Problems of Quantitative Assessment Metrics

The fundamental purpose of research has always been (at least in theory, if not invariably in practice) to generate knowledge that aids the progress of humankind and makes the world a better place.Footnote 9 A ranking may be useful when it serves as an indicator of quality, but it is not a direct measure of quality.Footnote 10 Religious adherence to rankings, or the rigid evaluation of research outputs on quantitative metrics alone, can undermine society’s intrinsic expectations of HEIs’ research outputs. This is likely to happen because these metrics may determine not only which outlets academics choose for their submissions and eventual publication but also what kind of research they undertake. Many experienced academics (in the social sciences, at least) would tell you that they generally embark on their research with a clear view of the potential outlets where they would like to see their articles published, and the consideration of potential outlets can significantly influence academics’ research agendas.

Let us draw on a hypothetical scenario from law. Assume that an academic in an economically backward country is intrigued by a pressing national legal issue, such as the “underwhelming performance in the recovery of loans disbursed by banks and financial institutions,” an area that is chronically under-researched in the researcher’s own discipline.Footnote 11 The lack of secondary sources would make the research challenging to accomplish. And it is highly likely that the researcher would struggle to place the work in a prestigious foreign outlet of the kind that offers tangible professional rewards.

Irrespective of the article’s merit, it may face a desk rejection by a reputable international legal journal. The editorial board may consider that even sending it for peer review would waste the time of the scant pool of competent peer reviewers. After all, the article might be outside the journal’s scope, or the readers might have little to no interest in issues specific to a national jurisdiction with little global clout. In addition, a publication in the most reputed national outlet of the researcher’s own country may not be prestigious enough to fetch a reward commensurate with the effort. For example, such a publication may yield half the reward (say, in the form of points for promotion, salary increments, or similar perks) that a comparable publication in an internationally recognized outlet, such as a Scopus-indexed journal, would bring. Or, even worse, the institution might reward an international publication doubly over a national one, in its quest for internationalization and/or for faring well in the “ranking game.” One may contend that a good researcher should ignore these institutional incentives and seek knowledge for knowledge’s sake. However, the proposition that an academic moved by the idea of making the world a better place must therefore shun the idea of making their own life materially better seems absurd. The pressure of ranking outlets is so prominent that even a well-established scholar, deriding a US-centric ranking, has expressed his “chagrin that OUP, CUP, Kluwer and their fellow European legal publishers do not get together to produce a database which could be used more accurately to measure the influence of the journals they publish.”Footnote 12

On the other hand, a hypothetical colleague of the first researcher, writing a review essay on a topic with a global audience (say, human rights or climate change), even without a compelling narrative, would stand a far better chance of having the essay published in a prestigious outlet. At the very least, this second researcher would have a much broader pool of outlets to which they could submit the essay for publication. In terms of the effort required to produce the article, the second researcher should also have a comparatively easier ride, because there may be a wealth of already published scholarly material on which to build. While this latter publication, at least in the short term, will contribute to the institution’s reputation, perhaps most academics would agree that, measured against what society expects of academic research, the first researcher’s output is no less desirable. In the long run, it is likely to have a far greater impact on the academic knowledge base and on society beyond the academic community. This is not to trivialize the laborious work of the second researcher, who is perhaps working hard enough to get their research published in a prestigious journal after fierce competition with many other manuscripts. However, the scenario demonstrates a disproportionately lower reward for the first academic’s research endeavor.

Academic research in many disciplines (particularly the basic sciences and social sciences) is rarely financially incentivized unless accompanied by funding through some form of grant, which is itself fiercely competitive. Now, if academic hiring, promotion, and other service benefits flowing from research outputs were strictly aligned with the rankings, very few academics would be able to resist the pressure of academic administrators to abide by the rules of the “ranking game.” This would make the pursuit of “knowledge for the sake of knowledge,” or of a “passion for finding answers to a problem that piques their own interest,” very difficult.

There can also be many discipline-specific nuances that a generic, all-discipline ranking methodology would not capture. To take an example, an overwhelming number of law journals published in the US are student edited. This phenomenon may have something to do with the fact that, in US law schools, students enrolled in Juris Doctor (JD) programs already hold a first degree in another discipline. Thus, even many academics from other social science backgrounds may not be aware that very few US law journals have any faculty involvement, and that they are not peer reviewed in the strictest sense of the term. However, any law academic would probably tell you that the publication process in these law journals is no less competitive than that in traditional peer-reviewed journals.Footnote 13 Yet very few of these US law journals feature in the list of Scopus-indexed journals, which is an important component of the QS World University Rankings.Footnote 14 In this situation, an academically sound article in a US law journal would earn little or no recognition for an author whose institution puts a premium on publishing in Scopus-indexed journals in order to feature positively in the QS World University Rankings. Ideally, the publication decision should be driven by the academic’s judgment of factors such as the intended readership, the perceived reputation of the journal’s publisher, its editorial board, and its past authors, with considerations like responding to a specific call for papers relegated to a secondary status. In practice, however, institutional priorities and the emphasis on rankings can have a decisive impact on the academic’s publication choice.

The quality of research outputs cannot always be measured by quantitative methods or by any ranking based on seemingly “objective” criteria. The Washington & Lee Ranking of Law Journals methodology, for instance, is based on the following:

The Westlaw search results give only the number of citing documents and thus do not show where a citing article or case cites two or more articles in a cited legal periodical. Sources for the citation counts are limited to documents in Westlaw’s Law Reviews & Journals database (over 1,000 primarily U.S. publications) and Westlaw’s Cases database (all U.S. federal and state cases). The searches look for variations of the Bluebook citation format commonly in use in the U.S. (volume number, journal name, [page], year), and the searches are flexible in allowing the year to occur within eight words of the full or abbreviated journal name.Footnote 15
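
To make the mechanics concrete, the kind of flexible citation search described above can be approximated with a regular expression. The following Python sketch is a hypothetical illustration of that matching logic; the journal names, sample documents, and helper function are invented for the example, and Westlaw’s actual query syntax is not reproduced here.

import re

# Hypothetical sketch of the flexible Bluebook-style search described above:
# a volume number, a journal name (full or abbreviated), and a four-digit
# year occurring within eight words of the name.
def citation_pattern(journal_names):
    names = "|".join(re.escape(n) for n in journal_names)
    return re.compile(
        rf"\b\d{{1,3}}\s+(?:{names})"   # volume number, then journal name
        rf"(?:\W+\w+){{0,8}}?"          # at most eight intervening words
        rf"\W+\(?(?:19|20)\d{{2}}\)?",  # the year, possibly in parentheses
        re.IGNORECASE,
    )

# Counting citing *documents* rather than individual citations mirrors the
# limitation noted above: two citations in one article count only once.
pattern = citation_pattern(["Harv. L. Rev.", "Harvard Law Review"])
documents = [
    "See 130 Harv. L. Rev. 1, 15 (2016), for a fuller discussion.",
    "Citing twice: 130 Harv. L. Rev. 1 (2016); 129 Harv. L. Rev. 901 (2015).",
    "An unrelated passage with no citation at all.",
]
print(sum(1 for doc in documents if pattern.search(doc)))  # -> 2

Even this toy version hints at why such a method favors US journals: the pattern is anchored to US Bluebook conventions and to whichever journal names the underlying database happens to index.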

Even citation counts, an age-old proxy for the quality of academic outputs, have their limitations when they are inflated by self-citation (citation not to serve any real purpose of attribution, but merely as a ploy to increase one’s own counts).Footnote 16 Even efforts to mechanically isolate self-citations may not eliminate the vice, as multiauthored works cited with “et al.” can easily get around a mechanical filter, as the sketch following this paragraph illustrates.Footnote 17 One might think that the chutzpah involved in citing one’s own works, unless the citation is unavoidable, would deter many academics, but this is not necessarily true.Footnote 18 Moreover, the value of an academic research output may not be immediately evident. Many published works may have no direct tangible value at all and may benefit society only incrementally or unexpectedly.Footnote 19 Even today’s seemingly esoteric research may prove immensely beneficial to society in the days to come. Thus, academic administrators should take a very cautious approach to any rigid quantitative method of assessing scholarly outputs.
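
As a minimal sketch of this loophole, consider a naive filter that flags a self-citation only when the citing author’s name appears in the visible author string of a reference; a co-authored work cited with “et al.” conceals that name and slips through. The function and reference strings below are invented for illustration and do not model any real bibliometric service.

# Hypothetical illustration of a mechanical self-citation filter and the
# "et al." loophole discussed above.
def is_self_citation(citing_author: str, reference: str) -> bool:
    # Naive rule: flag the reference if the citing author's surname
    # appears anywhere in its visible author string.
    return citing_author.lower() in reference.lower()

references = [
    "Rahman, Doe & Lee, 'On Loan Recovery' (2021)",   # co-authors visible
    "Doe et al., 'Loan Recovery Revisited' (2022)",   # Rahman hidden by "et al."
]

for ref in references:
    print(ref, "->", is_self_citation("Rahman", ref))
# -> True  (correctly flagged as a self-citation)
# -> False (missed: "et al." conceals co-author Rahman)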

Academic hiring and promotion committee members, consisting of senior academics of the respective disciplines at any quality institution, would know the quality of the various publishing outlets in their fields well enough. They would not be ignorant or reckless in assessing the quality of individual research outputs such as journal articles. Of course, where a standardized mechanism is lacking, or where assessors operate under a standard mechanism that allows a significant dose of discretion, such discretion can be hazardous and prone to abuse. However, the same concern applies to all kinds of human discretion, and that concern alone cannot be grounds for eliminating discretion altogether. In other words, academic judgments may not always be amenable to quantifiable assessment, and some degree of subjectivity (hopefully applied objectively) in assessing the value of research outputs is not necessarily bad. As University College London’s UCL Bibliometrics Policy, adopted in 2020, succinctly puts it,

[R]esearch “excellence” and “quality” are abstract concepts that are difficult to measure directly but are often inferred from bibliometrics. Such superficial use of research metrics in research evaluations can be misleading. Inaccurate assessment of research can become unethical when metrics take precedence over expert judgement, where the complexities and nuances of research or a researcher’s profile cannot be quantified.Footnote 20

Deference to Discipline-Specific Norms

Another potential problem is using a “one size fits all” notion as an across-the-board criterion for assessing research outputs in HEIs. For example, this author, in his university’s internal grant application process, has often been questioned on methodology by external reviewers and university committee members with non-legal backgrounds (generally in the humanities and social sciences). To these committee members (irrespective of their rich discipline-specific expertise), a doctrinal research methodology involving the analysis of statutes, judicial precedents, and treaties appears to draw on secondary sources. However, anyone familiar with legal research methodology would readily recognize that these are the primary sources of legal research. In fact, a relatively small portion of legal research output is based on quantitative methodology.Footnote 21 Worse still, this author recalls an occasion when an administrator at a previous university commented that legal research methodology is “thin.” While these instances may seem anecdotal, there is some evidence that such experiences are not unique.Footnote 22

Few academics outside of law understand the culture of US student-edited law journals, where simultaneous submission is not only acceptable but very much the norm. Citation impact also varies across disciplines.Footnote 23 Sometimes academics, even very experienced ones, may assume that the norms of their own discipline are universal.Footnote 24 For instance, during an editorial meeting of a new multidisciplinary journal, a colleague of this author from a discipline where research publication tends to be funded more often than not commented that a journal that does not charge an article processing fee is not a high-quality journal. When, in institutional committees, this sort of uninformed, narrow, discipline-specific standard becomes the norm for assessing quality, the academic output of some disciplines may not receive due deference or recognition.

Disproportionate Impact on the Global South?

It is well known that most of the prestigious bodies that rank academic scholarly works, and the outlets they recognize, are based in the Global North or in large developing countries.Footnote 25 Indeed, the resources necessary for setting up a rigorous ranking mechanism are possibly beyond the reach of many institutions in the Global South. Thus, scholarly works published in the Global North invariably rank higher and offer greater tangible rewards to researchers. Naturally, even scholars in the Global South may be tempted to craft their research agendas in ways that offer a better chance of publishing in those outlets. This is not to claim that academic publishing is always bifurcated along geographical lines, but for a jurisdiction-specific subject like law, it is likely that matters specific to an outlet’s own jurisdiction will receive more attention. For example, a problem indigenous to the Global South, however pressing for the researcher’s own country, might be set aside in favor of topics that Northern outlets are likely to favor. This may not be a problem in the natural and physical sciences,Footnote 26 as perhaps most works in those disciplines are country neutral. As already pointed out, however, scholarly works in the social sciences, and in law in particular, can have country-specific content, which may often mean that research focusing on issues relevant to a Global South country is of little or no interest to a Northern scholarly outlet.

Of course, many Global North outlets publishing in the social sciences have a truly global scope, but many others have only a national or regional focus, and the latter will often concentrate on specific fields, such as constitutional law or corporate law. Even when top-notch journals in the Global North publish on matters of concern to a single jurisdiction in the Global South, they typically focus on the larger developing countries, such as Brazil, China, India, and South Africa. Thus, it is probable that, through an overemphasis on quantitative global metrics, the research agendas of many Global South social science researchers are being shaped by the priorities of Northern outlets. It may be tempting to brandish this phenomenon as a form of imperialism.Footnote 27 However, it would be rather naïve to brand it as something the Global North imposes on the Global South; rather, it is a form of overzealous hankering after the Northern standard as an automatic proxy for quality.

Conclusion

Despite its call for caution against excessive reliance on the quantification of scholarly outputs, this article does not argue that ranking scholarly outputs is in and of itself undesirable. Rather, the concern expressed here is about dogmatic adherence to any particular ranking metric and about a one-size-fits-all approach to evaluating research outputs in a discipline like law, which is by nature nuanced, diverse, and difficult to assess through any strictly “objective, quantified scientific assessment.” While the hypothetical examples used in this article are drawn from law, somewhat similar concerns may apply to many other disciplines. Academic administrators should therefore take a cautious approach to any regimented, quantitative method of assessing scholarly outputs. Standardization has its benefits, but too much of it can be problematic; this is especially so in a discipline like law, where a metric-based quality analysis may not always serve its intended purpose, and the same may apply to some other disciplines as well.

Footnotes

*

Ph.D., Macquarie University; LL.M. (Intellectual Property & Information Technology Law), National University of Singapore; LL.B. (Honours), University of Dhaka; Professor of Law and Dean, School of Humanities and Social Sciences, North South University, Bangladesh. Email: [email protected]. An earlier version of this paper was presented at the 9th Asian Conference on Education & International Development (ACEID2023), organized by the International Academic Forum, Japan, on March 30, 2023 (presented virtually). The author thanks the panelists and participants for their very helpful comments. He is also grateful to Professor James Thuo Gathii (Loyola University Chicago) for his generous comments on a draft version of this paper. He is also grateful to his colleagues Nafiz Ahmed and Sajid Hossain for their valuable comments on a draft version of this paper. He also thanks Ahamed Musa, Fateha Tun Noor, and Sayere Nazabi Sayem for their able research assistance; all errors are the author’s alone.

References

1 Merry, Sally Engle, The Seductions of Quantification: Measuring Human Rights, Gender Violence, and Sex Trafficking (University of Chicago Press, 2016).

2 Shanghai Jiao Tong, Academic Ranking of World Universities, https://www.shanghairanking.com/rankings/arwu/2024; Times Higher Education World University Rankings, https://www.timeshighereducation.com/world-university-rankings/latest/world-ranking/; QS World University Rankings, https://www.topuniversities.com/world-university-rankings; Financial Times Global MBA Rankings, https://rankings.ft.com/rankings/2951/mba-2024.

3 For example, in 2010, the Australian Research Council (ARC) prepared a discipline-wise ranking of journals, classifying them into A*, A, B, and C categories. For a scholarly critique of the ranking metrics, see Vanclay, Jerome K., “An evaluation of the Australian Research Council’s journal ranking,” Journal of Informetrics 5, no. 2 (2011): 265–75. In view of strong criticism, the ARC subsequently abandoned the ranking; see Sunanda Creagh, “Journal Rankings Ditched: The Experts Respond,” The Conversation (June 1, 2011), https://theconversation.com/journal-rankings-ditched-the-experts-respond-1598; see also Editorial, “Peer Review – Institutional Hypocrisy and Author Ambivalence; EJIL Roll of Honour; 2020 EJIL Peer Reviewer Prize; Letters to the Editors – A Note from EJIL and I•CON; Legal/Illegal; 10 Good Reads; In This Issue; A Bumper Review Section,” European Journal of International Law 31, no. 4 (2020): 1187–208 (referring to the Research Council and the Commission of the European Union adopting a strict quantitative method of assessing quality when deciding on research funding).

4 See, e.g., Washington & Lee Law Journal Rankings, https://managementtools4.wlu.edu/LawJournals/; Scimago Journal & Country Rank, https://www.scimagojr.com/.

5 See, e.g., “Columbia University Ditches the College-Ranking System,” The Economist (June 8, 2023), https://www.economist.com/united-states/2023/06/08/columbia-university-ditches-the-college-ranking-system (reporting that, in 2022, seventeen medical schools and as many as sixty-two law schools did not submit data to US News).

6 Editorial, “Impact Factor – The Food is Bad and What’s More There is Not Enough of It; EJIL – the Beginning of an Existential Debate; Masthead Changes; In this Issue,” European Journal of International Law 23, no. 3 (2012): 607–12.

7 Editorial, “Peer Review – Institutional Hypocrisy and Author Ambivalence” (n 3). See, in the context of the United States, Hirshman, Linda R., “Foreword: The Waning of the Middle Ages,” Chicago-Kent Law Review 69, no. 2 (1993): 293.

8 Islam, Md. Rizwanul, “Challenges of Research and Publications on International Law in Bangladesh: A Soliloquy?,” Chittagong University Journal of Law 26 (2024): 22. However, cynics may argue that this does not per se mean that research and publication would improve. That said, unless research and publication are encouraged, the question of quality does not even come to the fore.

9 Of course, ranking is in a way clearly a competitive exercise, and some universities may push the competition to an extremely pernicious level. For example, some Saudi universities have had reputed researchers list their fleeting appointments at Saudi institutions as affiliations so that the universities fare well in the “ranking game.” See Wagdy Sawahel, “False affiliations boost Saudi university rankings – Report,” University World News (May 17, 2023), https://www.universityworldnews.com/post.php?story=20230517141406377.

10 Lindgren, James and Seltzer, Daniel, “The Most Prolific Law Professors and Faculties,” Chicago-Kent Law Review 71, no. 3 (1996): 781.

11 This is not a mere hypothetical scenario; defaulting on bank loans is a chronic problem for Bangladesh. The broader trend is, again, hardly unknown in the wider world, as two scholars amusingly point out: a researcher taking “the decision to write about mortgage lien priorities rather than knotty issues of economic efficiency or constitutional interpretation […] presumably has insufficient desire to become well known or move ‘up’ in the pecking order of the contemporary theory-intoxicated legal academy”; see Balkin, J.M. and Levinson, Sanford, “How to Win Cites and Influence People,” Chicago-Kent Law Review 71, no. 3 (1996): 843. The same authors write rather wryly, but in many contexts aptly, at 854: “Writers are often advised to write about what they know. Since this would disable most law professors from writing anything at all, we offer a different suggestion: Write about constitutional law. Over half (55) of the top 103 articles are about the Constitution in one way or another.” The work also points out, at 855, that there may be hardly any nexus between the importance of a work and its placement in a law journal: “[i]f we have one basic piece of advice about topic selection, it’s this: Never confuse what’s important in the world outside law schools with what’s important in law reviews.”

12 Editorial, “Impact Factor – The Food is Bad and What’s More There is Not Enough of It” (n 6).

13 Because these journals lack peer review and require that all submissions be accompanied by the manuscript author’s brief CV, there is an obvious risk that, at least in some cases, the publication decisions of time-pressed editors will be somewhat influenced by some form of letterhead bias; see, e.g., Yoon, Albert H., “Editorial Bias in Legal Academia,” Journal of Legal Analysis 5, no. 2 (2013): 309–38; Nance, Jason P. and Steinberg, Dylan J., “The Law Review Article Selection Process: Results from a National Study,” Albany Law Review 71, no. 2 (2008): 565. For a general critique of student-edited law journals, see Hamilton, Neil, “The Law Faculty’s Ethical Failures Regarding Student-Edited Law Reviews,” The Professional Lawyer 23, no. 2 (2016): 34; Posner, Richard A., “The Future of the Student-Edited Law Review,” Stanford Law Review 47, no. 6 (1995): 1131; cf. Cotton, Natalie C., “Comment: The Competence of Students as Editors of Law Reviews: A Response to Judge Posner,” University of Pennsylvania Law Review 154, no. 4 (2006): 951–82.

14 See Chloe Lane, “How to use the QS World University Rankings by Subject,” QS World University Rankings (Feb. 2017), https://www.topuniversities.com/subject-rankings/methodology.

15 “Ranking Methodology,” Washington & Lee Law Journal Rankings, https://managementtools4.wlu.edu/LawJournals/Default3.aspx. Thus, with most laws being national in character, there is a strong chance that in these rankings, the US law journals would have distinct advantages over their non-US counterparts.

16 That is not to argue that citation count has no value; as one article has pointed out, “no matter how eagerly we disavow belief in a correlation between citation rates and quality, our fascination with these lists remains, for the strength of citation counts surely bears some connection to a scholar’s importance and influence”; see Balkin and Levinson, “How to Win Cites and Influence People,” 844 (n 11). It is only to argue that citation counts may be manipulated, or at least may not always be intrinsically connected with the quality of the work, and may rest on subjective, quality-neutral factors. A citation may also reflect a desire to be associated with certain big or safe names; as the same authors put it, just as an “insecure dinner-party host can walk into the wine shop and ask for ‘the wines most often bought by classy people,’ the insecure legal academic […] can rarely go wrong by associating […] even if only through footnotes, with the articles published by classy people in classy law reviews.” See ibid., 867.

17 Klika, Karel D., “Associated Self-Citations and Propagation Luck: Two Problems with Citation Counts,” Journal of Scholarly Publishing 51, no. 4 (2020): 299–308. Some citation metrics, such as Google Scholar, do not even distinguish between citation and self-citation.

18 Balkin and Levinson, “How to Win Cites and Influence People,” 857 (n 11).

19 Richard A. Posner, in “The Deprofessionalization of Legal Teaching and Scholarship,” Michigan Law Review 91, no. 8 (1993): 1921, observed the following:

I said that I thought that the new legal scholarship should be judged by its best rather than by its worst examples. Judge Edwards might reply that the important thing is the ratio of the one to the other, that if most of the stuff is garbage, the price of the occasional pearl is too high […]. Out of 6000 eggs laid by a female salmon and fertilized by the male, on average only two salmon are born who live to adulthood. Does this mean that 5998 eggs are “wasted?” Only if there is a more efficient method of perpetuating the species. Scholarship, like salmon breeding in the wild, is a high-risk, low-return activity […]. We should not be surprised, or lament, that so much of the new legal scholarship is of little value to anyone.

Cf. Rhode, Deborah L., “Legal Scholarship,” Harvard Law Review 115, no. 5 (2002): 1327–61.

20 “UCL Bibliometrics Policy,” University College London (2020).

21 Argyrou, Aikaterini, “Making the Case for Case Studies in Empirical Legal Research,” Utrecht Law Review 13, no. 3 (2017): 95.

22 Kevin Jon Heller (@kevinjonheller), Twitter post, Sept. 13, 2022, 9:44 pm, https://twitter.com/kevinjonheller/status/1569713709978718215.

23 Bos, Arthur R. and Nitza, Sandrine, “Interdisciplinary Comparison of Scientific Impact of Publications Using the Citation-Ratio,” Data Science Journal 18 (2019): 1–5; Harzing, Anne-Wil and Alakangas, Satu, “Google Scholar, Scopus and the Web of Science: a longitudinal and cross-disciplinary comparison,” Scientometrics 106, no. 2 (2015): 786–804.

24 Posner seems to have phrased it aptly when he writes that there is “the common academic failing known as ‘what I don’t know is not knowledge.’” See Posner, Richard A., Divergent Paths: The Academy and the Judiciary (Harvard University Press, 2016), 10.

25 Some of these rankings are mentioned supra (n 2).

26 Even in the context of the natural and physical sciences, of course, certain matters such as crops, diseases, or environmental challenges may be region-specific. See Butler, Declan, “Investigating Journals: The Dark Side of Publishing,” Nature 495 (2013): 433–35.

27 That is not to argue that there is no form of imperialism in academia and that academic freedom is free from biases. For a critique of empire-like practices in academia, see Michael Parenti, “The Empire in Academia,” Chapter 10 in Against Empire (City Lights Books, 1995).