Introduction
Quantification is nearly ubiquitous.Footnote 1 With the increasing availability of rankings of higher education institutions (HEIs),Footnote 2 individual disciplines,Footnote 3 and academic outputs,Footnote 4 academic administrators who rely on quantification as a proxy for quality are in one way or another nudging academics to play by certain “rules of the game” so that their institutions can feature positively in these rankings. The fervor of academic administrators in ensuring that their respective HEIs feature favorably in the rankings is understandable, as the rankings may directly affect student enrollment, funding from public or other external bodies, and many other tangible outcomes. The rankings prepared by different institutions follow a wide array of metrics, and the concomitant pressure exerted on academics (for obvious reasons, particularly, but not exclusively, younger faculty without tenure) to tailor their research to suit the demands of the HEIs’ administrators is, at times, creating unwitting outcomes for academia and for academics’ research outputs. Even a more senior academic with tenure would find it difficult to remain oblivious to the “ranking game” and not factor it in when setting their research agenda. At the institutional level, there is a somewhat similar trend, whereby some of the world’s most elite universities may decide to stay out of the “ranking game,” although replicating this choice is not so easy for the rest.Footnote 5
This article seeks to highlight some concerns that mechanical adherence to the pressure emanating from the “ranking game” may create. It does not claim that quantitative evaluation plays no role whatsoever in measuring the quality of academic scholarly outputs; indeed, it accepts that quantitative evaluation may play a role in assessing the quality of research outputs. It merely argues that the seemingly objective quantitative metrics on which institutions currently place excessive reliance may not always be truly objective. More importantly, even beyond the question of objectivity, there is a more fundamental concern about the adverse impacts of overreliance on quantitative metrics.
The article proceeds as follows: the first part discusses the motives for and benefits of a quantitative assessment of scholarly outputs; the next part demonstrates how quantitative assessments may sometimes be counterproductive; the following part shows how this hazard may be more prominent in the Global South than in the Global North; and the last part concludes. While this article focuses on authors and HEIs, quantified ranking metrics may also exert pressure on journal editors, making them opt for “safe choices” and become lukewarm about embracing well-written works by lesser-known authors or on esoteric topics, irrespective of their academic merit.Footnote 6
The Advantages of Quantitative Assessment Metrics
Quantitative measures can reduce the scope of discretion, which can in turn reduce the possibility of bias. As one academic explains, such quantification can help dispel the “odious ‘old boy network’ where appointments were determined according to who you knew and who supported you (my Baron is more powerful than your Baron) and away from subjective judgments of quality towards some objective methodology in the interest of fairness and academic excellence.”Footnote 7 As already indicated, this article does not assert that no form of quantified assessment of academic outputs, such as journals or the articles appearing in them, is objectively possible. All rankings worth their salt should be based on some form of scientific criteria; a ranking that is not based on a sound and objectively assessable methodology will, in the longer term, fail to garner wider acceptance. The ranking of HEIs may serve some useful purposes. Rankings may give HEIs an opportunity for introspection regarding their academic staff’s performance and a chance to work on their weaknesses. They may help prospective students compare the perceived standings of various HEIs or of individual disciplines across HEIs. Taken in proper perspective and dosage, they should also provide academic institutions with an incentive to engage in healthy competition and strive to perform better. These rankings may also help the world beyond academia obtain information about HEIs. The “ranking game” may also push some teaching-intensive universities to take research and publication more seriously.Footnote 8
The Limits and Problems of Quantitative Assessment Metrics
The fundamental purpose of research has always been (at least in theory, if not always honored in practice) to generate knowledge that aids the progress of humankind and makes the world a better place.Footnote 9 A ranking may be useful as an indicator of quality, but it is not a direct measure of it.Footnote 10 Religious adherence to rankings, or the evaluation of research outputs based rigidly on quantitative metrics, can undermine society’s underlying expectations of HEIs’ research outputs. This is likely to happen because these metrics may determine not only which outlets academics choose for their submissions and eventual publication but also what kind of research they undertake. Many experienced academics (in the social sciences, at least) would tell you that they generally embark on their research with a clear view of the potential outlets where they would like to see their articles published, and the consideration of potential outlets can significantly influence academics’ research agendas.
Consider a hypothetical scenario from law. Assume that an academic in an economically backward country is intrigued by a pressing national legal issue, such as the “underwhelming performance in the recovery of loans disbursed by banks and financial institutions,” an area that is chronically under-researched in the researcher’s own discipline.Footnote 11 The lack of secondary sources would make the research challenging to accomplish. And it is highly likely that the researcher would struggle to place the resulting article in a prestigious foreign outlet that would offer them tangible professional rewards.
Irrespective of the article’s merit, it may face desk rejection by a reputable international legal journal. The editorial board may consider that even sending it for peer review would be a waste of the scant pool of competent peer reviewers’ time. After all, the article might be outside the journal’s scope, or the readers might have little to no interest in specific issues pertaining to a national jurisdiction with little global clout. In addition, a publication in the most reputed national outlet of the researcher’s own country may not be prestigious enough to fetch them a reward commensurate with their efforts. For example, such a publication may yield half the reward (say, in the form of points for promotion, salary increment, or similar perks) that a comparable publication in an internationally recognized outlet, such as a Scopus-indexed journal, would bring. Or, even worse, the institution might reward an international publication twice as much as a national one, in its quest for internationalization and/or for faring well in the “ranking game.” One may contend that a good researcher should ignore these institutional incentives and seek knowledge for knowledge’s sake. However, to suggest that an academic moved by the idea of making the world a better place must therefore shun the idea of making their own life materially better seems an absurd proposition. The pressure of ranking outlets is so prominent that even a well-established scholar, while deriding another US-centric ranking, has expressed his “chagrin that OUP, CUP, Kluwer and their fellow European legal publishers do not get together to produce a database which could be used more accurately to measure the influence of the journals they publish.”Footnote 12
On the other hand, a colleague of this hypothetical researcher who writes a review essay on a topic with a global audience (say, human rights or climate change), even without a compelling narrative, would stand a far better chance of having the essay published in a more prestigious outlet. At the very least, this second researcher would have a much broader pool of outlets to which they could submit the essay for publication consideration. In terms of the effort required to produce the article, the second researcher should also have a comparatively easier ride because there may be a wealth of already published scholarly material on which to build their own work. While this latter publication will, at least in the short term, contribute to the institution’s reputation, perhaps most academics would agree that, measured against what is expected of academic research, the first researcher’s output is no less desirable. In the long run, it is likely to have a far greater impact on the academic knowledge base and on society beyond the academic community. This is not to trivialize the laborious work of the second researcher, who is perhaps working hard enough to get their research published in a prestigious journal after fierce competition with many other manuscripts. However, this scenario demonstrates a disproportionately lower reward for the first academic’s research endeavor.
Academic research in many disciplines (particularly in the basic sciences and social sciences) is rarely financially incentivized unless it is accompanied by funding through some form of grant, which is itself fiercely competitive. Now, if academic hiring, promotion, and other service benefits resulting from research outputs were strictly aligned with the rankings, very few academics would be able to resist the pressure of academic administrators to abide by the “ranking game” rules. This would make it very difficult for them to engage in the pursuit of “knowledge for the sake of knowledge” or to follow a “passion for finding answers to a problem that piques their own interest.”
There can also be many discipline-specific nuances that a generic, all-discipline ranking methodology would not encapsulate. To take an example, an overwhelming number of law journals published in the US are student edited. This phenomenon may have something to do with the fact that, in US law schools, students enrolled in Juris Doctor (JD) programs already have a first degree in another discipline. Thus, even many academics from other social science backgrounds may not be aware that very few US law journals have any faculty involvement or are peer reviewed in the strictest sense of the term. However, any law academic would probably tell you that the publication process in these law journals is no less competitive than that in traditional peer-reviewed journals.Footnote 13 Now, very few of these US law journals feature in the list of Scopus-indexed journals, which is an important component of the QS World University Rankings.Footnote 14 In this case, an academically sound article in a US law journal would offer little or no recognition for an academic author whose institution puts a premium on publishing in Scopus-indexed journals to feature positively in the QS World University Rankings. Ideally, the decision of where to publish should be driven by the academic’s judgment of such factors as the intended readership, the perceived reputation of the journal’s publisher, the reputation of the editorial board and of past authors, or the relevance of a specific call for papers; yet these considerations could be relegated to secondary status, as the institutional priority and emphasis on ranking could have a decisive impact on the academic’s publication choice.
Not every research output’s quality can be measured by quantitative methods or by a ranking based on seemingly “objective” criteria. The Washington & Lee Ranking of Law Journals methodology, for instance, is based on the following:
The Westlaw search results give only the number of citing documents and thus do not show where a citing article or case cites two or more articles in a cited legal periodical. Sources for the citation counts are limited to documents in Westlaw’s Law Reviews & Journals database (over 1,000 primarily U.S. publications) and Westlaw’s Cases database (all U.S. federal and state cases). The searches look for variations of the Bluebook citation format commonly in use in the U.S. (volume number, journal name, [page], year), and the searches are flexible in allowing the year to occur within eight words of the full or abbreviated journal name.Footnote 15
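To make concrete what such a citation count involves, the following is a minimal sketch in Python with hypothetical journal names and documents; it is not the Washington & Lee or Westlaw implementation. It counts documents containing at least one string resembling a Bluebook-style citation (volume number, journal name, optional page, and a year occurring within a few words of the journal name) and, mirroring the quoted caveat, counts a document only once even if it cites two or more articles in the same journal.

```python
import re

# Minimal sketch (not the actual Washington & Lee or Westlaw implementation):
# count documents that contain at least one string resembling a Bluebook-style
# citation to a given journal -- volume number, journal name, optional page,
# and a parenthesized year allowed to occur within a few words of the name.

def make_citation_pattern(journal_names):
    # Accept any of the supplied full or abbreviated journal names.
    names = "|".join(re.escape(name) for name in journal_names)
    return re.compile(
        rf"\b\d{{1,4}}\s+(?:{names})(?:\s+\d{{1,5}})?(?:\W+\w+){{0,8}}?\s*\(\d{{4}}\)",
        re.IGNORECASE,
    )

def count_citing_documents(documents, journal_names):
    """Count documents with at least one apparent citation to the journal.

    A document citing two or more articles in the same journal is still
    counted only once, as in the quoted methodology.
    """
    pattern = make_citation_pattern(journal_names)
    return sum(1 for doc in documents if pattern.search(doc))

if __name__ == "__main__":
    # Hypothetical documents; only the first cites the target journal.
    docs = [
        "See 118 Harv. L. Rev. 1787 (2005); see also 118 Harv. L. Rev. 2195 (2005).",
        "Cf. 100 Yale L.J. 1131 (1991).",
    ]
    print(count_citing_documents(docs, ["Harv. L. Rev.", "Harvard Law Review"]))  # prints 1
```

Even so simple a sketch exposes the limitation the article goes on to discuss: the count records only how often a pattern appears, not why the work was cited or what it contributed.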
Even citation counts, an age-old proxy for the quality of academic outputs, have limitations, especially when they are inflated by self-citation undertaken not to serve any real purpose of attribution but merely as a ploy to increase one’s own citation count.Footnote 16 Even mechanical efforts to isolate self-citations may not exclude the vice, as multiauthored works cited with et al. can easily get around such mechanical filtering (as the sketch following this paragraph illustrates).Footnote 17 One might think that the chutzpah required to cite one’s own work, unless the citation is unavoidable, would deter many academics from doing so, but this is not necessarily true.Footnote 18 The value of an academic research output may not be immediately evident. Many published works may not have any direct tangible value at all, and they may benefit society in a very incremental or unexpected way.Footnote 19 Even today’s seemingly esoteric research work may prove to be immensely beneficial to society in the days to come. Thus, academic administrators should take a very cautious approach to rigid quantitative methods of assessing scholarly outputs.
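As a hypothetical illustration of the “et al.” point (a minimal sketch with invented author names, not any citation database’s actual procedure), consider a naive filter that flags a citation as a self-citation whenever an author name appears on both the citing and the cited work; once author lists are truncated with “et al.”, the overlap can become invisible to the filter.

```python
# Hypothetical illustration (invented names, not any database's actual
# procedure): a naive self-citation filter compares author-name strings on the
# citing and cited works. Truncated author lists ("et al.") can hide the
# overlap, so the self-citation survives the mechanical screen.

def is_self_citation(citing_authors, cited_authors):
    """Return True if any author name appears on both works (naive string match)."""
    return bool(set(citing_authors) & set(cited_authors))

# The cited work is co-authored by Rahman, Chen, and Okoro (hypothetical names).
cited = ["Rahman", "Chen", "Okoro"]

# Rahman cites it in a sole-authored piece: the overlap is detected, and the
# citation would be excluded from an "independent" citation count.
print(is_self_citation(["Rahman"], cited))                    # True

# But if the metadata records the works only as "Rahman et al." and
# "Chen et al.", the co-author overlap disappears from the strings being
# compared, and the self-citation is counted anyway.
print(is_self_citation(["Chen et al."], ["Rahman et al."]))   # False
```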
Academic hiring and promotion committee members, being senior academics in their respective disciplines at any quality institution, would know well enough the quality of the various publishing outlets in their fields. They would not be ignorant or reckless in assessing the quality of individual research outputs such as journal articles. Of course, when there is no standardized mechanism, or when assessors are entrusted with a standard mechanism that allows a significant dose of discretion, that discretion is hazardous and prone to abuse. However, the same concern applies to all kinds of human discretion, and that concern alone cannot be grounds for eliminating discretion. In other words, academic judgments may not always be amenable to quantifiable assessment, and some degree of subjectivity (hopefully applied objectively) in assessing the value of research outputs may not necessarily be bad. As University College London’s Bibliometrics Policy, adopted in 2020, succinctly puts it,
[R]esearch “excellence” and “quality” are abstract concepts that are difficult to measure directly but are often inferred from bibliometrics. Such superficial use of research metrics in research evaluations can be misleading. Inaccurate assessment of research can become unethical when metrics take precedence over expert judgement, where the complexities and nuances of research or a researcher’s profile cannot be quantified.Footnote 20
Deference to Discipline-Specific Norms
Another potential problem is the use of a “one size fits all” criterion for assessing research outputs across HEIs. For example, in his university’s internal grant application process, this author would often be questioned on methodology by external reviewers and university committee members with non-legal backgrounds (generally in the humanities and social sciences). To these committee members (irrespective of their rich discipline-specific backgrounds), doctrinal research methodology involving the analysis of statutes, judicial precedents, and treaties would appear to draw on secondary sources. However, anyone familiar with legal research methodology would readily recognize that these are the primary sources of legal research. In fact, a relatively small portion of legal research output is based on quantitative methodology.Footnote 21 Or, even worse, this author recalls an occasion when an administrator at a previous university commented that legal research methodology is “thin.” While the above instances may seem too anecdotal, there is some evidence that these experiences are not unique.Footnote 22
Few academics outside of law might understand the culture of US student-edited law journals, where simultaneous submission is not only acceptable but very much the norm. The citation impact across disciplines also varies.Footnote 23 Sometimes academics, even very experienced ones, may assume that the norm in their discipline is universal.Footnote 24 For instance, during an editorial meeting of a new multidisciplinary journal, a colleague of this author from a discipline where research publication tends to be funded more often than not commented that a journal that does not charge an article processing fee is not a high-quality journal. When, in institutional committees, this sort of uninformed, narrow, and discipline-specific standard becomes the norm for assessing quality, the academic output of some disciplines may not receive due deference or recognition.
Disproportionate Impact on the Global South?
It is well known that most of the prestigious bodies that rank academic scholarly works, and the outlets those rankings recognize, are based in the Global North or in large developing countries.Footnote 25 Indeed, the resources necessary for setting up a rigorous ranking mechanism are possibly beyond the reach of many institutions in the Global South. Thus, scholarly works published in the Global North invariably rank higher and offer greater tangible rewards for researchers. Naturally, scholars in the Global South too could be tempted to craft their research agendas in a way that would offer them a better chance of publishing in those outlets. This is not to claim that academic publishing is always bifurcated along geographical lines, but for a jurisdiction-specific subject like law, it is possible that matters specific to the outlet’s own jurisdiction would receive more attention. For example, a problem indigenous to the Global South, which may be a pressing issue for the researcher’s own country, might be set aside in favor of topics that Northern outlets would likely favor. This may not be a problem in the natural and physical sciences,Footnote 26 as perhaps most works in those disciplines are country neutral. However, as already pointed out, scholarly works in the social sciences and law, in particular, can have country-specific content, which may often mean that research focusing on issues relevant to a Global South country is of little or no interest to a Northern scholarly outlet.
Of course, many Global North outlets publishing in the social sciences have a truly global scope, but many others inevitably have only a national or regional focus, and the latter will often concentrate on specific areas, such as constitutional law, corporate law, and so on. Even when top-notch journals in the Global North publish about matters of concern to a single jurisdiction in the Global South, they typically focus on larger developing countries, such as Brazil, China, India, and South Africa. Thus, it is probable that, by putting too much emphasis on quantitative global metrics, the research agendas of many Global South social science researchers are shaped by the priorities of Northern outlets. It may be tempting to brand this phenomenon as a form of imperialism.Footnote 27 However, it would be rather naïve to do so, as if the Global North were imposing it on the Global South; rather, it is a form of overzealous hankering after the Northern standard as an automatic proxy for quality.
Conclusion
Despite the call for caution about excessive reliance on the quantification of scholarly outputs, it is not the argument of this article that ranking scholarly outputs is in and of itself undesirable. Instead, the concern expressed here is about dogmatic adherence to any particular form of ranking metric and the use of a one-size-fits-all approach in evaluating academic research outputs in a discipline like law, which is by nature nuanced, diverse, and difficult to assess through any strict “objective, quantified scientific assessment.” While the hypothetical examples used in this article are drawn from law, a somewhat similar concern may apply to many other disciplines too. Academic administrators should therefore be cautious about adopting a regimented, purely quantitative approach to assessing scholarly outputs. Standardization has its benefits, but too much of it can also be problematic. This is possibly more so in a discipline like law, where a metric-based quality analysis may not always serve its intended purpose.