
Weighted corrected covered area (wCCA): A measure of informational overlap among reviews

Published online by Cambridge University Press:  24 April 2025

Xiangji Ying*
Affiliation:
Department of Epidemiology, University of North Carolina Gillings School of Global Public Health, Chapel Hill, NC, USA
Konstantinos I Bougioukas
Affiliation:
Department of Hygiene, Social-Preventive Medicine & Medical Statistics, School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, University Campus, Thessaloniki, Central Macedonia, Greece
Dawid Pieper
Affiliation:
Center for Health Services Research, Brandenburg Medical School (Theodor Fontane), Rüdersdorf, Germany; Faculty of Health Sciences Brandenburg, Brandenburg Medical School (Theodor Fontane), Institute for Health Services and Health System Research, Rüdersdorf, Germany
Evan Mayo-Wilson
Affiliation:
Department of Epidemiology, University of North Carolina Gillings School of Global Public Health, Chapel Hill, NC, USA
*
Corresponding author: Xiangji Ying; Email: [email protected]

Abstract

When conducting overviews of reviews, investigators must measure and describe the extent to which included systematic reviews (SRs) contain the same primary studies. The corrected covered area (CCA) quantifies overlap by counting primary studies included across a set of SRs. In this article, we introduce a modification to the CCA, the weighted CCA (wCCA), which accounts for differences in the information contributed by primary studies. The wCCA adjusts the original CCA by weighting studies based on the square roots of their sample sizes. By weighting primary studies according to their precision, the wCCA provides a useful and complementary representation of overlap in evidence syntheses.

Type
Research-in-Brief
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of The Society for Research Synthesis Methodology

Highlights

What is already known

  • Systematic reviews (SRs) commonly “overlap” by including the same primary studies.

  • The corrected covered area (CCA) quantifies overlap by counting primary studies included across a set of SRs. It treats all primary studies equally.

What is new

  • When quantifying the amount of overlap between SRs, researchers should consider the amount of information that each study contributes.

  • Study sample size is a useful indicator of the relative information that individual studies contribute to an SR; for example, it is related to the precision of estimates included in meta-analyses.

  • We introduce the weighted CCA (wCCA), a quantitative measure of informational overlap across SRs. wCCA weights the primary studies using the square root of their sample size.

Potential impact for RSM readers

  • The wCCA is a novel tool for authors of overviews of SRs that can provide a more nuanced description of overlap among the included SRs.

1 Introduction

The rate of producing systematic reviews (SRs) continues to increase, with SRs exceeding randomized controlled trials (RCTs) in some fields.1,2 This surge has raised concerns about the potential for overlapping and redundant SRs.2,3

Overviews of SRs (also known as umbrella reviews, meta-reviews, or reviews of reviews) synthesize findings from multiple SRs that might include some of the same primary studies. Double-counting primary studies can magnify certain findings and skew overall conclusions. Overview authors often need to quantify overlap to inform their approaches to handling overlapping SRs and to describe the extent of overlap across the included SRs.4 Reporting guidelines for overviews of SRs recommend specifying the approaches used to illustrate or quantify overlap across SRs.5–7 However, methods to describe and handle overlap are often reported poorly.8,9

Various methods have been devised to represent overlap in SRs, including tabular, graphical, and quantitative approaches.10,11 The corrected covered area (CCA)12 is a commonly used quantitative measure of the frequency with which primary studies are included in a set of SRs, adjusted for the number of unique primary studies in the SRs.

Because the CCA method treats all included studies equally, it does not fully capture the extent of shared information across SRs. For example, SRs with many overlapping studies might have very different results if nonoverlapping studies are much larger than overlapping studies. Conversely, two SRs with few overlapping studies might have similar results if overlapping studies contribute most of the information in both SRs.

To quantify informational overlap, we propose a modification to the CCA, the weighted CCA (wCCA), which adjusts the original CCA by weighting primary studies according to their sample sizes.

2 Calculating wCCA

To compute the CCA, researchers first construct a citation matrix by listing primary studies in rows and SRs in columns.12,13 The CCA is calculated using the formula CCA = (N – r) / ((r × c) – r), where N is the number of occurrences of primary studies across all SRs, r is the number of unique primary studies, and c is the number of SRs.12,13 The degree of study overlap might be interpreted as shown in Box 1.12,13

Box 1. Example of CCA thresholds and interpretation*

  • 0% to 5%: ‘Slight study overlap’

  • 6% to 10%: ‘Moderate study overlap’

  • 11% to 15%: ‘High study overlap’

  • >15%: ‘Very high study overlap’

*The thresholds serve as guidelines rather than strict rules.
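To make the calculation concrete, the following minimal R sketch computes the CCA from a binary citation matrix and can be checked against the thresholds in Box 1. It is illustrative only; the function name cca() and the example matrix are assumptions, and the article's own code is in Appendix 3 of the Supplementary Material.

```r
# Minimal sketch of the CCA formula; illustrative, not the article's Appendix 3 code.
# citation_matrix: rows = unique primary studies, columns = SRs (1 = included, 0 = not).
cca <- function(citation_matrix) {
  N <- sum(citation_matrix)    # occurrences of primary studies across all SRs
  r <- nrow(citation_matrix)   # number of unique primary studies
  c <- ncol(citation_matrix)   # number of SRs
  (N - r) / (r * c - r)
}

# Hypothetical citation matrix: 5 unique studies across 3 SRs
m <- matrix(c(1, 1, 0,
              1, 0, 1,
              0, 1, 1,
              1, 0, 0,
              0, 0, 1),
            nrow = 5, byrow = TRUE)
cca(m)  # (8 - 5) / (5 * 3 - 5) = 0.30, i.e., "very high study overlap" per Box 1
```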

To compute the wCCA, researchers incorporate the sample sizes of primary studies as follows:

$$\begin{align*}wCCA=\frac{wN- wr}{wr\cdotp c- wr},\end{align*}$$

where:

  • $c$ is the number of overlapping SRs (i.e., the number of columns in the citation matrix).

  • $wN=\sum \nolimits_{i=1}^{k_1}\sqrt{n_i}+\sum \nolimits_{i=1}^{k_2}\sqrt{n_i}+\dots +\sum \nolimits_{i=1}^{k_c}\sqrt{n_i}$, where $k_1,\dots, k_c$ are the numbers of primary studies included in the 1st through the c-th SR, and $\sqrt{n_i}$ is the square root of the sample size of each primary study. wN is the sum of the square roots of the sample sizes of all primary studies, aggregated across all SRs.

  • $wr=\sum \nolimits_{i=1}^r\sqrt{n_i}$, where r is the number of unique primary studies (i.e., the number of rows in the citation matrix). wr is the sum of the square roots of the sample sizes of the unique primary studies, with each primary study counted only once.

The formula adapts the structure of the CCA, replacing sums of study counts with sums of the square roots of study sample sizes. This modification ensures that each primary study’s contribution is proportional to the square root of its sample size. The square root is used instead of the raw sample size because it better aligns with the precision (e.g., standard error) of individual study estimates and their corresponding weights in meta-analyses (i.e., the standard error is the standard deviation divided by the square root of the sample size). Using the square root, smaller studies contribute to the index without being excessively overshadowed by larger studies. Thus, large studies are given greater weight than small studies without dominating the index, reflecting their typical influence on meta-analyses and SR conclusions.14
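As a minimal sketch of this formula (not the article's Appendix 3 code), the wCCA can be computed by replacing study counts with sums of square-root weights. The function name wcca(), the example matrix, and the sample sizes below are illustrative assumptions.

```r
# Minimal sketch of the wCCA formula; illustrative, not the article's Appendix 3 code.
# citation_matrix: rows = unique primary studies, columns = SRs (1 = included, 0 = not).
# n: sample sizes of the unique primary studies, one per row of the matrix.
wcca <- function(citation_matrix, n) {
  stopifnot(length(n) == nrow(citation_matrix))
  w  <- sqrt(n)                            # square-root-of-sample-size weights
  c  <- ncol(citation_matrix)              # number of SRs
  wN <- sum(rowSums(citation_matrix) * w)  # weights summed across all SRs (wN)
  wr <- sum(w)                             # weights of unique studies, counted once (wr)
  (wN - wr) / (wr * c - wr)
}

# Hypothetical example: 5 unique studies across 3 SRs, with the largest study shared
m <- matrix(c(1, 1, 0,
              1, 0, 1,
              0, 1, 1,
              1, 0, 0,
              0, 0, 1),
            nrow = 5, byrow = TRUE)
n <- c(2500, 100, 120, 80, 60)  # illustrative sample sizes
wcca(m, n)  # ~0.40, versus an unweighted CCA of 0.30, because the largest study overlaps
```

In this hypothetical example, the wCCA exceeds the CCA because the shared studies carry most of the information, which is exactly the situation the weighting is meant to reveal.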

Like the CCA, the above formula can be modified to account for structural missingness,15 such as studies not included in an SR because they were published after that SR was conducted, as described in Appendix 1 of the Supplementary Material.

3 Considerations for using sample size as weight

Sample size is a useful proxy for the relative information that each study contributes to an SR. Although other metrics, such as the inverse variances/standard errors of the effect sizes or the weights used in random-effects meta-analyses, could provide more precise estimates of informational overlap for individual outcomes, they are typically available only when meta-analyses are conducted. In contrast, the sample sizes of primary studies are commonly reported in SRs regardless of whether a meta-analysis is conducted, as recommended in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA 2020) reporting guideline.16 Moreover, using sample size enables the wCCA to be calculated even when studies use different analytical methods; for instance, the wCCA can be computed even if one primary study reports the intervention effect as a risk ratio and another reports a risk difference. This flexibility makes the wCCA a practical and accessible metric for a wide range of SR contexts.

SRs often contain multiple outcomes and analyses, and an overview might not focus on all of them. The sample size contributing to the level of interest (e.g., comparison, outcome, effect measure) should be used (Figure 1). For example, if a primary study includes three arms but the SR focuses on the comparison of two arms, the sample size of those two arms should be used rather than the total sample size. If multiple SRs report different sample sizes for the same primary study, it is important to investigate the reasons for the discrepancy. For example, differences might arise if one SR excludes participants who violated the protocol while another SR includes all randomized participants. Because wCCA uses the square root of the sample size, choosing between two similar numbers will be inconsequential in many cases. As a rule, the smaller of the sample sizes might be considered the overlapping population (i.e., the participants included in more than one SR’s results). Clear mapping of SRs to primary studies and the specific level of interest (e.g., outcomes, comparisons) is essential before calculating wCCA to ensure it accurately reflects the overlap of interest.

Figure 1 Flow chart to assess overlap.

For cluster RCTs, the effective sample size (i.e., the sample size adjusted for the design effect) should be used, if available.
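The article does not prescribe how to derive the effective sample size when it is not reported. As one hedged illustration, a common approximation divides the total sample size by the design effect, 1 + (average cluster size − 1) × ICC; this approximation and the example values below are assumptions, not part of the article's methods.

```r
# Illustrative approximation of the effective sample size of a cluster RCT.
# design effect = 1 + (average cluster size - 1) * ICC  (standard approximation;
# an assumption here, not a formula given in the article)
effective_n <- function(n_total, avg_cluster_size, icc) {
  n_total / (1 + (avg_cluster_size - 1) * icc)
}
effective_n(n_total = 1200, avg_cluster_size = 30, icc = 0.05)  # ~490 participants
```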

4 Illustrative examples calculating wCCA

Example 1: Overlap among SRs of RCTs.

Suppose researchers are conducting an overview of SRs on the effects of mineral supplements on malaria incidence in children. They decide to include one Cochrane review on zinc supplementation,17 which includes 6 trials assessing malaria incidence, and another Cochrane review of 14 trials on iron supplementation,18 along with other SRs. They find that the two SRs share one RCT. To quantify the overlap, they calculate a CCA of 5.3%, indicating slight overlap (Figure 2). They also calculate a wCCA by extracting the sample sizes of the primary RCTs from the forest plots of interest in both SRs. The sample size of the shared RCT (n = 836) is the same in both SRs. Using the wCCA formula, they obtain a wCCA of 6.6%, slightly higher than the CCA. If the sample size of the shared trial were 183 or 2,836, the wCCA would be 3.2% or 11.5%, respectively, deviating further from the CCA. See Appendix 2 of the Supplementary Material for a worked example of quantifying overlap among more than two SRs; a brief arithmetic check of the CCA for this example is sketched below.
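As a quick check, the CCA for this example follows directly from the counts reported above; the short R sketch below (illustrative only, not the article's Appendix 3 code) reproduces it. The wCCA of 6.6% additionally requires the sample sizes of all 19 unique trials, which are provided in Appendix 4 rather than here.

```r
# CCA for Example 1, using only the counts reported in the text:
# 6 trials in the zinc SR, 14 in the iron SR, 1 RCT shared between them.
N <- 6 + 14            # occurrences of primary studies across both SRs
r <- N - 1             # 19 unique studies (the shared RCT is counted once)
c <- 2                 # number of SRs
(N - r) / (r * c - r)  # 1/19 = 0.0526..., i.e., the 5.3% reported above
```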

Figure 2 Illustration of CCA and wCCA calculations for overlap between SRs of RCTs (see Appendix 1 sTable 1 for tabular data; Appendix 3 for R code; and Appendix 4 for analysis-ready data).

Example 2: Overlap among SRs of observational studies.

Suppose researchers are conducting an overview of SRs that examine the association between allium vegetables and various cancers. They decide to include one SR that investigates the association of allium vegetables with upper aerodigestive tract cancers (nasal cavity, pharynx, larynx, oral cavity, and esophagus)19 and another SR that examines garlic consumption and its association with gastric cancer,20 among other SRs. When summarizing the association of garlic with cancers, they find that these two SRs share three primary studies that investigate both esophageal and gastric cancers.

They calculate a CCA of 12% (Figure 3), indicating high overlap (Box 1). Next, they calculate the wCCA (Figure 3). Extracting the sample sizes of the primary studies from Table 1 of each SR, they observe that the three shared studies are case–control studies. Upon reviewing the original articles, they find that the control group data are included in both SRs, but the cases differ (i.e., esophageal cancer cases contribute to one SR and gastric cancer cases to the other). Therefore, only the control group contributes overlapping information to the overview, and the cases are unique to each SR. The researchers separate the overlapping and unique parts of the shared studies in the calculation and obtain a wCCA of 3.8%, considerably lower than the CCA. One way such a split might be operationalized is sketched below.
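One plausible way to operationalize this separation, sketched here under assumptions (the article's full calculation and data are in Appendices 1, 3, and 4), is to enter each shared case–control study as separate rows of the citation matrix: one row for the shared control group and one row per cancer-site-specific case group. The matrix and sample sizes below are entirely hypothetical.

```r
# Hypothetical sketch of splitting one shared case-control study into an
# overlapping control component and SR-specific case components; the article's
# actual data are in Appendix 1 sTable 2 and Appendix 4.
m <- matrix(c(1, 1,   # shared study A: control group (contributes to both SRs)
              1, 0,   # shared study A: esophageal cancer cases (SR 1 only)
              0, 1,   # shared study A: gastric cancer cases (SR 2 only)
              1, 0,   # a study included only in SR 1
              0, 1),  # a study included only in SR 2
            ncol = 2, byrow = TRUE)
n  <- c(400, 150, 180, 900, 700)  # hypothetical group sizes
w  <- sqrt(n)
wN <- sum(rowSums(m) * w)         # square-root weights summed across both SRs
wr <- sum(w)                      # square-root weights of unique rows, counted once
(wN - wr) / (wr * ncol(m) - wr)   # only the shared control row adds overlapping information
```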

Figure 3 Illustration of CCA and wCCA calculations for overlap between SRs of observational studies (see Appendix 1 sTable 2 for tabular data; Appendix 3 for R code; and Appendix 4 for analysis-ready data).

5 Implications and interpretations of wCCA

The wCCA is a weighted version of the CCA. The CCA and wCCA will produce similar results when each study in a set of SRs contributes a comparable amount of information. When study sizes vary, the wCCA offers additional information beyond the CCA alone.

The degree of overlap should be discussed in the context of the research questions posed and the topic’s scope (i.e., whether broad or narrow), as well as the data management decisions made during the study selection, data collection, and synthesis processes. When interpreting the wCCA, researchers might apply the cutoff values used for the CCA (Box 1). These relatively low cutoffs are chosen because SRs in an overview typically address different research questions within an overall topic. When researchers have attempted to minimize overlap by selecting the “best” SR for each sub-topic or by re-estimating results using primary study data, excessive overlap (e.g., two SRs overlapping in 8 of 10 studies) is relatively rare. As with the CCA, these cutoffs serve as guidelines rather than rigid rules. Both the CCA and wCCA are aggregate-level indices that describe the extent of overlap across a set of reviews. A low overall overlap does not necessarily indicate minimal duplicate information within all individual reviews.21 Therefore, decisions about whether to include a particular review in an overview should not be based solely on the CCA or wCCA value.

6 Limitations

Computing the wCCA requires more information than the CCA. Quantifying overlap using either the wCCA or CCA might be challenging when dealing with dozens or even hundreds of SRs, especially if the SRs include many studies. Other approaches for describing overlap and related concerns might be appropriate when SR reporting quality is low. For example, neither the wCCA nor the CCA can be calculated when SRs do not describe or reference their included primary studies.

7 Conclusions

The wCCA complements existing measures of overlap in overviews of SRs. It describes informational overlap across SRs using the square roots of sample sizes from primary studies. It enables a more nuanced assessment of overlap compared with CCA alone.

Author contributions

Conceptualization: X.Y.; Formal analysis: X.Y.; Methodology: X.Y., E.M.-W., K.I.B., D.P.; Visualization: X.Y., E.M.-W.; Writing – original draft: X.Y.; Writing – review and editing: X.Y., E.M.-W., K.I.B., D.P.

Competing interest statement

The authors declare that no competing interests exist.

Data availability statement

R code for generating Figures 2 and 3 (including the calculations for CCA and wCCA in these examples) along with the R code for the worked example in Appendix 2 of the Supplementary Material can be found in Appendix 3 of the Supplementary Material.

Funding statement

The authors declare that no specific funding has been received for this article.

Supplementary material

To view supplementary material for this article, please visit http://doi.org/10.1017/rsm.2025.19.

Footnotes

This article was awarded Open Data and Open Materials badges for transparent practices. See the Data availability statement for details.

References

1. Niforatos JD, Weaver M, Johansen ME. Assessment of publication trends of systematic reviews and randomized clinical trials, 1995 to 2017. JAMA Intern Med. 2019;179(11):1593–1594. https://doi.org/10.1001/jamainternmed.2019.3013
2. Ioannidis JPA. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Q. 2016;94(3):485–514. https://doi.org/10.1111/1468-0009.12210
3. Moher D. The problem of duplicate systematic reviews. BMJ. 2013;347:f5040. https://doi.org/10.1136/bmj.f5040
4. Pollock M, Fernandes RM, Becker LA, Pieper D, Hartling L. Chapter V: Overviews of reviews. In: Cochrane Handbook for Systematic Reviews of Interventions, Version 6.4, 2023. Accessed March 13, 2024. https://training.cochrane.org/handbook/current/chapter-v
5. Gates M, Gates A, Pieper D, et al. Reporting guideline for overviews of reviews of healthcare interventions: Development of the PRIOR statement. BMJ. 2022;378:e070849. https://doi.org/10.1136/bmj-2022-070849
6. Bougioukas KI, Liakos A, Tsapas A, Ntzani E, Haidich AB. Preferred reporting items for overviews of systematic reviews including harms checklist: A pilot tool to be used for balanced reporting of benefits and harms. J Clin Epidemiol. 2018;93:9–24. https://doi.org/10.1016/j.jclinepi.2017.10.002
7. Bougioukas KI, Bouras E, Apostolidou-Kiouti F, Kokkali S, Arvanitidou M, Haidich AB. Reporting guidelines on how to write a complete and transparent abstract for overviews of systematic reviews of health care interventions. J Clin Epidemiol. 2019;106:70–79. https://doi.org/10.1016/j.jclinepi.2018.10.005
8. Lunny C, Brennan SE, Reid J, McDonald S, McKenzie JE. Overviews of reviews incompletely report methods for handling overlapping, discordant, and problematic data. J Clin Epidemiol. 2020;118:69–85. https://doi.org/10.1016/j.jclinepi.2019.09.025
9. Pamporis K, Bougioukas KI, Karakasis P, Papageorgiou D, Zarifis I, Haidich AB. Overviews of reviews in the cardiovascular field underreported critical methodological and transparency characteristics: A methodological study based on the preferred reporting items for overviews of reviews (PRIOR) statement. J Clin Epidemiol. 2023;159:139–150. https://doi.org/10.1016/j.jclinepi.2023.05.018
10. Bougioukas KI, Vounzoulaki E, Mantsiou CD, et al. Methods for depicting overlap in overviews of systematic reviews: An introduction to static tabular and graphical displays. J Clin Epidemiol. 2021;132:34–45. https://doi.org/10.1016/j.jclinepi.2020.12.004
11. Bougioukas KI, Diakonidis T, Mavromanoli AC, Haidich AB. ccaR: A package for assessing primary study overlap across systematic reviews in overviews. Res Synth Methods. 2023;14(3):443–454. https://doi.org/10.1002/jrsm.1610
12. Pieper D, Antoine SL, Mathes T, Neugebauer EAM, Eikermann M. Systematic review finds overlapping reviews were not mentioned in every other overview. J Clin Epidemiol. 2014;67(4):368–375. https://doi.org/10.1016/j.jclinepi.2013.11.007
13. Hennessy EA, Johnson BT. Examining overlap of included studies in meta-reviews: Guidance for using the corrected covered area index. Res Synth Methods. 2020;11(1):134–145. https://doi.org/10.1002/jrsm.1390
14. Turner RM, Bird SM, Higgins JPT. The impact of study size on meta-analyses: Examination of underpowered studies in Cochrane reviews. PLoS ONE. 2013;8(3):e59202. https://doi.org/10.1371/journal.pone.0059202
15. Pérez-Bracchiglione J, Meza N, Bangdiwala SI, et al. Graphical Representation of Overlap for OVErviews: GROOVE tool. Res Synth Methods. 2022;13(3):381–388. https://doi.org/10.1002/jrsm.1557
16. Page MJ, Moher D, Bossuyt PM, et al. PRISMA 2020 explanation and elaboration: Updated guidance and exemplars for reporting systematic reviews. BMJ. 2021;372:n160. https://doi.org/10.1136/bmj.n160
17. Imdad A, Rogner J, Sherwani RN, et al. Zinc supplementation for preventing mortality, morbidity, and growth failure in children aged 6 months to 12 years. Cochrane Database Syst Rev. 2023. Accessed September 18, 2024. https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD009384.pub3/full
18. Neuberger A, Okebe J, Yahav D, Paul M. Oral iron supplements for children in malaria-endemic areas. Cochrane Database Syst Rev. 2016;2016(2). https://doi.org/10.1002/14651858.cd006589.pub4
19. Guercio V, Turati F, Vecchia CL, Galeone C, Tavani A. Allium vegetables and upper aerodigestive tract cancers: A meta-analysis of observational studies. Mol Nutr Food Res. 2016;60(1):212–222. https://doi.org/10.1002/mnfr.201500587
20. Li Z, Ying X, Shan F, Ji J. The association of garlic with Helicobacter pylori infection and gastric cancer risk: A systematic review and meta-analysis. Helicobacter. 2018;23(5):e12532.
21. Kirvalidze M, Abbadi A, Dahlberg L, Sacco LB, Calderón-Larrañaga A, Morin L. Estimating pairwise overlap in umbrella reviews: Considerations for using the corrected covered area (CCA) index methodology. Res Synth Methods. 2023;14(5):764–767. https://doi.org/10.1002/jrsm.1658