Retracted research publications reached an all-time high in 2023, and COVID-19 publications may have higher retraction rates than other publications. To better understand the impact of COVID-19 on the research literature, we analyzed 244 retracted publications related to COVID-19 in the PubMed database and the reasons for their retraction. Peer-review manipulation (18.4%) and error (20.9%) were the most common reasons for retraction, and retractions occurred far more quickly than in the past (mean time to retraction 13.2 months, compared with 32.9 months in a 2012 study). Publications focused on controversial topics were retracted rapidly (mean time to retraction 10.8 months) but continued to receive media attention, suggesting that retraction alone may be insufficient to prevent the spread of scientific misinformation. More than half of the retractions resulted from problems that could have been detected prior to publication, including compromise of the peer review process, plagiarism, authorship issues, lack of ethics approvals, or journal errors, suggesting that more robust screening and peer review by journals can help to mitigate the recent rise in retractions.
After reviewing a wide range of topics, we conclude that good science requires greater efforts to manage biases and to promote the ethical conduct of research. An important problem is the belief that randomized controlled trials (RCTs) are exempt from systematic bias. Throughout the book, we acknowledge the importance of RCTs but also emphasize that they are not immune from systematic bias. A second lesson concerns conflict of interest, which must always be taken seriously: most large RCTs are sponsored by for-profit pharmaceutical companies. We identify leverage points to address these problems. These include cultivating equipoise – the position that investigators enter a study with the understanding that a positive, negative, or null result is equally of value. We return to several other themes prominent throughout this book, including the reporting of research findings and serious problems with our system of peer review. The book concludes with recommendations for reducing conflicts of interest, improving transparency, and reimagining the peer review system.
Contemporary science depends heavily on peer review. Usually without compensation, experts evaluate the reliability and quality of work contributed by other scientists. The system of peer review now confronts serious challenges. The volume of scientific work that requires peer scrutiny has grown exponentially, placing pressure on reviewers’ availability. Academic publishing has been challenged by two trends. First, uncompensated peer reviewers are less willing to offer evaluations; the rate at which reviewers decline invitations to review has increased dramatically. Second, commercial publishers charge authors exorbitant fees to publish their work, fees that younger authors and those from less wealthy countries often cannot afford. We offer several remedies to address these problems. These include reevaluating the relationships between universities or scholarly societies and for-profit publishing houses. An alternative system might return publishing to university libraries and scholarly societies, funded by the hundreds of millions of dollars that academia currently transfers to commercial enterprises.
The scientific manuscript review process can often seem daunting and mysterious to authors. Medical journals frequently do not describe the peer-review process in detail, which can add to the frustration of authors, peer reviewers, and readers. This editorial describes the updated manuscript review process for Prehospital and Disaster Medicine, with the aim of increasing clarity and transparency in the review process.
Scientists have started to explore whether novel artificial intelligence (AI) tools based on large language models, such as GPT-4, could support the scientific peer review process. We sought to understand (i) whether AI and human reviewers are able to distinguish made-up, AI-generated conference abstracts from human-written abstracts reporting on actual research, and (ii) how the quality assessments of the reported research by AI and human reviewers correspond to each other. We conducted a large-scale field experiment during a medium-sized scientific conference, relying on 305 human-written and 20 AI-written abstracts that were reviewed either by AI or by 217 human reviewers. The results show that human reviewers and GPTZero were better at discerning (AI vs. human) authorship than GPT-4. Regarding quality assessments, there was rather low agreement within both human–human and human–AI reviewer pairs, but AI reviewers were more aligned with human reviewers in classifying the very best abstracts. This indicates that AI could become a prescreening tool for scientific abstracts. The results are discussed with regard to the future development and use of AI tools during the scientific peer review process.
Blind review is ubiquitous in contemporary science, but there is no consensus among stakeholders and researchers about when, to what extent, or why blind review should be done. In this essay, we explain why blinding enhances the impartiality and credibility of science while also defending a norm according to which blind review is a baseline presumption in scientific peer review.
As the scientific community becomes aware of low replicability rates in the extant literature, peer-reviewed journals have begun implementing initiatives aimed at improving replicability. Such initiatives center on rules to which authors must adhere to demonstrate their engagement in best practices. Preliminary evidence in the psychological science literature demonstrates a degree of efficacy for these initiatives. Given this efficacy, it would be advantageous for other fields of behavioral science to adopt similar measures. This letter discusses lessons learned from psychological science while addressing the unique challenges other sciences face in adopting the measures most appropriate for their fields. We offer broad considerations for peer-reviewed journals in their implementation of specific policies and recommend that governing bodies of science prioritize the funding of research that addresses these measures.
Research articles in the clinical and translational science literature commonly use quantitative data to inform evaluation of interventions, learn about the etiology of disease, or develop methods for diagnostic testing or risk prediction of future events. The peer review process must evaluate the methodology used therein, including use of quantitative statistical methods. In this manuscript, we provide guidance for peer reviewers tasked with assessing quantitative methodology, intended to complement guidelines and recommendations that exist for manuscript authors. We describe components of clinical and translational science research manuscripts that require assessment including study design and hypothesis evaluation, sampling and data acquisition, interventions (for studies that include an intervention), measurement of data, statistical analysis methods, presentation of the study results, and interpretation of the study results. For each component, we describe what reviewers should look for and assess; how reviewers should provide helpful comments for fixable errors or omissions; and how reviewers should communicate uncorrectable and irreparable errors. We then discuss the critical concepts of transparency and acceptance/revision guidelines when communicating with responsible journal editors.
Peer review is supposed to ensure that published work, in philosophy and in other disciplines, meets high standards of rigor and interest. But many people fear that it is no longer fit to play this role. This Element examines some of their concerns. It uses evidence that critics of peer review sometimes cite to show its failures, as well as empirical literature on the reception of bullshit, to advance positive claims about how the assessment of scholarly work is appropriately influenced by features of the context in which it appears: for example, by readers' knowledge of authorship or of publication venue. Reader attitude makes an appropriate and sometimes decisive difference to perceptions of argument quality. This Element finishes by considering the difference that author attitudes to their own arguments can appropriately make to their reception. This title is also available as Open Access on Cambridge Core.
In the years following FDA approval of direct-to-consumer, genetic-health-risk/DTCGHR testing, millions of people in the US have sent their DNA to companies to receive personal genome health risk information without physician or other learned medical professional involvement. In Personal Genome Medicine, Michael J. Malinowski examines the ethical, legal, and social implications of this development. Drawing from the past and present of medicine in the US, Malinowski applies law, policy, public and private sector practices, and governing norms to analyze the commercial personal genome sequencing and testing sectors and to assess their impact on the future of US medicine. Written in relatable and accessible language, the book also proposes regulatory reforms for government and medical professionals that will enable technological advancements while maintaining personal and public health standards.
This chapter covers the appraisal of published and unpublished works in fiction and non-fiction, prose and poetry, in single volumes, monographs, series and collections. These works are intended, for the most part, to be published in book formats or in formal journal publications, in print, electronically and online.
To survive and prosper, researchers must demonstrate a successful record of publications in journals well-regarded by their fields. This chapter discusses how to successfully publish research in journals in the social and behavioral sciences and is organized into four sections. The first section highlights important factors that are routinely involved in the process of publishing a paper in refereed journals. The second section features some factors that are not necessarily required to publish a paper but that, if present, can positively influence scientific productivity. The third section discusses some pitfalls scholars should avoid to protect their scientific career. The last section addresses general publication issues within the science community. We also recommend further resources for those interested in learning more about successfully publishing research.
The peer review process for publication has limitations, which are discussed. The influence of the pharmaceutical industry can be both beneficial and harmful; both aspects are examined.
Despite many flaws, including variable quality and a lack of universal standards, peer review – the formal process of critically assessing knowledge claims prior to publication – remains a bedrock norm of science. It therefore also underlies the scientific authority of the IPCC. Most literature used in IPCC assessments has already been peer reviewed by scientific journals. IPCC assessments are themselves reviewed at multiple stages of composition, first by Lead Authors, then by scientific experts and non-governmental organisations outside the IPCC, and finally by government representatives. Over time, assessment review has become increasingly inclusive and transparent: anyone who claims expertise may participate in review, and all comments and responses are published after the assessment cycle concludes. IPCC authors are required to respond to all comments. The IPCC review process is the most extensive, open, and inclusive in the history of science. Challenges include how to manage a huge and ever-increasing number of review comments, and how to deal responsibly with review comments that dispute the fundamental framing of major issues.
Head and neck (HN) radiotherapy (RT) is complex, involving multiple target and organ at risk (OAR) structures delineated by the radiation oncologist. Site-agnostic peer review after RT plan completion is often inadequate for thorough review of these structures. In-depth review of RT contours is critical to maintain high-quality RT and optimal patient outcomes.
Materials and Methods:
In August 2020, the HN RT Quality Assurance Conference, a weekly teleconference that included at least one radiation oncology HN specialist, was activated at our institution. Targets and OARs were reviewed in detail prior to RT plan creation. A parallel implementation study recorded patient factors and outcomes of these reviews. A major change was any modification to the high-dose planning target volume (PTV) or the prescription dose/fractionation; a minor change was modification to the intermediate-dose PTV, low-dose PTV, or any OAR. We analysed the results of consecutive RT contour review in the first 20 months since its initiation.
Results:
A total of 208 patients treated by 8 providers were reviewed: 86.5% from the primary tertiary care hospital and 13.5% from regional practices. A major change was recommended in 14.4% of cases and implemented in 25 of 30 cases (83.3%). A minor change was recommended in 17.3% of cases and implemented in 32 of 36 cases (88.9%). A survey of participants found that all (n = 11) strongly agreed or agreed that the conference was useful.
Conclusion:
Dedicated review of RT targets/OARs with an HN subspecialist is associated with substantial rates of suggested and implemented modifications to the contours.
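As a rough arithmetic check on the percentages reported in the results above, the following minimal sketch (in Python) relates the counts to the rates; it assumes the recommendation percentages are expressed relative to the 208 reviewed patients, so that roughly 30 major and 36 minor change recommendations were made.

    # Minimal sketch, assuming the recommendation rates are fractions of the 208 reviewed patients.
    reviewed = 208
    major_recommended, major_implemented = 30, 25
    minor_recommended, minor_implemented = 36, 32

    print(f"major change recommended: {major_recommended / reviewed:.1%}")           # ~14.4%
    print(f"major change implemented: {major_implemented / major_recommended:.1%}")  # ~83.3%
    print(f"minor change recommended: {minor_recommended / reviewed:.1%}")           # ~17.3%
    print(f"minor change implemented: {minor_implemented / minor_recommended:.1%}")  # ~88.9%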
Peer review is an essential quality assurance component of radiation therapy planning. A growing body of literature has demonstrated substantial rates of suggested plan changes resulting from peer review. There remains a paucity of data on the impact of peer review rounds for stereotactic body radiation therapy (SBRT). We therefore aim to evaluate the outcomes of peer review in this specific patient cohort.
Methods and materials:
We conducted a retrospective review of all SBRT cases that underwent peer review from July 2015 to June 2018 at a single institution. Weekly peer review rounds are grouped according to cancer subsite and attended by radiation oncologists, medical physicists and medical radiation technologists. During peer review, we prospectively recorded ‘learning moments’, defined as cases with suggested changes or where an educational discussion occurred beyond routine management, and critical errors, defined as errors that could alter clinical outcomes. Plan changes implemented after peer review were documented.
Results:
Nine hundred thirty-four SBRT cases were included. The most common treatment sites were lung (518, 55%), liver (196, 21%) and spine (119, 13%). Learning moments were identified in 161 cases (17%) and translated into plan changes in 28 cases (3%). Two critical errors (0.2%) were identified: an inadequate planning target volume margin and an incorrect image set used for contouring. There was a statistically significantly higher rate of learning moments for lower-volume SBRT sites (defined as ≤30 cases/year) versus higher-volume SBRT sites (29% vs 16%, respectively; p = 0.001).
Conclusions:
Peer review for SBRT cases revealed a low rate of critical errors, but did result in implemented plan changes in 3% of cases, and either educational discussion or suggestions of plan changes in 17% of cases. All SBRT sites appear to benefit from peer review, though lower-volume sites may require particular attention.
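To illustrate the kind of two-proportion comparison reported in the results above (29% vs 16% learning moments; p = 0.001), the sketch below runs a chi-squared test on a 2x2 contingency table in Python. The per-group case counts are hypothetical placeholders chosen only to roughly reproduce the reported rates; the abstract does not give the actual group sizes.

    # Sketch of a two-proportion comparison; the group sizes below are hypothetical, not the study's data.
    from scipy.stats import chi2_contingency

    low_total, low_lm = 100, 29     # assumed lower-volume sites (<=30 cases/year): ~29% learning moments
    high_total, high_lm = 834, 133  # assumed higher-volume sites: ~16% learning moments

    table = [
        [low_lm, low_total - low_lm],      # learning moment vs. none, lower-volume sites
        [high_lm, high_total - high_lm],   # learning moment vs. none, higher-volume sites
    ]
    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi-squared = {chi2:.2f}, p = {p_value:.4f}")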
There is currently a heightened need for transparency in pharmaceutical sectors. The inclusion of real-world (RW) evidence, in addition to clinical trial evidence, in decision-making processes was an important step toward establishing a more inclusive value proposition. This advance has introduced new transparency challenges. Increasing transparency is a critical step toward accelerating improvement in the type, quality, and accessibility of data, regardless of whether these originate from clinical trials or from RW studies. However, so far, advances in transparency have been largely restricted to clinical trials, and there remains a lack of similar expectations or standards of transparency concerning the generation and reporting of RW data. This perspective paper aims to highlight the need for transparency concerning RW studies, data, and evidence across health care sectors, to identify areas for improvement, and to provide concrete recommendations and practices for the future. Specific issues are discussed from different stakeholder perspectives, culminating in recommended actions, from individual stakeholder perspectives, for improved transparency of RW studies, data, and evidence. Furthermore, a list of potential guidelines for consideration by stakeholders is proposed. While recommendations are made from different stakeholder perspectives, true transparency in the processes involved in the generation, reporting, and use of RW evidence will require a concerted effort from all stakeholders across health care sectors.