Duchenne muscular dystrophy is a devastating neuromuscular disorder characterized by the loss of dystrophin, inevitably leading to cardiomyopathy. Although publications address prophylaxis and treatment with cardiac medications to mitigate cardiomyopathy progression, gaps remain in the specifics of when to initiate these medications and how to optimize them.
Method:
This document is an expert opinion statement, addressing a critical gap in cardiac care for Duchenne muscular dystrophy. It provides thorough recommendations for the initiation and titration of cardiac medications based on disease progression and patient response. Recommendations are derived from the expertise of the Advance Cardiac Therapies Improving Outcomes Network and are informed by established guidelines from the American Heart Association, American College of Cardiology, and Duchenne Muscular Dystrophy Care Considerations. These expert-derived recommendations aim to navigate the complexities of Duchenne muscular dystrophy-related cardiac care.
Results:
Comprehensive recommendations for initiation, titration, and optimization of critical cardiac medications are provided to address Duchenne muscular dystrophy-associated cardiomyopathy.
Discussion:
The management of Duchenne muscular dystrophy requires a multidisciplinary approach. However, the diversity of healthcare providers involved in Duchenne muscular dystrophy care can result in variations in cardiac care, complicating treatment standardization and patient outcomes. The aim of this report is to provide a roadmap for managing Duchenne muscular dystrophy-associated cardiomyopathy by elucidating the timing and dosage nuances crucial for optimal therapeutic efficacy, ultimately improving cardiac outcomes and quality of life for individuals with Duchenne muscular dystrophy.
Conclusion:
This document seeks to establish a standardized framework for cardiac care in Duchenne muscular dystrophy, aiming to improve cardiac prognosis.
Efficient evidence generation to assess the clinical and economic impact of medical therapies is critical amid rising healthcare costs and aging populations. However, drug development and clinical trials remain far too expensive and inefficient for all stakeholders. On October 25–26, 2023, the Duke Clinical Research Institute brought together leaders from academia, industry, government agencies, patient advocacy, and nonprofit organizations to explore how different entities and influencers in drug development and healthcare can realign incentive structures to efficiently accelerate evidence generation that addresses the highest public health needs. Prominent themes surfaced, including competing research priorities and incentives, inadequate representation of patient populations in clinical trials, opportunities to better leverage existing technology and infrastructure in trial design, and a need for heightened transparency and accountability in research practices. The group determined that together these elements contribute to an inefficient and costly clinical research enterprise, amplifying disparities in population health and sustaining gaps in evidence that impede advancements in equitable healthcare delivery and outcomes. The goal of addressing the identified challenges is to ultimately make clinical trials faster, more inclusive, and more efficient across diverse communities and settings.
Coastal wetlands are hotspots of carbon sequestration, and their conservation and restoration can help to mitigate climate change. However, there remains uncertainty about when and where coastal wetland restoration can most effectively act as natural climate solutions (NCS). Here, we synthesize current understanding to illustrate the requirements for coastal wetland restoration to benefit climate, and discuss potential paths forward that address key uncertainties impeding implementation. To be effective as NCS, coastal wetland restoration projects must accrue climate cooling benefits that would not occur without management action (additionality), must be implementable (feasibility) and must persist over management-relevant timeframes (permanence). Several issues add uncertainty to understanding whether these minimum requirements are met. First, coastal wetlands serve as both a landscape source and sink of carbon for other habitats, increasing uncertainty in additionality. Second, coastal wetlands can potentially migrate outside of project footprints as they respond to sea-level rise, increasing uncertainty in permanence. To address these first two issues, a system-wide approach may be necessary, rather than basing cooling benefits only on changes that occur within project boundaries. Third, the need for NCS to function over management-relevant decadal timescales means that methane responses may need to be included in coastal wetland restoration planning and monitoring. Finally, there is uncertainty about how much data are required to justify restoration action. We summarize the minimum data required to make a binary decision on whether there is a net cooling benefit from a management action, noting that these data are more readily available than the data required to quantify the magnitude of cooling benefits for carbon crediting purposes. By reducing uncertainty, coastal wetland restoration can be implemented at the scale required to contribute significantly to addressing the current climate crisis.
Knowledge of sex differences in risk factors for posttraumatic stress disorder (PTSD) can contribute to the development of refined preventive interventions. Therefore, the aim of this study was to examine if women and men differ in their vulnerability to risk factors for PTSD.
Methods
As part of the longitudinal AURORA study, 2924 patients seeking emergency department (ED) treatment in the acute aftermath of trauma provided self-report assessments of pre-, peri-, and post-traumatic risk factors, as well as 3-month PTSD severity. We systematically examined sex-dependent effects of 16 risk factors that have previously been hypothesized to show different associations with PTSD severity in women and men.
Results
Women reported higher PTSD severity at 3 months post-trauma. Z-score comparisons indicated that for five of the 16 examined risk factors, the association with 3-month PTSD severity was stronger in men than in women. In multivariable models, interaction effects with sex were observed for pre-traumatic anxiety symptoms and acute dissociative symptoms; both showed stronger associations with PTSD in men than in women. Subgroup analyses suggested trauma type-conditional effects.
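The z-score comparisons reported here follow a standard approach for contrasting regression coefficients between two independent groups. Below is a minimal Python sketch of that test; the coefficient and standard-error values are hypothetical placeholders, not data from the AURORA study.

```python
import numpy as np
from scipy import stats

def compare_coefficients(b1, se1, b2, se2):
    """Two-sided z-test for the difference between two independent
    regression coefficients (e.g., men vs. women)."""
    z = (b1 - b2) / np.sqrt(se1**2 + se2**2)
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    return z, p

# Hypothetical standardized coefficients for one risk factor.
z, p = compare_coefficients(b1=0.35, se1=0.05,   # men
                            b2=0.20, se2=0.04)   # women
print(f"z = {z:.2f}, p = {p:.4f}")
```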
Conclusions
Our findings indicate mechanisms to which men might be particularly vulnerable, demonstrating that known PTSD risk factors might behave differently in women and men. Analyses did not identify any risk factors to which women were more vulnerable than men, pointing toward further mechanisms to explain women's higher PTSD risk. Our study illustrates the need for a more systematic examination of sex differences in contributors to PTSD severity after trauma, which may inform refined preventive interventions.
Stress and diabetes coexist in a vicious cycle. Different types of stress lead to diabetes, while diabetes itself is a major life stressor. This was the focus of the Chicago Biomedical Consortium’s 19th annual symposium, “Stress and Human Health: Diabetes,” in November 2022. There, researchers primarily from the Chicago area met to explore how different sources of stress – from the cells to the community – impact diabetes outcomes. Presenters discussed the consequences of stress arising from mutant proteins, obesity, sleep disturbances, environmental pollutants, COVID-19, and racial and socioeconomic disparities. This symposium showcased the latest diabetes research and highlighted promising new treatment approaches for mitigating stress in diabetes.
The U.S. Department of Agriculture–Agricultural Research Service (USDA-ARS) has been a leader in weed science research covering topics ranging from the development and use of integrated weed management (IWM) tactics to basic mechanistic studies, including biotic resistance of desirable plant communities and herbicide resistance. ARS weed scientists have worked in agricultural and natural ecosystems, including agronomic and horticultural crops, pastures, forests, wild lands, aquatic habitats, wetlands, and riparian areas. Through strong partnerships with academia, state agencies, private industry, and numerous federal programs, ARS weed scientists have made contributions to discoveries in the newest fields of robotics and genetics, as well as the traditional and fundamental subjects of weed–crop competition and physiology and integration of weed control tactics and practices. Weed science at ARS is often overshadowed by other research topics; thus, few are aware of the long history of ARS weed science and its important contributions. This review is the result of a symposium held at the Weed Science Society of America’s 62nd Annual Meeting in 2022 that included 10 separate presentations in a virtual Weed Science Webinar Series. The overarching themes of management tactics (IWM, biological control, and automation), basic mechanisms (competition, invasive plant genetics, and herbicide resistance), and ecosystem impacts (invasive plant spread, climate change, conservation, and restoration) represent core ARS weed science research that is dynamic and efficacious and has been a significant component of the agency’s national and international efforts. This review highlights current studies and future directions that exemplify the science and collaborative relationships both within and outside ARS. Given the constraints of weeds and invasive plants on all aspects of food, feed, and fiber systems, there is an acknowledged need to face new challenges, including agricultural and natural resource sustainability, economic resilience and reliability, and societal health and well-being.
The trilobite Needmorella new genus, with type species N. simoni new genus new species from the late Emsian to mid-Eifelian Needmore Shale of West Virginia, is a distinctive member of the subfamily Synphoriinae. It also occurs in the same formation in Pennsylvania and Virginia. It shows little similarity to other Devonian representatives of the subfamily and is considered to have its origins in a morphologically less-derived ancestor because it shares certain similarities with Silurian genera, including the very short anterior cephalic border unmodified by crenulations or spines, S2 that is not largely reduced to a deep pit adaxially, the relatively low inflation of L3, and the well-defined interpleural furrows on the pygidium. Other particularly distinctive characters of the genus include the very long genal spines and the abaxially inflated and expanded posterior pleural bands on the thorax and pygidium that project slightly distally. The conventional concept of the Devonian synphoriine Anchiopsis Delo, 1935 appears to be incompatible with the holotype of the type species, judging from the early illustrations of the specimen, and the genus could be a synonym of Synphoria Clarke, 1894.
The goal of disaster triage at both the prehospital and in-hospital level is to maximize resources and optimize patient outcomes. Of the disaster-specific triage methods developed to guide health care providers, the Simple Triage and Rapid Treatment (START) algorithm has become the most popular system world-wide. Despite its appeal and global application, the accuracy and effectiveness of the START protocol is not well-known.
Objectives:
The purpose of this meta-analysis was two-fold: (1) to estimate overall accuracy, under-triage, and over-triage of the START method when used by providers across a variety of backgrounds; and (2) to obtain specific accuracy for each of the four START categories: red, yellow, green, and black.
Methods:
A systematic review and meta-analysis was conducted that searched Medline (OVID), Embase (OVID), Global Health (OVID), CINAHL (EBSCO), Compendex (Engineering Village), SCOPUS, ProQuest Dissertations and Theses Global, Cochrane Library, and PROSPERO. The results were expanded by hand searching of journals, reference lists, and the grey literature. The search was executed in March 2020. The review considered the participants, interventions, context, and outcome (PICO) framework and followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Accuracy outcomes are presented as means with 95% confidence intervals (CI) as calculated using the binomial method. Pooled meta-analyses of accuracy outcomes using fixed and random effects models were calculated and the heterogeneity was assessed using the Q statistic.
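As a rough illustration of the accuracy outcomes described above, the Python sketch below computes an exact (Clopper-Pearson) binomial confidence interval for a single study's triage accuracy and a fixed-effect pooled proportion with Cochran's Q statistic for heterogeneity. The study counts are hypothetical, and the logit-scale pooling shown is one common convention rather than the exact procedure used in this review.

```python
import numpy as np
from scipy import stats

def clopper_pearson(k, n, alpha=0.05):
    """Exact binomial CI for a proportion of k correct triages out of n."""
    lo = stats.beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = stats.beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

def pooled_fixed_effect(k, n):
    """Inverse-variance fixed-effect pooling on the logit scale,
    with Cochran's Q statistic for heterogeneity."""
    p = k / n
    logit = np.log(p / (1 - p))
    var = 1 / k + 1 / (n - k)                    # variance of the logit
    w = 1 / var
    pooled_logit = np.sum(w * logit) / np.sum(w)
    q = np.sum(w * (logit - pooled_logit) ** 2)  # ~ chi2, len(k) - 1 df
    p_het = 1 - stats.chi2.cdf(q, df=len(k) - 1)
    pooled = 1 / (1 + np.exp(-pooled_logit))
    return pooled, q, p_het

# Hypothetical per-study counts of correctly triaged victims.
k = np.array([80, 120, 45])
n = np.array([100, 150, 70])
print(clopper_pearson(80, 100))
print(pooled_fixed_effect(k, n))
```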
Results:
Thirty-two studies were included in the review, most of which utilized a non-randomized study design (84%). Proportion of victims correctly triaged using START ranged from 0.27 to 0.99 with an overall triage accuracy of 0.73 (95% CI, 0.67 to 0.78). Proportion of over-triage was 0.14 (95% CI, 0.11 to 0.17) while the proportion of under-triage was 0.10 (95% CI, 0.072 to 0.14). There was significant heterogeneity across studies for all outcomes (P < .0001).
Conclusion:
This meta-analysis suggests that START is not accurate enough to serve as a reliable disaster triage tool. Although the accuracy of START may be similar to other models of disaster triage, development of a more accurate triage method should be urgently pursued.
To describe the cumulative seroprevalence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) antibodies during the coronavirus disease 2019 (COVID-19) pandemic among employees of a large pediatric healthcare system.
Design, setting, and participants:
Prospective observational cohort study open to adult employees at the Children’s Hospital of Philadelphia, conducted April 20–December 17, 2020.
Methods:
Employees were recruited starting with high-risk exposure groups, utilizing e-mails, flyers, and announcements at virtual town hall meetings. At baseline, 1 month, 2 months, and 6 months, participants reported occupational and community exposures and gave a blood sample for SARS-CoV-2 antibody measurement by enzyme-linked immunosorbent assays (ELISAs). A post hoc Cox proportional hazards regression model was performed to identify factors associated with increased risk for seropositivity.
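A post hoc Cox model like the one described can be fit with the lifelines library in Python. The sketch below is illustrative only; the toy data and the column names (time_to_event, seroconverted, direct_care, community_exposure) are hypothetical stand-ins for the study's variables.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical cohort data: one row per employee (toy values).
df = pd.DataFrame({
    "time_to_event": [180, 92, 180, 45, 180, 130, 60, 180],  # days of follow-up
    "seroconverted": [0, 1, 0, 1, 0, 1, 1, 0],               # 1 = seropositive
    "direct_care": [1, 1, 0, 1, 0, 0, 1, 1],                 # direct patient care
    "community_exposure": [0, 1, 1, 1, 0, 0, 1, 0],          # case outside work
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_event", event_col="seroconverted")
cph.print_summary()  # hazard ratios with 95% confidence intervals
```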
Results:
In total, 1,740 employees were enrolled. At 6 months, the cumulative seroprevalence was 5.3%, which was below estimated community point seroprevalence. Seroprevalence was 5.8% among employees who provided direct care and was 3.4% among employees who did not perform direct patient care. Most participants who were seropositive at baseline remained positive at follow-up assessments. In a post hoc analysis, direct patient care (hazard ratio [HR], 1.95; 95% confidence interval [CI], 1.03–3.68), Black race (HR, 2.70; 95% CI, 1.24–5.87), and exposure to a confirmed case in a nonhealthcare setting (HR, 4.32; 95% CI, 2.71–6.88) were associated with statistically significant increased risk for seropositivity.
Conclusions:
Employee SARS-CoV-2 seroprevalence rates remained below the point-prevalence rates of the surrounding community. Provision of direct patient care, Black race, and exposure to a confirmed case in a nonhealthcare setting conferred increased risk. These data can inform occupational protection measures to maximize protection of employees within the workplace during future COVID-19 waves or other epidemics.
Recent well-powered genome-wide association studies have enhanced prediction of substance use outcomes via polygenic scores (PGSs). Here, we test (1) whether these scores contribute to prediction over and above family history, (2) the extent to which PGS prediction reflects inherited genetic variation v. demography (population stratification and assortative mating) and indirect genetic effects of parents (genetic nurture), and (3) whether PGS prediction is mediated by behavioral disinhibition prior to substance use onset.
Methods
PGSs for alcohol, cannabis, and nicotine use/use disorder were calculated for Minnesota Twin Family Study participants (N = 2483, 1565 monozygotic/918 dizygotic). Twins' parents were assessed for histories of substance use disorder. Twins were assessed for behavioral disinhibition at age 11 and substance use from ages 14 to 24. PGS prediction of substance use was examined using linear mixed-effects, within-twin pair, and structural equation models.
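The within-pair versus between-pair comparison described here is commonly implemented by splitting each twin's PGS into a family mean and a deviation from that mean. Below is a minimal sketch using statsmodels, with hypothetical toy data and variable names (family_id, pgs, substance_use) rather than the study's actual measures.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format twin data: two rows per pair (toy values).
df = pd.DataFrame({
    "family_id":     [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "pgs":           [0.5, 0.7, -0.2, -0.1, 1.1, 0.9, -0.8, -0.6, 0.1, 0.3],
    "substance_use": [2.1, 2.6, 1.0, 1.3, 3.2, 2.9, 0.6, 0.8, 1.5, 1.9],
})

# Decompose each twin's PGS into a between-family component (pair mean)
# and a within-family component (deviation from the pair mean).
df["pgs_between"] = df.groupby("family_id")["pgs"].transform("mean")
df["pgs_within"] = df["pgs"] - df["pgs_between"]

# Linear mixed-effects model with a random intercept per twin pair.
model = smf.mixedlm("substance_use ~ pgs_between + pgs_within",
                    data=df, groups=df["family_id"])
result = model.fit()
print(result.summary())
# A pgs_within coefficient much smaller than pgs_between suggests that
# part of the PGS association reflects demography or genetic nurture.
```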
Results
Nearly all PGS measures were associated with multiple types of substance use independently of family history. However, most within-pair PGS prediction estimates were substantially smaller than the corresponding between-pair estimates, suggesting that prediction is driven in part by demography and indirect genetic effects of parents. Path analyses indicated the effects of both PGSs and family history on substance use were mediated via disinhibition in preadolescence.
Conclusions
PGSs capturing risk of substance use and use disorder can be combined with family history measures to augment prediction of substance use outcomes. Results highlight indirect sources of genetic associations and preadolescent elevations in behavioral disinhibition as two routes through which these scores may relate to substance use.
We present an overview of the SkyMapper optical follow-up programme for gravitational-wave event triggers from the LIGO/Virgo observatories, which aims at identifying early GW170817-like kilonovae out to $\sim200\,\mathrm{Mpc}$ distance. We describe our robotic facility for rapid transient follow-up, which can target most of the sky at $\delta<+10\deg$ to a depth of $i_\mathrm{AB}\approx 20\,\mathrm{mag}$. We have implemented a new software pipeline to receive LIGO/Virgo alerts, schedule observations and examine the incoming real-time data stream for transient candidates. We adopt a real-bogus classifier using ensemble-based machine learning techniques, attaining high completeness ($\sim98\%$) and purity ($\sim91\%$) over our whole magnitude range. Applying further filtering to remove common image artefacts and known sources of transients, such as asteroids and variable stars, reduces the number of candidates by a factor of more than 10. We demonstrate the system performance with data obtained for GW190425, a binary neutron star merger detected during the LIGO/Virgo O3 observing campaign. In time for the LIGO/Virgo O4 run, we will have deeper reference images allowing transient detection to $i_\mathrm{AB}\approx 21\,\mathrm{mag}$.
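As an illustration of the ensemble-based real-bogus classification step, the sketch below trains a random forest on labelled detection features and reports completeness (recall) and purity (precision). The feature matrix and labels are synthetic placeholders; the pipeline's actual features and model are not specified here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

# Hypothetical features per candidate detection (e.g., shape moments,
# PSF-fit residuals, nearby-source flags) and real/bogus labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000)) > 0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

# Completeness = fraction of real transients recovered;
# purity = fraction of flagged candidates that are real.
print("completeness:", recall_score(y_test, pred))
print("purity:", precision_score(y_test, pred))
```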
To evaluate broad-spectrum intravenous antibiotic use before and after the implementation of a revised febrile neutropenia management algorithm in a population of adults with hematologic malignancies.
Design:
Quasi-experimental study.
Setting and population:
Patients admitted between 2014 and 2018 to the Adult Malignant Hematology service of an acute-care hospital in the United States.
Methods:
Aggregate data for the adult malignant hematology service were obtained for population-level antibiotic use: days of therapy (DOT), C. difficile infections, bacterial bloodstream infections, intensive care unit (ICU) length of stay, and in-hospital mortality. All rates are reported per 1,000 patient days before the implementation of a febrile neutropenia management algorithm (July 2014–May 2016) and after the intervention (June 2016–December 2018). These data were compared using interrupted time series analysis.
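Interrupted time series comparisons like this one are often analysed with segmented regression. The sketch below shows that approach in Python using statsmodels, with simulated monthly days-of-therapy values rather than the study's actual counts; the intervention month and variable names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly DOT per 1,000 patient days, July 2014-Dec 2018.
rng = np.random.default_rng(0)
n_months = 54
intervention = 23  # June 2016, when the revised algorithm started
df = pd.DataFrame({
    "month": np.arange(n_months),
    "dot": np.r_[rng.normal(120, 5, intervention),
                 rng.normal(100, 5, n_months - intervention)],
})
df["post"] = (df["month"] >= intervention).astype(int)            # level change
df["months_after"] = np.maximum(0, df["month"] - intervention)    # slope change

# Segmented regression: baseline trend, immediate level shift, trend change.
fit = smf.ols("dot ~ month + post + months_after", data=df).fit()
print(fit.summary())
```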
Results:
In total, 2,014 patients accounted for 6,788 encounters and 89,612 patient days during the study period. Broad-spectrum intravenous (IV) antibiotic use decreased by 5.7% with immediate reductions in meropenem and vancomycin use by 22 (P = .02) and 15 (P = .001) DOT per 1,000 patient days, respectively. Bacterial bloodstream infection rates significantly increased following algorithm implementation. No differences were observed in the use of other antibiotics or safety outcomes including C. difficile infection, ICU length of stay, and in-hospital mortality.
Conclusions:
Reductions in vancomycin and meropenem were observed following the implementation of a more stringent febrile neutropenia management algorithm, without evidence of adverse outcomes. Successful implementation occurred through a collaborative effort and continues to be a core reinforcement strategy at our institution. Future studies evaluating patient-level data may identify further stewardship opportunities in this population.
Item 9 of the Patient Health Questionnaire-9 (PHQ-9) queries about thoughts of death and self-harm, but not suicidality. Although it is sometimes used to assess suicide risk, most positive responses are not associated with suicidality. The PHQ-8, which omits Item 9, is thus increasingly used in research. We assessed the equivalency of the PHQ-8 and PHQ-9 in terms of total score correlations and diagnostic accuracy for detecting major depression.
Methods
We conducted an individual patient data meta-analysis. We fit bivariate random-effects models to assess diagnostic accuracy.
Results
16 742 participants (2097 major depression cases) from 54 studies were included. The correlation between PHQ-8 and PHQ-9 scores was 0.996 (95% confidence interval 0.996 to 0.996). The standard cutoff score of 10 for the PHQ-9 maximized sensitivity + specificity for the PHQ-8 among studies that used a semi-structured diagnostic interview reference standard (N = 27). At cutoff 10, the PHQ-8 was less sensitive by 0.02 (−0.06 to 0.00) and more specific by 0.01 (0.00 to 0.01) among those studies (N = 27), with similar results for studies that used other types of interviews (N = 27). For all 54 primary studies combined, across all cutoffs, the PHQ-8 was less sensitive than the PHQ-9 by 0.00 to 0.05 (0.03 at cutoff 10), and specificity was within 0.01 for all cutoffs (0.00 to 0.01).
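The cutoff comparisons above reduce to simple confusion-matrix arithmetic. The sketch below computes sensitivity and specificity for a given PHQ cutoff against a reference-standard diagnosis, using hypothetical scores and diagnoses rather than the meta-analysis data.

```python
import numpy as np

def sens_spec(scores, diagnosis, cutoff=10):
    """Sensitivity and specificity of score >= cutoff against a
    reference-standard major depression diagnosis (1 = case)."""
    positive = np.asarray(scores) >= cutoff
    diagnosis = np.asarray(diagnosis).astype(bool)
    sensitivity = positive[diagnosis].mean()       # cases screening positive
    specificity = (~positive[~diagnosis]).mean()   # non-cases screening negative
    return sensitivity, specificity

# Hypothetical PHQ-8 totals and interview-based diagnoses.
scores = [3, 12, 9, 17, 10, 5, 14, 2]
diagnosis = [0, 1, 0, 1, 1, 0, 1, 0]
print(sens_spec(scores, diagnosis, cutoff=10))
```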
Conclusions
PHQ-8 and PHQ-9 total scores were similar. Sensitivity may be minimally reduced with the PHQ-8, but specificity is similar.
Different diagnostic interviews are used as reference standards for major depression classification in research. Semi-structured interviews involve clinical judgement, whereas fully structured interviews are completely scripted. The Mini International Neuropsychiatric Interview (MINI), a brief fully structured interview, is also sometimes used. It is not known whether interview method is associated with probability of major depression classification.
Aims
To evaluate the association between interview method and odds of major depression classification, controlling for depressive symptom scores and participant characteristics.
Method
Data collected for an individual participant data meta-analysis of Patient Health Questionnaire-9 (PHQ-9) diagnostic accuracy were analysed and binomial generalised linear mixed models were fit.
Results
A total of 17 158 participants (2287 with major depression) from 57 primary studies were analysed. Among fully structured interviews, odds of major depression were higher for the MINI compared with the Composite International Diagnostic Interview (CIDI) (odds ratio (OR) = 2.10; 95% CI = 1.15–3.87). Compared with semi-structured interviews, fully structured interviews (MINI excluded) were non-significantly more likely to classify participants with low-level depressive symptoms (PHQ-9 scores ≤6) as having major depression (OR = 3.13; 95% CI = 0.98–10.00), similarly likely for moderate-level symptoms (PHQ-9 scores 7–15) (OR = 0.96; 95% CI = 0.56–1.66) and significantly less likely for high-level symptoms (PHQ-9 scores ≥16) (OR = 0.50; 95% CI = 0.26–0.97).
Conclusions
The MINI may identify more people as depressed than the CIDI, and semi-structured and fully structured interviews may not be interchangeable methods, but these results should be replicated.
Declaration of interest
Drs Jetté and Patten declare that they received a grant, outside the submitted work, from the Hotchkiss Brain Institute, which was jointly funded by the Institute and Pfizer. Pfizer was the original sponsor of the development of the PHQ-9, which is now in the public domain. Dr Chan is a steering committee member or consultant of Astra Zeneca, Bayer, Lilly, MSD and Pfizer. She has received sponsorships and honorarium for giving lectures and providing consultancy and her affiliated institution has received research grants from these companies. Dr Hegerl declares that within the past 3 years, he was an advisory board member for Lundbeck, Servier and Otsuka Pharma; a consultant for Bayer Pharma; and a speaker for Medice Arzneimittel, Novartis, and Roche Pharma, all outside the submitted work. Dr Inagaki declares that he has received grants from Novartis Pharma, lecture fees from Pfizer, Mochida, Shionogi, Sumitomo Dainippon Pharma, Daiichi-Sankyo, Meiji Seika and Takeda, and royalties from Nippon Hyoron Sha, Nanzando, Seiwa Shoten, Igaku-shoin and Technomics, all outside of the submitted work. Dr Yamada reports personal fees from Meiji Seika Pharma Co., Ltd., MSD K.K., Asahi Kasei Pharma Corporation, Seishin Shobo, Seiwa Shoten Co., Ltd., Igaku-shoin Ltd., Chugai Igakusha and Sentan Igakusha, all outside the submitted work. All other authors declare no competing interests. No funder had any role in the design and conduct of the study; collection, management, analysis and interpretation of the data; preparation, review or approval of the manuscript; and decision to submit the manuscript for publication.
Background: Measurement of cognitive behavioural therapy (CBT) competency is often resource intensive. A popular emerging alternative to independent observers’ ratings is using other perspectives for rating competency. Aims: This pilot study compared ratings of CBT competency from four perspectives – patient, therapist, supervisor and independent observer – using the Cognitive Therapy Scale (CTS). Method: Patients (n = 12, 75% female, mean age 30.5 years) and therapists (n = 5, female, mean age 26.6 years) completed the CTS after therapy sessions, and clinical supervisors and independent observers rated recordings of the same session. Results: Analyses of variance revealed that therapist average CTS competency ratings were not different from supervisor ratings, and supervisor ratings were not different from independent observer ratings; however, therapist ratings were higher than independent observer ratings and patient ratings were higher than all other raters. Conclusions: Raters differed in competency ratings. Implications for potential use and adaptation of CBT competency measurement methods to enhance training and implementation are discussed.
Acute kidney injury after cardiac surgery is a frequent and serious complication among children with congenital heart disease (CHD) and adults with acquired heart disease; however, the significance of kidney injury in adults after congenital heart surgery is unknown. The primary objective of this study was to determine the incidence of acute kidney injury after surgery for adult CHD. Secondary objectives included determination of risk factors and associations with clinical outcomes.
Methods
This single-centre, retrospective cohort study was performed in a quaternary cardiovascular ICU in a paediatric hospital including all consecutive patients aged ⩾18 years between 2010 and 2013.
Results
Data from 118 patients with a median age of 29 years undergoing cardiac surgery were analysed. Using Kidney Disease: Improving Global Outcomes creatinine criteria, 36% of patients developed kidney injury, with 5% being moderate to severe (stage 2/3). Among higher-complexity surgeries, incidence was 59%. Age ⩾35 years, preoperative left ventricular dysfunction, preoperative arrhythmia, longer bypass time, higher Risk Adjustment for Congenital Heart Surgery-1 category, and perioperative vancomycin use were significant risk factors for kidney injury development. In multivariable analysis, age ⩾35 years and vancomycin use were significant predictors. Those with kidney injury were more likely to have prolonged duration of mechanical ventilation and cardiovascular ICU stay in the univariable regression analysis.
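The KDIGO creatinine criteria referenced above map the ratio of peak to baseline creatinine (plus absolute thresholds) onto AKI stages. A simplified Python sketch is below; it covers only the creatinine criteria (not urine output or renal replacement therapy) and is illustrative, not a clinical tool.

```python
def kdigo_stage(baseline_cr, peak_cr, abs_rise_48h=None):
    """Simplified KDIGO AKI staging from serum creatinine (mg/dL).
    Ignores urine-output criteria and renal replacement therapy."""
    ratio = peak_cr / baseline_cr
    if ratio >= 3.0 or peak_cr >= 4.0:
        return 3  # stage 3: >=3.0x baseline or creatinine >=4.0 mg/dL
    if ratio >= 2.0:
        return 2  # stage 2: 2.0-2.9x baseline
    if ratio >= 1.5 or (abs_rise_48h is not None and abs_rise_48h >= 0.3):
        return 1  # stage 1: 1.5-1.9x baseline or >=0.3 mg/dL rise in 48 h
    return 0      # no AKI by creatinine criteria

print(kdigo_stage(baseline_cr=0.8, peak_cr=1.3, abs_rise_48h=0.5))  # stage 1
print(kdigo_stage(baseline_cr=0.9, peak_cr=2.0))                    # stage 2
```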
Conclusions
We demonstrated that acute kidney injury is a frequent complication in adults after surgery for CHD and is associated with poor outcomes. Risk factors for development were identified but largely not modifiable. Further investigation within this cohort is necessary to better understand the problem of kidney injury.
Objectives: Neuropsychological studies of posttraumatic stress disorder (PTSD) have revealed deficits in attention/working memory, processing speed, executive functioning, and retrospective memory. However, little is known about prospective memory (PM) in PTSD, a clinically relevant aspect of episodic memory that supports the encoding and retrieval of intentions for future actions. Methods: Here we examined PM performance in 40 veterans with PTSD compared to 38 trauma comparison (TC) veterans who were exposed to combat but did not develop PTSD. All participants were administered the Memory for Intentions Test (MIST; Raskin, Buckheit, & Sherrod, 2010), a standardized and validated measure of PM, alongside a comprehensive neurocognitive battery, structured diagnostic interviews for psychiatric conditions, and behavioral questionnaires. Results: Veterans with PTSD performed moderately lower than TC on time-based PM, with errors primarily characterized as PM failure errors (i.e., omissions). However, groups did not differ in event-based PM, ongoing task performance, or post-test recognition of PM intentions for each trial. Lower time-based PM performance was specifically related to hyperarousal symptoms of PTSD. Time-based performance was also associated with neuropsychological measures of retrospective memory and executive functions in the PTSD group. Nevertheless, PTSD was significantly associated with poorer PM above and beyond age and performance in retrospective memory and executive functions. Discussion: Results provide initial evidence of PM dysfunction in PTSD, especially in strategic monitoring during time-based PM tasks. Findings have potential implications for everyday functioning and health behaviors in persons with PTSD, and deserve replication and future study. (JINS, 2016, 22, 724–734)
Objectives: Numerous studies have shown that individuals with posttraumatic stress disorder (PTSD) display reduced performances on neuropsychological tests, although most prior research has not adequately accounted for comorbidities or performance validity concerns that are common in this population and could partially account for the observed neurocognitive findings. Moreover, few studies have examined the functional implications of neuropsychological results in PTSD. Methods: We examined neuropsychological functioning in 44 veterans with PTSD and 40 veteran trauma comparison (TC) participants with combat exposure and no PTSD. Results: After excluding four veterans with PTSD for performance validity concerns, multivariate analyses of variance by neurocognitive domain revealed significantly worse performance by the PTSD group in the domains of speed of information processing (p=.035) and executive functions (p=.017), but no group differences in attention/working memory, verbal/language functioning, visuoconstruction, or episodic memory. Group differences by PTSD status were still present after covarying for depression, a history of head injuries, and substance use disorders. Executive functioning performance was associated with poorer self-reported occupational functioning and physical health-related quality of life, while speed of information processing performance was associated with poorer physical health-related quality of life. Discussion: These results are generally consistent with a fronto-limbic conceptualization of PTSD-associated neuropsychological dysfunction and show that cognitive functioning may be associated with critical functional outcomes. Taken together, results suggest that consideration of neurocognitive functioning may enhance the clinical management of individuals with PTSD. (JINS, 2016, 22, 399–411)