Cost-effectiveness models fully informed by real-world epidemiological parameters yield the best results, but such parameters are costly to obtain. Calibrating a model against real-world data/evidence (RWD/E) on routine health indicators offers an alternative that can improve the validity and acceptability of the results. We calibrated the transition probabilities of the reference chemotherapy treatment using RWE on patient overall survival (OS) to model the survival benefit of adjuvant trastuzumab in Indonesia.
Methods
A Markov model comprising four health states was initially parameterized using reference-treatment transition probabilities obtained from published international evidence. We then calibrated these probabilities, targeting a 2-year OS of 86.11 percent from the RWE sourced from hospital registries. We compared projected OS duration and life-years gained (LYG) before and after calibration for the Nelder–Mead, Bound Optimization BY Quadratic Approximation (BOBYQA), and generalized reduced gradient (GRG) nonlinear optimization methods.
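To make the calibration step concrete, the following is a minimal Python sketch of fitting Markov transition probabilities to a 2-year OS target with Nelder–Mead. The state structure, cycle length, and starting probabilities are illustrative assumptions, not the study's actual model.

```python
import numpy as np
from scipy.optimize import minimize

TARGET_OS_2YR = 0.8611   # 2-year overall survival target from the RWE
CYCLES = 24              # assumed monthly cycles over two years

def two_year_os(params):
    """Project 2-year OS from a simple four-state Markov cohort model.

    States: 0 = disease-free, 1 = recurrence, 2 = metastatic, 3 = dead.
    The progression structure and probabilities are placeholders.
    """
    p01, p12, p23 = params
    P = np.array([
        [1 - p01, p01,     0.0,     0.0],
        [0.0,     1 - p12, p12,     0.0],
        [0.0,     0.0,     1 - p23, p23],
        [0.0,     0.0,     0.0,     1.0],
    ])
    state = np.array([1.0, 0.0, 0.0, 0.0])  # cohort starts disease-free
    for _ in range(CYCLES):
        state = state @ P
    return 1.0 - state[3]                   # proportion still alive

def loss(params):
    # Squared deviation of the projected OS from the RWE target
    return (two_year_os(params) - TARGET_OS_2YR) ** 2

# Start from hypothetical pre-calibrated, literature-based probabilities
fit = minimize(loss, x0=[0.02, 0.05, 0.08], method="Nelder-Mead")
print(fit.x, two_year_os(fit.x))
```

The same loss function could be handed to BOBYQA or a GRG solver; only the optimizer changes, which is what the study's method comparison varies.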
Results
The pre-calibrated transition probabilities overestimated the 2-year OS (92.25 percent). GRG nonlinear performed best, showing the smallest difference from the RWD/E OS. After calibration, projected OS durations were significantly lower than the pre-calibrated estimates across all optimization methods, for both standard chemotherapy (~7.50 vs. 11.00 years) and adjuvant trastuzumab (~9.50 vs. 12.94 years). LYG measures were, however, similar (~2 years) between the pre-calibrated and calibrated models.
Conclusions
RWD/E calibration resulted in realistically lower survival estimates. Despite the small difference in LYG, calibration is useful for adapting the external evidence commonly used to derive transition probabilities to the policy context, thereby enhancing the validity and acceptability of the modeling results.
Psychiatric disorders are complex and multifaceted conditions that profoundly impact many aspects of an individual’s life. Although the neurobiology of these disorders is not fully understood, extensive research suggests intricate interactions among genetic factors, changes in brain structure, disruptions in neurotransmitter pathways, and environmental influences.
In the case of psychotic disorders, such as schizophrenia, a strong genetic component has been identified as a key factor in the development of psychosis. Moreover, alterations in dopamine function and structural brain changes that result in volume loss appear to be pervasive in people affected by these disorders. Meanwhile, mood disorders, including major depressive disorder and bipolar disorder, are characterized by disruptions in the neurotransmitter systems responsible for mood regulation, such as serotonin, norepinephrine, and dopamine. Anxiety and personality disorders also exhibit neurotransmitter dysfunction and neuroanatomical changes, in addition to showing a genetic overlap with mood and psychotic disorders.
Understanding the underlying mechanisms in the pathophysiology of these conditions is of paramount importance and involves integrating findings from various research areas, including at the molecular and cellular levels. This brief overview aims to highlight some of the important developments in our current understanding of psychiatric disorders. Future research should aim to incorporate a comprehensive approach to further unravel the complexity of these disorders and pave the way for targeted therapeutic strategies and effective treatments to improve the lives of individuals afflicted by them.
Working memory encompasses the limited incoming information that can be held in mind for cognitive processing. To date, we have little information on the effects of bilingualism on working memory because, absent evidence, working memory tasks cannot be assumed to measure the same constructs across language groups. To garner evidence regarding the measurement equivalence in Spanish and English, we examined second-grade children with typical development, including 80 bilingual Spanish–English speakers and 167 monolingual English speakers in the United States, using a test battery for which structural equation models have been tested – the Comprehensive Assessment Battery for Children – Working Memory (CABC-WM). Results established measurement invariance across groups up to the level of scalar invariance.
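For readers unfamiliar with invariance testing, the scalar-invariance conclusion rests on comparing nested multi-group models. Here is a minimal sketch of the chi-square difference test behind such a comparison; the fit statistics are made up for illustration and are not the CABC-WM results.

```python
from scipy.stats import chi2

# Hypothetical fit statistics for two nested multi-group models
chisq_metric, df_metric = 412.3, 180   # loadings constrained equal
chisq_scalar, df_scalar = 425.1, 192   # loadings + intercepts constrained

delta_chisq = chisq_scalar - chisq_metric
delta_df = df_scalar - df_metric
p_value = chi2.sf(delta_chisq, delta_df)

# A non-significant p suggests the added intercept constraints do not
# worsen fit, i.e., scalar invariance is tenable across language groups.
print(f"delta chi2 = {delta_chisq:.1f}, delta df = {delta_df}, p = {p_value:.3f}")
```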
This Element surveys the various lines of work that have applied algorithmic, formal, mathematical, statistical, and/or probabilistic methods to the study of phonology and the computational problems it solves. Topics covered include: how quantitative and/or computational methods have been used in research on both rule- and constraint-based theories of grammar, including questions about how grammars are learned from data; how best to account for gradience as observed in acceptability judgments and in the relative frequencies of different structures in the lexicon; what formal language theory, model theory, and information theory can contribute, and have contributed, to the study of phonology; and what new directions in connectionist modeling are being explored. The overarching goal is to highlight how work grounded in these various methods and theoretical orientations is distinct but also interconnected, and how central quantitative and computational approaches have become to the research in and teaching of phonology.
This is a revision of John Trimmer’s English translation of Schrödinger’s famous ‘cat paper’, originally published in three parts in Naturwissenschaften in 1935.
Estimates of the economic costs of climate change rely on guesswork in the face of huge uncertainties, and on arbitrary judgements about what is important. The models can produce any number their creators want them to, and they typically trivialise the risks. Despite being described as ‘worse than useless’ by leading academics, economic analysis of this kind has been credited with a Nobel Prize, and it continues to inform government policy.
Amino acids have been detected in some meteorites and are readily synthesized in prebiotic experiments. These molecules may have been precursors of oligomers and polymers on the early Earth. Such reactions likely took place in protected, confined spaces on the porous surface of olivine and in the interlayer nanospace of montmorillonite. This study describes experimental and theoretical research on the sorption of l-alanine onto the surfaces of two silicate minerals, olivine and montmorillonite. The kinetics of sorption of this amino acid were measured in media of different pH. The sorption was also studied at the atomic scale by means of quantum mechanical calculations, which found it to be energetically favourable. These results strongly support the premise that minerals could have actively participated in prebiotic reactions.
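The abstract does not name the rate law that was fitted; as one common choice for amino-acid sorption kinetics, a pseudo-second-order fit might look like the following sketch, with invented data points.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k):
    """Sorbed amount q(t); qe = equilibrium uptake, k = rate constant."""
    return (qe**2 * k * t) / (1.0 + qe * k * t)

# Hypothetical uptake data: time in minutes, sorbed amount in mg/g
t = np.array([5, 15, 30, 60, 120, 240], dtype=float)
q = np.array([0.8, 1.9, 2.8, 3.6, 4.1, 4.3])

(qe_fit, k_fit), _ = curve_fit(pseudo_second_order, t, q, p0=[5.0, 0.01])
print(f"qe ~ {qe_fit:.2f} mg/g, k ~ {k_fit:.4f} g/(mg*min)")
```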
Over the years, the Serengeti has been a model ecosystem for answering basic ecological questions about the distribution and abundance of organisms, populations, and species, and about how different species interact with each other and with their environment. Tony Sinclair and many other researchers have addressed some of these questions, and continue to work on understanding important biotic and abiotic linkages that influence ecosystem functioning. In common with all types of scientific inquiry, ecologists use predictions to test hypotheses about ecological processes; this approach is highlighted by Sinclair’s research that explored why buffalo and wildebeest populations were rapidly expanding. Like other scientists, ecologists use observation, modeling, and experimentation to generate and test hypotheses. However, in contrast with much biological inquiry, ecologists ask questions that link numerous levels of the biological hierarchy, from molecular to global ecology.
Forecasting elections is a high-risk, high-reward endeavor. Today’s polling rock star is tomorrow’s has-been. It is a high-pressure gig. Public opinion polls have been a staple of election forecasting for almost ninety years. But single-source predictions are an imperfect means of forecasting, as we detailed in the preceding chapter. One of the most telling examples of this in recent years is the 2016 US presidential election. In this chapter, we will examine public opinion as an election forecast input. We organize election prediction into three broad buckets: (1) heuristics models, (2) poll-based models, and (3) fundamentals models.
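As a toy illustration of the poll-based bucket, a recency- and sample-size-weighted polling average might look like this sketch; the polls and the decay rate are hypothetical.

```python
polls = [  # (days_before_election, candidate_share_pct, sample_size)
    (20, 48.0, 900),
    (10, 49.5, 1200),
    (3, 51.0, 1000),
]

HALF_LIFE_DAYS = 7.0  # a poll's weight halves every week (assumed decay)

num = den = 0.0
for days_out, share, n in polls:
    weight = n * 0.5 ** (days_out / HALF_LIFE_DAYS)  # size x recency
    num += weight * share
    den += weight

print(f"Weighted polling average: {num / den:.1f}%")
```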
How do children process language as they get older? Is there continuity in the functions assigned to specific structures? And what changes in their processing and their representations as they acquire more language? Children appear to use bracketing (finding boundaries), reference (linking to meanings), and clustering (grouping units that belong together) as they analyze the speech stream and extract recurring units, word classes, and larger constructions. Comprehension precedes production, which allows children to monitor and repair productions that don’t match the adult forms they have represented in memory. Children also track the frequency of types and tokens; they use types in setting up paradigms and identifying regular versus irregular forms. The amount of experience with language (and the diversity of settings), plus feedback and practice, also accounts for individual differences in the paths followed during acquisition. Ultimately, models of the process of acquisition need to incorporate all of this to account for how acquisition takes place.
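The type/token distinction the passage relies on is easy to make concrete: tokens are word occurrences, types are distinct word forms. A minimal sketch with an invented utterance:

```python
from collections import Counter

utterance = "the dog saw the cat and the cat saw the dog".split()

tokens = len(utterance)        # 11 word occurrences
types = Counter(utterance)     # 5 distinct forms with their counts

print(f"tokens = {tokens}, types = {len(types)}")
print(f"type-token ratio = {len(types) / tokens:.2f}")
print(types.most_common(3))    # high-frequency forms, e.g. 'the'
```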
This chapter introduces you to foundational knowledge regarding frameworks and models which is applied in later chapters. Theoretical models and frameworks serve as the ‘connective tissue that meshes theory and practice’. The chapter presents an overview of some of the most pertinent models and frameworks that can support you in designing lessons or learning experiences that incorporate digital technologies. It also highlights how you can reflect on the integration of technology into your teaching.
This chapter begins with models of educator knowledge, TPACK and the UNESCO ICT model, followed by the WHO workflow that helps you plan for using digital technologies in learning. The chapter also examines models and frameworks for considering the degree of integration of technology into teaching (SAMR and RAT/PICRAT) and concludes with educator acceptance models (TAM and CBAM).
Inferences are never assumption-free. Data summaries that do not account for all relevant effects readily mislead. Distributions for the Pearson correlation and for counts are noted, along with extensions that handle extra-binomial and extra-Poisson variation. Notions of statistical power are introduced. Resampling methods, namely the bootstrap and permutation tests, extend the available inferential approaches. Regression with a single explanatory variable is used as a context in which to introduce residual plots, outliers, influence, robust regression, and standard errors of predicted values. There are two regression lines: that of y on x and that of x on y. Power transformations, with the logarithmic transformation as a special case, are often effective in giving a linear relationship. The training/test approach, and the closely allied cross-validation approach, can be important for avoiding over-fitting. Other topics include one- and two-way comparisons, adjustments when there are multiple comparisons, and the estimation of false discovery rates when there is severe multiplicity. Discussions of theories of inference, including likelihood, Bayes factors, and other Bayesian perspectives, end the chapter.
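As one example of the resampling methods mentioned, a permutation test for the Pearson correlation can be sketched as follows, using simulated data.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=30)
y = 0.4 * x + rng.normal(size=30)   # weakly correlated by construction

def pearson_r(a, b):
    return np.corrcoef(a, b)[0, 1]

observed = pearson_r(x, y)

# Null distribution: shuffling y breaks any x-y association
perm_r = np.array([pearson_r(x, rng.permutation(y)) for _ in range(10_000)])
p_value = np.mean(np.abs(perm_r) >= abs(observed))

print(f"r = {observed:.3f}, two-sided permutation p = {p_value:.4f}")
```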
Trauma is a common cause of morbidity and mortality in humans and companion animals. Recent efforts in procedural development, training, quality systems, data collection, and research have positively impacted patient outcomes; however, significant unmet need still exists. Coordinated efforts by collaborative, translational, multidisciplinary teams to advance trauma care and improve outcomes have the potential to benefit both human and veterinary patient populations. Strategic use of veterinary clinical trials informed by expertise along the research spectrum (i.e., benchtop discovery, applied science and engineering, large laboratory animal models, clinical veterinary studies, and human randomized trials) can lead to increased therapeutic options for animals while accelerating and enhancing translation by providing early data to reduce the cost and the risk of failed human clinical trials. Active topics of collaboration across the translational continuum include advancements in resuscitation (including austere environments), acute traumatic coagulopathy, trauma-induced coagulopathy, traumatic brain injury, systems biology, and trauma immunology. Mechanisms to improve funding and support innovative team science approaches to current problems in trauma care can accelerate needed, sustainable, and impactful progress in the field. This review article summarizes our current understanding of veterinary and human trauma, thereby identifying knowledge gaps and opportunities for collaborative, translational research to improve multispecies outcomes. This translational trauma group of MDs, PhDs, and DVMs posits that a common understanding of injury patterns and resulting cellular dysregulation in humans and companion animals has the potential to accelerate translation of research findings into clinical solutions.
Three different models have been reported previously to describe the kinetics of the transformation of smectite to illite (Pytte 1982; Velde and Vasseur 1992; Huang et al. 1993). To evaluate the general utility of these models for calculating the timing and extent of this transformation, each model was applied to four different geologic settings (the Denver Basin, the Gulf Coast, the Salton Sea Geothermal System, and the Paris Basin) in which the ages, geothermal gradients, and potassium ion activities vary markedly. The model results are compared to the measured percentages of illite in illite/smectite (I/S) and, where available, the K/Ar ages of I/S to test the utility of a given model for a particular basin.
Although individual models can be applied to study this transformation within a specific setting, none of them successfully simulated the transformation for all four basins. The Salton Sea was simulated best by the model of Huang et al. (1993), incorporating an increased geothermal gradient during the last 20,000 years. These results indicate that a large fraction of the illite formed in response to this increased geothermal gradient, and they underscore that temperature is a dominant kinetic factor in forming illite. The Denver Basin was simulated well by the models of Velde and Vasseur (1992) and Pytte (1982). The Gulf Coast was simulated very well by the model of Huang et al. (1993) using a term that terminates the transformation at 75% illite. For the Paris Basin, the results are mixed. The models can be refined by comparing calculated ages of illite with measured ages, such as the K/Ar ages of I/S, to understand the thermal history of a particular basin. The calculated ages of illitization derived from these refined models can then indicate when source rocks became thermally mature enough to form oil and gas.
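For orientation, a Huang et al. (1993)-style rate law treats the reaction as second order in the smectite fraction and first order in potassium concentration, dS/dt = -A exp(-Ea/RT) [K+] S^2. The sketch below integrates that form at constant temperature; the parameter values and burial conditions are illustrative placeholders, not the published constants.

```python
import math

A = 8.1e4       # frequency factor, 1/(s*molal) -- illustrative only
EA = 1.17e5     # activation energy, J/mol (~28 kcal/mol) -- illustrative
R = 8.314       # gas constant, J/(mol*K)
K_MOLAL = 0.1   # K+ concentration, molal -- illustrative

def smectite_fraction(temp_c, myr, s0=1.0, steps=100_000):
    """Euler-integrate dS/dt = -A*exp(-EA/RT)*[K+]*S^2 at constant T."""
    dt = myr * 1e6 * 3.154e7 / steps   # Myr -> seconds, per step
    k = A * math.exp(-EA / (R * (temp_c + 273.15))) * K_MOLAL
    s = s0
    for _ in range(steps):
        s -= k * s * s * dt
    return s

# Fraction of smectite left in I/S after 10 Myr of burial at 70 degC
s = smectite_fraction(70.0, 10.0)
print(f"smectite fraction: {s:.2f} (illite: {1 - s:.0%})")
```

A time-varying temperature history (e.g., the Salton Sea's recently increased gradient) would enter by recomputing k inside the loop.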
The intersection of development and evolution has always harbored conceptual issues, but many of these are on display in contemporary evolutionary developmental biology (evo-devo). These issues include: (1) the precise constitution of evo-devo, with its focus on both the evolution of development and the developmental basis of evolution, and how it fits within evolutionary theory; (2) the nature of evo-devo model systems that comprise the material of comparative and experimental research; (3) the puzzle of how to understand the widely used notion of 'conserved mechanisms'; (4) the definition of evolutionary novelties and expectations for how to explain them; and (5) the demand of interdisciplinary collaboration that derives from investigating complex phenomena at key moments in the history of life, such as the fin-limb transition. This Element treats these conceptual issues with close attention to both empirical detail and scientific practice to offer new perspectives on evolution and development. This title is also available as Open Access on Cambridge Core.
This Element introduces the philosophical literature on models, with an emphasis on normative considerations relevant to models for decision-making. Chapter 1 gives an overview of core questions in the philosophy of modeling. Chapter 2 examines the concept of model adequacy for purpose, using three examples of models from the atmospheric sciences to describe how this sort of adequacy is determined in practice. Chapter 3 explores the significance of using models that are not adequate for purpose, including for the purpose of informing public decisions. Chapter 4 provides a basic framework for values in modeling, using a case study to highlight the ethical challenges in building models for decision-making. It concludes by establishing the need for strategies to manage value judgments in modeling, including the potential for public participation in the process.
This Element surveys research that uses models to understand scientific practice. Models are useful for reasoning about groups and processes that are complicated and distributed across time and space, i.e., those that are difficult to study using empirical methods alone. Science fits this picture, so it is no surprise that researchers have turned to models over the last few decades to study various features of science. The different sections of the Element are mostly organized around different modeling approaches. The models described here sometimes yield take-aways that are straightforward, and at other times ones that are more nuanced. The Element ultimately argues that while these models are epistemically useful, the best way to employ most of them to understand and improve science is in combination with empirical methods and other sorts of theorizing.
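One well-known family of such models is network epistemology, in which simulated scientists test competing methods and share results with their neighbors (Zollman-style bandit models). Here is a minimal sketch of that approach; the network, payoffs, and all parameters are illustrative, not drawn from the Element.

```python
import random

N_AGENTS, ROUNDS, TRIALS = 6, 200, 10
P_OLD, P_NEW = 0.50, 0.55   # true success rates of the two methods

random.seed(0)
# Each agent holds Beta(a, b) beliefs about the new method; priors vary
beliefs = [[1 + 3 * random.random(), 1 + 3 * random.random()]
           for _ in range(N_AGENTS)]

for _ in range(ROUNDS):
    evidence = []
    for succ, fail in beliefs:
        # Agents test the new method only while it looks better than the old
        if succ / (succ + fail) > P_OLD:
            wins = sum(random.random() < P_NEW for _ in range(TRIALS))
            evidence.append((wins, TRIALS - wins))
    # Complete network: every agent updates on every experiment this round
    for b in beliefs:
        for wins, losses in evidence:
            b[0] += wins
            b[1] += losses

favoring = sum(s / (s + f) > P_OLD for s, f in beliefs)
print(f"{favoring}/{N_AGENTS} agents end up favoring the better method")
```

Varying the network topology or the priors in sketches like this is how such models probe when scientific communities converge on the better method and when they abandon it prematurely.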
The issue of linking research and policy is not unique to health care of the elderly; it has been articulated by numerous stakeholder groups, including those with specific diseases, such as breast cancer. A method of enhancing these links is now being systematically addressed in the Canadian Breast Cancer Initiative with the input of women with breast cancer. The Initiative consists of a number of components and demonstrates a model of consumer participation at multiple levels in setting the agenda for research and policy development, thus enhancing accountability in the transfer of research findings into policy. The concept of consumer participation in linking research and policy is transferable to other diseases and other population groups, such as seniors.
Multilevel, multicomponent complex adaptive systems are not reducible to the sum of the causal effects of independent variables. Causal inference, which has a privileged place in contemporary IR (and many other social sciences), cannot address systems effects, which arise from interdependent elements and operations rather than from the impact of independent variables on dependent variables. Systems-effects explanations explain why by showing how. They identify mechanisms and processes of causation. They are thus able to establish causal efficacy; that is, to show how processes produce (actually cause) outcomes, rather than merely identifying some elements that are part of an unspecified causal process. Such an understanding leads us away from a “laws and theories” conception of science, which remains popular in physics and chemistry, toward a “models and mechanisms” understanding, which predominates in the life sciences (which, on their face, seem a much better model for the social sciences).