Clinical research professionals (CRPs) are essential members of research teams serving in multiple job roles. However, recent turnover rates have reached crisis proportions, negatively impacting clinical trial metrics. Gaining an understanding of job satisfaction factors among CRPs working at academic medical centers (AMCs) can provide insights into retention efforts.
Materials/Methods:
A survey instrument was developed to measure key factors related to CRP job satisfaction and retention. The survey included 47 rating items in addition to demographic questions. An open-text question asked respondents to provide their top three factors for job satisfaction. The survey was distributed through listservs of three large AMCs. Here, we present a factor analysis of the instrument and the quantitative and qualitative results of the subsequent survey.
Results:
A total of 484 CRPs responded to the survey. A principal components analysis with Varimax rotation was performed on the 47 rating items. The analysis resulted in seven key factors, and the survey instrument was reduced to 25 rating items. Self-efficacy and pride in work were the top-ranked factors in the quantitative results; work complexity and stress, and salary and benefits, were the top-ranked themes in the qualitative findings. Opportunities for education and professional development were also themes in the qualitative data.
Discussion:
This study addresses the need for a tool to measure job satisfaction of CRPs. This tool may be useful for additional validation studies and research to measure the effectiveness of improvement initiatives to address CRP job satisfaction and retention.
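The abstract does not name the software or code used for this analysis; as a rough, hypothetical illustration of the analysis type it describes (principal components extraction followed by Varimax rotation, retaining seven components), a minimal numpy sketch on simulated placeholder ratings might look like this:

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-8):
    """Orthogonal Varimax rotation via the standard iterative SVD algorithm."""
    p, k = loadings.shape
    R = np.eye(k)
    crit_old = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        G = loadings.T @ (L ** 3 - (gamma / p) * L @ np.diag(np.sum(L ** 2, axis=0)))
        U, s, Vt = np.linalg.svd(G)
        R = U @ Vt
        crit_new = np.sum(s)
        if crit_new - crit_old < tol:
            break
        crit_old = crit_new
    return loadings @ R

rng = np.random.default_rng(0)
X = rng.normal(size=(484, 47))            # placeholder for 484 respondents x 47 rating items
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize each item
corr = np.corrcoef(X, rowvar=False)

# Principal components: eigendecomposition of the item correlation matrix
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_components = 7                          # seven key factors, as reported in the abstract
loadings = eigvecs[:, :n_components] * np.sqrt(eigvals[:n_components])
rotated = varimax(loadings)
print(rotated.shape)                      # (47, 7) Varimax-rotated component loadings
```

In practice, reducing the instrument to 25 items would then follow from inspecting which items load cleanly on the rotated components.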
The attitudes toward genomics and precision medicine (AGPM) measure examines attitudes toward activities such as genetic testing, gene editing, and biobanking. This is a useful tool for research on the ethical, legal, and social implications of genomics, a major program within the National Institutes of Health. We updated the AGPM to explore controversies over mRNA vaccines. This brief report examines the factor structure of the updated AGPM using a sample of 4939 adults in the USA. The updated AGPM’s seven factors include health benefits, knowledge benefits, and concerns about the sacredness of life, privacy, gene editing, mRNA vaccines, and social justice.
This study described the development and assessment of the psychometric properties of the Dissociation-Integration of Self-States Scale (D-ISS). This is a new scale to assess dissociation at the ‘between modes’ or self-state (personality) level. The D-ISS is rooted in cognitive behavioural theory and designed to measure between-mode dissociation (dissociation between self-states) in clinical practice and research.
Method:
Study 1: D-ISS scale items were generated and then answered by 344 young adults (16–25 years) who reported experiencing stressful times. An exploratory factor analysis (EFA) was conducted and the results were used to refine the scale to 25 items.
Study 2: The final 25-item D-ISS was completed by 383 adults (18–65 years) who reported experiencing mental health difficulties. A confirmatory factor analysis (CFA) was conducted using the second dataset. Internal consistency, test–retest reliability, convergent validity and divergent validity of the final D-ISS were assessed; a brief computational sketch of the reliability analyses follows the key learning aims below.
Results:
Study 1: The EFA showed a clear 5-factor solution, which was used to refine the D-ISS to a total of 25 items with five items in each factor.
Study 2: The 5-factor solution from Study 1 was confirmed as a good fit by the CFA using the data collected in Study 2. The D-ISS demonstrated good internal reliability and test–retest reliability. The D-ISS showed no correlations with divergent scales. For convergent validity, the D-ISS showed moderate correlations with the Dissociative Experiences Scale (DES-II).
Conclusions:
The new D-ISS measure of between-mode dissociation is reliable and valid for the population represented by our sample. Further research into its use in clinical populations is required.
Key learning aims
(1) To understand and be able to use a new measure of dissociation at the personality or self-states level.
(2) To understand the cognitive behavioural model of dissociation.
(3) To understand the theoretical underpinnings of the scale, in terms of the effects of childhood and adult adversity and other factors on psychological development.
(4) To consider the potential clinical and research applications of the scale.
(5) To appreciate the limitations of the research so far and the nature of future research required.
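For the reliability checks mentioned under Study 2 (internal consistency and test–retest reliability), a minimal numpy sketch might look as follows; the data are placeholders rather than D-ISS responses, and the computation shown (Cronbach's alpha plus a retest correlation of total scores) is one common way to operationalize these checks, not necessarily the authors' exact procedure.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
# Placeholder Likert-type data: 383 respondents x 25 items at time 1, plus a noisy retest
time1 = rng.integers(1, 6, size=(383, 25)).astype(float)
time2 = time1 + rng.normal(scale=0.5, size=(383, 25))

alpha = cronbach_alpha(time1)
retest_r = np.corrcoef(time1.sum(axis=1), time2.sum(axis=1))[0, 1]
print(f"alpha = {alpha:.2f}, test-retest r = {retest_r:.2f}")
```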
This paper aims to identify and analyze geographical patterns of (morpho)syntactic variation in traditional Austrian dialects using non-aggregative dialectometric methods (factor analysis). Based on a comprehensive dialect corpus obtained by direct dialect interviews including 163 speakers from 40 locations throughout Austria, our analyses of 79 variants of 30 (morpho)syntactic variables not only show geographical patterns in Austria’s dialects, but also address the linguistic basis of the geographical structures revealed. In particular, the results show that variables at the morphology–syntax interface contribute most to geographical structuring. We argue that this finding is related to structural conditions of these variables and the historical development of the respective variants.
The present study aimed to adapt the actively open-minded thinking about evidence (AOT-E) scale to Brazilian Portuguese and assess its psychometric properties and nomological network in a Brazilian sample. It begins by investigating the underlying content structure of the AOT-E in its original form. Results from an exploratory factor analysis (EFA) of secondary data serve as the basis for the proposed adaptation, which is subjected to both EFA and confirmatory factor analysis (CFA). A total of 718 participants from various regions of Brazil completed an online survey that included the AOT-E, along with other instruments that allowed for an assessment of the scale’s nomological network, including measures of science literacy, attitude toward science, conspiracy beliefs, and religiosity. The EFA and CFA of both the original and Brazilian samples suggested a unidimensional solution. Despite the differences found in a multigroup CFA, polychoric correlations provided evidence of expected nomological relationships that replicate international findings. Overall, this study contributes to expanding the availability of adapted and validated research instruments in non-WEIRD samples.
We address several issues that are raised by Bentler and Tanaka's [1983] discussion of Rubin and Thayer [1982]. Our conclusions are: standard methods do not completely monitor the possible existence of multiple local maxima; summarizing inferential precision by the standard output based on second derivatives of the log likelihood at a maximum can be inappropriate, even if there exists a unique local maximum; EM and LISREL can be viewed as complementary, albeit not entirely adequate, tools for factor analysis.
A new algorithm to obtain the least-squares or MINRES solution in common factor analysis is presented. It is based on the up-and-down Marquardt algorithm developed by the present authors for a general nonlinear least-squares problem. Experiments with some numerical models and some empirical data sets showed that the algorithm worked nicely and that SMC (Squared Multiple Correlation) performed best among four sets of initial values for common variances but that the solution might sometimes be very sensitive to fluctuations in the sample covariance matrix.
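The up-and-down Marquardt algorithm itself is not described here in enough detail to reproduce; as a rough sketch of the MINRES objective and the SMC starting values the abstract mentions, the following minimizes the sum of squared off-diagonal residuals with a generic scipy optimizer on placeholder data:

```python
import numpy as np
from scipy.optimize import minimize

def minres_loss(params, R, p, k):
    """MINRES criterion: sum of squared off-diagonal residuals of R - L L'."""
    L = params.reshape(p, k)
    resid = R - L @ L.T
    off_diag = ~np.eye(p, dtype=bool)
    return np.sum(resid[off_diag] ** 2)

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 8))
X[:, 1:4] += X[:, [0]]                    # placeholder data with some shared variance
R = np.corrcoef(X, rowvar=False)
p, k = R.shape[0], 2

# SMC starting values for the communalities: 1 - 1 / diag(R^{-1})
smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
L0 = np.column_stack([np.sqrt(smc / k)] * k)

res = minimize(minres_loss, L0.ravel(), args=(R, p, k), method="L-BFGS-B")
loadings = res.x.reshape(p, k)
print(round(res.fun, 6))
print(loadings.round(2))
```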
Current practice in structural modeling of observed continuous random variables is limited to representation systems for first and second moments (e.g., means and covariances), and to distribution theory based on multivariate normality. In psychometrics the multinormality assumption is often incorrect, so that statistical tests on parameters, or model goodness of fit, will frequently be incorrect as well. It is shown that higher order product moments yield important structural information when the distribution of variables is arbitrary. Structural representations are developed for generalizations of the Bentler-Weeks, Jöreskog-Keesling-Wiley, and factor analytic models. Some asymptotically distribution-free efficient estimators for such arbitrary structural models are developed. Limited information estimators are obtained as well. The special case of elliptical distributions that allow nonzero but equal kurtoses for variables is discussed in some detail. The argument is made that multivariate normal theory for covariance structure models should be abandoned in favor of elliptical theory, which is only slightly more difficult to apply in practice but specializes to the traditional case when normality holds. Many open research areas are described.
Factor analysis for nonnormally distributed variables is discussed in this paper. The main difference between our approach and more traditional approaches is that not only second order cross-products (like covariances) are utilized, but also higher order cross-products. It turns out that under some conditions the parameters (factor loadings) can be uniquely determined. Two estimation procedures will be discussed. One method gives Best Generalized Least Squares (BGLS) estimates, but is computationally very heavy, in particular for large data sets. The other method is a least squares method which is computationally less heavy. In one example the two methods will be compared by using the bootstrap method. In another example real life data are analyzed.
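As a small illustration of what "higher order cross-products" means in practice (not an implementation of the BGLS or least squares estimators discussed), the following numpy sketch computes second- and third-order cross-product arrays for centered, skewed placeholder data:

```python
import numpy as np

rng = np.random.default_rng(3)
# Placeholder data: 500 observations on 4 skewed (nonnormal) variables
X = rng.exponential(size=(500, 4))
Xc = X - X.mean(axis=0)
n = Xc.shape[0]

# Second-order cross-products (the covariance matrix)
S2 = Xc.T @ Xc / n

# Third-order cross-products: S3[i, j, k] = mean(x_i * x_j * x_k)
S3 = np.einsum("ni,nj,nk->ijk", Xc, Xc, Xc) / n

print(S2.shape, S3.shape)   # (4, 4) and (4, 4, 4)
```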
In an addendum to his seminal 1969 article, Jöreskog stated two sets of conditions for rotational identification of the oblique factor solution using fixed zero elements in the factor loadings matrix (Jöreskog in Advances in factor analysis and structural equation models, pp. 40–43, 1979). These condition sets, formulated under factor correlation and factor covariance metrics, respectively, were claimed to be equivalent and to lead to global rotational uniqueness of the factor solution. It is shown here that the conditions for the oblique factor correlation structure need to be amended for global rotational uniqueness, and, hence, that the condition sets are not equivalent in terms of uniqueness of the solution.
The maximum-likelihood estimator dominates the estimation of general structural equation models. Noniterative, equation-by-equation estimators for factor analysis have received some attention, but little has been done on such estimators for latent variable equations. I propose an alternative 2SLS estimator of the parameters in LISREL-type models and contrast it with the existing ones. The new 2SLS estimator allows observed and latent variables to originate from nonnormal distributions, is consistent, has a known asymptotic covariance matrix, and is estimable with standard statistical software. Diagnostics for evaluating instrumental variables are described. An empirical example illustrates the estimator.
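The proposed estimator builds its instruments from the model itself; the abstract does not spell out that construction, so the sketch below shows only the generic 2SLS computation on made-up instruments and data:

```python
import numpy as np

def two_sls(y, X, Z):
    """Generic two-stage least squares: project X onto the instrument space, then regress y."""
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)      # projection matrix onto the columns of Z
    X_hat = P @ X
    return np.linalg.solve(X_hat.T @ X, X_hat.T @ y)

rng = np.random.default_rng(4)
n = 1000
z = rng.normal(size=(n, 2))                    # instruments
u = rng.normal(size=n)                         # error correlated with the regressor
x = z @ np.array([1.0, 0.5]) + u + rng.normal(size=n)
y = 2.0 * x + u + rng.normal(size=n)

Z = np.column_stack([np.ones(n), z])
X = np.column_stack([np.ones(n), x])
print(two_sls(y, X, Z))                        # slope estimate should be close to 2.0
```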
It is proved for the common factor model with r common factors that, under certain conditions which maintain the distinctiveness of each common factor, a given common factor will be determinate if there exists an unlimited number of variables in the model, each having an absolute correlation with the factor greater than some arbitrarily small positive quantity.
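The result can be illustrated numerically. Assuming the standard factor score determinacy formula rho = sqrt(L' R^{-1} L) for a one-factor model (the abstract itself gives no formulas), the sketch below shows determinacy approaching 1 as the number of variables with a fixed small correlation with the factor grows:

```python
import numpy as np

def determinacy(loadings):
    """Factor score determinacy rho = sqrt(L' R^{-1} L) for a one-factor model."""
    L = np.asarray(loadings, dtype=float)
    R = np.outer(L, L)                 # implied correlations: L L' off the diagonal
    np.fill_diagonal(R, 1.0)           # unit variances (uniqueness = 1 - L_i^2)
    return float(np.sqrt(L @ np.linalg.solve(R, L)))

# With every loading fixed at a small value (0.3), determinacy still tends to 1
# as the number of variables correlated with the factor grows.
for p in (5, 20, 100, 1000):
    print(p, round(determinacy(np.full(p, 0.3)), 3))
```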
The component loadings are interpreted by considering their magnitudes, which indicate how strongly each of the original variables relates to the corresponding principal component. The usual ad hoc practice in the interpretation process is to ignore the variables with small absolute loadings or to set to zero any loadings smaller than some threshold value. This, in fact, makes the component loadings sparse in an artificial and subjective way. We propose a new alternative approach, which produces sparse loadings in an optimal way. The introduced approach is illustrated on two well-known data sets and compared to the existing rotation methods.
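For concreteness, the ad hoc thresholding practice described above can be sketched in a few lines of numpy; note this illustrates the practice being criticized, not the optimal sparse-loading approach the paper proposes (which the abstract does not specify):

```python
import numpy as np

def threshold_loadings(loadings, cutoff=0.3):
    """The ad hoc practice described above: zero out loadings below a cutoff in absolute value."""
    L = np.asarray(loadings, dtype=float).copy()
    L[np.abs(L) < cutoff] = 0.0
    return L

loadings = np.array([[0.82,  0.10],
                     [0.75, -0.05],
                     [0.12,  0.68],
                     [0.25,  0.71]])
print(threshold_loadings(loadings))
```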
Item-level response time (RT) data can be conveniently collected from computer-based test/survey delivery platforms and have been demonstrated to bear a close relation to a miscellany of cognitive processes and test-taking behaviors. Individual differences in general processing speed can be inferred from item-level RT data using factor analysis. Conventional linear normal factor models make strong parametric assumptions, which sacrifices modeling flexibility for interpretability, and thus are not ideal for describing complex associations between observed RT and the latent speed. In this paper, we propose a semiparametric factor model with minimal parametric assumptions. Specifically, we adopt a functional analysis of variance representation for the log conditional densities of the manifest variables, in which the main effect and interaction functions are approximated by cubic splines. Penalized maximum likelihood estimation of the spline coefficients can be performed by an Expectation-Maximization algorithm, and the penalty weight can be empirically determined by cross-validation. In a simulation study, we compare the semiparametric model with incorrectly and correctly specified parametric factor models with regard to the recovery of data generating mechanism. A real data example is also presented to demonstrate the advantages of the proposed method.
Some methods that analyze three-way arrays of data (including INDSCAL and CANDECOMP/PARAFAC) provide solutions that are not subject to arbitrary rotation. This property is studied in this paper by means of the “triple product” [A, B, C] of three matrices. The question is how well the triple product determines the three factors. The answer: up to permutation of columns and multiplication of columns by scalars—under certain conditions. In this paper we greatly expand the conditions under which the result is known to hold. A surprising fact is that the nonrotatability characteristic can hold even when the number of factors extracted is greater than every dimension of the three-way array, namely, the number of subjects, the number of tests, and the number of treatments.
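A minimal numpy sketch of the triple product and the permutation/scaling indeterminacy described here, using arbitrary made-up factor matrices:

```python
import numpy as np

def triple_product(A, B, C):
    """Three-way array X[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]."""
    return np.einsum("ir,jr,kr->ijk", A, B, C)

rng = np.random.default_rng(5)
A = rng.normal(size=(6, 3))    # e.g. subjects x factors
B = rng.normal(size=(5, 3))    # tests x factors
C = rng.normal(size=(4, 3))    # treatments x factors
X = triple_product(A, B, C)

# Permuting the factor columns and rescaling them (with compensating scales) leaves the
# three-way array unchanged, which is exactly the indeterminacy the abstract refers to.
perm = [2, 0, 1]
s = np.array([2.0, -1.0, 0.5])
X_same = triple_product(A[:, perm] * s, B[:, perm] / s, C[:, perm])
print(np.allclose(X, X_same))  # True
```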
Component loss functions (CLFs) similar to those used in orthogonal rotation are introduced to define criteria for oblique rotation in factor analysis. It is shown how the shape of the CLF affects the performance of the criterion it defines. For example, it is shown that monotone concave CLFs give criteria that are minimized by loadings with perfect simple structure when such loadings exist. Moreover, if the CLFs are strictly concave, minimizing must produce perfect simple structure whenever it exists. Examples show that methods defined by concave CLFs perform well much more generally. While it appears important to use a concave CLF, the specific CLF used is less important. For example, the very simple linear CLF gives a rotation method that can easily outperform the most popular oblique rotation methods promax and quartimin and is competitive with the more complex simplimax and geomin methods.
Factor analysis and principal component analysis result in computing a new coordinate system, which is usually rotated to obtain a better interpretation of the results. In the present paper, the idea of rotation to simple structure is extended to two dimensions. While the classical definition of simple structure is aimed at rotating (one-dimensional) factors, the extension to a simple structure for two dimensions is based on the rotation of planes. The resulting planes (principal planes) reveal a better view of the data than planes spanned by factors from classical rotation and hence allow a more reliable interpretation. The usefulness of the method as well as the effectiveness of a proposed algorithm are demonstrated by simulation experiments and an example.
Component loss functions (CLFs) are used to generalize the quartimax criterion for orthogonal rotation in factor analysis. These replace the fourth powers of the factor loadings by an arbitrary function of the second powers. Criteria of this form were introduced by a number of authors, primarily Katz and Rohlf (1974) and Rozeboom (1991), but there has been essentially no follow-up to this work. A method so simple, natural, and general deserves to be investigated more completely. A number of theoretical results are derived including the fact that any method using a concave CLF will recover perfect simple structure whenever it exists, and there are methods that will recover Thurstone simple structure whenever it exists. Specific CLFs are identified and it is shown how to compare these using standardized plots. Numerical examples are used to illustrate and compare CLF and other methods. Sorted absolute loading plots are introduced to aid in comparing results and setting parameters for methods that require them.
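As a small numerical illustration of the claim that a concave (here, linear) CLF is minimized by perfect simple structure, the sketch below evaluates the sum of absolute loadings for a simple-structure loading matrix under a few orthogonal rotations; the specific loadings are invented for illustration:

```python
import numpy as np

def linear_clf(loadings):
    """Linear component loss criterion: the sum of absolute loadings."""
    return np.abs(loadings).sum()

# A loading matrix with perfect simple structure (each variable loads on one factor only)
L = np.array([[0.8, 0.0],
              [0.7, 0.0],
              [0.6, 0.0],
              [0.0, 0.9],
              [0.0, 0.5],
              [0.0, 0.4]])

# Evaluate the criterion over orthogonal rotations by an angle theta
for deg in (0, 15, 30, 45):
    t = np.radians(deg)
    T = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    print(deg, round(linear_clf(L @ T), 3))
# The criterion is smallest at 0 degrees, i.e. at the perfect simple structure.
```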
Joint correspondence analysis is a technique for constructing reduced-dimensional representations of pairwise relationships among categorical variables. The technique was proposed by Greenacre as an alternative to multiple correspondence analysis. Joint correspondence analysis differs from multiple correspondence analysis in that it focuses solely on between-variable relationships. Greenacre described one alternating least-squares algorithm for conducting joint correspondence analysis. Another alternating least-squares algorithm is described in this article. The algorithm is guaranteed to converge, and does so in fewer iterations than does the algorithm proposed by Greenacre. A modification of the algorithm for handling Heywood cases is described. The algorithm is illustrated on two data sets.
Situations sometimes arise in which variables collected in a study are not jointly observed. This typically occurs because of study design. An example is an equating study where distinct groups of subjects are administered different sections of a test. In the normal maximum likelihood function to estimate the covariance matrix among all variables, elements corresponding to those that are not jointly observed are unidentified. If a factor analysis model holds for the variables, however, then all sections of the matrix can be accurately estimated, using the fact that the covariances are a function of the factor loadings. Standard errors of the estimated covariances can be obtained by the delta method. In addition to estimating the covariance matrix in this design, the method can be applied to other problems such as regression factor analysis. Two examples are presented to illustrate the method.
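A minimal sketch of the key identity the abstract relies on, with hypothetical one-factor loadings: the covariance between variables from sections that were never jointly observed is implied by the factor loadings and factor variance.

```python
import numpy as np

# Hypothetical one-factor loadings for two test sections that are never jointly observed
lam1 = np.array([0.7, 0.6, 0.8])   # loadings of the section-1 variables
lam2 = np.array([0.5, 0.9])        # loadings of the section-2 variables
phi = 1.0                          # factor variance (standardized factor)

# Under the factor model, the unobserved cross-section covariances follow from the
# loadings alone: cov(x_i, y_j) = lam1_i * phi * lam2_j
cross_cov = phi * np.outer(lam1, lam2)
print(cross_cov)
```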