SOLOMON: a method for splitting a sample into equivalent subsamples in factor analysis
Related papers
Partitioned Variates: An Alternative to Factor Analysis
Psychological Reports, 1977
A traditional method of analyzing scores for use in psychological measurement is exemplified by the variation associated with factor analysis. The factor analytic method is shown to contain implicit assumptions concerning the common and unique variance of any variable by exclusion of the latter. A new method of developing psychologically meaningful scores is presented which considers only the unique variance of a variable. These new scores, called partitioned variates, are supported both logically and with external criteria, and a detailed worked example is provided. The methodology and concept suggested here should provide researchers with an alternative to the more traditional viewpoints and methods of data analysis.
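A minimal numpy sketch of one way to isolate a variable's unique variance is to residualize each variable on all of the others and keep the residual as its "unique" score. This is only an illustration of the underlying idea, not the paper's actual partitioning procedure, and the function name unique_variance_scores is hypothetical.

```python
import numpy as np

def unique_variance_scores(X):
    """For each column of X, return the residual after regressing it on all
    other columns: a rough stand-in for the variance unique to that variable."""
    X = np.asarray(X, dtype=float)
    X = X - X.mean(axis=0)                    # centre each variable
    n, p = X.shape
    residuals = np.empty_like(X)
    for j in range(p):
        others = np.delete(X, j, axis=1)      # predictors: every other variable
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        residuals[:, j] = X[:, j] - others @ beta
    return residuals

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
U = unique_variance_scores(X)
print(np.round(np.var(U, axis=0, ddof=1), 3))  # unique-variance estimates per variable
```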
On the Interpretation of Factor Analysis
The importance of the researcher’s interpretation of factor analysis is illustrated by means of an example. The results from this example appear to be meaningful and easily interpreted. The example omits any measure of reliability or validity. If a measure of reliability had been included, it would have indicated the worthlessness of the results. A survey of 46 recent papers from 6 journals supported the claim that the example is typical: two-thirds of the papers provided no measure of reliability. In fact, some papers did not even provide sufficient information to allow for replication. To improve the current situation, some measure of factor reliability should accompany applied studies that utilize factor analysis. Three operational approaches are suggested for obtaining measures of factor reliability: use of split samples, Monte Carlo simulation, and a priori models.
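The split-sample approach to factor reliability can be sketched roughly as follows: split the sample at random, extract loadings separately in each half, and compare the halves with Tucker's congruence coefficient. The sketch below uses principal-component loadings of the correlation matrix as a stand-in for a full factor extraction, ignores rotation and column matching, and uses hypothetical helper names; it illustrates the idea rather than the paper's exact procedure.

```python
import numpy as np

def pc_loadings(X, k):
    """First-k principal-component loadings of the correlation matrix
    (a simple stand-in for a full factor extraction)."""
    R = np.corrcoef(X, rowvar=False)
    vals, vecs = np.linalg.eigh(R)
    order = np.argsort(vals)[::-1][:k]
    return vecs[:, order] * np.sqrt(vals[order])

def tucker_congruence(A, B):
    """Column-wise Tucker congruence coefficients between two loading matrices."""
    num = np.sum(A * B, axis=0)
    den = np.sqrt(np.sum(A**2, axis=0) * np.sum(B**2, axis=0))
    return np.abs(num / den)                  # absolute value ignores arbitrary sign flips

def split_half_congruence(X, k, seed=0):
    """Split the sample in two at random, factor each half, and report congruence."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    half = len(X) // 2
    A = pc_loadings(X[idx[:half]], k)
    B = pc_loadings(X[idx[half:]], k)
    return tucker_congruence(A, B)

rng = np.random.default_rng(1)
F = rng.normal(size=(400, 2))                 # two latent factors
X = F @ rng.normal(size=(2, 8)) + rng.normal(scale=0.7, size=(400, 8))
print(np.round(split_half_congruence(X, k=2), 3))  # values near 1 suggest replicable factors
```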
Exploratory factor analysis (EFA) is a complex, multi-step process. The goal of this paper is to collect, in one article, information that will allow researchers and practitioners to understand the various choices available through popular software packages, and to make decisions about "best practices" in exploratory factor analysis. In particular, this paper provides practical information on making decisions regarding (a) extraction, (b) rotation, (c) the number of factors to interpret, and (d) sample size.

Exploratory factor analysis (EFA) is a widely utilized and broadly applied statistical technique in the social sciences. In recently published studies, EFA was used for a variety of applications, including developing an instrument for the evaluation of school principals (Lovett, Zeiss, & Heinemann, 2002), assessing the motivation of Puerto Rican high school students (Morris, 2001), and determining what types of services should be offered to college students (Majors & Sedlacek, 2001). A survey of a recent two-year period in PsycINFO yielded over 1700 studies that used some form of EFA. Well over half listed principal components analysis with varimax rotation as the method used for data analysis, and of those researchers who report their criteria for deciding the number of factors to be retained for rotation, a majority use the Kaiser criterion (all factors with eigenvalues greater than one). While this represents the norm in the literature (and often the defaults in popular statistical software packages), it will not always yield the best results for a particular data set.

EFA is a complex procedure with few absolute guidelines and many options. In some cases, options vary in terminology across software packages, and in many cases particular options are not well defined. Furthermore, study design, data properties, and the questions to be answered all have a bearing on which procedures will yield the maximum benefit. The goal of this paper is to discuss common practice in studies using exploratory factor analysis, and provide practical information on best practices in the use of EFA. In particular we discuss four issues: 1) component vs. factor extraction, 2) number of factors to retain for rotation, 3) orthogonal vs. oblique rotation, and 4) adequate sample size.

BEST PRACTICE
Extraction: Principal Components vs. Factor Analysis
PCA (principal components analysis) is the default method of extraction in many popular statistical software packages, including SPSS and SAS, which likely contributes to its popularity. However, PCA is
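As a rough illustration of the Kaiser criterion mentioned above (retain all factors whose correlation-matrix eigenvalues exceed one), here is a minimal numpy sketch on simulated data; the function name kaiser_retained is illustrative only.

```python
import numpy as np

def kaiser_retained(X):
    """Number of factors suggested by the Kaiser criterion: count eigenvalues
    of the correlation matrix that exceed 1.0."""
    eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
    return int(np.sum(eigvals > 1.0)), np.sort(eigvals)[::-1]

rng = np.random.default_rng(2)
F = rng.normal(size=(300, 3))                 # three latent factors
X = F @ rng.normal(size=(3, 12)) + rng.normal(scale=0.8, size=(300, 12))
k, eigvals = kaiser_retained(X)
print("Kaiser criterion retains", k, "factors")
print("eigenvalues:", np.round(eigvals, 2))
```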
Exploratory factor analysis with small sample sizes: A comparison of three approaches
Behavioural Processes, 2013
Exploratory factor analysis (EFA) is an extremely popular method for determining the underlying factor structure for a set of variables. Due to its exploratory nature, EFA is notorious for being conducted with small sample sizes, and recent reviews of psychological research have reported that between 40% and 60% of applied studies have 200 or fewer observations. Recent methodological studies have addressed small sample size requirements for EFA models; however, these models have only considered complete data, which are the exception rather than the rule in psychology. Furthermore, the extant literature on missing data techniques with small samples is scant, and nearly all existing studies focus on topics that are not of primary interest to EFA models. Therefore, this article presents a simulation to assess the performance of various missing data techniques for EFA models with both small samples and missing data. Results show that deletion methods do not extract the proper number of factors and estimate the factor loadings with severe bias, even when data are missing completely at random. Predictive mean matching is the best method overall when considering extracting the correct number of factors and estimating factor loadings without bias, although 2-stage estimation was a close second.
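Predictive mean matching can be sketched for a single incomplete variable as follows: regress it on complete predictors, then impute each missing case from a donor among the observed cases whose predicted value is closest. Production implementations also draw the regression parameters and repeat the process over multiple imputations; the simplified, single-pass numpy sketch below (with the hypothetical name pmm_impute) is only meant to convey the matching step.

```python
import numpy as np

def pmm_impute(y, X, n_donors=3, seed=0):
    """Simplified predictive mean matching for one incomplete variable y,
    given complete predictors X: impute each missing y from a randomly chosen
    donor among the observed cases with the closest predicted values."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float).copy()
    X = np.column_stack([np.ones(len(y)), X])     # add an intercept column
    obs = ~np.isnan(y)
    beta, *_ = np.linalg.lstsq(X[obs], y[obs], rcond=None)
    pred = X @ beta
    for i in np.where(~obs)[0]:
        # donors: observed cases whose predictions are nearest to this case's
        nearest = np.argsort(np.abs(pred[obs] - pred[i]))[:n_donors]
        y[i] = rng.choice(y[obs][nearest])
    return y

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 2))
y = X @ np.array([1.0, -0.5]) + rng.normal(scale=0.5, size=150)
y[rng.choice(150, size=30, replace=False)] = np.nan   # 20% missing completely at random
print(np.isnan(pmm_impute(y, X)).sum(), "missing values remain")
```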
Frontiers in Psychology, 2012
We provide a basic review of the data screening and assumption testing issues relevant to exploratory and confirmatory factor analysis along with practical advice for conducting analyses that are sensitive to these concerns. Historically, factor analysis was developed for explaining the relationships among many continuous test scores, which led to the expression of the common factor model as a multivariate linear regression model with observed, continuous variables serving as dependent variables, and unobserved factors as the independent, explanatory variables. Thus, we begin our paper with a review of the assumptions for the common factor model and data screening issues as they pertain to the factor analysis of continuous observed variables. In particular, we describe how principles from regression diagnostics also apply to factor analysis. Next, because modern applications of factor analysis frequently involve the analysis of the individual items from a single test or questionnaire, an important focus of this paper is the factor analysis of items. Although the traditional linear factor model is well-suited to the analysis of continuously distributed variables, commonly used item types, including Likert-type items, almost always produce dichotomous or ordered categorical variables. We describe how relationships among such items are often not well described by product-moment correlations, which has clear ramifications for the traditional linear factor analysis. An alternative, non-linear factor analysis using polychoric correlations has become more readily available to applied researchers and thus more popular. Consequently, we also review the assumptions and data-screening issues involved in this method. Throughout the paper, we demonstrate these procedures using an historic data set of nine cognitive ability variables.
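One regression-diagnostic screen commonly applied before factoring is flagging multivariate outliers by their Mahalanobis distance against a chi-square cutoff. A minimal sketch, assuming numpy and scipy are available; the data here are simulated, not the historic nine-variable set referenced in the abstract.

```python
import numpy as np
from scipy.stats import chi2

def mahalanobis_outliers(X, alpha=0.001):
    """Flag multivariate outliers before factoring: squared Mahalanobis distance
    of each case from the centroid, compared with a chi-square cutoff (df = p)."""
    X = np.asarray(X, dtype=float)
    diff = X - X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)   # squared distances
    cutoff = chi2.ppf(1 - alpha, df=X.shape[1])
    return d2 > cutoff, d2

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 9))                 # nine continuous variables
X[0] += 8                                     # plant one gross outlier
flags, d2 = mahalanobis_outliers(X)
print("flagged cases:", np.where(flags)[0])
```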
Evaluation of the exploratory factor analysis programs provided in SPSSX and SPSS/PC+
1993
Given the frequent use of SPSSX and SPSS/PC+ exploratory factor analysis in analyzing multivariate psychometric data, it is germane to examine the limitations of the Factor program as it currently exists. Over recent years, the routines in these packages generally have been developed and expanded considerably. In particular, the exploratory factor analysis procedures have been greatly extended and enhanced with inclusion of additional estimation methods such as maximum likelihood, unweighted least squares, generalized least squares, and so on. Likewise, availability of a scree plot of the latent roots against factor number has facilitated determination of the appropriate number of factors to extract. Provision of a chi-square goodness-of-fit test, as well as options for sorting and retaining factor loadings greater than a minimum level (usually ± .30), have added to the utility of the SPSSX and SPSS/PC+ statistical routines. These routines have undoubtedly facilitated and stimulated psychometric reports involving the use of factor analysis in the psychological literature.
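Two of the conveniences listed above, the scree plot of latent roots against factor number and suppression of loadings below |.30|, are easy to reproduce outside SPSS. A minimal sketch assuming numpy and matplotlib, with principal-component loadings standing in for an EFA solution and hypothetical function names:

```python
import numpy as np
import matplotlib.pyplot as plt

def scree_plot(X):
    """Plot latent roots (eigenvalues of the correlation matrix) against factor number."""
    eigvals = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    plt.plot(np.arange(1, len(eigvals) + 1), eigvals, "o-")
    plt.axhline(1.0, linestyle="--")          # eigenvalue-of-one reference line
    plt.xlabel("factor number")
    plt.ylabel("eigenvalue")
    plt.show()

def suppressed_loadings(X, k, cutoff=0.30):
    """Principal-component loadings for the first k components, blanking out
    entries below the usual |.30| reporting threshold."""
    R = np.corrcoef(X, rowvar=False)
    vals, vecs = np.linalg.eigh(R)
    order = np.argsort(vals)[::-1][:k]
    loadings = vecs[:, order] * np.sqrt(vals[order])
    return np.where(np.abs(loadings) >= cutoff, np.round(loadings, 2), np.nan)

rng = np.random.default_rng(5)
F = rng.normal(size=(300, 2))
X = F @ rng.normal(size=(2, 10)) + rng.normal(scale=0.8, size=(300, 10))
scree_plot(X)
print(suppressed_loadings(X, k=2))
```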
Comparing Factor Loadings in Exploratory Factor Analysis: A New Randomization Test
Journal of Modern Applied Statistical Methods, 2008
Factorial invariance testing requires a referent loading to be constrained equal across groups. This study introduces a randomization test for comparing group exploratory factor analysis loadings so as to identify an invariant referent. Results show that it maintains the Type I error rate while providing adequate power under most conditions.
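The general logic of such a randomization test can be sketched as follows: compute the observed group difference in a candidate referent loading, then repeatedly shuffle group membership and recompute the difference to build a null distribution. The numpy sketch below uses the first principal-component loading (in absolute value, to sidestep sign indeterminacy) as a stand-in for an EFA loading; the published test differs in its details, and the function names are illustrative.

```python
import numpy as np

def first_pc_loading(X, j):
    """|loading| of variable j on the first principal component of the correlation matrix."""
    R = np.corrcoef(X, rowvar=False)
    vals, vecs = np.linalg.eigh(R)
    return abs(vecs[:, -1][j] * np.sqrt(vals[-1]))

def loading_randomization_test(X1, X2, j, n_perm=1000, seed=0):
    """Permutation p-value for the group difference in variable j's loading."""
    rng = np.random.default_rng(seed)
    observed = abs(first_pc_loading(X1, j) - first_pc_loading(X2, j))
    pooled = np.vstack([X1, X2])
    n1 = len(X1)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))    # reassign cases to groups at random
        diff = abs(first_pc_loading(pooled[idx[:n1]], j)
                   - first_pc_loading(pooled[idx[n1:]], j))
        count += diff >= observed
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(6)
F1, F2 = rng.normal(size=(200, 1)), rng.normal(size=(200, 1))
w = rng.normal(size=(1, 6))
X1 = F1 @ w + rng.normal(scale=0.7, size=(200, 6))
X2 = F2 @ w + rng.normal(scale=0.7, size=(200, 6))   # same loadings: expect a large p-value
print(round(loading_randomization_test(X1, X2, j=0), 3))
```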
2020
Determining the number of factors is one of the most crucial decisions a researcher has to face when conducting an exploratory factor analysis. As no common factor retention criterion can be seen as generally superior, a new approach is proposed, combining extensive data simulation with state-of-the-art machine learning algorithms. First, data were simulated under a broad range of realistic conditions and three algorithms were trained using specially designed features based on the correlation matrices of the simulated data sets. Subsequently, the new approach was compared to four common factor retention criteria with regard to its accuracy in determining the correct number of factors in a large-scale simulation experiment. Sample size, variables per factor, correlations between factors, primary and cross-loadings as well as the correct number of factors were varied to gain comprehensive knowledge of the efficiency of our new method. A gradient boosting model outperformed all other c...
Factor Retention in Exploratory Factor Analysis: A Comparison of Alternative Methods
This study compared the effectiveness of 10 methods of determining the number of factors to retain in exploratory common factor analysis. The 10 methods included the Kaiser rule and a modified Kaiser criterion, 3 variations of parallel analysis, 4 regression-based variations of the scree procedure, and the minimum average partial procedure. The performance of these procedures was evaluated based on the average number of factors retained by each method, the proportion of samples in which each method retained the same number of factors as the true number of factors in the population, and the proportion of samples in which each method retained the same number of factors as it retains when a particular rule of thumb is applied to the population. The performance of the 10 procedures was investigated using Monte Carlo methods in which random samples were generated under known and controlled population conditions. Results clearly suggest that both the choi...
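Parallel analysis, one of the retention criteria compared above, admits a compact sketch: retain factors whose observed eigenvalues exceed a chosen percentile of the eigenvalues obtained from random normal data of the same dimensions. A minimal numpy version on simulated data, with the illustrative name parallel_analysis:

```python
import numpy as np

def parallel_analysis(X, n_sims=100, percentile=95, seed=0):
    """Horn's parallel analysis: retain factors whose observed eigenvalues exceed
    the chosen percentile of eigenvalues from random normal data of the same size."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    random_eigs = np.empty((n_sims, p))
    for s in range(n_sims):
        Z = rng.normal(size=(n, p))           # noise data with the same n and p
        random_eigs[s] = np.sort(np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False)))[::-1]
    threshold = np.percentile(random_eigs, percentile, axis=0)
    return int(np.sum(observed > threshold)), observed, threshold

rng = np.random.default_rng(7)
F = rng.normal(size=(250, 3))                 # three latent factors
X = F @ rng.normal(size=(3, 12)) + rng.normal(scale=0.8, size=(250, 12))
k, obs, thr = parallel_analysis(X)
print("parallel analysis retains", k, "factors")
```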