Identification of Facet Models by Means of Factor Rotation: A Simulation Study and Data Analysis of a Test for the Berlin Model of Intelligence Structure

Bimodality in the Berlin model of intelligence structure (BIS): A replication study

Personality and Individual Differences, 1996

The bimodal structure of intelligence as proposed in the 'Berlin model of intelligence structure' (BIS) (Jäger, 1982) and measured by the BIS-4 test was analysed in a sample of 182 subjects. According to this theory, two modalities characterize the structure, both emerging from results in 45 mental tasks and containing a total of seven components: operations (processing speed, memory, creativity, processing capacity) and contents (verbal, numerical, figural ability), as well as the general factor (g). Exploratory analysis following Jäger's approach revealed the existence of four operations and three contents. The simultaneous examination of the bimodality of the BIS structure was performed by means of confirmatory factor analysis. The theoretically proposed bimodal model (four operations and three contents) was compared with a unimodal model involving seven correlated factors of the same level and with other alternative unimodal models. In these analyses a slight superiority of operations over contents was observed. The reasons for preferring the bimodal BIS structure over unimodal solutions are clarified, and the role of operations and contents in the construct of intelligence is discussed.
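The crossed operations-by-contents design described above can be sketched as a target loading pattern: each of the 4 × 3 = 12 task cells loads on exactly one operation factor and one content factor. A minimal sketch of the design logic (the facet labels are illustrative shorthands, not the BIS-4 task names):

```python
import numpy as np

# Illustrative labels for the BIS facets (shorthands, not official task names).
operations = ["speed", "memory", "creativity", "capacity"]
contents = ["verbal", "numerical", "figural"]

n_ops, n_con = len(operations), len(contents)
n_cells = n_ops * n_con  # 12 task cells, each crossing one operation with one content

# Binary target pattern: columns are the 4 operation factors followed by the
# 3 content factors; each cell row has exactly two salient loadings.
pattern = np.zeros((n_cells, n_ops + n_con), dtype=int)
for i in range(n_ops):
    for j in range(n_con):
        row = i * n_con + j
        pattern[row, i] = 1          # operation facet
        pattern[row, n_ops + j] = 1  # content facet

# Each operation column collects 3 cells; each content column collects 4.
print(pattern.sum(axis=0))  # [3 3 3 3 4 4 4]
```

This target pattern is what a confirmatory bimodal model constrains: every indicator is a function of one operation, one content, and (in hierarchical variants) g.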

The structure of human intelligence: It is verbal, perceptual, and image rotation (VPR), not fluid and crystallized

Intelligence, 2005

In a heterogeneous sample of 436 adult individuals who completed 42 mental ability tests, we evaluated the relative statistical performance of three major psychometric models of human intelligence: the Cattell-Horn fluid-crystallized model, Vernon's verbal-perceptual model, and Carroll's three-stratum model. The verbal-perceptual model fit significantly better than the other two. We improved it by adding memory and higher-order image rotation factors. The results provide evidence for a four-stratum model with a g factor and three third-stratum factors. The model is consistent with the idea of coordination of function across brain regions and with the known importance of brain laterality in intellectual performance. We argue that this model is theoretically superior to the fluid-crystallized model and highlight the importance of image rotation in human intellectual function.

Much has been written about psychometric models of the structure of human intelligence, and they are routinely used as underlying assumptions in designing psychological research studies and for developing assessment tools. Surprisingly, however, the most well-established models have been subject to almost no empirical scrutiny in the form of assessment of comparative performance using modern confirmatory factor analytic techniques. In particular, Carroll's (1993) thorough and methodical exploratory analysis of more than 460 data sets of mental ability tests did not address this issue, a point he acknowledged in his final (2003, p. 12) publication, noting that his methodology "suffered from a lack of adequate procedures for establishing the statistical significance of findings". This is an important omission, as the objective evaluation of these models and the theories that generated them should result in more powerful theories, thereby making better use of monetary and intellectual resources and avoiding conceptual dead ends.
The purpose of this study was to correct this omission, in the process using confirmatory factor analysis as a form of "strong inference".
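Competing CFA models of the kind compared here are typically evaluated, when nested, with a chi-square difference (likelihood-ratio) test: the restricted model has more degrees of freedom, and a significant chi-square difference means the restrictions worsen fit. A minimal sketch, using made-up fit statistics rather than the paper's actual values:

```python
from scipy.stats import chi2

def chisq_diff_test(chisq_restricted, df_restricted, chisq_full, df_full):
    """Chi-square difference test for two nested CFA models.

    The restricted model must have more degrees of freedom; a small p-value
    indicates the restriction significantly worsens model fit.
    """
    d_chisq = chisq_restricted - chisq_full
    d_df = df_restricted - df_full
    p = chi2.sf(d_chisq, d_df)  # survival function: P(X > d_chisq)
    return d_chisq, d_df, p

# Illustrative (made-up) fit statistics for two nested models:
d, ddf, p = chisq_diff_test(350.0, 120, 300.0, 115)
print(d, ddf, p)  # difference of 50.0 on 5 df: highly significant
```

Non-nested comparisons (as between the fluid-crystallized and verbal-perceptual models) instead rely on information criteria such as AIC or BIC.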

On the Detection of the Correct Number of Factors in Two-Facet Models by Means of Parallel Analysis

Educational and Psychological Measurement, 2021

Methods for optimal factor rotation of two-facet loading matrices have recently been proposed. However, the problem of the correct number of factors to retain for rotation of two-facet loading matrices has rarely been addressed in the context of exploratory factor analysis. Most previous studies were based on the observation that two-facet loading matrices may be rank deficient when the salient loadings of each factor have the same sign. It was shown here that full-rank two-facet loading matrices are, in principle, possible, when some factors have positive and negative salient loadings. Accordingly, the current simulation study on the number of factors to extract for two-facet models was based on rank-deficient and full-rank two-facet population models. The number of factors to extract was estimated from traditional parallel analysis based on the mean of the unreduced eigenvalues as well as from nine other rather traditional versions of parallel analysis (based on the 95th percentile of eigenvalues, based on reduced eigenvalues, based on eigenvalue differences). Parallel analysis based on the mean eigenvalues of the correlation matrix with the squared multiple correlations of each variable with the remaining variables inserted in the main diagonal had the highest detection rates for most of the two-facet factor models. Recommendations for the identification of the correct number of factors are based on the simulation results, on the results of an empirical example data set, and on the conditions for approximately rank-deficient and full-rank two-facet models.
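The best-performing variant in the study, parallel analysis on the mean eigenvalues of the correlation matrix with squared multiple correlations (SMCs) inserted in the main diagonal, can be sketched as follows (a simplified illustration, not the study's simulation code):

```python
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    """Parallel analysis on the SMC-reduced correlation matrix, mean criterion.

    Retains factors whose observed eigenvalues exceed the mean eigenvalues
    obtained from random normal data of the same dimensions.
    """
    rng = np.random.default_rng(seed)
    n, p = data.shape

    def smc_eigs(x):
        r = np.corrcoef(x, rowvar=False)
        inv = np.linalg.inv(r)
        smc = 1.0 - 1.0 / np.diag(inv)   # squared multiple correlations
        rr = r.copy()
        np.fill_diagonal(rr, smc)         # reduced correlation matrix
        return np.sort(np.linalg.eigvalsh(rr))[::-1]

    obs = smc_eigs(data)
    sim = np.mean([smc_eigs(rng.standard_normal((n, p)))
                   for _ in range(n_sims)], axis=0)
    over = obs > sim
    # Number of leading observed eigenvalues exceeding the simulated means.
    return p if over.all() else int(np.argmin(over))

# Example: three correlated blocks of four variables each.
rng = np.random.default_rng(1)
f = rng.standard_normal((500, 3))
x = np.repeat(f, 4, axis=1) + rng.standard_normal((500, 12))
print(parallel_analysis(x))  # recovers the 3 block factors
```

This sketch assumes complete continuous data; the study additionally examined percentile-based and eigenvalue-difference criteria not shown here.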

Golino, H. F., & Demetriou, A. (in press). Estimating the dimensionality of intelligence-like data using Exploratory Graph Analysis. Intelligence.

This study compared various exploratory and confirmatory factor methods for recovering factors of cognitive test-like data. We first note the problems encountered by several widely used methods, such as parallel analysis, minimum average partial procedure, and confirmatory factor analysis, in estimating the number of dimensions underlying performance on test batteries. We then argue that a new method, Exploratory Graph Analysis (EGA), can more accurately uncover underlying dimensions or factors and demonstrate how this method outperforms the other methods. We use several published data sets to demonstrate the advantages of EGA. We conclude that a combination of EGA and confirmatory factor analysis or structural equation modeling may be the ideal in precisely specifying latent factors and their relations.
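The core idea of EGA (estimate a partial-correlation network, sparsify it, and count its clusters) can be illustrated in simplified form. Real EGA uses the graphical lasso and the walktrap community-detection algorithm; in this sketch, hard thresholding and connected components stand in for both, so it is an assumption-laden approximation rather than the authors' method:

```python
import numpy as np

def ega_sketch(data, threshold=0.15):
    """Crude stand-in for EGA: partial-correlation network + cluster count.

    Real EGA sparsifies via the graphical lasso and counts walktrap
    communities; here we hard-threshold and count connected components.
    """
    r = np.corrcoef(data, rowvar=False)
    prec = np.linalg.inv(r)
    d = np.sqrt(np.diag(prec))
    partial = -prec / np.outer(d, d)   # partial correlation matrix
    np.fill_diagonal(partial, 0.0)
    adj = np.abs(partial) > threshold  # sparsified network

    # Count connected components; each component approximates one dimension.
    p = adj.shape[0]
    seen, n_comp = set(), 0
    for start in range(p):
        if start in seen:
            continue
        n_comp += 1
        stack = [start]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            stack.extend(np.flatnonzero(adj[v]).tolist())
    return n_comp

# Example: two blocks of five variables, each block sharing one factor.
rng = np.random.default_rng(0)
f = rng.standard_normal((600, 2))
x = np.repeat(f, 5, axis=1) + rng.standard_normal((600, 10))
print(ega_sketch(x))  # two clusters, one per factor
```

The appeal of the network view is that the number of dimensions falls out of the cluster structure, with no eigenvalue cutoff to choose.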

Structural invariance of multiple intelligences, based on the level of execution

Psicothema, 2011

The independence of the multiple intelligences (MI) in Gardner's theory has been debated since its conception. This article examines whether the one-factor structure of MI theory tested in previous studies is invariant for low- and high-ability students. Two hundred ninety-four children (aged 5 to 7) participated in this study. A set of Gardner's Multiple Intelligence assessment tasks based on the Spectrum Project was used. To analyze the invariance of a general dimension of intelligence, the different models were tested in samples of participants with different performance levels on the Spectrum Project tasks using Multi-Group Confirmatory Factor Analysis (MGCFA). Results suggest an absence of structural invariance in Gardner's tasks. Exploratory analyses suggest a three-factor structure for individuals with higher performance levels and a two-factor structure for individuals with lower performance levels.

The dependability of the general factor of intelligence: Why small, single-factor models do not adequately represent g

Intelligence, 2011

Floyd et al. (2009) used generalizability theory to test the reliability of general-factor loadings and to compare three different sources of error in them: the test battery size, the test battery composition, the factor-extraction technique, and their interactions. They found that their general-factor loadings were moderately to strongly dependable. We replicated the methods of Floyd et al. (2009) in a different sample of tests, from the Minnesota Study of Twins Reared Apart (MISTRA). Our first hypothesis was that, given the greater diversity of the tests in MISTRA, the general-factor loadings would be less dependable than in Floyd et al. (2009). Our second hypothesis, contrary to the positions of Floyd et al. (2009) and Jensen and Weng (1994), was that the general factors from the small, randomly formed test batteries would differ substantively from the general factor from a well-specified hierarchical model of all available tests. Subtests from MISTRA were randomly selected to form independent and overlapping batteries of 2, 4, and 8 tests in size, and the general-factor loadings of eight probe tests were obtained in each battery by principal components analysis, principal factor analysis, and maximum likelihood estimation. Results initially indicated that the general-factor loadings were unexpectedly more dependable than in Floyd et al. (2009); however, further analysis revealed that this was due to the greater diversity of our probe tests. After adjustment for this difference in diversity, and consideration of the representativeness of our probe tests versus those of Floyd et al. (2009), our first hypothesis of lower dependability was confirmed in the overlapping batteries, but not the independent ones. To test the second hypothesis, we correlated g factor scores from the random test batteries with g factor scores from the VPR model; we also calculated coefficients of congruence on the same relation.
Consistent with our second hypothesis, the general factors from small nonhierarchical models were found not to be reliable enough for the purposes of theoretical research. We discuss appropriate standards for the construction and factor analysis of intelligence test batteries.
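A coefficient of congruence between two loading vectors is conventionally computed with Tucker's formula; whether this matches the coefficients used in the paper exactly is an assumption. The g loadings below are made up for illustration:

```python
import numpy as np

def congruence(x, y):
    """Tucker's coefficient of congruence between two loading vectors.

    Unlike a Pearson correlation, loadings are not mean-centered, so the
    coefficient is sensitive to the overall level of the loadings as well
    as their pattern. Values near 1 indicate near-identical factors.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

# Hypothetical g loadings: a small random battery vs. a full hierarchical model.
small_battery = [0.72, 0.55, 0.61, 0.48]
full_model = [0.70, 0.58, 0.65, 0.45]
print(congruence(small_battery, full_model))  # close to 1 for similar factors
```

Because congruence ignores mean level differences only partially, factor-score correlations and congruence coefficients can disagree, which is why the study reports both.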

Modeling the construct validity of the Berlin Intelligence Structure Model

Estudos de Psicologia (Campinas), 2015

The Berlin Intelligence Structure Model is a hierarchical and faceted model which was originally based on an almost representative sample of the tasks found in the literature. The Berlin Intelligence Structure Model is therefore an integrative model with a high degree of generality. The present paper investigates the construct validity of this model by using different confirmatory factor analysis models. The results show that the model assumptions are only partly supported by the data. Moreover, it is demonstrated that there are different possibilities for incorporating the Berlin Intelligence Structure Model assumptions into confirmatory factor analysis models. The results are discussed with regard to the validity of the Berlin Intelligence Structure test and the validity of the model itself.