Golino, H. F., & Demetriou, A. (in press). Estimating the dimensionality of intelligence like data using Exploratory Graph Analysis. Intelligence.
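For orientation, the paper's method, Exploratory Graph Analysis (EGA), estimates dimensionality by fitting a regularized partial-correlation network to test scores and counting the communities detected in it. The sketch below is a minimal Python approximation of that idea, assuming scikit-learn's cross-validated graphical lasso in place of EGA's EBICglasso and python-igraph's Walktrap for community detection; the paper's own analyses use the R implementation, and all data here are simulated.

```python
# Minimal sketch of the core EGA idea (the paper's actual analyses use R):
# 1) estimate a regularized partial-correlation network over test scores,
# 2) detect communities; the number of communities estimates dimensionality.
# Assumes scikit-learn and python-igraph are installed; data are simulated.
import numpy as np
from sklearn.covariance import GraphicalLassoCV
import igraph as ig

rng = np.random.default_rng(1)
n, p = 500, 8
# Simulate two correlated clusters of four tests each (a 2-dimensional structure).
f = rng.normal(size=(n, 2))
loadings = np.zeros((p, 2))
loadings[:4, 0] = 0.8
loadings[4:, 1] = 0.8
X = f @ loadings.T + rng.normal(scale=0.6, size=(n, p))

# EGA's EBICglasso step is approximated by cross-validated graphical lasso.
gl = GraphicalLassoCV().fit(X)
prec = gl.precision_
d = np.sqrt(np.diag(prec))
pcor = -prec / np.outer(d, d)   # partial correlations from the precision matrix
np.fill_diagonal(pcor, 0.0)

# Walktrap community detection on the absolute-weight network.
g = ig.Graph.Weighted_Adjacency(np.abs(pcor).tolist(), mode="undirected",
                                attr="weight")
communities = g.community_walktrap(weights="weight").as_clustering()
print("Estimated number of dimensions:", len(communities))  # expect 2
```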
Related papers
Intelligence, 2011
Floyd et al. (2009) used generalizability theory to test the reliability of general-factor loadings and to compare three different sources of error in them: test battery size, test battery composition, and factor-extraction technique, together with their interactions. They found that their general-factor loadings were moderately to strongly dependable. We replicated the methods of Floyd et al. (2009) in a different sample of tests, from the Minnesota Study of Twins Reared Apart (MISTRA). Our first hypothesis was that, given the greater diversity of the tests in MISTRA, the general-factor loadings would be less dependable than in Floyd et al. (2009). Our second hypothesis, contrary to the positions of Floyd et al. (2009) and Jensen and Weng (1994), was that the general factors from the small, randomly formed test batteries would differ substantively from the general factor from a well-specified hierarchical model of all available tests. Subtests from MISTRA were randomly selected to form independent and overlapping batteries of 2, 4, and 8 tests in size, and the general-factor loadings of eight probe tests were obtained in each battery by principal components analysis, principal factor analysis, and maximum likelihood estimation. Results initially indicated that the general-factor loadings were unexpectedly more dependable than in Floyd et al. (2009); however, further analysis revealed that this was due to the greater diversity of our probe tests. After adjustment for this difference in diversity, and consideration of the representativeness of our probe tests versus those of Floyd et al. (2009), our first hypothesis of lower dependability was confirmed in the overlapping batteries, but not the independent ones. To test the second hypothesis, we correlated g factor scores from the random test batteries with g factor scores from the VPR model; we also calculated special coefficients of congruence on the same relation. Consistent with our second hypothesis, the general factors from small, nonhierarchical models were not reliable enough for the purposes of theoretical research. We discuss appropriate standards for the construction and factor analysis of intelligence test batteries.
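The battery-to-battery comparisons above rest on congruence coefficients between general-factor loadings. The abstract does not spell out the "special" coefficients used, so the sketch below shows the standard Tucker congruence coefficient as a reference point; the loading values are hypothetical.

```python
# Tucker's congruence coefficient between two factor-loading vectors:
# phi(x, y) = sum(x*y) / sqrt(sum(x^2) * sum(y^2)).
# The abstract's "special coefficients of congruence" are not spelled out,
# so this shows the standard Tucker form as a reference point.
import numpy as np

def congruence(x: np.ndarray, y: np.ndarray) -> float:
    """Tucker's congruence coefficient for loadings on matched variables."""
    return float(np.sum(x * y) / np.sqrt(np.sum(x**2) * np.sum(y**2)))

# Hypothetical g loadings of the same probe tests in two random batteries.
battery_a = np.array([0.71, 0.65, 0.58, 0.80])
battery_b = np.array([0.69, 0.62, 0.61, 0.75])
print(round(congruence(battery_a, battery_b), 3))  # ~0.999 -> near-identical g
```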
Journal of Intelligence, 2021
In a recent publication in the Journal of Intelligence, Dennis McFarland mischaracterized previous research using latent variable and psychometric network modeling to investigate the structure of intelligence. Misconceptions presented by McFarland are identified and discussed. We reiterate and clarify the goal of our previous research on network models, which is to improve compatibility between psychological theories and statistical models of intelligence. WAIS-IV data provided by McFarland were reanalyzed using latent variable and psychometric network modeling. The results are consistent with our previous study and show that a latent variable model and a network model both provide an adequate fit to the WAIS-IV. We therefore argue that model preference should be determined by theory compatibility. Theories of intelligence that posit a general mental ability (general intelligence) are compatible with latent variable models. More recent approaches, such as mutualism and process overlap theory, are compatible with network models.
Psychometric Approaches to Understanding and Measuring Intelligence
Handbook of Intelligence
Measuring Components of Intelligence: Mission Impossible?
Journal of Psychoeducational Assessment, 2013
The two studies conducted by Weiss, Keith, Zhu, and Chen in 2013 on the Wechsler Adult Intelligence Scale (WAIS-IV) and the Wechsler Intelligence Scale for Children (WISC-IV), respectively, provide strong evidence for the validity of a four-factor solution corresponding to the current hierarchical model of both scales. These analyses support the calculation of the four index scores and the Full Scale IQ. In this article, we discuss some limits of these validation efforts: (a) the incomplete measurement of the construct, (b) the problem of correspondence between factors and broad abilities, (c) the gap between a structural model and a functional model of intelligence, and (d) the difficulty of combining the measurement of global intelligence and broad abilities. Finally, the option of a five-factor model underlying the index scores is analyzed and its potential for future Wechsler scales is discussed. Weiss et al. (2013a, 2013b) provide strong evidence supporting the calculation of the Full Scale IQ and the four index scores in the Wechsler Intelligence Scale for Children (WISC-IV) and the Wechsler Adult Intelligence Scale (WAIS-IV). The correlations observed between the subtests are consistent with a hierarchical model of intelligence with g at the apex and four or five broad abilities, corresponding to Gc, Gf, Gv, Gsm, and Gs in the Cattell-Horn-Carroll (CHC) model, at the second level of the structure. However, the Weiss et al. (2013a, 2013b) results also show that interpretation based on factor index scores is complex and that the relations between the factor index scores and the latent variables in the models are not straightforward. The Weiss et al. (2013a, 2013b) results support the calculation of the composite scores of the WISC-IV and WAIS-IV, but they are insufficient for a comprehensive interpretation of the index scores. In this article we discuss this issue, starting from a historical review of the structure of the Wechsler scales, showing the improvement of the fit between the subtest intercorrelations and the underlying ability structure. Then, we discuss ...
Factor analysis (FA) and multidimensional scaling (MDS) have been used to study the structure of human abilities. In his comparison of FA and MDS, MacCallum (1974) suggested one reason why FA has been so dominant. MacCallum noted that FA is based on an explicit model linking the observed test scores to model parameters. This model-based link between observed test scores and parameters makes FA a far richer method for the study of profile-type data, where rows represent people and columns represent tests or test items. In the absence of an explicit model, MDS parameter estimates are far less rich in meaning. The purpose of this paper is to: (1) describe an explicit MDS model for profile data, called the PAMS (Profile Analysis via Multidimensional Scaling) model (Davison, 1994, 1996); and (2) illustrate the application of the model to the study of the structure of cognitive ability patterns using the Woodcock-Johnson Psychoeducational Battery-Revised (WJ-R; Woodcock and Johnson ...
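A minimal sketch of the PAMS idea may help: test dissimilarities are scaled into a low-dimensional space, and each person's score profile is then regressed on the resulting dimension coordinates, so the weights describe the shape of that person's profile. The Python version below uses scikit-learn's MDS with simulated scores; it illustrates the approach and is not the authors' implementation.

```python
# Sketch of the PAMS idea: (1) scale tests into a low-dimensional space via MDS,
# (2) regress each person's score profile onto the dimension coordinates, so the
# weights describe that person's profile shape. Synthetic data; illustrative only.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(7)
n_people, n_tests = 200, 6
scores = rng.normal(size=(n_people, n_tests))  # standardized test scores

# Dissimilarity between tests (1 - correlation), then two-dimensional MDS.
corr = np.corrcoef(scores, rowvar=False)
dissim = 1.0 - corr
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)   # tests x dimensions

# Person-level model: profile ~ level + w1*dim1 + w2*dim2 (least squares).
design = np.column_stack([np.ones(n_tests), coords])
weights, *_ = np.linalg.lstsq(design, scores.T, rcond=None)
print(weights.shape)  # (3, n_people): a level and two profile weights per person
```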
Speeded testing in the assessment of intelligence gives rise to a speed factor
Intelligence, 2018
This paper reports an investigation of whether intelligence data obtained by speeded testing have to be represented in confirmatory factor analysis (CFA) by an additional factor besides the ability factor, and whether that additional factor can be identified as a speed factor. The paper further examines whether the hypothesized speed factor influences the relationship between intelligence and working memory. Two independent datasets, each including data obtained by speeded intelligence testing together with measures of processing speed and working memory, were investigated by means of CFA. A hybrid bifactor model was employed to represent the hypothesized speed and ability factors of the intelligence data. Whereas the factor loadings representing ability were freely estimated, the factor loadings representing speed were constrained according to theory-based expectations. The results showed that a speed factor is necessary for achieving a good fit to data from speeded testing. The convergent validity of the speed factor was demonstrated by data on measures of processing speed. Furthermore, accounting for the latent speed factor reduced the correlation between intelligence and working memory. These results suggest that speeded testing influences the assessment of intelligence and may bias empirical findings regarding the relationships between intelligence and other constructs.
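The hybrid bifactor measurement model described here can be written compactly; the notation below is assumed for illustration rather than taken from the paper.

```latex
% Hybrid bifactor measurement model (notation assumed, not from the paper):
% test score X_j loads freely on the ability factor A and with a fixed,
% theory-based loading c_j on the speed factor S; A and S are orthogonal.
\[
  X_j = \lambda_j A + c_j S + \varepsilon_j,
  \qquad \operatorname{Cov}(A, S) = 0,
\]
% lambda_j is estimated; c_j is constrained (e.g., increasing with the degree
% of speededness of test j); epsilon_j is the unique factor of test j.
```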
Learning and Individual Differences, 2016
Research on the structure of psychometric intelligence has used hierarchical models like the higher-order and the bi-factor model and has studied the hierarchical relationships between factors within these models. In contrast, research on the structure of personality has not only used hierarchical models but has also studied hierarchies of factor solutions. We clarify the theoretical and conceptual differences between hierarchical models and the solutions-hierarchy approach used in the field of personality research, and suggest that the solutions-hierarchy approach can provide a novel perspective for intelligence research. We used the solutions-hierarchy approach to study four correlation matrices (N = 230 to 710; 38 to 63 tests) and a large dataset (N = 16,823; 44 tests). Results provided (a) insights into relationships between intelligence constructs across the hierarchy of factor solutions, and (b) evidence that intelligence has a 1-2-3-5 hierarchy of factor solutions, with a g factor at the top, gc and gf factors at the second level, a speed-reasoning-knowledge taxonomy at the third level, and possibly a speed-reasoning-fluency-knowledge-memory/perception taxonomy at the fifth level.
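As a rough illustration of the solutions-hierarchy approach, the sketch below extracts 1-, 2-, 3-, and 5-factor solutions from the same data and relates adjacent levels by correlating factor scores. It assumes the Python factor_analyzer package and uses simulated data, so it shows only the mechanics, not the paper's analyses.

```python
# Sketch of the solutions-hierarchy approach: fit 1-, 2-, 3-, and 5-factor
# solutions to the same battery and relate adjacent levels by correlating
# factor scores. Uses the factor_analyzer package; data are simulated.
import numpy as np
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(3)
n, p = 1000, 20
g = rng.normal(size=(n, 1))                       # a general factor
X = 0.6 * g + rng.normal(scale=0.8, size=(n, p))  # 20 g-loaded tests

scores_by_level = {}
for k in (1, 2, 3, 5):
    fa = FactorAnalyzer(n_factors=k, rotation="oblimin" if k > 1 else None)
    fa.fit(X)
    scores_by_level[k] = fa.transform(X)  # factor scores, people x k

# Cross-level map: correlate the level-1 g factor with the level-2 factors.
for j in range(2):
    r = np.corrcoef(scores_by_level[1][:, 0], scores_by_level[2][:, j])[0, 1]
    print(f"r(g, level-2 factor {j + 1}) = {r:.2f}")
```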
Latent trait models in the study of intelligence
Intelligence, 1980
This article examines the potential contribution of latent trait models to the study of intelligence. Nontechnical introductions to both unidimensional and multidimensional latent trait models are given, and possible research applications are considered. Latent trait models are shown to resolve several measurement problems in studies of intellectual change, including ability modification studies and life-span development studies. Furthermore, under certain conditions, latent trait models are found useful for construct validation research, since they can represent an individual differences model of cognitive processing on ability test items. Multidimensional latent trait models are shown to be especially useful as processing models, because they can be used to test alternative multiple-component theories of test item processing. Furthermore, multidimensional models can be used to decompose test item difficulty into component contributions and to estimate individual differences in processing abilities.
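The decomposition of item difficulty into component contributions mentioned here can be illustrated in the style of a linear logistic test model; the notation below is assumed for illustration and is not taken from the article.

```latex
% Decomposing item difficulty into component contributions, in the style of a
% linear logistic test model (notation assumed, not taken from the article):
\[
  P(X_{ij} = 1 \mid \theta_i)
    = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)},
  \qquad
  b_j = \sum_{m} q_{jm}\,\eta_m ,
\]
% where q_{jm} counts how often cognitive operation m is required by item j and
% eta_m is the difficulty contribution of that operation; multidimensional
% extensions replace theta_i with a vector of component abilities theta_{ik}.
```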