Assessing and Improving the Factorial Structures of the Computer Self-Efficacy Scale
Related papers
2012
The Computer Self-Efficacy Scale (CSE) developed by C. A. Murphy, D. Coover, and S. V. Owen (1989) is an instrument purported to assess computer-related competencies. Previous research into the factor structure of the CSE has yielded conflicting results. In this study, the scale was used to collect data from 216 graduate education students. A principal factor analysis with orthogonal rotation generated a four-factor solution with high alpha reliabilities. Additional analysis supported the convergent and discriminant validity of the scale with measures of computer confidence and computer anxiety. Results suggest that the scale also differentiates between users with high and low amounts of computer use experience. Results support previous research that indicates that the amount of experience people have with computers has an effect on their perceptions of self-efficacy for computer-related tasks. (Contains 3 tables and 31 references.)
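The "alpha reliabilities" reported above refer to Cronbach's alpha, the standard internal-consistency estimate for a scale's items. A minimal sketch of how it is computed, using synthetic data rather than the study's actual responses:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Synthetic example: three items driven by one latent trait plus noise.
rng = np.random.default_rng(42)
trait = rng.normal(size=300)
scores = np.column_stack([trait + 0.5 * rng.normal(size=300) for _ in range(3)])
print(round(cronbach_alpha(scores), 2))
```

Because the three synthetic items share a common latent trait, the resulting alpha is high (well above the conventional 0.70 threshold); uncorrelated items would drive it toward zero.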
Cyberpsychology, Behavior, and Social Networking, 2014
Computer self-efficacy is an often studied construct that has been shown to be related to an array of important individual outcomes. Unfortunately, existing measures of computer self-efficacy suffer from several deficiencies, including criterion contamination, outdated wording, and/or inadequate psychometric properties. For this reason, the current article presents the creation of a new computer self-efficacy measure. In Study 1, an over-representative item list is created and subsequently reduced through exploratory factor analysis to create an initial measure, and the discriminant validity of this initial measure is tested. In Study 2, the unidimensional factor structure of the initial measure is supported through confirmatory factor analysis and further reduced into a final, 12-item measure. In Study 3, the convergent and criterion validity of the 12-item measure is tested. Overall, this three-study process demonstrates that the new computer self-efficacy measure has strong psychometric properties and internal reliability, and provides excellent evidence for several aspects of validity. It is hoped that the 12-item computer self-efficacy measure will be used in future research on computer self-efficacy, directions for which are discussed in the current article.
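Item reduction through exploratory factor analysis, as in Study 1, typically begins by deciding how many factors to retain. A minimal numpy-only sketch of one common rule, the Kaiser criterion (retain factors whose correlation-matrix eigenvalues exceed 1), applied to synthetic data; a real analysis would use dedicated EFA software with rotation and loading-based item screening:

```python
import numpy as np

def kaiser_factor_count(data: np.ndarray) -> int:
    """Number of factors to retain: count the eigenvalues of the
    item correlation matrix that exceed 1 (Kaiser criterion)."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)
    return int((eigvals > 1.0).sum())

# Synthetic item pool: two latent factors, three items loading on each.
rng = np.random.default_rng(0)
n = 500
f1, f2 = rng.normal(size=n), rng.normal(size=n)
items = np.column_stack(
    [f1 + 0.3 * rng.normal(size=n) for _ in range(3)]
    + [f2 + 0.3 * rng.normal(size=n) for _ in range(3)]
)
print(kaiser_factor_count(items))  # prints 2: both latent factors recovered
```

With three strongly correlated items per factor, each block contributes one eigenvalue near 2.8 and two near 0.08, so the criterion cleanly recovers the two-factor structure.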
Confirmatory analysis of computer self-efficacy
2003
Journal of the Association for Information Systems, 2007
This paper reports an empirical study intended to provide detailed comparisons among the available measures of computer self-efficacy (CSE). Our purpose is to ascertain their relative abilities to isolate the CSE construct from other related constructs and to capture variance in performance attributed to changes in CSE level. In addition, we investigate the importance of ensuring the measure being used is sufficiently aligned with the task domain of interest. Finally, we explore the stability of CSE measures as they relate to the current state of evolution within the computing domain. Marakas, Yi, and Johnson (1998) proposed a framework for the construction of instruments intended to measure the CSE construct that we have adopted as a basis for this series of investigations. To that end, we advance and test a set of hypotheses derived from the Marakas et al. (1998) framework. Results of the analyses support the need for adherence to the tenets of the proposed framework and provide evidence that CSE measures suffer from degradation of their explanatory power over time. Further, this study highlights the importance of appropriately validating measures of CSE using approaches intended for a formative rather than a reflective construct. These results suggest that the common practices of instrument validation and reuse of long-standing instruments to measure CSE may not be the most effective approach to the study of the construct. Implications for future research are discussed.