Issues in computer-based test interpretive systems
Related papers
The Role of Computer-Based Test Interpretation (CBTI) in Occupational Assessment
International Journal of Selection and Assessment, 1995
The paper explores a range of issues relating to the design and use of computer-based test interpretation (CBTI) systems. It describes a number of dimensions along which CBTI systems vary and considers both the positive and negative implications of these for test users, for their clients and for test candidates. A distinction is drawn between systems that generate finished reports and those that generate questions or guidance for the user. The question of how the validity of narrative reports can be assessed is considered. A study on the discriminative validity of reports generated by the PREVUE ICES CBTI system is reported which shows clear differences between judgements made about true and bogus reports. It is concluded that there are two main issues to consider when evaluating a CBTI system: first, the validity of the information generated by it and, second, the ways in which that information is likely to be used. Over the past decade, three main issues have emerged in the computer-based assessment (CBA) literature as key concerns of practitioners: (1) the equivalence of the same test presented in paper-and-pencil format and on computer; (2) the development and implementation of adaptive testing; and (3) the use of computers to generate interpretive reports, especially of personality inventories. The use of CBTI has grown enormously since Fowler (1972) commented on the lack of substantial progress made in the field.
Developing standards for computerized psychological testing
Computers in Human Behavior, 1985
The growth of computerized psychological testing (CPT) requires that we analyze its ethical and legal ramifications. The response of the legal community and the profession of psychology is reviewed. There are potential problems with the administration of tests by computer, especially the interpretation of computer-administered tests with norms and validity data from paper-and-pencil administrations, and the impact of computerization on certain test-taker groups and item types. Criteria for assessing the adequacy of classification systems used to assign people to interpretive statements, and the validity of computer-generated reports, are analyzed. CPT users should review reports and developers should disclose the rationale underlying interpretations. Not only the scientific merit, but also the fairness and efficiency of CPT will determine whether its potential is fully realized.
Englisia Journal, 2015
Recent years have seen the growing popularity of Computer-Based Tests (CBTs) across various disciplines and purposes, although Paper-and-Pencil Based Tests (P&Ps) remain in use. However, many question whether CBTs outperform the effectiveness of P&Ps, or whether CBTs can serve as a measuring tool as valid as the P&Ps. This paper compares CBTs and P&Ps, along with examinee perspectives on each, in order to determine whether doubts should arise about the emergence of CBTs over the classic P&Ps. Findings show that CBTs are advantageous in that they are both efficient (reducing testing time) and effective (maintaining test reliability) relative to the P&P versions. Nevertheless, CBT variables (e.g., study design, computer algorithm) still need to be well designed for scores to be comparable to those on P&P tests, since score equivalence is one of the forms of validity evidence a CBT requires.
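The score-equivalence check mentioned above is often approached by comparing paired scores from the two administration modes. A minimal sketch of such a comparison, using invented data and a simple mean-difference and correlation check (not a method taken from the paper):

```python
# Minimal sketch of a mode-equivalence check between computer-based (CBT)
# and paper-and-pencil (P&P) scores from the same examinees.
# All scores below are invented for illustration.
from statistics import mean, stdev

cbt = [72, 65, 80, 58, 90, 77, 69, 84]
pp  = [70, 66, 78, 60, 88, 75, 71, 82]

# Mean difference between modes for the same examinees
diffs = [c - p for c, p in zip(cbt, pp)]
mean_diff = mean(diffs)

def pearson(xs, ys):
    """Pearson correlation between two paired score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

r = pearson(cbt, pp)
print(f"Mean CBT - P&P difference: {mean_diff:.2f}")
print(f"Cross-mode correlation:    {r:.2f}")
# A near-zero mean difference and a high correlation are consistent
# with (but do not by themselves prove) score equivalence across modes.
```

In practice, equivalence studies also control for order effects and examinee characteristics; the sketch only shows the basic paired comparison.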
Computer-based Testing and Validity: a look back into the future
Assessment in Education: Principles, Policy & Practice, 2003
Test developers and organisations that rely on test scores to make important decisions about students and schools are aggressively embracing computer-based testing. As one example, during the past two years, 16 US states have begun to develop computer-based tests that will be administered to students across their state. Without question, computer-based testing promises to improve the efficiency of testing and reduce the costs associated with printing and delivering paper-based tests. Computer-based testing may also assist in providing accommodations to students with special needs. However, prior computer experience and the degree to which items from different content areas can be presented and performed on computer vary widely. In turn, these factors will have different impacts on the validity of test scores. In this paper, we examine the potential benefits and costs associated with moving current paper-based tests to computer, with a specific eye on how validity might be impacted.
Interpretive Error in Psychological Testing
1999
When evaluating the utility of a psychological test for clinical decision making, both the psychometric properties of the test (i.e., the reliability and validity of the instrument) and the ambiguity of the language by which test results are interpreted or communicated need to be considered. Although each has been studied independently, to date the two have not been related. This paper discusses each of these sources of "interpretive error" with the goal of developing a model that could systematically relate these two sources of error to the process and outcome of test interpretation. In an example using this new model, together with optimistic assumptions favorable to tests, it was found that the effect of ambiguity was to more than double the probability that those who test positive will be incorrectly classified. It is suggested that the practice of using non-numeric statements of likelihood be stopped. Instead, results of tests should be presented in tabular form with one row for each diagnostic category. Examples of these tables are provided. It is further noted that there is a significant professional need to move toward empirically supported expertise in test interpretation.
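The compounding of psychometric error and interpretive ambiguity described in this abstract can be illustrated with a small Bayesian sketch. All numbers below are hypothetical and chosen for illustration; they are not the figures from the paper:

```python
# Hypothetical illustration: how ambiguity in interpretive language can add
# to the misclassification already present among positive test results.
# Base rate, sensitivity, specificity, and reading accuracy are all assumed.

def positive_predictive_value(base_rate, sensitivity, specificity):
    """P(condition | positive result) via Bayes' theorem."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

base_rate, sensitivity, specificity = 0.10, 0.80, 0.90

ppv = positive_predictive_value(base_rate, sensitivity, specificity)
# Misclassified positives from the test's psychometrics alone
error_no_ambiguity = 1 - ppv

# Suppose a verbal label such as "likely" is read as intended only 85% of
# the time; a correct classification then also requires a correct reading.
p_correct_reading = 0.85
error_with_ambiguity = 1 - ppv * p_correct_reading

print(f"PPV:                                   {ppv:.2f}")
print(f"Error among positives, test alone:     {error_no_ambiguity:.2f}")
print(f"Error among positives, with ambiguity: {error_with_ambiguity:.2f}")
```

Under these assumed numbers the ambiguity term strictly increases the error among positives, which is the qualitative point the paper's model makes; the paper's own example, with its specific assumptions, found the probability more than doubled.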