Test Use and Assessment Practices of School Psychology Training Programs: Findings from a 2020 Survey of U.S. Faculty
Related papers
American Psychological Association eBooks, 2013
Previous chapters in Part I of this volume have provided comprehensive coverage of issues critical to test quality, including validity, reliability, sensitivity to change, and the consensus standards associated with psychological and educational testing. In-depth analysis of test quality represents a daunting area of scholarship that many test users may understand only partially. The complexity of modern test construction procedures creates ethical challenges for users who are ultimately responsible for the appropriate use and interpretation of tests they administer. Concerns over the ethical use of tests intensify as the high stakes associated with the results grow. Test manuals should provide detailed information regarding test score validity and applicability for specific purposes. However, translating the technical information into usable information may be difficult for test developers, who may be more expert at psychometric research than at making their findings accessible to the general reader. Additionally, the information presented in test manuals can be accurate and understandable but insufficient for appropriate evaluation by users. Fortunately, several sources of review and technical critique are available for most commercially available tests. These critiques are independent of the authors or publishers of the tests. Although test authors and publishers are required by current standards to make vital psychometric information available, experts may be necessary as translators and evaluators of such information. The most well-established source of test reviews is the Buros Institute of Mental Measurements. Established more
Examining School Psychologists’ Attitudes Toward Standardized Assessment Tools
2022
Despite a call for evidence-based practice in school psychology, limited research exists on the topic of evidence-based assessment. To begin to address this gap, a modified version of Jensen-Doss and Hawley's (2010) Attitudes Toward Standardized Assessment (ASA) scale was administered to 371 U.S. school psychologists. Examination of the modified ASA's factor structure suggested that a bifactor model with a single overall domain and three sub-domains was the most parsimonious. Indices of dimensionality indicated that the overall score may be the best indicator of school psychologists' perceptions of standardized assessment. Additionally, school psychologists reported favorable attitudes toward standardized assessment relative to clinical judgment alone. Limitations and implications for future research are discussed.
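The dimensionality indices mentioned in this abstract (e.g., explained common variance and omega hierarchical) can be computed directly from a bifactor loading matrix. The sketch below illustrates that arithmetic with a purely hypothetical loading matrix; the item count, loadings, and sub-domain structure are assumptions for illustration, not values from the study.

```python
import numpy as np

# Hypothetical standardized bifactor loadings for 9 items:
# one general factor plus three group factors (3 items each).
# These numbers are illustrative only, not estimates from the ASA study.
general = np.array([0.62, 0.58, 0.65, 0.55, 0.60, 0.57, 0.63, 0.59, 0.61])
group = np.zeros((9, 3))
group[0:3, 0] = [0.35, 0.30, 0.28]   # sub-domain 1
group[3:6, 1] = [0.40, 0.33, 0.36]   # sub-domain 2
group[6:9, 2] = [0.25, 0.31, 0.29]   # sub-domain 3

# Explained common variance (ECV): share of common variance
# attributable to the general factor.
ecv = (general ** 2).sum() / ((general ** 2).sum() + (group ** 2).sum())

# McDonald's omega hierarchical: proportion of total score variance
# due to the general factor, assuming standardized items.
uniqueness = 1 - (general ** 2 + (group ** 2).sum(axis=1))
total_var = general.sum() ** 2 + (group.sum(axis=0) ** 2).sum() + uniqueness.sum()
omega_h = general.sum() ** 2 / total_var

print(f"ECV = {ecv:.2f}, omega hierarchical = {omega_h:.2f}")
```

High values of both indices would support treating the overall score as the primary indicator, which is consistent with the conclusion reported in the abstract.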
The Undergraduate Study in Psychology (USP) is a collaborative effort of the American Psychological Association (APA) Board of Educational Affairs, the Education Directorate, and the Center for Workforce Studies to collect information on undergraduate psychology programs, faculty, students, and curriculum. The overall goal of USP is to paint a portrait of undergraduate education in psychology over time by surveying various aspects of undergraduate education biennially. APA's USP gathered data on the 2014 curricular offerings and assessment practices of associate (n = 110) and baccalaureate (n = 329) psychology programs across the nation. The USP included questions concerning learning goals, program reviews, and two clusters of questions from the National Institute on Learning Outcomes Assessment concerning assessment drivers and practices. Eighty-eight percent of associate programs and 94% of baccalaureate programs reported formal learning goals, and the vast majority of those programs incorporated into their learning goals portions of the APA Guidelines for the Undergraduate Psychology Major (2007). Most undergraduate psychology programs routinely performed program reviews; however, 40% of associate programs and 14% of baccalaureate programs did not do so. The most frequently used assessment methods were rubrics to evaluate student work, locally developed exams, locally developed student surveys, and assessment of final projects. Despite considerable heterogeneity in the results, institutional accreditation requirements, faculty/staff interest, program commitment, institutional commitment, and internal program review requirements were rated as the strongest drivers of assessment among psychology programs, a pattern generally consistent with drivers in other academic disciplines.
Psychology in the Schools, 1999
A national survey investigating the use of dynamic assessment and other nontraditional assessment techniques among school psychologists (N = 226) was conducted. Results of the survey indicated that 42% of respondents were at least "somewhat familiar" with dynamic assessment. However, of those familiar with dynamic assessment, only 39% reported using the techniques once a year or more. The most frequently endorsed reasons for not using dynamic assessment (if familiar with it) were lack of knowledge and time constraints. Learning disabled students were the population most often evaluated using dynamic assessment, and dynamic assessment was most often used to determine processing strengths and weaknesses. The majority of those familiar with dynamic assessment became so through independent reading. Only 10% reported learning about dynamic assessment through course work. In response to questions regarding assessment techniques most often used with minority students, the majority of respondents reported using traditional assessment tools including the WISC-III (Wechsler Intelligence Scale for Children-Third Edition), BINET IV (Stanford-Binet Intelligence Scale-Fourth Edition), or KABC (Kaufman Assessment Battery for Children). Overall, the results of the survey suggest that although the population is becoming increasingly diverse and changes in PL94-142 (Public Law 94-142) demand functional assessments, school psychologists continue to rely heavily upon traditional assessment techniques to address referral concerns of all students. This may in large part be due to weaknesses in graduate training programs.
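A minor arithmetic point: the familiarity and usage figures above are conditional, so the share of all respondents who both know of and use dynamic assessment is smaller than either number alone suggests. A minimal sketch using only the percentages reported in the abstract (the rounding to whole respondents is an assumption):

```python
n_respondents = 226          # survey sample size reported in the abstract
p_familiar = 0.42            # at least "somewhat familiar" with dynamic assessment
p_use_given_familiar = 0.39  # use it once a year or more, among those familiar

# Joint proportion: familiar AND using dynamic assessment at least yearly.
p_familiar_and_using = p_familiar * p_use_given_familiar
approx_users = round(n_respondents * p_familiar_and_using)

print(f"~{p_familiar_and_using:.0%} of all respondents "
      f"(~{approx_users} of {n_respondents}) reported using dynamic assessment at least once a year")
```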
Testing in the Schools: Scientific Challenges to Practice and Policy Implications
1985
This paper reviews eight major problem areas concerning testing practices of interest to professionals practicing psychology in the schools. These areas, identified by the Task Force on Psychology in the Schools of the American Psychological Association, include: (1) test use; (2) diagnostic versus placement tests; (3) informed parental consent for special education placement; (4) inappropriate placement of minority children in special education; (5) decision-making data regarding individual differences for educational placement and procedures; (6) population test validity issues; (7) quality and appropriate use of tests; and (8) the development of assessment procedures to determine children's educational needs. Attention is also given to seven action areas identified by the Task Force that would address the problems. The remaining portions of this paper discuss challenges faced by professional psychologists in converting research knowledge into improved school testing practices, specifically, improving the assessment and analysis of students' learning abilities and language skills through dynamic assessment of cognitive skills and assessment of communicative competence. Attention is called to improving and updating the scientific knowledge base of professional psychologists, improving schools' capabilities to monitor and manage implementation of innovative testing practices, and fostering collaborative relationships between school psychologists and academic researchers on testing innovations.
Professional Psychology: Research and Practice, 2016
Training in psychological assessment has been studied periodically since 1960. The goal of this project was to provide an update of training practices in clinical psychology programs and to compare practices across Clinical-Science, Scientist-Practitioner, and Practitioner-Scholar training models. All APA-accredited programs in clinical psychology were invited to respond to an anonymous online survey about program characteristics and assessment training; a 33% response rate was achieved. Assessment training over the past decade was generally stable or increasing. Areas of growth included training in treatment effectiveness and neuropsychology. Across training models, there was remarkable similarity in assessment instruction except for coverage of projective instruments, number of required assessment courses, and training in geriatric assessment. The most popular instruments taught in clinical psychology programs were the
Universal Journal of Educational Research, 2020
This paper compares two “Testing Application” chapters of the 2014 Standards for Educational and Psychological Testing (hereinafter, Standards), jointly published by the American Educational Research Association (AERA), the American Psychological Association (APA), and the National Council on Measurement in Education (NCME). The two chapters are Chapters 10 and 12 of the Standards: “Psychological Testing and Assessment (PTA)” and “Educational Testing and Assessment (ETA)”. It specifically aims to raise some overarching issues related to these chapters. An in-depth comparative analysis was conducted based on specific similarities of and differences between these two chapters. Both PTA and ETA cover the background of and standards regarding test administration, score interpretation, and the use of scores. However, PTA focuses more on test selection and test security, whereas ETA gives more coverage to test design and development. The overarching issues, questions, and concerns related to both chapters are discussed along with the results of the analysis. The paper concludes by describing how the two chapters in the current 2014 Standards differ from their counterparts in the previous 1999 version, along with some plausible explanations for those discrepancies. The summary and analysis may be useful to test users and graduate students from the psychology and education fields whose interests revolve around testing and assessment practices.
Time Demands of Psychological Assessment: Implications for School Psychology
This study surveys practicing school psychologists (N=86) in both private and public sectors for their estimates of the time required to administer, score, and interpret the tests they regularly administer in their schools. It provides school districts and school psychologists with time estimates, which can be used to quantify the actual time spent in the assessment process. Results show that the various Wechsler scales were the first choice of participants. School psychologists expressed a moderate preference for tests that are economical in terms of time. It seems that tradition also plays a significant role in assessment. Some instruments have a solid theoretical and research foundation that enhances their acceptance and use. School psychologists disagree about the time demands required by various instruments. This is especially obvious in the time estimates of test interpretation. The use of technology in assessment raises ethical and standards-of-practice questions. While technology may save time, it may also diminish professional integrity and standing in the psychological community.
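The abstract's point that such estimates let districts quantify assessment workload comes down to simple arithmetic: multiply per-instrument administration, scoring, and interpretation times by how often each instrument is given. The sketch below uses entirely hypothetical minutes-per-test figures and a hypothetical yearly caseload, not the survey's actual estimates.

```python
# Hypothetical per-instrument time estimates in minutes
# (administer, score, interpret); not the survey's reported figures.
time_estimates = {
    "WISC-V":    (75, 30, 45),
    "WJ-IV ACH": (60, 25, 35),
    "BASC-3":    (20, 20, 30),
}

# Hypothetical number of administrations of each instrument per year.
yearly_caseload = {"WISC-V": 40, "WJ-IV ACH": 35, "BASC-3": 50}

# Total workload = sum over instruments of (total minutes per case * cases per year).
total_minutes = sum(
    sum(time_estimates[test]) * n for test, n in yearly_caseload.items()
)
print(f"Estimated assessment workload: {total_minutes / 60:.0f} hours per year")
```

A district could swap in locally observed times and caseloads to see how instrument choices and referral volume translate into psychologist hours.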