High school teachers' attitudes toward a statewide high stakes student performance assessment
Related papers
Throw 'em Out or Make 'em Better? State and District High-Stakes Writing Assessments
Focus on Exceptional Children, 2011
The writing of school-aged children is assessed for many reasons (Graham, Harris, & Hebert, 2011). Teachers assess writing to monitor students' growth as writers, inform instruction, provide feedback, and evaluate the effectiveness of their teaching. Students assess their own writing to appraise growth, identify strengths, and determine areas in need of further development. Peers assess other students' writing to provide them with feedback on what works in a paper and what still needs work. States and school districts assess writing to determine how many students meet local or state performance standards, identify youngsters who need extra help, and evaluate the effectiveness of individual teachers and schools. The federal government administers the National Assessment of Educational Progress (NAEP) writing test to measure American students' collective writing success and to evaluate students' writing performance across time. Given the heavy emphasis now placed on assessment and evaluation as tools for improving and reforming writing and other aspects of education in the United States (Gewertz & Robelen, 2010; National Commission on Writing, 2003), it is important to ask whether the various forms of assessment, ranging from classroom-based writing assessments to state and district evaluations (the focus of this article), do, in fact, make a difference in improving how well students write. For students with disabilities, such questions are especially important, as so many of these students experience difficulty learning to write. On the 2007 NAEP (Salahu-Din, Persky, & Miller, 2008), just 6% of eighth-grade and 5% of twelfth-grade students with disabilities performed at or above the "proficient" level in writing (defined as solid academic performance). Students scoring below this level are classified as obtaining only partial mastery of the literacy skills needed at their respective grade. Thus, this assessment indicates that roughly 19 of every 20 students with disabilities do not acquire the writing skills needed for success in school. Although classroom-based assessments are not the focus of this article, evidence shows that such assessments can make a difference in improving how well students write. A recent meta-analysis of experimental and quasi-experimental studies conducted mostly with typically developing students (Graham, Kiuhara, McKeown, & Harris, 2011) provided empirical evidence that writing assessments that are part of typical classroom practices improve the overall quality of students' writing. When students receive feedback about their writing and learning progress, writing improves. When students evaluate their own writing, writing improves as well.
A statewide writing assessment model: Student proficiency and future implications
Assessing Writing, 2008
This paper examines statewide district writing achievement gain data from the Nebraska Statewide Writing Assessment system and the implications for statewide writing assessment models. The writing assessment program is used to comply with the United States No Child Left Behind (NCLB) law, a federal effort to influence school district assessment and accountability. The research literature is mixed: some view statewide writing assessment as narrowing the writing curriculum and "standardizing" writing, while others point out that it facilitates sharing among teachers to improve writing and that teachers are empowered by their participation in the process. Nebraska's Statewide Writing Assessment, based on the six-trait model, has strong support from teachers regarding their perception of the assessment as supporting teaching practices and student success. Examination of achievement gain data revealed gains in 4th, 8th, and 11th grades in the average number of students rated proficient from one administration of the assessment to the next. Implications for educational practice, recommendations for future research, and relevant related issues (NCLB concerns and measurement issues) are discussed.
The Impact of State Mandated, Large-Scale Writing Assessment Policies in Pennsylvania
2001
This study examines the impact of the Pennsylvania writing assessment policy on writing instruction and teaching methodology. Drawing on self-reported teacher perceptions and classroom practices, the study also attempts to identify the factors that influence teacher beliefs and their use of materials and processes related to the state writing assessment. The field-tested survey, which contained 60 items, was completed by 168 teachers from 20 schools, 56.9% of whom taught elementary school. The initial study examines the effects of 16 factors on classroom practices and teaching methodology, but this analysis focuses on three key factors: teaching responsibility, holistic training, and Pennsylvania System of School Assessment scores at or above the band of similar schools. Teacher responses to the survey indicate that they are providing students with frequent writing experiences and opportunities. The state assessment seems to have motivated teachers to present these experiences. Teachers also appear to recognize the value of holistic scoring and the characteristics of effective writing as presented in the Pennsylvania Holistic Scoring Guide. However, it was evident that even though many teachers agreed with the beliefs and values of holistic scoring and the characteristics of effective writing, they were reluctant to use the state rubric, descriptors, and writing samples as the basis for classroom instruction. This finding may suggest that a weakness exists in the supporting materials provided by the state, that teachers are developing their own rubrics and samples, or that teachers are clinging to traditional methods rather than adopting a progressive approach to writing instruction.
2002
Using a random sample of 1,000 Colorado teachers, this study surveyed the effects of standards, the Colorado Student Assessment Program (CSAP), and school report cards on instruction and test-related practices. Findings show that standards were perceived to have greater impact on improving instruction than did testing. Teachers said they aligned their curriculum, instruction, and lessons to the Colorado standards by adding important content. Attention to the state standards improved the quality of writing instruction and focused instruction in reading, probability, geometry, and mathematics problem-solving explanations. The reported effects of CSAP testing were more mixed. Attention to CSAP improved writing instruction but shifted instruction away from social studies and science, increased the time spent on test format practice, and lowered faculty morale. Nine appendixes provide additional information, including survey responses.
A Survey of Teachers’ Perspectives on High-Stakes Testing
2003
The development of state assessments and a school accountability system in Colorado closely parallels trends in standards-based reform efforts nationwide. According to the rhetoric of standards-based reform, setting high standards is expected to improve academic achievement by creating higher expectations and thereby focusing greater effort and resources on student learning. However, critics of standards raise a variety of objections, including the fear that higher standards without additional resources may worsen educational inequities or decrease teacher professionalism. The central role of assessments in standards-based reform has proven to be equally controversial. On the one hand, it is argued that the use of more challenging, open-ended performance assessments, instead of multiple-choice-only, basic-skills tests, will help to better align teaching and learning efforts with ambitious curriculum standards. On the other hand, an emphasis on assessments (even good ones) might narrow the curriculum and encourage teachers to teach to the test. Regardless of one's position in the debate about standards and assessments, it is clear that teachers and teachers' classroom practices are expected to be the key intervening variable that will determine the effects of reforms on student learning. The purpose of the present study was to survey a representative sample of teachers in Colorado to examine the effects of standards, the Colorado Student Assessment Program (CSAP), and school report cards on instruction and test-related practices.
Voices from the Frontlines: Teachers' Perceptions of High-Stakes Testing
Education Policy Analysis Archives, 2004
The purpose of this study was to investigate whether teachers perceived Florida's high-stakes testing program to be taking public schools in the right direction. More importantly, we sought to understand why teachers perceived the tests to be taking schools in the right or wrong direction. Based on the survey results of 708 teachers, we categorized their concerns about and praise for high-stakes testing into ten themes. Most of the teachers believed that the testing program was not taking schools in the right direction. They commented that the test was used improperly and that the one-time test scores were not an accurate assessment of students' learning and development. In addition, they cited negative effects on the curriculum, teaching and learning, and student and teacher motivation. The positive effects cited were much fewer in number and included the fact that the testing held students, educators, and parents accountable for their actions. Interestingly, teachers were not opposed to accountability but rather to the manner in which it was currently implemented. Only by understanding these positive and negative effects of the testing program can policymakers hope to improve upon it. To this end, we discuss several implications of these findings, including limiting the use of test scores, changing the school grading criteria, using alternative assessments, modifying the curriculum, and taking steps to reduce teaching to the test.
1999
This study examines the underlying structure of the teacher language arts and science questionnaires from a project studying the impact of the Maryland School Performance Assessment Program (MSPAP). The dimensions of the questionnaires concerning practices and attitudes related to the MSPAP were studied, as well as the extent to which on-grade and off-grade teachers differed on these dimensions. The extent to which principals and students differed from teachers with respect to a subset of the dimensions was also examined. The final samples for language arts included the teachers, principals, and some students from 59 elementary and 31 middle schools. For the science questionnaire, the final sample contained 103 elementary and 58 middle schools. Overall, evidence suggests that the MSPAP is having an impact on classroom instruction to some extent. The majority of the language arts and science teachers indicated that they had made changes in their classroom activities to reflect the Maryland learning outcomes and the MSPAP. According to the teachers, the impact has been similar across the on- and off-grade levels. When differences occurred, they were primarily between elementary and middle school teachers, with elementary school teachers tending to respond more favorably than middle school teachers. Most teachers indicated that the MSPAP is a useful tool for improving instruction, but the majority also indicated that they oppose using the MSPAP for identifying schools for rewards or recognition.
Teachers’ Beliefs about the Effects of High Stakes Testing
Journal of Education and Human Development, 2014
Since the enactment of the No Child Left Behind Act of 2001, high-stakes testing has continued to be one of the major driving forces behind educational reform. In this study, Georgia teachers' beliefs about the effects of high-stakes testing were examined. A random sample of teachers from 100 of Georgia's elementary schools, middle schools, and high schools responded to a 49-item survey measured on a five-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). Items were grouped into six domains: curriculum, teaching, work satisfaction, stress, accountability, and students. Teachers' responses did not differ by gender, educational level, or school level. African American teachers responded more positively than White teachers on the survey. Teachers' positive and negative responses were discussed, and recommendations were made for teachers and school leaders.