The SAGE Encyclopedia of Educational Research, Measurement, and Evaluation: Table of Specifications
Related papers
Nucleation and Atmospheric Aerosols, 2004
The Lunar Phases Concept Inventory (LPCI) is a twenty-item multiple-choice inventory developed to aid instructors in assessing the mental models their students utilize when answering questions concerning phases of the moon. Based upon an in-depth qualitative investigation of students' understanding of lunar phases, the LPCI was designed to take advantage of the innovative model analysis theory to probe the different dimensions of students' mental models of lunar phases. As part of a national field test, pre-instructional LPCI data were collected for over 750 students from multiple post-secondary institutions across the United States and Canada. Application of model analysis theory to this data set allowed researchers to probe the different mental models of lunar phases that students across the country utilize prior to instruction. The analysis yielded strikingly similar results across the different institutions, suggesting a potential common underlying cognitive framework.
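The abstract above does not include analysis code, but as a rough sketch of the core computation in model analysis theory, the fragment below (with made-up model-use counts and hypothetical model labels) builds a class model density matrix from per-student model-use fractions and reads off its dominant eigenvector:

```python
import numpy as np

def class_density_matrix(model_counts):
    """Class model density matrix as used in model analysis theory.

    model_counts: (n_students, n_models) array; entry [k, j] is the number
    of inventory items on which student k answered with model j.
    """
    counts = np.asarray(model_counts, dtype=float)
    n_students = counts.shape[0]
    # Each student's model state vector: square roots of model-use fractions.
    u = np.sqrt(counts / counts.sum(axis=1, keepdims=True))
    # Class density matrix: average of the outer products u_k u_k^T.
    return sum(np.outer(u_k, u_k) for u_k in u) / n_students

# Toy counts for 3 hypothetical models (e.g., scientific, eclipse-based, other).
counts = np.array([
    [6, 2, 0],   # mostly the scientific model
    [1, 6, 1],   # mostly the eclipse misconception
    [3, 3, 2],   # mixed model state
])
D = class_density_matrix(counts)
eigvals, eigvecs = np.linalg.eigh(D)
# The largest eigenvalue and its eigenvector describe the dominant class model state.
print(eigvals[-1], eigvecs[:, -1])
```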
Classroom Test Construction: The Power of a Table of Specifications
Classroom tests provide teachers with essential information used to make decisions about instruction and student grades. A table of specifications (TOS) can be used to help teachers frame the decision-making process of test construction and improve the validity of teachers' evaluations based on tests constructed for classroom use. In this article, we explain the purpose of a TOS and how to use it to help construct classroom tests.
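As an illustration of what a table of specifications can look like in practice, the sketch below builds a simple test blueprint; the topics, time weights, cognitive levels, and item total are hypothetical and are not drawn from the article:

```python
# Minimal sketch of building a test blueprint (table of specifications).
# Topics, instructional-time weights, and cognitive-level splits are hypothetical.

topics = {            # topic: periods of instruction spent on it
    "Moon phases": 4,
    "Seasons": 3,
    "Day/night cycle": 3,
}
level_split = {"Remember": 0.4, "Apply": 0.4, "Analyze": 0.2}
total_items = 20

total_time = sum(topics.values())
blueprint = {}
for topic, periods in topics.items():
    # Items per topic are proportional to instructional emphasis.
    topic_items = round(total_items * periods / total_time)
    blueprint[topic] = {
        level: round(topic_items * share)
        for level, share in level_split.items()
    }

# Rounding can leave the grand total slightly off target; adjust cells by hand.
for topic, cells in blueprint.items():
    print(f"{topic:16s}", cells)
```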
Analysis of assessments on secondary students' development and interpretation of models
2021
As districts make the shift to three-dimensional learning, the development of a coherent set of high-quality task-based assessments has been a challenge. For this research I collected and analyzed twelve of my district's assessments of the scientific skill of modeling from grades 7 through 12. The analysis involved two tools developed by NGSS and Achieve.org, the Task Screener and the Framework to Evaluate Cognitive Complexity in Science Assessments, to determine the extent to which the assessments ask students to perform tasks that are driven by phenomena and use the three dimensions in service of sense-making. The findings support what researchers have said about the shift to three-dimensional task-based assessments: choosing appropriate, engaging phenomena is key to developing high-quality, rigorous assessments. While most of my district's modeling assessments were found to be three-dimensional, they are not rigorous because the phenomena guiding the tasks are too gene...
European Journal of Education Studies, 2019
The aim of this study is to determine the effect of formative assessment-based learning on students' conceptual understanding of basic astronomical concepts. The study was conducted in a public primary school with 24 fifth graders. The study group received a three-phased teaching process based on formative assessment over four lesson periods. Students' conceptual understanding levels were determined using four formative assessment probes before and after the process. Findings were presented as percentages and frequencies together with sample student answers. Prior to the teaching process, students' conceptual understanding of basic astronomy was rather low and students had developed various misconceptions. Frequently observed misconceptions included: that day and night are caused by the Sun and the Moon exchanging locations; that lunar phases occur because the Moon is a light source radiating light from different parts at different times; and that the seasons arise from the Earth moving closer to and farther from the Sun. After the activities, students' conceptual understanding levels increased. It was, nevertheless, challenging for them to explain their correct answers. This indicates that more studies are needed to develop students' scientific reasoning skills. This study recommends that teachers actively use formative assessment-based instruction, which proved effective in teaching basic astronomical concepts, in their science classes.
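A minimal sketch of the kind of percentage-and-frequency tabulation described above, using made-up probe categories and response counts rather than the study's data:

```python
import pandas as pd

# Hypothetical pre/post answer categories for one formative assessment probe;
# the study reports frequencies and percentages per response category.
responses = pd.DataFrame({
    "phase":    ["pre"] * 6 + ["post"] * 6,
    "category": ["misconception", "misconception", "partial", "scientific",
                 "misconception", "partial",
                 "scientific", "scientific", "partial", "scientific",
                 "misconception", "scientific"],
})

freq = responses.groupby(["phase", "category"]).size().unstack(fill_value=0)
pct = freq.div(freq.sum(axis=1), axis=0) * 100  # row percentages per phase
print(freq)
print(pct.round(1))
```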
An Interactive Simulation to Evaluate Student Understanding of Moon Phase Formation
The development of student understanding of phases of the moon is notoriously difficult due to the spatial reasoning requirements posed by the relative motion of three celestial bodies. We have used the Greek version of Physics by Inquiry to develop two alternative three-body models with a class of prospective elementary teachers.
Relationships between sixth grade students' moon journaling and students' spatial-scientific reasoning after implementation of an Earth/Space unit were examined. Teachers used the project-based Realistic Explorations in Astronomical Learning curriculum. We used a regression model to analyze the relationship between students' Lunar Phases Concept Inventory (LPCI) post-test scores and several predictors, including moon journal score, number of moon journal entries, student gender, teacher experience, and pre-test score. The model shows that students who performed better on moon journals, both in terms of overall score and number of entries, tended to score higher on the LPCI. For every 1-point increase in the overall moon journal score, participants scored 0.18 points (out of 20), or nearly 1 percentage point, higher on the LPCI post-test when holding the other predictors constant. Similarly, students who increased their overall moon journal score by 1 point scored approximately 1% higher in the Periodic Patterns (PP) and Geometric Spatial Visualization (GSV) domains of the LPCI. Student gender and teacher experience were also significant predictors of post-GSV scores on the LPCI, in addition to the pre-test scores, overall moon journal score, and number of entries, which were also significant predictors of the LPCI overall score and the PP domain. This study is unique in the purposeful link created between student moon observations and spatial skills. The use of moon journals distinguishes this study further by fostering scientific observation along with skills from across science, technology, engineering, and mathematics disciplines.
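A minimal sketch of a regression of this general shape, fit on synthetic stand-in data with hypothetical column names (this is not the study's analysis code):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data, one row per student; column names are hypothetical.
rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "journal_score": rng.integers(0, 30, n),
    "n_entries": rng.integers(5, 25, n),
    "gender": rng.choice(["F", "M"], n),
    "teacher_exp": rng.choice(["novice", "experienced"], n),
    "lpci_pre": rng.integers(0, 20, n),
})
# Fabricated outcome just so the model has something to fit.
df["lpci_post"] = (
    0.18 * df["journal_score"] + 0.5 * df["lpci_pre"] + rng.normal(0, 2, n)
).clip(0, 20)

model = smf.ols(
    "lpci_post ~ journal_score + n_entries + C(gender) + C(teacher_exp) + lpci_pre",
    data=df,
).fit()
print(model.summary())
# A coefficient near 0.18 on journal_score would mean each additional journal
# point predicts about 0.18 more LPCI points (out of 20), holding the other
# predictors constant.
```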
Assessment: Constructing and Evaluating
Based on behavioural educational theories, higher learning institutions have been using assessment to measure the quality or success of a taught course and to evaluate whether students have achieved (Ellery, 2008) the minimum standard acceptable for the award of a degree. An assessment can be conducted by means of paper and pencil, presentations, lab work, case studies, essays, multiple choice questions, true/false statements, short essays, etc. During the semester, students may be tested to improve their learning experience; this is called a formative test (continuous assessment), whereas a summative test (final assessment) is done at the end or completion of the course or program. A test can be used to measure students' ability or to determine the basic mastery of skills or competencies acquired during a course.

There are several types of tests, such as placement tests, diagnostic tests, progress tests, achievement tests, and aptitude tests. A placement test is done to place students in teaching groups or classes so that they are within the same level of ability or competency. A diagnostic test is done to identify students' strengths and weaknesses in a particular course. A progress test is done during the semester to measure students' progress in acquiring the subject taught. An achievement test is done to determine students' mastery of a particular subject at the end of the semester, whereas an aptitude test is done to determine students' ability to learn new skills or their potential to succeed in a particular academic program.

A good assessment should be valid, reliable, and practical. In terms of validity, an assessment should test what it is intended to measure; for example, content validity is achieved when the test items adequately cover the syllabus. A valid assessment measures achievement of the course learning outcomes. In terms of reliability, the question is whether the assessment allows examiners to evaluate it consistently and to differentiate between varying levels of performance. In terms of practicality, we need to ensure that the time given to students for their assessments is appropriate.

There are two types of tests, objective and subjective. For objective tests, we can choose multiple choice questions, true/false, or fill in the blanks; for subjective tests we can choose either short or long essays. Although there are both objective and subjective tests, I would like to focus on the subjective test (essays) because we use this type most often, especially in final exams.

When constructing an assessment, we need to bear in mind the learning objectives of the particular course. Specifically, we need to refer to the course information on the course learning outcomes before constructing the exam questions. In addition, we need to understand Bloom's Taxonomy, or classification of objectives. The three classifications are cognitive, affective, and psychomotor. The six levels of the cognitive domain are knowledge, comprehension, application, analysis, synthesis, and evaluation. The levels of the affective domain are receiving, responding, valuing, organizing, and characterizing. The psychomotor levels are imitation, manipulation, precision, articulation, and naturalization. I discussed the levels of each domain in detail in the previous issue; thus, in this issue I would like to discuss the cognitive domain, because it is the most frequently used in final exams and we are quite familiar with it.
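As a small illustration of checking an exam's balance across the cognitive-domain levels listed above, the sketch below tallies hypothetical essay questions against a hypothetical target distribution (neither comes from the article):

```python
from collections import Counter

# Hypothetical mapping of essay questions to Bloom cognitive levels.
exam_items = {
    "Q1": "comprehension",
    "Q2": "application",
    "Q3": "analysis",
    "Q4": "evaluation",
    "Q5": "application",
}
# Hypothetical target share of items per level for this course.
target_share = {
    "knowledge": 0.1, "comprehension": 0.2, "application": 0.3,
    "analysis": 0.2, "synthesis": 0.1, "evaluation": 0.1,
}

actual = Counter(exam_items.values())
n = len(exam_items)
for level, share in target_share.items():
    print(f"{level:13s} target {share:.0%}  actual {actual.get(level, 0) / n:.0%}")
```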
We investigated whether instruction on a Table of Specifications (TOS) would influence the quality of classroom test construction. Results should prove informative for educational researchers, teacher educators, and practising teachers interested in evidence-based strategies that may improve assessment-related practices. Fifty-three college undergraduates were randomly assigned to an experimental condition (exposed to the TOS strategy) or a comparison condition (no specific strategy support) and given materials for an instructional unit to use to construct a classroom test. Results of a multivariate analysis of covariance suggested that students exposed to the TOS strategy constructed tests with higher test-content evidence scores, but not higher response-process evidence scores. Furthermore, we found that treatment participants were able to accurately complete the TOS tool and choose items that reflected the subject matter specified in it. However, they experienced difficulty selecting items at the cognitive level specified in the TOS tool.
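A minimal sketch of a multivariate analysis of covariance of this general shape, fit on synthetic stand-in data with hypothetical variable names and covariate (not the study's analysis code):

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Synthetic stand-in data: one row per participant, two outcome scores,
# assigned condition, and a hypothetical covariate.
rng = np.random.default_rng(1)
n = 53
df = pd.DataFrame({
    "condition": rng.choice(["TOS", "comparison"], n),
    "prior_gpa": rng.normal(3.0, 0.4, n),
})
df["content_evidence"] = rng.normal(10, 2, n) + (df["condition"] == "TOS") * 1.5
df["response_process"] = rng.normal(10, 2, n)

# MANCOVA: two dependent variables, a group factor, and a covariate.
mancova = MANOVA.from_formula(
    "content_evidence + response_process ~ C(condition) + prior_gpa",
    data=df,
)
print(mancova.mv_test())  # Wilks' lambda, Pillai's trace, etc. for each term
```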