Computer-based assessment of student-constructed responses

Exploring the Potential of Multiple Choice Questions in Computer-Based Assessment of Student Learning

The use of multiple choice tests as an assessment tool is popular in many university courses, particularly in the foundation year, where a large number of students take common subjects such as mathematics or engineering science. Under such circumstances, the answers to the multiple choice questions (MCQs) are written on special forms with blank ovals. These forms are scanned, and marking is done by comparing the answers with those input by the examiner. The students' results are then tabulated with the aid of customised software. However, this conventional method of administering MCQs is not practicable when the number of students and the number of questions are small, e.g. ≤ 100 and ≤ 10, respectively. The present paper addresses this issue and discusses the development of an unconventional yet effective method for administering short, simple MCQs as a means of assessing student learning in a focus area of the subject matter. An MCQ test was designed...
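The marking step described above reduces to comparing each student's responses against the examiner's answer key and tabulating the scores. A minimal Python sketch of that step follows; the answer key, student identifiers, and responses are invented purely for illustration and are not taken from the paper.

```python
# Minimal sketch of MCQ marking: compare each student's answers against
# the examiner's key and tabulate the scores.
# The key, student IDs, and responses below are invented for illustration.

answer_key = ["B", "D", "A", "C", "B", "A", "D", "C", "B", "A"]  # a 10-question test

student_responses = {
    "S001": ["B", "D", "A", "C", "A", "A", "D", "C", "B", "A"],
    "S002": ["C", "D", "A", "B", "B", "A", "D", "C", "B", "D"],
}

def mark(responses, key):
    """Count the answers that match the key."""
    return sum(1 for given, correct in zip(responses, key) if given == correct)

# Tabulate the results for all students.
for student_id, responses in sorted(student_responses.items()):
    print(f"{student_id}: {mark(responses, answer_key)}/{len(answer_key)}")
```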

Multiple choice questions can be designed or revised to challenge learners' critical thinking

Multiple choice (MC) questions from a graduate physiology course were evaluated by cognitive-psychology (but not physiology) experts and analyzed statistically in order to test the independence of content expertise and cognitive complexity ratings of MC items. Integrating higher order thinking into MC exams is important but widely known to be challenging, perhaps especially when content experts must think like novices; expertise in the domain (content) may actually impede the creation of higher-complexity items. Three cognitive psychology experts independently rated cognitive complexity for 252 multiple-choice physiology items using a six-level cognitive complexity matrix synthesized from the literature. Rasch modeling estimated item difficulties. The complexity ratings and difficulty estimates were then analyzed together to determine the relative contributions (and independence) of complexity and difficulty to the likelihood of correct answers on each item. Cognitive complexity was found to be statistically independent of the difficulty estimates for 88% of items. Using the complexity matrix, modifications were identified that increase some items' complexity by one level without affecting their difficulty. Cognitive complexity can effectively be rated by non-content experts. The six-level complexity matrix, if applied by faculty peer groups trained in cognitive complexity but without domain-specific expertise, could improve the cognitive complexity targeted during item writing and revision. Targeting higher order thinking with MC questions can be achieved without changing item difficulties or other test characteristics, but this may be less likely if the content expert is left to assess items within their own domain of expertise.
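For reference, the item-difficulty estimates mentioned above presumably come from the standard dichotomous Rasch model (the abstract does not name a specific variant), which gives the probability that person n answers item i correctly in terms of person ability and item difficulty:

```latex
% Dichotomous Rasch model: probability that person n answers item i correctly,
% with person ability \theta_n and item difficulty b_i.
P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{e^{\theta_n - b_i}}{1 + e^{\theta_n - b_i}}
```

Under this model, an item's difficulty b_i is estimated on the same logit scale as person ability, which is what allows the complexity ratings to be compared against difficulty independently of who took the test.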

Modifying Multiple-Choice Questions in Computer-Based Instruction

1990

Abstract: Research has shown that multiple-choice questions formed by transforming or paraphrasing a reading passage provide a measure of student comprehension. It is argued that similar transformation and paraphrasing of lesson questions is an appropriate way to form parallel multiple-choice items to be used as a posttest measure of student comprehension. Four parallel items may be derived from a lesson: (1) an item that is neither transformed nor paraphrased, thus testing simple rote memory; (2) an item that is ...