Assessment of MCQs in the MBBS program at the College of Medicine, King Khalid University, Abha
Related papers
Analysis of multiple choice questions (MCQ): important part of assessment of medical students
International Journal of Medical Research and Review, 2016
Background: Assessment influences students' learning; analysing assessments therefore helps us conduct them properly and accurately. Tarrant M et al. and a few others have done similar studies in the past; further studies are required to support these findings and to maintain awareness among teachers about better assessment of students. Methods: This observational, non-interventional, prospective study analysed 100 MCQs used for the assessment of 2nd MBBS students. Each MCQ had a single stem with four options: one correct answer and three distractors. Each MCQ was analysed with three tools: Difficulty Index (DIF I), Discrimination Index (DI) and Distractor Efficiency (DE). The chi-square test was used for statistical analysis. Results: 74 of the 100 MCQs (74%) fell in the recommended DIF I range (30-70%). By DI, 34 of the 100 MCQs were good (0.25-0.35) and 30 were excellent (>0.35). Of the 300 distractors, 18 (6%) were non-functional; in no MCQ were all three distractors poor. The association between difficulty index and discrimination index was statistically significant by the chi-square test (p < 0.05). Conclusions: Properly constructed MCQs, judged by these analysis tools, are best for student assessment.
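The two main indices named in this abstract follow their standard textbook definitions. As a rough sketch (the function names and the example counts below are invented for illustration, not taken from the paper):

```python
# Minimal sketch of the two main item-analysis indices from the abstract.
# Function names and the example figures are invented for illustration.

def difficulty_index(correct_responses, total_students):
    """DIF I: percentage of students answering the item correctly.
    The abstract's recommended range is 30-70%."""
    return 100.0 * correct_responses / total_students

def discrimination_index(upper_correct, lower_correct, group_size):
    """DI: (H - L) / n, where H and L are the numbers of correct answers
    in the top- and bottom-scoring groups (commonly 27% each) and n is
    the size of one group. The abstract rates >0.35 as excellent."""
    return (upper_correct - lower_correct) / group_size

# Toy item: 148 students sat the test, 89 answered correctly;
# top/bottom groups of 40 students with 32 vs 14 correct.
dif = difficulty_index(89, 148)        # ~60.1%, inside the recommended range
di = discrimination_index(32, 14, 40)  # 0.45, rated excellent
```

The 27% top/bottom split is a common convention for forming the high and low groups; individual papers in this list may have used a different fraction.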
Study on item and test analysis of multiple choice questions amongst undergraduate medical students
International Journal Of Community Medicine And Public Health, 2017
Background: Item analysis is the process of collecting, summarizing and using information from students' responses to assess the quality of test items. However, MCQs are said to emphasize recall of factual information rather than conceptual understanding and interpretation of concepts, and there is more to writing good MCQs than writing good questions. The objectives of the study were to assess the item and test quality of multiple choice questions, to address students' learning difficulties, and to identify the low achievers in the test. Methods: One hundred MBBS students from a government medical college were examined with a test comprising thirty MCQs. All items were analysed for Difficulty Index, Discrimination Index and Distractor Efficiency. Data were entered in MS Excel 2007 and analysed in SPSS 21 with statistical tests of significance. Results: The difficulty index of the majority (80%) of items was within the acceptable range, and 63% of items showed an excellent discrimination index. Distractor e...
Item analysis of multiple choice questions: Assessing an assessment tool in medical students
Aim: Assessment is a very important component of the medical course curriculum. Item analysis is the process of collecting, summarizing, and using information from students' responses to assess the quality of multiple-choice questions (MCQs). Difficulty index (P) and discrimination index (D) are the parameters used to evaluate the standard of MCQs. The aim of the study was to assess the quality of MCQs. Materials and Methods: The study was conducted in the Department of Pathology. One hundred and twenty 2nd-year MBBS students took the MCQ test comprising 40 questions. There was no negative marking, evaluation was done out of 40 marks, and a 50% score was the passing mark. Post-validation of the paper was done by item analysis. Each item was analyzed for difficulty index, discrimination index, and distractor effectiveness. The relationship between them for each item was determined by Pearson correlation analysis using SPSS 20.0. Results: The difficulty index of 34 (85%) items was in the acceptable range (P = 30–70%), 2 (5%) items were too easy (P >70%), and 4 (10%) items were too difficult (P <30%). The discrimination index of 24 (60%) items was excellent (D >0.4), 4 (10%) items were good (D = 0.3–0.39), 6 (15%) items were acceptable (D = 0.2–0.29), and 6 (15%) items were poor (D <0.2). The 40 items had 120 distractors in total; of these, 6 (5%) were non-functional and 114 (95%) were functional. The discrimination index exhibited a positive correlation with the difficulty index (r = 0.563, P = 0.010, significant at the 0.01 level [two-tailed]). The maximum discrimination (D = 0.5–0.6) was observed in the acceptable range (P = 30–70%). Conclusion: In this study, the majority of items fulfilled the criteria of acceptable difficulty and good discrimination. Moderately easy/difficult items had the maximal discriminative ability. Very difficult items displayed poor discrimination, but a very easy item with a high discrimination index indicates a faulty item or an incorrect key.
The results of this study would initiate a change in the way MCQ test items are selected for any examination, and there should be proper assessment strategy as part of the curriculum development.
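The Pearson correlation this paper uses to relate difficulty (P) and discrimination (D) across items is straightforward to compute from per-item values. A hedged sketch (the function and the item values below are my own inventions; the study itself used SPSS 20.0):

```python
# Illustrative Pearson product-moment correlation between two per-item
# series, e.g. difficulty (P) vs discrimination (D). All data invented.

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of numbers."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Invented per-item (difficulty %, discrimination) pairs
p_values = [35, 48, 55, 62, 70, 80]
d_values = [0.32, 0.45, 0.52, 0.40, 0.28, 0.15]
r = pearson_r(p_values, d_values)  # correlation across these toy items
```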
Indian Journal of Community Medicine
Background: Multiple choice questions (MCQs) are frequently used to assess students in different educational streams for their objectivity and wide coverage in less time. However, the MCQs used must be of good quality, which depends on their difficulty index (DIF I), discrimination index (DI) and distractor efficiency (DE). Objective: To evaluate MCQs or items and develop a pool of valid items by assessing them with DIF I, DI and DE, and to revise, store or discard items based on the results obtained. Settings: The study was conducted in a medical school in Ahmedabad. Materials and Methods: An internal examination in Community Medicine was conducted after 40 hours of teaching during 1st MBBS and was attended by 148 of 150 students. In total, 50 MCQs (items) and 150 distractors were analyzed. Statistical Analysis: Data were entered and analyzed in MS Excel 2007; simple proportions, means, standard deviations and coefficients of variation were calculated, and the unpaired t test was applied. Results: Of the 50 items, 24 had "good to excellent" DIF I (31-60%) and 15 had "good to excellent" DI (>0.25). Mean DE was 88.6%, considered ideal/acceptable, and non-functional distractors (NFDs) made up only 11.4%. Mean DI was 0.14. Poor DI (<0.15), with negative DI in 10 items, indicates poor preparedness of students and problems with the framing of at least some of the MCQs. An increased proportion of NFDs (incorrect alternatives selected by <5% of students) in an item decreases its DE and makes the item easier. Fifteen items had 17 NFDs in total, while the remaining items had none, with a mean DE of 100%. Conclusion: The study emphasizes the selection of quality MCQs which truly assess knowledge and can correctly differentiate students of different abilities.
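The NFD rule quoted in this abstract (a distractor chosen by fewer than 5% of students) translates directly into code. The sketch below uses my own names and an invented item, with DE computed as the conventional share of functioning distractors:

```python
# Sketch of the non-functional-distractor (NFD) rule from the abstract:
# a distractor is non-functional if chosen by < 5% of examinees.
# Function name, option counts and key are invented for illustration.

def distractor_efficiency(option_counts, key, n_students, threshold=0.05):
    """option_counts: dict mapping option label -> students choosing it.
    Returns (number of NFDs, DE as a percentage) for one item."""
    distractors = {opt: c for opt, c in option_counts.items() if opt != key}
    nfd = sum(1 for c in distractors.values() if c < threshold * n_students)
    functioning = len(distractors) - nfd
    de = 100.0 * functioning / len(distractors)
    return nfd, de

# Toy item answered by 148 students; the key is 'B'
counts = {'A': 30, 'B': 100, 'C': 12, 'D': 6}
nfd, de = distractor_efficiency(counts, 'B', 148)
# 5% of 148 is 7.4, so 'D' (6 picks) is non-functional: nfd = 1, DE ~ 66.7%
```

This matches the abstract's observation that each NFD in an item lowers its DE, since DE falls by one third for each of the three distractors that stops functioning.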
Evaluation of multiple choice questions by item analysis in a medical college at Pondicherry, India
International Journal of Community Medicine and Public Health, 2016
Assessment is an integral part of any learning and training, and medical students are evaluated and assessed by different methods. One such method is the multiple choice question (MCQ). MCQs have high objectivity, which avoids inter-examiner bias; they are difficult to frame but easy to administer, and the results are easy to compile and analyse. Although MCQs are not commonly used in the assessment of MBBS and medical postgraduate students, they are often the choice for most graduate and postgraduate medical entrance examinations. MCQs can be designed to assess the higher cognitive levels of students. An MCQ has one item stem and several options; the stem can be a question or an incomplete statement. Most single-best-answer MCQs have four options: one correct answer and three wrong options that act as distractors.
Evaluation of Multiple Choice Questions by Item Analysis in a Medical College- A Pilot Study
Journal of Medical Science and Clinical Research, 2017
Introduction: Assessment is an integral part of any learning and training, and multiple choice questions (MCQs) are a widely used tool in assessment protocols. To increase the validity of MCQs, standard pre-validation and post-validation protocols are recommended; item analysis is a post-validation procedure. Aims and Objectives: To evaluate the standard of MCQs using difficulty index, discrimination index and distractor effectiveness. Materials and Methods: This retrospective study was conducted in the Department of Pathology at a Medical College. The term-end examination MCQ paper after the 1st semester was assessed. Based on the answers marked by the students, the difficulty index, discrimination index and distractor effectiveness were calculated. Results: In the present study, 50% of the MCQs were acceptable by the difficulty index criteria, of which 15% were ideal. On the basis of discrimination index, 60% were good discriminators and 35% of the MCQs were excellent, with a DI greater than 0.35. In 45% of the MCQs the distractors were effective. Only 7 of the 20 MCQs satisfied all the criteria for an ideal MCQ. Conclusion: This exercise was an eye-opener revealing the quality of the MCQs, and it will also help in formulating MCQs for future exams.
International Journal of Research in Medical Sciences, 2016
Background: Multiple choice questions (MCQs) are widely used to assess students in different educational streams. However, the MCQs used should be of good quality, which depends on their difficulty index (DIF I), discrimination index (DI) and number of non-functional distractors (NFDs). The objective of the study was to evaluate the quality of MCQs, to create a valid question bank for future use, and to identify low achievers, whose problems can be corrected by counselling or by modifying learning methods. The study was done at Kalinga Institute of Medical Science (KIMS), Bhubaneswar. Methods: A part-completion test in the department of paediatrics was conducted. In total, 25 MCQs and 75 distractors were analyzed. Item analysis was done for DIF I, DI and the number of NFDs. Results: The difficulty index of 14 (56%) items was in the acceptable range (p value 30-70%), 8 (32%) items were too easy (p value >70%) and 2 (8%) items were too difficult (p value <30%). The discrimination index of 12 (48%) items was excellent (d value >0.35), 3 (12%) items were good (d value 0.20-0.34) and 8 (32%) items were poor (d value <0.2). Of the 75 distractors, 40 (53.4%) were NFDs, present in 22 items. Three (12%) items had no NFDs, whereas 8 (32%), 10 (40%) and 4 (16%) items contained 1, 2 and 3 NFDs respectively. Conclusion: Item analysis is a simple and feasible method of assessing the validity of MCQs in order to achieve the ultimate goal of medical education.
2015
Introduction: Multiple-choice questions (MCQs) are the most widely used test format in the health sciences today. The efficiency of MCQs as an evaluation tool rests on their quality, which is best assessed by item and test analysis. Objectives: To assess item and test quality and to explore the relationship of difficulty index (p-value) and discrimination indices (DI) with distractor efficiency (DE). Materials and Methods: The study was conducted among 40 fourth-semester MBBS students in a medical college of Kolkata. Thirty MCQs administered in an internal examination in Community Medicine were analysed for p-value, DI and DE. Reliability of the test was assessed by estimating the Kuder-Richardson 20 coefficient (KR20). Results: The mean score was 66.35 ± 17.29. Mean p-value and DI were 61.92 ± 25.1% and 0.31 ± 0.27, respectively. DI was noted to be maximal at p-values between 40% and 60%. Combining the two indices, 14 (46.67%) items could be called 'ideal'...
Sudan Medical Monitor, 2015
Background: The multiple-choice question (MCQ) part of the final exam in internal medicine at the College of Medicine, King Khalid University is composed of 100 questions of the one-best-answer type with four options. Although some basic forms of item analysis had been carried out by the department of internal medicine before, the data generated had not been used regularly to assess the quality of the questions or as feedback for quality improvement. Aim: The aim of this study was to assess the quality of the MCQs used in the final exam in internal medicine during the 1st week of January 2013. Methods: The total number of students in this batch was 58, and the total number of MCQs was 100. Item analysis was done using Microsoft Excel 2007. The parameters obtained included difficulty index, discrimination index, point biserial correlation, and reliability of the exam using the Kuder-Richardson formula (KR-20), in addition to analysis of distractors. Results: The mean difficulty of the questions was 0.55 (SD = 0.2) and the mean discrimination index was 0.24 (SD = 0.2), with 41 questions having values below 0.20. The mean point biserial correlation was 0.16 (SD = 0.12). KR-20 was found to be 0.79, indicating good reliability; the student scores were therefore believed to be reliable. Of the 300 distractors assessed, 41% were non-functioning, and the mean number of functioning distractors per item was 1.76. Conclusion: The MCQ exam was quite reliable, and the difficulty of the questions was reasonable. The discrimination power of most of the questions was acceptable; however, a relatively high proportion of the questions had unacceptable discrimination index values.
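The KR-20 coefficient reported here can be computed from the matrix of dichotomous item scores. A hedged sketch (the response matrix below is invented, and the function assumes 0/1 scoring and population variance of the total scores):

```python
# Sketch of the Kuder-Richardson 20 (KR-20) reliability coefficient used
# in the abstract: KR20 = (k/(k-1)) * (1 - sum(p*q) / var(total scores)).
# The response matrix below is invented toy data.

def kr20(responses):
    """responses: list of per-student lists of 0/1 item scores."""
    k = len(responses[0])                            # number of items
    n = len(responses)                               # number of students
    totals = [sum(row) for row in responses]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n   # variance of total scores
    pq = 0.0
    for i in range(k):
        p = sum(row[i] for row in responses) / n     # proportion correct on item i
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var)

# Invented 4-student, 3-item response matrix (1 = correct)
resp = [[1, 1, 1],
        [1, 1, 0],
        [1, 0, 0],
        [0, 0, 0]]
reliability = kr20(resp)   # 0.75 for this toy matrix
```

Values around 0.7 or above, such as the 0.79 reported in this study, are conventionally read as acceptable reliability for classroom tests.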
Item analysis and validation of MCQs in paediatrics for final MBBS examinations
2014
Objectives: The study was designed to validate the MCQs used in graduate examinations in paediatrics using item analysis. Material and Methods: MCQs and the answers given by 100 students in each batch appearing for their final MBBS examinations in paediatrics were analysed for 3 consecutive years, from 2012 to 2014. Item analysis of each question was undertaken for difficulty index, discriminative index and discrimination effectiveness. Results: Wide variations in difficulty and discriminative indices were noted, often outside the prescribed range, making many of these questions unsuitable for inclusion in the question bank. Conclusion: The teaching faculty, question setters and moderators need to be sensitized about the implications.