Towards a Common Instrument for Measuring Prior Programming Knowledge
Related papers
Developing Assessments to Determine Mastery of Programming Fundamentals
Proceedings of the 2017 ITiCSE Conference on Working Group Reports, 2018
Current CS1 learning outcomes are relatively general, specifying tasks such as designing, implementing, testing and debugging programs that use some fundamental programming constructs. These outcomes impact what we teach, our expectations, and our assessments. Although prior work has demonstrated the utility of single concept assessments, most assessments used in formal examinations combine numerous heterogeneous concepts, resulting in complex and difficult tasks. As a consequence, teachers may not be able to diagnose the actual difficulties faced by students and students are not provided with accurate feedback about their achievements. Such limitations on the nature and quality of feedback to teachers and students alike may contribute to the perceived difficulty and high dropout rates commonly observed in introductory programming courses. In this paper we review the concepts that CS education researchers have identified as important for novice programming. We survey learning outcomes for introductory programming courses that characterize the expectations of CS1 courses, and analyse assessments designed for CS1 to determine the individual components of syntax and semantics required to complete them. Having recognized the implicit and explicit expectations of novice programming courses, we look at the relationships between components and progression between concepts. Finally, we demonstrate how some complex assessments can be decomposed into atomic elements that can be assessed independently.
ACM SIGCSE …, 2001
In computer science, an expected outcome of a student's education is programming skill. This working group investigated the programming competency students have as they complete their first one or two courses in computer science. In order to explore options for assessing students, the working group developed a trial assessment of whether students can program. The underlying goal of this work was to initiate dialog in the Computer Science community on how to develop these types of assessments. Several universities participated in our trial assessment and the disappointing results suggest that many students do not know how to program at the conclusion of their introductory courses. For a combined sample of 216 students from four universities, the average score was 22.89 out of 110 points on the general evaluation criteria developed for this study. From this trial assessment we developed a framework of expectations for first-year courses and suggestions for further work to develop more comprehensive assessments.
The Impact of Students' Pre-Knowledge on Learning Computer Programming
2019
Students taking computer programming courses arrive with different prior knowledge of the topic, which includes not only their knowledge of a specific programming language but also their capacity for algorithmic thinking and their understanding of the concept of computer programming. At the beginning of the course, students were asked to self-evaluate their knowledge of computer programming and to take a pre-course test of their knowledge of JavaScript. Within the scope of the research, we compared the pre-course test results of students with technical and general High School education, as well as the results they achieved on the post-course test. Altogether, 55 students participated in both the pre- and post-course tests. The results indicate that students with technical and general High School backgrounds scored similarly on the pre-course test of computer programming and that students with general High School backgrounds acquired more knowledge in the course compared to the ones with a...
Types of assessing student-programming knowledge
High failure and dropout rates are common in higher education institutions with introductory programming courses. Some researchers argue that teachers do not always use appropriate assessment methods and that many students pass programming courses without knowing how to program. In this paper the authors describe the assessment methodology applied to a first-year, first-semester Biomedical Engineering programming course (2015/2016). Students' programming skills were tested by playing a game in the first class; they were then assessed with three tests and a final exam, each covering topics the authors considered fundamental for students to master. A correlation analysis between the different types of test and exam questions is performed to evaluate which are most suitable for assessing programming knowledge. The analysis shows that different question types can be used as a pedagogical strategy to gauge students' difficulty levels and programming skills, helping students acquire abstraction, reasoning, and algorithmic thinking at an acceptable level. It is also shown that different forms of questions are equivalent for assessing the same knowledge and that a student's ability to program can be predicted at an early stage.
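The abstract does not give the analysis details, but the kind of correlation between question types that it describes can be sketched as follows; the question-type columns and per-student scores below are hypothetical illustrations, not data from the paper:

```python
# Illustrative only: hypothetical per-student scores for different question
# types, used to sketch the kind of correlation analysis the abstract describes.
import pandas as pd

scores = pd.DataFrame({
    "multiple_choice": [14, 9, 18, 11, 16, 7, 13],
    "code_writing":    [12, 6, 17, 10, 15, 5, 11],
    "code_tracing":    [13, 8, 16, 12, 14, 6, 12],
    "final_exam":      [55, 38, 78, 49, 70, 30, 52],
})

# Spearman rank correlation captures monotonic relationships between
# performance on each question type and overall exam performance.
correlations = scores.corr(method="spearman")
print(correlations.round(2))
```

High pairwise correlations of this kind are what would support the claim that different question forms assess equivalent knowledge and that early-question performance predicts later programming ability.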
How Students "Measure Up": Creation of an Assessment Instrument for Introductory Computer Science
Many recent innovations in the computer science curriculum have focused on the first-year computer science courses, and much work has been done to help determine what predicts success in the first year. However, many of these investigations lack an appropriately validated assessment instrument to confirm their findings. There are several assessment instruments available to computer science faculty, but each of them is flawed, making them inappropriate for assessing the first-year computer science courses. I propose to create an assessment instrument that can be administered to students during their first year of study of computer science. This instrument will be constructed using the guidelines given in Computing Curricula 2001 for programming-first introductory courses. This instrument will be assessed for its reliability and validity and administered to students in their first year of study in computer science. The creation of this instrument will enable ...
Designing an Assessment for Introductory Programming Concepts in Middle School Computer Science
Proceedings of the 51st ACM Technical Symposium on Computer Science Education, 2020
Teaching of computer science (CS) is rapidly expanding in schools. Learning to program is a key ingredient of school CS curricula, and consequently, there is a need for quality measures of student learning of foundational programming concepts. However, high-quality tools for measuring student learning in introductory CS have been underdeveloped and under-researched. This experience report shares the process of design and refinement of a summative paper-based assessment (that could also be administered online) for introductory programming in the middle grades (6-8). We share our experiences with the use of the assessment as a pre-post measure in a middle school introductory programming course in diverse, urban school classrooms in the US and use that data to conduct validity, reliability, and item discrimination analyses.
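The reliability and item discrimination analyses mentioned here are standard psychometric computations; a minimal sketch with hypothetical 0/1 item responses (not data from the study) is shown below, computing Cronbach's alpha and an item-rest correlation for each item:

```python
# Illustrative only: hypothetical dichotomous responses (rows = students,
# columns = items) used to sketch reliability and item discrimination analyses.
import numpy as np

responses = np.array([
    [1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 0, 1, 1, 1],
    [0, 1, 0, 0, 0],
])

n_items = responses.shape[1]
totals = responses.sum(axis=1)

# Cronbach's alpha: internal-consistency reliability of the whole instrument.
item_variances = responses.var(axis=0, ddof=1)
alpha = (n_items / (n_items - 1)) * (1 - item_variances.sum() / totals.var(ddof=1))

# Item discrimination: correlation of each item with the rest-of-test score.
for i in range(n_items):
    rest = totals - responses[:, i]
    r = np.corrcoef(responses[:, i], rest)[0, 1]
    print(f"item {i + 1}: discrimination = {r:.2f}")

print(f"Cronbach's alpha = {alpha:.2f}")
```

Items with low or negative item-rest correlations are the usual candidates for revision or removal when refining an assessment of this kind.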
A taxonomic study of novice programming summative assessment
Australasian Computing Education Conference, 2009
Learning to program is difficult, a situation that is largely responsible for high attrition rates in Computer Science schools. Novice programmers struggle to grasp an early understanding of programming, which can lead to frustration and eventually to giving up. The problem has generated interest in a range of enquiries and has given impetus to the need for a teaching-research nexus towards a better understanding of novice programming problems. We continue that trend in this paper and report on a study we have conducted of novice programmers' efforts in summative assessment. Our study involves multiple-choice questions and coding questions drawn from a programming examination. We analyse the answers novices provided to final examination questions and attempt to understand why students make the errors they do. We aim to categorise and classify the questions in the context of two well-known learning taxonomies: Bloom's Taxonomy and the SOLO Taxonomy.
Individualized exercises for self-assessment of programming knowledge
Journal on Educational Resources in Computing, 2005
Individualized exercises are a promising kind of educational content in modern e-learning. The focus of this paper is the QuizPACK system, which is able to generate parameterized exercises for the C language and automatically evaluate the correctness of student answers. We introduce QuizPACK and present the results of its comprehensive classroom evaluation when used for four consecutive semesters. Our studies demonstrate that when QuizPACK is used for out-of-class self-assessment, it is an exceptional learning tool. The students' work with QuizPACK significantly improved their knowledge of semantics and positively affected higher-level knowledge and skills. The students themselves praised the system highly as a helpful learning tool. We also demonstrated that the use of the system in self-assessment mode can be significantly increased by basing later classroom paper-and-pencil quizzes on QuizPACK questions, motivating students to practice them more.
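QuizPACK itself generates and grades parameterized C exercises; the toy sketch below only illustrates the general idea of parameterized question generation with automatic answer evaluation, and does not reflect QuizPACK's actual implementation or question format:

```python
# Illustrative only: a toy generator of parameterized code-tracing questions,
# sketching the idea behind systems like QuizPACK (not its real implementation).
import random

def generate_question(seed=None):
    """Generate one parameterized tracing question and its correct answer."""
    rng = random.Random(seed)
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    # Question text shown to the student; % binds tighter than + in C, as in Python.
    prompt = (
        f"int x = {a};\n"
        f"int y = {b};\n"
        f"x = x + y % 3;\n"
        "What is the final value of x?"
    )
    correct = a + b % 3  # evaluate the same expression to grade automatically
    return prompt, correct

def grade(student_answer, correct_answer):
    return student_answer == correct_answer

prompt, correct = generate_question(seed=42)
print(prompt)
print("Sample answer 7 is", "correct" if grade(7, correct) else "incorrect")
```

Because the parameters are drawn per student, each learner sees a fresh variant of the same underlying question, which is what makes such exercises suitable for repeated self-assessment.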
Assessing the Success of an Introductory Programming Course
Journal of Information Technology Education: Research, 2010
With universities having difficulty attracting students to study information technology (IT), the scores needed for entry into IT degrees have dropped markedly. IT schools are thus having to cope by adjusting their introductory courses to ensure that students still learn what is expected, without negatively impacting pass rates. This paper considers short objective tests, designed by other researchers, to examine whether students who have passed an introductory course have achieved an understanding of fundamental programming concepts. The Dehnadi test, originally designed to be taken before a programming course to predict who would be successful, proved useful in showing that many students who had passed an introductory programming course had little or no understanding of fundamental concepts. The test was useful when scored by the number of correct responses. Implications of students passing an introductory course yet being unable to respond correctly on a multiple-choice test of the most fundamental concepts of programming are discussed.