Data-Driven Generation of Rubric Parameters from an Educational Programming Environment
Related papers
Data-driven generation of rubric criteria from an educational programming environment
Proceedings of the 8th International Conference on Learning Analytics and Knowledge, 2018
We demonstrate that, by using a small set of hand-graded student work, we can automatically generate rubric criteria with a high degree of validity, and that a predictive model incorporating these rubric criteria is more accurate than a previously reported model. We present this method as one approach to addressing the often challenging problem of grading assignments in programming environments. A classic solution is creating unit tests that the student-generated program must pass, but the rigid, structured nature of unit tests is suboptimal for assessing the more open-ended assignments students encounter in introductory programming environments like Alice. Furthermore, the creation of unit tests requires predicting the various ways a student might correctly solve a problem, a challenging and time-intensive process. The current study proposes an alternative, semi-automated method for generating rubric criteria using low-level data from the Alice programming environment. CCS Concepts: • Applied computing → Computer-assisted instruction; Computer-managed instruction.
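To make the idea of deriving rubric criteria from low-level environment data concrete, here is a minimal sketch in Python. The feature names, the log format, and the least-squares weighting against a small hand-graded sample are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch: candidate rubric criteria extracted from hypothetical event logs,
# weighted against a small hand-graded sample (assumed format, not the paper's).
import numpy as np

CRITERIA = {
    "uses_loop":      lambda log: log.get("loop_blocks", 0) > 0,
    "defines_method": lambda log: log.get("methods_defined", 0) > 0,
    "adds_event":     lambda log: log.get("event_handlers", 0) > 0,
}

def criteria_vector(log: dict) -> np.ndarray:
    """Turn one student's log into a binary rubric-criteria vector."""
    return np.array([float(check(log)) for check in CRITERIA.values()])

# A small hand-graded sample: (event log, instructor-assigned grade out of 10).
graded = [
    ({"loop_blocks": 3, "methods_defined": 1, "event_handlers": 2}, 9.0),
    ({"loop_blocks": 0, "methods_defined": 1, "event_handlers": 0}, 5.0),
    ({"loop_blocks": 1, "methods_defined": 0, "event_handlers": 1}, 6.5),
    ({"loop_blocks": 0, "methods_defined": 0, "event_handlers": 0}, 2.0),
]

X = np.vstack([criteria_vector(log) for log, _ in graded])
X = np.hstack([X, np.ones((len(X), 1))])            # bias column
y = np.array([grade for _, grade in graded])

weights, *_ = np.linalg.lstsq(X, y, rcond=None)     # least-squares criterion weights

def predict_grade(log: dict) -> float:
    """Predict a grade for unseen work from its rubric-criteria vector."""
    x = np.append(criteria_vector(log), 1.0)
    return float(x @ weights)

print(predict_grade({"loop_blocks": 2, "methods_defined": 1, "event_handlers": 0}))
```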
Automatic test-based assessment of programming
Journal on Educational Resources in Computing, 2005
Systems that automatically assess student programming assignments have been designed and used for over forty years. Systems that objectively test and mark student programming work were developed simultaneously with programming assessment in the computer science curriculum. This article reviews a number of influential automatic assessment systems, including descriptions of the earliest systems, and presents some of the most recent developments. The final sections explore a number of directions automated assessment systems may take, presenting current developments alongside a number of important emerging e-learning specifications.
Automatic Assessment of Programming Assignments to Enable New Educational Paradigms
2017
Automating the assessment of programming assignments in higher education institutions is important to provide prompt feedback to students, reduce teachers' workload on repetitive tasks, avoid human errors, and enable the exploration of new educational paradigms such as gamification and course adaptation based on learning analytics. However, the automatic assessment of programming assignments is challenging because of the variety of programming languages, the variety of assessment strategies, the difficulty of assessing quality attributes beyond functional correctness, and the need to integrate with e-learning and student management platforms. Several platforms for automatic assessment exist, used mainly in programming contests, but they support only one assessment strategy or do not integrate with student management platforms. To overcome those limitations, the authors, from the Faculty of Engineering of the University of Porto, developed an extensible web-based platf...
Building a Comprehensive Automated Programming Assessment System
IEEE Access
Automated Programming Assessment Systems (APAS) are used for overcoming problems associated with manually managed programming assignments, such as objective and efficient assessment in large classes and providing timely and helpful feedback. In this paper we survey the literature and software in this field and identify the set of features necessary for an APAS to be comprehensive, i.e., to support all key stages in the assessment process. Put differently, a comprehensive APAS is generic enough to meet the demands of "any" computer science course. Despite the vast number of publications, the choice of software turns out to be very limited. We contribute by developing Edgar, a comprehensive open-source APAS which, to the best of our knowledge, exceeds any other similar free and/or open-source tool. Edgar is the result of three years of development and usage in, for the time being, eight courses dealing with various programming languages and paradigms (C, Java, SQL, etc.). Edgar supports various text-based programming languages and multi-correct multiple-choice questions, provides a rich exam logging and monitoring infrastructure to prevent potential fraudulent behaviour, and supports subsequent data analysis and visualization of students' scores, exams, question quality, etc. It can be deployed on all major operating systems and is written in a modular fashion so that it can be adjusted and scaled to a custom fit. We comment on the architecture and present data from real-world use cases to support these claims. Edgar is in active use today (1000+ students per semester) and is constantly being developed with new features.
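One concrete feature the abstract mentions is multi-correct multiple-choice questions. The sketch below shows one common partial-credit rule for such items (+1/n per correct selection, -1/n per incorrect one, floored at zero); it is only an assumption about how a system like Edgar might score them, not its documented behaviour.

```python
# Sketch: partial-credit scoring for a multiple-choice question with several
# correct options. The scoring rule is an assumed convention, not Edgar's.
def score_multi_correct(selected: set[str], correct: set[str]) -> float:
    n = len(correct)
    gained = len(selected & correct) / n      # reward each correct selection
    lost = len(selected - correct) / n        # penalize each incorrect selection
    return max(0.0, gained - lost)

# Example: correct answers are a and c.
print(score_multi_correct({"a", "c"}, {"a", "c"}))  # 1.0
print(score_multi_correct({"a", "b"}, {"a", "c"}))  # 0.0
```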
Fully automatic assessment of programming exercises
ACM Sigcse Bulletin, 2001
Automatic assessment of programming exercises has become an important method for grading students' exercises and giving them feedback in mass courses. We describe a system called Scheme-robo, which has been designed for assessing programming exercises written in the functional programming language Scheme. The system assesses individual procedures instead of complete programs. In addition to checking the correctness of students' solutions, the system provides many different tools for analysing other aspects of the program, such as its structure and running time, as well as possible plagiarism. The system has been in production use on our introductory programming course with some 300 students for two years with good results.
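The idea of grading an individual procedure rather than a whole program can be illustrated with a short sketch. Scheme-robo itself targets Scheme; the Python analogue below, with its assumed test-case format and pass-rate scoring, is for illustration only.

```python
# Sketch: assess one submitted procedure against expected input/output pairs
# and record a rough running-time figure (assumed format, not Scheme-robo's).
import time

def assess_procedure(func, cases):
    """Return (fraction of cases passed, total elapsed seconds)."""
    passed, start = 0, time.perf_counter()
    for args, expected in cases:
        try:
            if func(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crashing case simply scores zero
    return passed / len(cases), time.perf_counter() - start

# Example: a student-submitted factorial procedure.
def student_factorial(n):
    return 1 if n <= 1 else n * student_factorial(n - 1)

print(assess_procedure(student_factorial, [((0,), 1), ((3,), 6), ((5,), 120)]))
```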
Automated Generation of Computer Graded Unit Testing-Based Programming Assessments for Education
Automatic assessment of code, in particular to support education, is an important feature included, at least to some extent, in several Learning Management Systems (LMS). Several kinds of assessments can be designed, such as exercises asking students to "fill in the following code", "write a function that", or "correct the bug in the following program". One difficulty for instructors is creating such programming exercises, in particular when they are somewhat complex. Indeed, instructors need to write the statement of the exercise, think about the solution, and provide all the additional information the platform needs to grade the assessment. Another difficulty occurs when instructors want to use their exercises on another LMS platform. Since there is no standard way to define and describe a coding exercise yet, instructors have to re-encode their exercises into the other LMS. This paper presents a tool that can automatically generate programming exercises from a single description, exercises that can be solved in several programming languages. The generated exercises can be automatically graded by the same platform, providing intelligent feedback to its users to support their learning. This paper focuses on and details unit testing-based exercises and provides insights into new kinds of exercises that could be generated by the platform in the future, with some additional developments.
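A single, language-neutral exercise description graded by unit tests could look roughly like the sketch below. The dictionary layout, the stub fields, and the grading routine are assumptions for illustration; the paper's actual description format and platform are not reproduced here.

```python
# Sketch: one exercise description, per-language stubs, and unit-test grading
# for a Python submission (assumed layout, not the paper's tool).
EXERCISE = {
    "statement": "Write a function `add(a, b)` that returns the sum of a and b.",
    "tests": [((1, 2), 3), ((-1, 1), 0), ((10, 5), 15)],
    # Per-language stubs the platform could hand to students.
    "stubs": {
        "python": "def add(a, b):\n    ...",
        "java": "static int add(int a, int b) { /* ... */ }",
    },
}

def grade_python_submission(source: str, exercise: dict) -> float:
    """Run the unit tests against a Python submission and return the pass rate."""
    namespace: dict = {}
    exec(source, namespace)        # for illustration only; a real grader sandboxes this
    func = namespace["add"]
    results = [func(*args) == expected for args, expected in exercise["tests"]]
    return sum(results) / len(results)

submission = "def add(a, b):\n    return a + b"
print(grade_python_submission(submission, EXERCISE))  # 1.0
```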
Parameterized and automated assessment on an introductory programming course
Anais do XXXI Simpósio Brasileiro de Informática na Educação (SBIE 2020)
The generation of individualized exams can contribute to a more reliable assessment of students. Manually performing this procedure may not be feasible, especially at a large scale. An alternative is the automatic generation of questions. This paper discusses an innovative solution to simplify test generation and correction through parameterized questions in the context of a four-month Introduction to Programming course under a blended-learning (IP-BL) approach. It combines the open-source tool MCTest with Moodle and the VPL plugin to generate and automatically evaluate parameterized programming questions. We applied an intervention based on this solution in two IP-BL groups (a total of 171 enrolled students) using Java.
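The core mechanism, a parameterized question instantiated per student and checked against a reference solution, can be sketched as follows. The template, parameter ranges, and grading-by-output-comparison are illustrative assumptions; MCTest's and VPL's actual formats are not reproduced here.

```python
# Sketch: instantiate an individualized question variant and grade a submission
# by comparing its printed output to the reference answer (assumed workflow).
import random

TEMPLATE = "Write a program that prints the sum of all multiples of {k} below {n}."

def instantiate(seed: int) -> dict:
    """Produce one individualized question variant plus its expected answer."""
    rng = random.Random(seed)                 # seed per student for reproducibility
    k, n = rng.randint(2, 9), rng.randint(50, 200)
    expected = sum(x for x in range(n) if x % k == 0)   # reference solution
    return {"statement": TEMPLATE.format(k=k, n=n), "expected_output": str(expected)}

def grade(student_output: str, variant: dict) -> bool:
    """Accept a submission whose output matches the reference answer."""
    return student_output.strip() == variant["expected_output"]

variant = instantiate(seed=42)                # e.g. keyed to a student id
print(variant["statement"])
print(grade(variant["expected_output"], variant))  # a correct submission -> True
```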
Generic Assessment Rubrics for Computer Programming Courses
Turkish Online Journal of Educational Technology, 2016
In programming, one problem can usually be solved using different logics and constructs while still producing the same output. Sometimes students are marked down inappropriately if their solutions do not follow the answer scheme. In addition, lab exercises and programming assignments are not necessarily graded by the instructors but most of the time by teaching assistants or lab demonstrators. This results in grading inconsistencies in the marks awarded when the same solution is graded by different people. To address this issue, a set of assessment rubrics is necessary in order to provide flexibility for critical and creative solutions among students as well as to improve grading consistency among instructors, teaching assistants, and demonstrators. This paper reports the development of an assessment rubric for each domain in computer programming courses: cognitive, psychomotor, and affective. The rubrics were then implemented for one academic semester consisting of 14...
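A rubric spanning the three domains the abstract names might be represented and applied as in the sketch below. The criteria, level labels, and weights are placeholders, not the rubric published in the paper.

```python
# Sketch: a weighted rubric over the cognitive, psychomotor, and affective
# domains (criteria and weights are assumed, not the published rubric).
RUBRIC = {
    "cognitive":   {"weight": 0.5, "criterion": "correct and efficient logic"},
    "psychomotor": {"weight": 0.3, "criterion": "working implementation and testing"},
    "affective":   {"weight": 0.2, "criterion": "effort, style, and collaboration"},
}
LEVELS = {4: "excellent", 3: "good", 2: "fair", 1: "poor"}

def rubric_score(ratings: dict[str, int]) -> float:
    """Combine per-domain level ratings (1-4) into a weighted score out of 100."""
    total = sum(RUBRIC[d]["weight"] * ratings[d] for d in RUBRIC)
    return 100 * total / max(LEVELS)

# Example: a solution whose logic differs from the answer scheme can still earn
# full cognitive credit under the rubric.
print(rubric_score({"cognitive": 4, "psychomotor": 3, "affective": 3}))  # 87.5
```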