Building a Comprehensive Automated Programming Assessment System

Automatic Assessment of Programming Assignments to Enable New Educational Paradigms

2017

Automating the assessment of programming assignments in higher education institutions is important to provide prompt feedback to students, reduce teachers' workload on repetitive tasks, avoid human error, and enable the exploration of new educational paradigms such as gamification and course adaptation based on learning analytics. However, the automatic assessment of programming assignments is challenging because of the variety of programming languages, the variety of assessment strategies, the difficulty of assessing quality attributes beyond functional correctness, and the need to integrate with e-learning and student management platforms. Several platforms for automatic assessment exist, used mainly in programming contests, but they support only one assessment strategy or do not integrate with student management platforms. To overcome those limitations, the authors, from the Faculty of Engineering of the University of Porto, developed an extensible web-based platf...

Automatic test-based assessment of programming

Journal on Educational Resources in Computing, 2005

Systems that automatically assess student programming assignments have been designed and used for over forty years. Systems that objectively test and mark student programming work emerged almost simultaneously with the introduction of programming into the computer science curriculum. This article reviews a number of influential automatic assessment systems, including descriptions of the earliest systems, and presents some of the most recent developments. The final sections explore a number of directions automated assessment systems may take, presenting current developments alongside a number of important emerging e-learning specifications.

GAME: a Generic Automated Marking Environment for programming assessment

International Conference on Information Technology: Coding and Computing, 2004. Proceedings. ITCC 2004., 2004

In this paper, a Generic Automated Marking Environment (GAME) is proposed for assessing student programming projects and exercises, with the aim of facilitating student-centred learning. GAME has been designed to automatically assess programming assignments written in a variety of languages. The system has been implemented in Java and contains marker modules that are tailored to each specific language. A framework has been set in place to enable easy addition of new marker modules to extend the system's functionality. Currently, the system is able to mark programs written in Java and the C language. To use the system, instructors are required to provide a simple "marking schema" for any given assessment item, which includes pertinent information such as the location of files and the model solution. GAME has been tested on a number of student programming exercises and assignments, with encouraging results.
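GAME's instructor-supplied "marking schema" (file locations plus a model solution) can be sketched as a small data holder feeding an output-comparison marker. This is a hypothetical Java illustration: the schema class, its field names, and the one-point-per-matching-output scoring are assumptions for the example, not GAME's actual schema format or marking algorithm.

```java
import java.util.List;

// Hypothetical marking schema: the instructor points the marker at the
// submission files and a model solution (all names here are illustrative).
class MarkingSchema {
    final String language;          // e.g. "Java" or "C"
    final String submissionDir;     // where the student's files live
    final String modelSolutionDir;  // reference implementation
    final List<String> testInputs;  // inputs fed to both programs

    MarkingSchema(String language, String submissionDir,
                  String modelSolutionDir, List<String> testInputs) {
        this.language = language;
        this.submissionDir = submissionDir;
        this.modelSolutionDir = modelSolutionDir;
        this.testInputs = testInputs;
    }
}

public class OutputMarker {
    // Award one point per test input whose (trimmed) output matches the
    // model solution's output for the same input.
    static int mark(List<String> studentOutputs, List<String> modelOutputs) {
        int score = 0;
        for (int i = 0; i < modelOutputs.size(); i++) {
            if (i < studentOutputs.size()
                    && studentOutputs.get(i).trim()
                                     .equals(modelOutputs.get(i).trim())) {
                score++;
            }
        }
        return score;
    }

    public static void main(String[] args) {
        MarkingSchema schema = new MarkingSchema(
            "Java", "submissions/alice", "model/",
            List.of("3 5", "7 9", "oops"));
        // Outputs would normally come from running both programs on
        // schema.testInputs; here they are hard-coded for illustration.
        List<String> model   = List.of("8", "16", "ERROR");
        List<String> student = List.of("8", "15", "ERROR");
        System.out.println("score=" + mark(student, model)
                           + "/" + schema.testInputs.size());
    }
}
```

A real per-language marker module would also compile and run the programs; the sketch only shows the schema-driven comparison step, printing `score=2/3` for the hard-coded example.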

Are Automated Assessment Tools Helpful in Programming Courses?

2015 ASEE Annual Conference and Exposition Proceedings, 2015

The author teaches courses in programming, artificial intelligence, object-oriented design, algorithms, theory of computation, and related subjects in ACU's School of Information Technology and Computing. Prior to joining the ACU faculty, he spent twenty years in software development, research, and training at the Air Force Research Lab and NASA's Langley Research Center, as well as in private industry. His current research focuses on how automated assessment tools interact with student learning in university programming courses.

Extending and contributing to an open source web-based system for the assessment of programming problems

2007

This paper describes the development of a web-based programming and assessment environment for use in supporting programming fundamentals courses (CS1, CS2) taught in Java. This environment is currently linked with WeBWorK, an open source web-based system developed at the University of Rochester that is popular for administering and assessing mathematics and physics coursework, but is designed for potential integration with other course management system (CMS) environments. In addition to the traditional multiple-choice and short answer questions that have been augmented with the extensive randomization and customization routines of WeBWorK, this new environment (called WeBWorK-JAG, where JAG stands for Java Auto Grader) can automatically collect and grade free-form program fragments written in Java. Novel pedagogy has been developed based on the capabilities of this extension, and preliminary classroom results are discussed in this paper. For example, when students contribute to WeBWorK by creating WeBWorK-JAG questions for their peers, they are exposed to the reality of creating comprehensive unit tests and to the wider quality assurance aspects of formulating questions and their solution sets. This work is described in the context of an emerging commercial market for web-based programming assistants, and its unique contributions are summarized.
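A JAG-style check grades a free-form program fragment against instructor-written unit tests. The sketch below is a loose Java illustration, not WeBWorK-JAG's actual mechanism: the student fragment is modelled as a lambda, and the test cases and scoring are invented for the example.

```java
import java.util.function.IntBinaryOperator;

public class FragmentGrader {
    // Hypothetical JAG-style check: the student's fragment (here modelled
    // as a lambda implementing max(a, b)) is run against instructor-written
    // test cases; each case is {a, b, expected}.
    static int grade(IntBinaryOperator studentMax) {
        int[][] cases = {{3, 5, 5}, {9, 2, 9}, {-1, -1, -1}};
        int passed = 0;
        for (int[] c : cases) {
            try {
                if (studentMax.applyAsInt(c[0], c[1]) == c[2]) passed++;
            } catch (RuntimeException e) {
                // a crashing fragment simply fails that test case
            }
        }
        return passed;
    }

    public static void main(String[] args) {
        IntBinaryOperator correct = (a, b) -> a > b ? a : b;
        IntBinaryOperator buggy   = (a, b) -> a;      // ignores b entirely
        System.out.println(grade(correct) + "/3");    // 3/3
        System.out.println(grade(buggy) + "/3");      // 2/3
    }
}
```

The buggy fragment still passes two of three cases, which is exactly the gap students discover when asked to write comprehensive unit tests for their peers' questions: weak test sets let wrong answers through.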

Drop Project: An automatic assessment tool for programming assignments

Cipriano, B.P., Fachada, N. & Alves, P., SoftwareX, 18, 101079, 2022

Automated assessment tools (AATs) are software systems used in teaching environments to automate the evaluation of computer programs implemented by students. These tools can be used to stimulate the interest of computer science students in programming courses by providing quick feedback on their work and highlighting their mistakes. Despite the abundance of such tools, most of them are developed for a specific course and are not production-ready. Others lack advanced features that are required for certain pedagogical goals (e.g. Git integration) and/or are not flexible enough to be used with students having different computer literacy levels, such as first-year and second-year students. In this paper we present Drop Project (DP), an automated assessment tool built on top of the Maven build automation software. We have been using DP in our teaching activity since 2018, having received more than fifty thousand submissions across projects, classroom exercises, tests, and homework assignments. The tool's automated feedback has allowed us to raise the difficulty level of the course's projects, while the grading process has become more efficient and consistent between different teachers. DP is an extensively tested, production-ready tool. The software's code and documentation are available on GitHub under an open-source software license.
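Since Drop Project drives Maven, a grader in that style can recover pass/fail counts from the build log. The sketch below parses Maven Surefire's standard summary line ("Tests run: N, Failures: N, Errors: N, Skipped: N"); the percentage-of-passing-tests grading formula is an assumption for illustration, not DP's actual grading policy.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SurefireGrader {
    // Matches Maven Surefire's summary line, e.g.
    // "Tests run: 12, Failures: 2, Errors: 1, Skipped: 0"
    private static final Pattern SUMMARY = Pattern.compile(
        "Tests run: (\\d+), Failures: (\\d+), Errors: (\\d+), Skipped: (\\d+)");

    // Toy grading policy (an assumption, not Drop Project's): the
    // percentage of executed tests that passed.
    static int grade(String buildLog) {
        Matcher m = SUMMARY.matcher(buildLog);
        if (!m.find()) return 0;   // no test summary: no credit
        int run = Integer.parseInt(m.group(1));
        int failed = Integer.parseInt(m.group(2)) + Integer.parseInt(m.group(3));
        if (run == 0) return 0;
        return (run - failed) * 100 / run;
    }

    public static void main(String[] args) {
        String log = "[INFO] Tests run: 10, Failures: 2, Errors: 0, Skipped: 0";
        System.out.println("grade=" + grade(log));  // prints grade=80
    }
}
```

Building on an existing build tool this way is the design choice the paper highlights: the submission format is a standard Maven project, so compilation, dependency resolution, and test execution come for free.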

E-Lab: Web Based System for Automatic Assessment of Programming Problems

2012

E-Lab is a system developed at the Faculty of Computer Science and Engineering for solving and auto-grading programming problems from introduction-to-programming courses. The main goal is to simplify and improve the organization and the process of solving programming problems for a large group of students in dedicated computer labs using a centralized server. All of the students' work is done in a web browser using a web-based code editor, and everything is stored, compiled, and executed on the server. The system keeps records of all problem attempts from identified students, which are used as attendance records. All problems and solutions are kept under a version control system (Git). The platform supports different types of problems in several programming languages (C, C++, Java) and is designed to be easily extended.
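E-Lab compiles and executes submissions server-side; for Java submissions, the compile step can be sketched with the JDK's built-in compiler API (`javax.tools`). This is a minimal sketch, assuming a full JDK (not just a JRE) on the server; the class and method names are illustrative, not E-Lab's actual code.

```java
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ServerCompile {
    // Write the submitted source to a temp directory and compile it with
    // the JDK's built-in compiler; returns true on a clean compile.
    static boolean compiles(String className, String source) {
        try {
            Path dir = Files.createTempDirectory("elab-submission");
            Path file = dir.resolve(className + ".java");
            Files.writeString(file, source);
            JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
            if (javac == null) {
                return false;  // running on a JRE: no compiler available
            }
            // run() returns 0 on success; diagnostics go to stderr.
            return javac.run(null, null, null, file.toString()) == 0;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String ok  = "public class Hello { public static void main(String[] a) {} }";
        String bad = "public class Hello { this is not Java }";
        System.out.println(compiles("Hello", ok));   // true
        System.out.println(compiles("Hello", bad));  // false
    }
}
```

A production setup would additionally run the compiled class in a sandbox with resource limits; the sketch covers only the compile-and-report step that precedes execution and grading.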