Monte Carlo evaluations of methods of grade distribution in group projects: Simpler is better

Self and Peer Rating in Multi-Disciplinary Group Projects

2012

The ability to self- and peer-assess performance within a multi-disciplinary group is a valuable skill and one of many detailed in the CDIO syllabus as desirable for graduate engineers. The design, build and test environment in which a student applies technical knowledge and develops team-building and other professional skills, such as self- and peer-assessment, is complex and challenging for both students and assessors. Operating such activities as multi-disciplinary and multi-national projects adds further layers of complexity. This paper examines the reliability of students' self- and peer-assessments in such complex team environments. The study examined peer ratings from 5 teams at Queen's University Belfast (QUB) and 6 teams at the Royal Institute of Technology (KTH) in Stockholm, involving in total 105 students from 6 different disciplines. The analysis suggested that students tended to show bias in favour of those from the same discipline and against those from...
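The kind of discipline-bias check this abstract describes can be made concrete with a toy computation. Everything below (names, disciplines, the rating scale) is invented for illustration and is not taken from the study:

```python
# Toy illustration of a discipline-bias check: compare the mean peer
# rating given to same-discipline teammates with the mean given to
# cross-discipline teammates. All data here is invented.
ratings = {
    # (rater, ratee): score on a hypothetical peer-rating scale
    ("ann", "bob"): 4.0, ("ann", "eve"): 3.0,
    ("bob", "ann"): 4.5, ("bob", "eve"): 2.5,
    ("eve", "ann"): 3.5, ("eve", "bob"): 3.0,
}
discipline = {"ann": "mech", "bob": "mech", "eve": "design"}

same, cross = [], []
for (rater, ratee), score in ratings.items():
    (same if discipline[rater] == discipline[ratee] else cross).append(score)

print(f"same-discipline mean:  {sum(same) / len(same):.2f}")
print(f"cross-discipline mean: {sum(cross) / len(cross):.2f}")
```

A gap between the two means, as in this toy data, is the pattern the paper reports as in-group bias.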

SCORING MODELS FOR PEER ASSESSMENT IN TEAM-BASED LEARNING PROJECTS

Proceedings of EDULEARN19 Conference, 2019

Peer assessment has a double goal in higher education: it is used both to enhance learning of difficult topics or complex skills, such as working in projects and higher learning competences on the part of students, and to support, deepen or broaden the assessment by tutors and assessors of group work and project outcomes of team-based or online learning projects. Research on peer assessment splits into two areas, too. On the one hand, empirical research on peer assessment is mainly focussed on the design, evaluation and improvement of peer assessment practices, in order to enable tutors to offer students the best possible peer assessment environment and experience. On the other hand, methodological research on peer assessment primarily deals with problems and principles of measurement and related technical issues which guarantee that the quantitative results of peer assessments are meaningful and satisfy several other qualitative criteria specific to the learning objectives and in line with regulations and restrictions imposed by the administrative staff or the educational quality assurance department. In this paper we take up the latter, methodological, perspective. The primary reason is that since the pioneering work of Lejk et al. [1] in the 1990s and a single publication by Sharp [2] more than 10 years ago, no substantial progress has been made in this area beyond some attempts to implement those early, mainly statistical approaches to peer assessment [2]. What is painfully missing is a systematic study of the ways in which peer ratings, i.e. the mutual judgements of student performance in a group, can be aligned and combined with the tutor's judgement of the student project's outcomes, in the form of a team score, such that each student gets a representative, well-balanced and fair individual score, mark or grade. In short: there is still no theory of peer assessment based on sound concepts and principles. Therefore, we start our systematic study with the development of an algebraic theory of educational metrics, edumetrics for short, which is rich enough to formulate explicitly all concepts and principles needed for peer assessment. We want to be able to talk about different scoring scales (2), scoring models (2), scoring classes or types (3), and, altogether, scoring rules (2 × 2 × 3 = 12). We clarify why and how the latter differ, and we point out their respective strengths and weaknesses. Furthermore, we show that all scoring rules can be generalized by introducing a so-called impact factor, which allows peer ratings to have more or less effect on the team score. This weighting parameter offers tutors a systematic way to tailor peer assessment to whatever (unforeseen) contextual factors force them to rethink and adjust the impact that peer ratings may have on the team score. Finally, we explain the most important peer assessment principle or property we discovered, which we dubbed Split-Join-Invariance (SJI): an unbiased, well-balanced and fair scoring rule for peer assessment will be such that the mean of the (final) student scores equals the (initial) team score given by the tutor. Violation of this principle may lead to unpleasant arguments between students and tutor about why the team score deviates from a representative summary (the mean) of the students' scores.
Most of the scoring rules we constructed satisfy SJI, but some of them (including two called "assessment by adjustment", a kind of mixed additive-multiplicative scoring procedure sometimes used by tutors) lack this property, which makes them somewhat suspect and deficient, though they are definitely better than the majority of scoring rules found in practice.
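The SJI principle and the impact factor are concrete enough to sketch in code. The multiplicative rule below is an assumption chosen for illustration (the abstract does not give the paper's actual formulas): each student's score is the team score scaled by their peer rating relative to the team mean, attenuated by an impact factor.

```python
# Sketch of one possible SJI-respecting scoring rule. The formula is an
# assumption for illustration; the paper's actual rules are not
# reproduced in the abstract above.
def individual_scores(team_score, peer_ratings, impact=1.0):
    """Scale the tutor's team score by each student's peer rating
    relative to the team mean. impact = 0 gives everyone the team
    score; impact = 1 applies the peer ratings at full strength."""
    mean_rating = sum(peer_ratings) / len(peer_ratings)
    return [team_score * (1 + impact * (r / mean_rating - 1))
            for r in peer_ratings]

scores = individual_scores(70, [3.0, 4.0, 5.0], impact=0.5)
print(scores)                     # [61.25, 70.0, 78.75]
print(sum(scores) / len(scores))  # 70.0 -> Split-Join-Invariance holds
```

Because the scaling factors average to exactly 1 for any impact value, the mean of the individual scores reproduces the tutor's team score, which is precisely the Split-Join-Invariance property described above.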

Solving the group project assessment quandary: Can the instructor's equal partitioning of each group's topic be the solution?

2018

Team work is one of the generic skills that undergraduate students are expected to acquire by the time they graduate. Nevertheless, the traditional method of assessing group projects has been – in addition to its other shortcomings – sadly inaccurate in measuring individual students' contributions to the project. In this paper, I present a new technique for allocating group project topics and for assessing such projects, whereby the instructor – not the students in each group – divides each topic into several sub-topics, to accord with the number of students allocated to each group. The new technique also obliges group members individually to upload their parts of the project to their own accounts on Turnitin, rather than jointly uploading the whole project to one account, as had previously been the case. From my perspective, the new method has to date been very successful, as it addresses the shortcomings – in terms of accuracy and justice – of the traditional assessment of group pr...

Peer assessment in group projects accounting for assessor reliability by an iterative method

Teaching in Higher Education, 2014

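The abstract of this article is not reproduced in this listing, but its title points at a familiar idea: weight each assessor by how closely they agree with the emerging consensus, and iterate. The sketch below is a generic version of such a scheme, not the authors' algorithm, and it assumes every assessor rates every ratee:

```python
# Generic iterative reliability-weighting scheme of the kind the title
# describes (not the authors' exact method). Assumes a complete rating
# matrix: every assessor rates every ratee.
def iterate_reliability(ratings, n_iter=20, eps=1e-6):
    """ratings[rater][ratee] -> score. Returns (consensus, weights)."""
    raters = list(ratings)
    ratees = sorted({p for row in ratings.values() for p in row})
    weights = {r: 1.0 for r in raters}
    for _ in range(n_iter):
        # Weighted consensus score for each ratee.
        consensus = {
            p: sum(weights[r] * ratings[r][p] for r in raters)
               / sum(weights.values())
            for p in ratees
        }
        # Down-weight raters who deviate strongly from the consensus.
        for r in raters:
            mse = sum((ratings[r][p] - consensus[p]) ** 2
                      for p in ratees) / len(ratees)
            weights[r] = 1.0 / (mse + eps)
    return consensus, weights

ratings = {
    "a": {"x": 4, "y": 3, "z": 5},
    "b": {"x": 4, "y": 3, "z": 5},
    "c": {"x": 1, "y": 5, "z": 2},   # outlier assessor
}
consensus, weights = iterate_reliability(ratings)
print(consensus)  # pulled towards the two agreeing assessors
```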

Effectiveness of group-work assessment formula for final year projects

11th Annual Australasian Association for Engineering Education Conference, Adelaide, 27-29 September, pp. 253-259, 1999

An assessment formula addressing equity issues in final year group-work projects was presented at AAEE98. The formula appears formidable, but its meaning and use are simpler than they appear. The formula was used for the final year project subject in Electronic Engineering at the University of South Australia in 1998. This paper discusses the results of using the formula. Difficulties arose in certain anomalous situations, but in general the formula produced results matching the judgement of all assessors in the subject, based on their observation of the students' work and on partial assessments.
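The AAEE98 formula itself is not reproduced in this abstract. As a hedged illustration only, the sketch below shows the simpler multiplicative family such formulas typically belong to: scale the group mark by each member's contribution relative to the group mean, with a cap to blunt the kind of anomalous situations the paper mentions. All names and numbers are invented:

```python
# Illustrative multiplicative mark-splitting scheme; this is NOT the
# AAEE98 formula, only a common simpler relative of it.
def weighted_marks(group_mark, contributions, cap=1.2):
    """Split a group mark by relative contribution, capping the
    weighting factor to avoid anomalous inflation of individual marks."""
    mean_c = sum(contributions) / len(contributions)
    factors = [min(c / mean_c, cap) for c in contributions]
    return [round(group_mark * f, 1) for f in factors]

print(weighted_marks(65, [8, 10, 12]))  # [52.0, 65.0, 78.0]
```

The cap is one simple guard against the anomalies the paper reports; uncapped multiplicative schemes can award an individual far more than the group mark when one contribution dominates.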

Solving the group project assessment quandary: Can the instructor's partitioning of group topics among students be the solution?

Compass: Journal of Learning and Teaching

Team work is one of the generic skills that undergraduate students are expected to acquire by the time they graduate. Nevertheless, the traditional methodology used for assessing group projects suffers from several drawbacks, chief among them the inaccurate assessment of each student's contribution to the project, in addition to other shortcomings. In this paper we present a new technique for allocating topics in group projects and for assessing such projects, whereby the instructor – not the students in each group – divides each topic into several subtopics, to match the number of students allocated to each group. In addition, the new technique obliges each student to upload his or her part of the project separately to his or her own account on Turnitin, rather than the whole project being uploaded to one account, as was previously done. The results were superior to those of conventional projects, as students' complaints of injustice were minimized while marking accuracy was maximized.

Assessing Individual Student Performance in Collaborative Projects: A Case Study

New college graduates who can collaborate, share skills and knowledge, and communicate their ideas effectively are valuable to businesses. Skills learned from team projects translate into the workplace, creating employees with these abilities. Reaffirming the importance of team projects, this article investigates alternative solutions to the dilemma facing educators who must evaluate team members who do not contribute equally to the team's accomplishments. It posits and evaluates several alternatives to the challenge of evaluating team members. The article presents a methodology for implementing the most useful alternative. Finally, the article offers concluding comments based upon student evaluations from a course where the recommended methodology was implemented.

Student evaluation in an international collaborative project course

Applications and the …, 2001

Grading is a frequently discussed and contentious issue. There are several views on how best to grade, and deciding how to grade students who participate in a joint international project-oriented course is far from trivial. This paper examines some in-situ observations and concerns, here referred to as myths, which arose during the project. Some statistical information extracted from the assessment data is used to examine the truth and relevance of these myths.

Reliable Peer Assessment for Team-project-based Learning using Item Response Theory

2015

In recent years, assessment has been shifting from traditional testing to authentic assessment. As a way of assessing learners' authentic abilities, assessment in team-project-based learning has been attempted. Peer assessment is an effective method for assessing not only the outcomes but also the processes of team-project-based learning without burdening instructors, even when the number of teams increases. However, it has been pointed out that the reliability of peer assessment is generally lower than that of instructor assessment. To improve the reliability of peer assessment, several item response models have been proposed that incorporate rater characteristic parameters. This study was undertaken to improve the reliability of peer assessment in team-project-based learning using the item response models. However, the following problems can occur when applying item response models to peer assessment in team-project-based learning. (1) Earlier item response models incorporate...
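The truncated abstract does not reproduce the proposed models, but the underlying idea of item response models with rater characteristic parameters can be sketched. The toy model below is an assumption for illustration only: it treats an observed rating as the ratee's ability minus the rater's severity plus noise, and fits both by alternating closed-form updates. The paper's actual models are more elaborate.

```python
# Minimal rater-severity sketch in the spirit of the item response
# models mentioned above: observed score = ability - severity + noise.
# This is an illustrative toy, not the models proposed in the paper.
import numpy as np

def fit_rater_model(scores, n_iter=50):
    """scores[i, j]: rating given by rater j to student i (NaN if missing).
    Returns estimated student abilities and rater severities."""
    ability = np.nanmean(scores, axis=1)   # initial student abilities
    severity = np.zeros(scores.shape[1])   # initial rater severities
    for _ in range(n_iter):
        # Alternate closed-form least-squares updates.
        severity = np.nanmean(ability[:, None] - scores, axis=0)
        ability = np.nanmean(scores + severity[None, :], axis=1)
        severity -= severity.mean()        # fix the scale
    return ability, severity
```

Correcting each observed rating for the estimated rater severity, as this model does, is one simple route to the higher-reliability peer scores the paper aims at.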