Charting new territory: assessing the online frontier of student ratings of instruction
Related papers
Addressing Common Concerns about Online Student Ratings of Instruction: A Research-Informed Approach
Canadian Journal of Higher Education
Concerns over the usefulness and validity of student ratings of instruction (SRI) have continued to grow with online processes. This paper presents seven common and persistent concerns identified and tested during the development and implementation of a revised SRI policy at a Canadian research-intensive university. These concerns include bias due to insufficient sample size, student academic performance, polarized student responses, disciplinary differences, class size, punishment of rigorous instructor standards, and timing of final exams. We analyzed SRI responses from two mandatory Likert scale questions related to the course and instructor, both of which were consistent over time and across all academic units at our institution. The results show that overall participation in online SRIs is representative of the student body, with academically stronger students responding at a higher rate, and that the SRIs themselves provide evidence that may allay these concerns.
How do we rate? An evaluation of online student evaluations
Assessment & Evaluation in Higher Education, 2009
This paper analyses the popular RateMyProfessors (RMP) website where students evaluate instructors in higher education. A study was designed to measure (1) the awareness and utilisation of the RMP website, (2) the internal and external validity of the RMP ratings in measuring teaching effectiveness, and (3) variation in the above across disciplines. It is concluded that the category of ratings, created by the website, establishes an anti-intellectual tone that manifests itself in comments about instructors' personality, easiness of workload and entertainment value rather than knowledge attained. The Rate My Professors (RMP) website (ratemyprofessors.com) offers a community forum where 'students do the grading', anonymously sharing evaluations of instructors with other students. The website aims to make information available so students can choose better instructors and courses, thereby improving their education. The merits of the website are often contested within the walls of academe, especially by the faculty who are directly scrutinised, but the website has largely escaped academic or public evaluation. Although the debate concerning the website usually centres on the quality of data provided by the website, there is little information to substantiate the credibility of the data. This paper contributes to the emerging literature (see Felton et al. 2004; Coladarci and Kornfield 2007) that provides information on the services and disservices of this popular website to the academic community. There are many similar websites, such as pickaprof.com, campusdirt.com, myprofessorsucks.com, and rateyourprof.com, but RMP is by far the most popular student rating site. Since RMP started in 1999, the for-profit website has enjoyed great success and now sells ads to numerous international corporations such as VISA, Monster, MSN, Citibank and Allstate. In early 2004, RMP had almost 1.5 million ratings from nearly 4000 schools. Three years later, the number of postings increased by 300% to over six million. Currently over 6000 colleges and universities and nearly 800,000 instructors are listed across the United States, Canada, England, Scotland and Wales. With over eight million student members, traffic averages over 200,000 unique visitors per day. One thing is for sure: the website is not going away. Like it or not, the information on the website is very profitable.
Online and Face-to-Face Teaching: How Do Student Ratings Differ?
2014
This paper reports on a study whose purpose was to compare student ratings of instruction in online and face-to-face (F2F) higher education courses in an effort to better understand how faculty can strengthen their teaching in the online environment. Student ratings of instruction in online and on-campus courses were examined in two different ways that yielded similar findings. First, more than 8,000 student ratings of online and on-campus, traditional courses were compared. The ratings were based on 172 online courses and 470 on-campus courses. Results indicated that on-campus courses were rated significantly higher than online courses in Communication, Faculty/Student Interaction, Grading, Instructional Methods, and Course Outcomes; effect sizes were small. Student Effort was rated significantly higher for online courses than for F2F courses, also with a small effect size. A second analysis, using 11 pairs of the same course and same instructor, yielded similar findings. Students ...
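The kind of comparison reported above (on-campus ratings significantly higher than online ratings, but with small effect sizes) can be illustrated with a short sketch. This is not the study's own analysis; the sample sizes, rating values, and 1-5 scale below are hypothetical, and the sketch simply pairs Welch's t-test with Cohen's d as one plausible way such figures are produced.

```python
import numpy as np
from scipy import stats

def compare_modalities(oncampus, online):
    """Welch's t-test plus Cohen's d for two independent groups of ratings."""
    t, p = stats.ttest_ind(oncampus, online, equal_var=False)
    pooled_sd = np.sqrt((np.var(oncampus, ddof=1) + np.var(online, ddof=1)) / 2)
    d = (np.mean(oncampus) - np.mean(online)) / pooled_sd
    return t, p, d

# Made-up ratings on a 1-5 scale, chosen to resemble a small effect size.
rng = np.random.default_rng(0)
oncampus = rng.normal(4.2, 0.6, 500).clip(1, 5)
online = rng.normal(4.05, 0.6, 200).clip(1, 5)
t, p, d = compare_modalities(oncampus, online)
print(f"t={t:.2f}, p={p:.4f}, Cohen's d={d:.2f}")
```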
Examining Online Course Evaluations and the Quality of Student Feedback
Journal of Educational Informatics, 2022
The purpose of this article was to provide a comprehensive review of research on the quality of student feedback from post-secondary institutions using online course evaluations versus traditional paper-pencil methods. Nineteen peer-reviewed articles published from 2000 to 2020 were examined for changes to course evaluations following a transition to online collection methods. Three themes emerged from the literature: effects on response rates, presence of non-response bias, and effects on comment quality. Results suggest that using online methods for collecting student feedback tends to decrease response rates somewhat; however, the effect is often temporary. Further, studies reported conflicting results on whether online methods introduce a non-response bias in open-ended comments. Many studies demonstrated that online methods increased the word counts in student-provided comments and that the constructive nature of the comments improved as well. The results may in...
Student Evaluation of Teaching: The Inequity of Faculty Scores in Online versus Face-to-Face Courses
2013
Researchers have conducted a multitude of studies over the last century on Student Evaluations of Teaching (SET); however, very few have been conducted in the new digital age (Loveland, 2007). More work is in progress as researchers try to define the differences in student responses and thoughts about the online teaching environment. An unfortunate consequence can be administrative decisions that apply a one-size-fits-all mentality, even though many authors, including Dziuban and Moskal (2011), have outlined several research-based alternatives for evaluating online instructional effectiveness. SETs are important to faculty because they are often the determining factor in merit pay and tenure/promotion. Faculty use SETs to guide decision-making about their curriculum and the instructional strategies used to deliver the course (Sheehan & DuPrey, 1999). The power of the SET is recognized as a driving force in academia. What is often not discussed is the difference between SETs for face-to-face courses and SETs for online courses.
A comparison of student evaluations of teaching between online and face-to-face courses
The Internet and Higher Education, 2007
The literature contains indications of a bias in student evaluations of teaching (SET) against online instruction compared to face-to-face instruction. The present case study consists of content analysis of anonymous student responses to open-ended SET questions submitted by 534 students enrolled in 82 class sections taught by 41 instructors, one online and one face-to-face class section for each instructor. There was no significant difference in the proportion of appraisal text segments by delivery method, suggesting no delivery method bias existed. However, there were significant differences in the proportion of text segments for topical themes and topical categories by delivery method. Implications of the findings for research and practice are presented.
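A difference in the proportion of coded text segments by delivery method, as described above, could for example be tested with a chi-square test of independence over the segment counts. The counts in this sketch are invented for demonstration and do not come from the study.

```python
from scipy.stats import chi2_contingency

# Rows: delivery method; columns: counts of coded text segments per theme.
# All counts are hypothetical placeholders.
segment_counts = [
    [120, 85, 40],   # face-to-face sections
    [95, 110, 55],   # online sections
]
chi2, p, dof, expected = chi2_contingency(segment_counts)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
```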
Pre- and post-grade responses in online student evaluations: what do they tell us?
stu.canterbury.ac.nz
Online surveys can bypass the semester restrictions of paper-based surveys, with timing extendable not only to include and go beyond exams but also to cover grade release. This paper presents the results of 2 follow-up studies to a small online survey trial in the middle of 2008. That original trial led to a wider trial at the end of 2008 involving 9227 students in 40 courses (units) that used a similar structure to the original. The students in 3 groups who responded prior to grade release were later emailed their original response. They were asked if, knowing their grade, they wished to change their response and were provided with a second survey to do so, with 43% responding. Only 17% opted to change their ratings. A third, small trial of 20 surveys ran at the end of semester one in 2009. In contrast to the 2008 surveys, the 2009 surveys were simply allowed to run from 2 weeks before the end of the semester through to 10 days after grade release. Students were again asked to estimate their grade and their actual grades were matched to the responses. The results confirmed the first trial, with a weak positive correlation between ratings and grades. Most importantly, in all 3 trials grade knowledge had no significant effect on ratings.
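A minimal sketch of the two analyses mentioned above, assuming hypothetical data: a rank correlation between grades and ratings (expected to be weakly positive) and a paired test of whether ratings shifted once grades were known (expected to be non-significant). None of the numbers are from the trials themselves.

```python
import numpy as np
from scipy import stats

# Hypothetical final marks (%) and course ratings on a 1-5 scale.
grades = np.array([55, 62, 70, 74, 78, 81, 85, 90])
ratings = np.array([3.8, 3.2, 4.5, 3.5, 3.9, 4.4, 3.6, 4.2])
rho, p_corr = stats.spearmanr(grades, ratings)   # expect a weak positive rho

# Hypothetical pre- and post-grade-release ratings from the same students.
pre = np.array([4.0, 3.5, 4.5, 3.0, 4.2])
post = np.array([4.0, 3.5, 4.4, 3.2, 4.2])
t, p_change = stats.ttest_rel(pre, post)         # expect no significant shift

print(f"Spearman rho={rho:.2f} (p={p_corr:.3f}); paired t-test p={p_change:.3f}")
```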
The medium is the message: Comparing paper-based and web-based course evaluation modalities
Journal of Accounting Education, 2013
An increasing number of universities have moved student evaluation of faculty and courses out of the classroom, where it had resided for many years, and onto the web. The increased efficiency of the web-based administrative modality of these instruments seems self-apparent. However, whether the measures obtained using the new modality are the same as those obtained with the old modality is unclear. This paper compares the results of questionnaires administered on the web with those collected from the same students while they were in class. Data from 181 course offerings over seven terms at one university were utilized. Significantly lower evaluation scores for both the instructor and the course are produced when a web-based modality is used. In general, these results did not vary for courses at different levels of matriculation or at different levels of student participation. However, the magnitude of modality differences varied between highly rated and poorly rated courses. Implications for faculty evaluation are offered.
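The paper-versus-web comparison for the same course offerings lends itself to a paired test; the sketch below uses fabricated scores purely to show the shape of such an analysis, not the paper's actual data or method.

```python
import numpy as np
from scipy import stats

# Fabricated mean evaluation scores for the same course offerings under each modality.
paper_scores = np.array([4.3, 3.9, 4.5, 4.1, 3.7, 4.4])  # in-class administration
web_scores = np.array([4.1, 3.6, 4.4, 3.9, 3.5, 4.2])    # web administration

t, p = stats.ttest_rel(paper_scores, web_scores)
mean_diff = np.mean(paper_scores - web_scores)
print(f"mean paper-minus-web difference={mean_diff:.2f}, t={t:.2f}, p={p:.4f}")
```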