exams – Techdirt
Dartmouth's Insane Paranoia Over 'Cheating' Leads To Ridiculous Surveillance Scandal
from the this-is-dumber-than-it-looks dept
The NY Times had an incredible story a few days ago about an apparent “cheating scandal” at Dartmouth’s medical school. The problem is, it doesn’t seem like there was any actual cheating. Instead, it looks like a ton of insane paranoia and an overreliance on surveillance technology by an administration that shouldn’t be in the business of educating kindergarteners, let alone med students. We’ve had a few posts about the rise of surveillance technology in schools and its many downsides, and the use of such tools really ramped up during the pandemic, as students were often taking exams from home.
So much of the paranoia is based on the silly belief that if you don’t have everything crammed into your head, you haven’t actually learned anything. Out here in the real world, a more sensible view is that if you teach people how to look up the necessary details when they need them, you’ve probably done a good job. Yes, there are some exceptions and some scenarios where full recall is important. But for most things, knowing how to find the right answer matters a lot more than making sure every trivial detail is memorized and can be regurgitated on an exam. Indeed, studies have repeatedly shown that cramming details into your head for an exam often means they don’t stick in long-term memory.
In short, this type of exam tests people on exactly the wrong thing, and encourages the kind of behavior that leads to worse outcomes in the long run.
But the situation at Dartmouth is, believe it or not, even dumber. Seventeen Dartmouth medical students have been accused of cheating, but the accusations rest on a tool that was never designed to spot cheating: Canvas, a popular platform professors use to post assignments and students use to submit homework. Here’s what happened, according to the NY Times:
To hinder online cheating, Geisel requires students to turn on ExamSoft, a separate tool that prevents them from looking up study materials during tests, on the laptop or tablet on which they take exams. The school also requires students to keep a backup device nearby. The faculty member’s report made administrators concerned that some students may have used their backup device to look at course material on Canvas while taking tests on their primary device.
Geisel’s Committee on Student Performance and Conduct, a faculty group with student members that investigates academic integrity cases, then asked the school’s technology staff to audit Canvas activity during 18 remote exams that all first- and second-year students had taken during the academic year. The review looked at more than 3,000 exams since last fall.
The tech staff then developed a system to recognize online activity patterns that might signal cheating, said Sean McNamara, Dartmouth’s senior director of information security. The pattern typically showed activity on a Canvas course home page (on, say, neurology) during an exam, followed by activity on a Canvas study page, like a practice quiz, related to the test question.
“You see that pattern of essentially a human reading the content and selecting where they’re going on the page,” Mr. McNamara said. “The data is very clear in describing that behavior.”
The audit identified 38 potential cheating cases. But the committee quickly eliminated some of those because one professor had directed students to use Canvas, Dr. Compton said.
In emails sent in mid-March, the committee told the 17 accused students that an analysis showed they had been active on relevant Canvas pages during one or more exams. The emails contained spreadsheets with the exam’s name, the test question number, time stamps and the names of Canvas pages that showed online activity.
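Dartmouth hasn’t published the audit code, but the heuristic McNamara describes is easy enough to caricature: look for a course-page hit followed shortly by a study-page hit inside an exam window. Here’s a minimal sketch in Python, with every name and data structure invented for illustration:

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class LogEntry:
    timestamp: datetime
    student: str
    page: str        # e.g. "neurology/home" or "neurology/practice-quiz"
    page_type: str   # "course_home" or "study_page"

def flag_suspicious(entries, exam_start, exam_end, max_gap=timedelta(minutes=5)):
    """Flag students whose Canvas log shows a course home page hit
    followed shortly by a study page hit during the exam window -- the
    pattern the audit reportedly treated as evidence of cheating.
    Crucially, a log line records a request, not intent: this check
    cannot tell a human click from an automated background refresh."""
    by_student = defaultdict(list)
    for e in sorted(entries, key=lambda e: e.timestamp):
        if exam_start <= e.timestamp <= exam_end:
            by_student[e.student].append(e)
    flagged = set()
    for student, evs in by_student.items():
        for a, b in zip(evs, evs[1:]):
            if (a.page_type == "course_home"
                    and b.page_type == "study_page"
                    and b.timestamp - a.timestamp <= max_gap):
                flagged.add(student)
    return flagged
```

Note what even this toy version makes obvious: the input is a trail of HTTP requests, and nothing in the pattern itself establishes that a person was at the keyboard.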
If you just read the Times’s account, it might sound like at least some evidence that these students were doing something they weren’t supposed to be doing (even if you think the rules are dumb). But even that appears not to be accurate. Some of us (and I am guilty of this) rarely, if ever, close tabs that hold important tools or information for our work. Plenty of students are the same, and likely leave Canvas open all the time. And that’s what many of the students have claimed.
Geisel students said they often had dozens of course pages open on Canvas, which they rarely logged out of. Those pages can automatically generate activity data even when no one is looking at them, according to The Times’s analysis and technology experts.
School officials said that their analysis, which they hired a legal consulting firm to validate, discounted automated activity and that accused students had been given all necessary data in their cases.
But at least two students told the committee in March that the audit had misinterpreted automated Canvas activity as human cheating. The committee dismissed the charges against them.
In another case, a professor notified the committee that the Canvas pages used as evidence contained no information related to the exam questions his student was accused of cheating on, according to an analysis submitted to the committee. The student has appealed.
The school’s paranoia over this went further. When it confronted the 17 students, it more or less pressured them into pleading guilty rather than fighting their case:
Dartmouth had reviewed Mr. Zhang’s online activity on Canvas, its learning management system, during three remote exams, the email said. The data indicated that he had looked up course material related to one question during each test, honor code violations that could lead to expulsion, the email said.
Mr. Zhang, 22, said he had not cheated. But when the school’s student affairs office suggested he would have a better outcome if he expressed remorse and pleaded guilty, he said he felt he had little choice but to agree. Now he faces suspension and a misconduct mark on his academic record that could derail his dream of becoming a pediatrician.
“What has happened to me in the last month, despite not cheating, has resulted in one of the most terrifying, isolating experiences of my life,” said Mr. Zhang, who has filed an appeal.
The article notes other students were told they had 48 hours to respond to charges — and that they weren’t provided the evidence the school supposedly had on them, while also being pressured to admit guilt:
They said they had less than 48 hours to respond to the charges, were not provided complete data logs for the exams, were advised to plead guilty though they denied cheating or were given just two minutes to make their case in online hearings, according to six of the students and a review of documents.
There are just layers upon layers of ridiculousness here. Not only is this bad pedagogy, it’s dangerous to engage in this kind of surveillance (in the middle of a pandemic, no less) and to build up an entire atmosphere of mistrust.
EFF did a long and detailed post on this, noting that the data in question could not have shown cheating and arguing that these students have been denied basic due process.
But after reviewing the logs that were sent to EFF by a student advocate, it is clear to us that there is no way to determine whether this traffic happened intentionally, or instead automatically, as background requests from student devices, such as cell phones, that were logged into Canvas but not in use. In other words, rather than the files being deliberately accessed during exams, the logs could have easily been generated by the automated syncing of course material to devices logged into Canvas but not used during an exam. It’s simply impossible to know from the logs alone if a student intentionally accessed any of the files, or if the pings exist due to automatic refresh processes that are commonplace in most websites and online services. Most of us don’t log out of every app, service, or webpage on our smartphones when we’re not using them.
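EFF’s point is easy to demonstrate. Here’s a hedged little simulation (the intervals and page names are invented): an idle device left logged into Canvas, periodically refreshing whichever tabs are open, produces exactly the kind of log trail described above, with no human involved:

```python
import random
from datetime import datetime, timedelta

def simulate_idle_device(start, hours=1.0, poll_minutes=3.0):
    """An idle device left logged into Canvas, periodically refreshing
    whichever tabs are open. Which tab syncs next is arbitrary, so the
    log can show a course home page followed by a study page with no
    human anywhere near the device."""
    open_tabs = ["neurology/home", "neurology/practice-quiz"]
    entries, t = [], start
    while t < start + timedelta(hours=hours):
        entries.append((t, random.choice(open_tabs)))
        # real sync intervals jitter, so the trail doesn't even look periodic
        t += timedelta(minutes=poll_minutes + random.uniform(-1.0, 1.0))
    return entries

exam_start = datetime(2021, 3, 1, 9, 0)
for ts, page in simulate_idle_device(exam_start):
    print(ts.strftime("%H:%M:%S"), page)
```

Run it a few times and the “suspicious” home-page-then-study-page sequence shows up during the exam window purely by chance.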
Meanwhile, the student free speech advocacy organization FIRE has been demanding answers from Dartmouth as well. To make matters worse, FIRE noticed that Dartmouth recently hid its “due process policies” from public view (convenient!):
We also asked why the college appears to have recently password-protected many of its due process policies. Of course, doing so conveniently hides them from the scrutiny of the public and prospective students who might be curious whether they will have rights, and what those rights might be, if they matriculate at Dartmouth.
And, of course, all of this could have been avoided if Dartmouth wasn’t so overly paranoid about the idea that medical students might (gasp!) be able to look up relevant information. When I go to a medical professional, I don’t necessarily need them to have perfect recall of every possible symptom or treatment. What I hope they’re able to do is use their knowledge, combined with their ability to reference the proper materials, to figure out the best solution. Perhaps I should avoid doctors who graduated from Dartmouth if I want that.
Filed Under: cheating, exams, medical students, surveillance, trust
Companies: canvas, dartmouth
England's Exam Fiasco Shows How Not To Apply Algorithms To Complex Problems With Massive Social Impact
from the let-that-be-a-lesson-to-you-all dept
The disruption caused by COVID-19 has touched most aspects of daily life. Education is obviously no exception, as the heated debates about whether students should return to school demonstrate. But another tricky issue is how school exams should be conducted. Back in May, Techdirt wrote about one approach, online testing, which brings its own challenges. Where online testing is not an option, other ways of evaluating students at key points in their educational careers need to be found. In the UK, the key test is the GCE Advanced level, or A-level for short, taken in the year when students turn 18. Its grades are crucially important because they form the basis on which most university places are awarded in the UK.
Since it was not possible to hold the exams as usual, and online testing was not an option either, the body that regulates exams in England, Ofqual, turned to technology. It came up with an algorithm to predict each student’s grades. The results of this high-tech approach have just been announced in England (other parts of the UK run their exams independently). It has not gone well. Large numbers of students have had their expected grades, as predicted by their teachers, downgraded, sometimes substantially. An analysis from one of the main UK educational associations found that the downgrading is systematic: “the grades awarded to students this year were lower in all 41 subjects than they were for the average of the previous three years.”
Even worse, the downgrading turns out to have hit hardest the students in poorly performing schools, typically in socially deprived areas, while schools that have historically done well, often in affluent areas or privately funded, saw their students’ grades improve over teachers’ predictions. In other words, the algorithm perpetuates inequality, making it harder for brilliant students in poor schools or from deprived backgrounds to go to top universities. A detailed mathematical analysis by Tom SF Haines explains how this fiasco came about:
Let’s start with the model used by Ofqual to predict grades (p85 onwards of their 319 page report). Each school submits a list of their students from worst student to best student (it included teacher suggested grades, but they threw those away for larger cohorts). Ofqual then takes the distribution of grades from the previous year, applies a little magic to update them for 2020, and just assigns the students to the grades in rank order. If Ofqual predicts that 40% of the school is getting an A [the top grade] then that’s exactly what happens, irrespective of what the teachers thought they were going to get. If Ofqual predicts that 3 students are going to get a U [the bottom grade] then you better hope you’re not one of the three lowest rated students.
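Haines’s description translates almost directly into code. Here’s a simplified sketch (the “little magic” that updates the historical distribution for 2020 is omitted, and the quotas below are invented):

```python
def assign_grades(ranked_students, grade_quotas):
    """Assign grades by rank order, per Haines's description of the
    Ofqual model: the predicted distribution for the school is fixed,
    and students fill it from the top rank down, regardless of the
    grades their teachers predicted for them individually.

    ranked_students: the school's ranking, best student first.
    grade_quotas: fraction of the cohort per grade, best grade first.
    """
    n = len(ranked_students)
    grades, i = {}, 0
    for grade, fraction in grade_quotas.items():
        quota = round(fraction * n)
        for student in ranked_students[i:i + quota]:
            grades[student] = grade
        i += quota
    # rounding can leave stragglers; they land in the bottom grade
    bottom = list(grade_quotas)[-1]
    for student in ranked_students[i:]:
        grades[student] = bottom
    return grades

# A cohort of 10: if the school's history says 10% get a U, the
# lowest-ranked student gets a U no matter what the teachers predicted.
cohort = [f"student_{k}" for k in range(1, 11)]
print(assign_grades(cohort, {"A": 0.4, "B": 0.3, "C": 0.2, "U": 0.1}))
```

The inflexibility is right there in the structure: a student’s grade depends only on their rank and the school’s quota, so nothing an individual student actually did can move them out of the bottom slots.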
As Haines makes clear, the inflexibility of the approach guarantees many cases of injustice, where bright and hard-working students are given poor grades simply because they were lower down in the class ranking, or because the school did badly the previous year. Twitter and UK newspapers are currently full of stories of young people whose hopes have been dashed by this effect, as they have now lost the places they had been offered at university because of these poorer-than-expected grades. The problem is so serious, and the anger expressed by parents of all political affiliations so palpable, that the UK government has been forced to scrap Ofqual’s algorithmic approach completely, and will now use the teachers’ predicted grades in England. Exactly the same thing happened in Scotland, which also applied a flawed algorithm, causing similarly huge anguish to thousands of students, before dropping the idea.
The idea of writing algorithms to solve this complex problem is not necessarily wrong. Other solutions — like using grades predicted by teachers — have their own issues, including bias and grade inflation. The problems in England arose because people did not think through the real-life consequences for individual students of the algorithm’s abstract rules — even though they were warned of the model’s flaws. Haines offers some useful, practical advice on how it should have been done:
The problem is with management: they should have asked for help. Faced with a problem this complex and this important they needed to bring in external checkers. They needed to publish the approach months ago, so it could be widely read and mistakes found. While the fact they published the algorithm at all is to be commended (if possibly a legal requirement due to the GDPR right to an explanation), they didn’t go anywhere near far enough. Publishing their implementations of the models used would have allowed even greater scrutiny, including bug hunting.
As Haines points out, last year the UK’s Alan Turing Institute published an excellent guide to implementing and using AI ethically and safely (pdf). At its heart lie the FAST Track Principles: fairness, accountability, sustainability and transparency. The fact that Ofqual evidently didn’t think to apply them to its exam algorithm means it only gets a U grade for its work on this problem. Must try harder.
Follow me @glynmoody on Twitter, Diaspora, or Mastodon.
Filed Under: algorithms, education, exams, grades, predictions, predictive algorithms, protests, testing
Subtle: Iraq Flips The Internet Switch For 3 Hours To Combat Cheating Students And Corrupted Teachers
from the well-okay-then dept
We’ve talked about cheating in academia in the past, usually revolving around whether or not what used to be called cheating might be better thought of as collaboration. Beyond that, we’ve also talked about some of the strategies used to combat modern “cheating,” including monitoring students’ online activities to make sure they weren’t engaged in cheating behavior.
Well, the nation of Iraq doesn’t have time for all of this monitoring and sleuthing. When its students take their standardized tests, it simply shuts the damned internet off completely.
For a few hours each morning, the Iraqi government keeps cutting off internet access—to keep students from cheating on their end-of-year exams. As reported by DYN research, which tracks internet blackouts around the world, the country’s access went almost entirely dead between 5 a.m. and 8 a.m. in the morning on Saturday, Sunday and again on Monday.
And this isn’t the first time the Iraqi government has gone about things in this way. Last year, it pulled the same lever to shut down internet access across the country, with the same explanation: combatting a scourge of question-and-answer sharing occurring online. What’s interesting is that the real problem appears to be the teachers, not the students. Teachers in Iraq are apparently regularly bribed by students to share test questions and answers, and those leaks are then spread across the internet for other Iraqi students to see.
“What happens usually is that some teachers would be giving the exams questions to students who pay money, then [those] students would sell online questions all over country,” one Iraqi, who requested his name not be used in a story, told Vocativ. “Between 5 a.m. to 8 a.m. [is when teachers finalize questions] so this is the time when teachers [who have been paid off would] give questions to students by Facebook or Viber or Whatsapp and so on.”
Now, perhaps this move is effective in its aims. I don’t know, since students looking to cheat haven’t exactly always required the internet to do so. Still, even if it works, there must be a more subtle yet effective way to combat this cheating scourge. Perhaps one that doesn’t interrupt internet access for, oh I don’t know, everyone else in the entire country. Because the effects of this blackout aren’t exactly limited to students.
Human rights groups were outraged at the outage. “We see this, especially in such a destabilized country as Iraq, as really terrible. It’s a lot of people under a media and communications blackout,” Deji Olukotun, Senior Global Advocacy Manager at the internet freedom nonprofit, told Vocativ.
Come on guys, figure this out.
Filed Under: cheating, exams, internet, iraq, students
DailyDirt: Can Computers Grade Written Essays?
from the urls-we-dig-up dept
Technology aimed at education could really benefit an incredible number of students by making classes and learning (potentially) a more pleasant and efficient experience. Computers can’t replace a really good human teacher, but they can make it easier for good human teachers to reach a vast audience of students. Massive open online courses (MOOCs) promise to change how education works, but some technological tools might be missing. It’s pretty straightforward to test students on math problems in an automated way, but grading essays is a much more daunting problem. There have been some calls for automated grading software from various organizations (like the Hewlett Foundation). But at the same time, the National Council of Teachers of English argues that computers simply can’t grade essays. Here are just a few more links on this debate over using algorithms instead of English professors (or grad students).
- EdX, the non-profit started by Harvard and MIT, is releasing some software to automagically grade human-written essays. Some see this software as just another tool for educators to use for more immediate feedback to students, while others are worried that these algorithms will be used incorrectly and lead to disastrous educational policies and outcomes. [url]
- There are studies showing that algorithms are statistically comparable to humans when it comes to ranking essays on a 5-point scale. There are things machines do better and things humans do better; know the differences, and automated essay grading can be done productively in the right context (see the sketch after this list for a rough idea of how such scoring works). [url]
- Automated essay readers can grade 16,000 essays in 20 seconds. The Educational Testing Service is testing out automation, so students may soon be facing algorithmic grading for their college entrance exams. [url]
- Grading a few sentences can be harder than it might look. Professional (human) teachers are obviously better at interpreting the insights and ideas behind the words a student writes, but computers scale much better and never tire of horrible spelling mistakes or misplaced modifiers…. [url]
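As flagged in the second link above, here’s a rough idea of how statistical essay scoring is typically built. None of the linked systems’ internals are public, so this is purely illustrative (toy essays, invented scores), pairing surface text features with a regression model:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Toy training set: essays already scored by humans on a 1-5 scale.
essays = [
    "The author builds a careful argument, step by step, toward a clear thesis.",
    "I like books. Books are good.",
    # ...in practice, thousands of human-scored essays go here
]
human_scores = [4.0, 1.0]

# Word and phrase frequencies stand in for "quality" -- which is why
# such models can reward vocabulary and length while staying oblivious
# to whether the argument actually makes sense.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    Ridge(alpha=1.0),
)
model.fit(essays, human_scores)
print(model.predict(["A new essay making a clear, step-by-step argument."]))
```

Because the model only sees word statistics, it scales to thousands of essays per minute, but it can never judge insight, which is the crux of the teachers’ objection.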
If you’d like to read more awesome and interesting stuff, check out this unrelated (but not entirely random!) Techdirt post via StumbleUpon.
Filed Under: ai, algorithms, artificial intelligence, automation, education, edx, essays, ets, exams, grading, mooc, nlp, tests
Companies: harvard, hewlett foundation, kaggle, mit
DailyDirt: Wuzzle Means To Mix. Sculch Is Junk. Alate Means To Have Wings. A Baloo Is A Bear….
from the urls-we-dig-up dept
There are a lot of standardized tests for kids to take, but it’s not always clear what the results of the tests actually mean. If society wants to create a huge population of adults who can memorize some facts or fill out circles with No. 2 pencils, then we’re doing a pretty good job of it. Here are a few links that question the usefulness of certain kinds of tests.
- The New York State Education Department recently threw out standardized test questions related to a nonsensical story about talking animals and a sleeveless pineapple. Apparently, a lot of 8th graders were confused about the moral of this story, but the larger lesson might be that standardized tests shouldn’t be taken too seriously. [url]
- The headmaster of one of NYC’s top private schools (Riverdale) doesn’t have a high opinion of standardized IQ tests for admissions. “This push on tests … is missing out on some serious parts of what it means to be a successful human.” [url]
- In Florida, it looks like 5th graders are getting their answers marked wrong even when they’re correct. Science is so subjective these days. [url]
- Finnish schools don’t administer standardized tests until the last year of high school, but somehow Finnish students seem to do well on the PISA (Program for International Student Assessment) exams. Is there something to be learned from the Finnish school system? [url]
- To discover more interesting education-related content, check out what’s currently floating around the StumbleUpon universe. [url]
By the way, StumbleUpon can recommend some good Techdirt articles, too.
Filed Under: education, exams, finland, schools, students, teachers, tests
Companies: riverdale