software – Techdirt

Kaspersky Leaves U.S., Deletes Itself, Swaps Everybody’s Antivirus For Software Nobody Asked For

from the didn't-ask-for-this dept

Back in 2017, the Trump administration signed new rules banning Russian-based Kaspersky software on all government computers. Last June, the Biden administration took things further and banned distribution and sale of the software, stating that the company’s ties to the Russian government made its intimacy with U.S. consumer devices and data a national security threat.

While there are justifiable security concerns here, much like the ban of TikTok, the decision wasn’t free of lobbying influence from domestic companies looking to dismantle a competitor. It’s relatively easy to get Congress heated up about national security concerns, because that framing tends to mask anti-competitive lobbying, letting it be brushed aside, none too transparently, as serving the greater good.

Nor is a ban entirely consistent with broader U.S. policy, since U.S. government corruption prevents it from passing a meaningful privacy law or regulating dodgy international data brokers that traffic in no limit of sensitive U.S. location and behavior data.

China and Russia don’t really need TikTok or AV software; they can simply buy access to your daily movement and browsing data from data brokers. Or, thanks to our lack of privacy laws or real accountability for lazy and bad actors, they can hack into any number of dodgy apps, software, or hardware with substandard security.

Regardless, this week Kaspersky Labs effectively left the U.S., but not before engaging in a practice that doesn’t exactly scream “high security standards.” The company deleted its products from U.S. users’ computers without anybody’s consent, then replaced them with UltraAV’s antivirus solution — also without informing users.

Many users understandably saw this nonconsensual transaction take place and assumed they’d been hacked or infected with a virus:

“I woke up and saw this new antivirus system on my desktop and I tried opening kaspersky but it was gone. So I had to look up what happened because I was literally having a mini heart attack that my desktop somehow had a virus which uninstalled kaspersky somehow,” one user said.

One problem is that Kaspersky had emailed customers just a few weeks earlier, assuring them they would continue receiving “reliable cybersecurity protection.” It made no mention of the fact that this would involve deleting software and making installation choices consumers hadn’t approved, suggesting that the company’s exit from the security software industry won’t be all that big of a loss.

That said, it would be nice if U.S. consternation about consumer privacy were somewhat more… consistent.

The U.S. isn’t actually serious about U.S. consumer privacy because we make too much money off of the reckless collection and sale of said data to even pass baseline privacy laws. And the U.S. government has grown too comfortable being able to buy consumer data instead of getting a warrant. But we do like to put on a show that protecting consumer data is a top priority all the same.

Filed Under: antivirus, ban, consumers, national security, privacy, security, software
Companies: kaspersky

FTC Pushed To Crack Down On Companies That Ruin Hardware Via Software Updates Or Annoying Paywalls

from the you-don't-own-what-you-buy dept

Mon, Sep 9th 2024 05:30am - Karl Bode

We’ve noted for years how you no longer really own the things you buy. Whether it’s smart home hardware that becomes useless paperweights when the manufacturer implodes, or post-purchase firmware updates that actively make your device less useful, you simply never know if the product you bought yesterday will be the same product tomorrow.

Now a coalition of consumer groups, activists, and lawmakers is pushing the FTC to crack down on “smart” device manufacturers that suddenly pull support for products or make them less useful — either by simply removing features or by hiding them behind annoying new subscription paywalls.

In a letter sent last week to key FTC officials, a coalition of seventeen different groups (including Consumer Reports, iFixit, and US PIRG) requested that the agency take aim at several commonplace anti-consumer practices, including “software tethering” (making hardware useless or less useful later via firmware update), or the act of suddenly locking key functionality behind subscriptions:

Both practices are examples of how companies are using software tethers in their devices to infringe on a consumer’s right to own the products they buy. While the FTC has taken some limited actions with regard to this issue, a lack of clarity and enforcement has led to an ecosystem where consumers cannot reliably count on the connected products they buy to last.

The letter cites numerous instances of consumer harms Techdirt has covered at length, ranging from Peloton’s recent decision to charge used bike owners a $95 fee for no coherent reason, to the “smart” baby bassinet maker that recently decided to paywall most of the device’s most popular features.

The letter correctly points out that this environment, where consumers are constantly shelling out significant money for devices that can be killed or rendered less useful (often without clear communications to end users), is resulting in a “death by a thousand cuts” for consumer rights. And, the groups note, it’s likely to only get worse without clear guidance and enforcement by the FTC.

The FTC has occasionally made inquiries in this space, but often only superficially. For example, the FTC launched an investigation into Google’s decision to turn Revolv smart home hardware into useless crap, but then took no substantive action and implemented no meaningful consumer reforms.

But the (intentionally) underfunded, understaffed, and endlessly embattled agency only has so many resources, and struggles to tackle even far more pressing issues like widespread monopolization or privacy violations. Still, some federal guidance and a few warnings would probably go a long way in a “smart” hardware sector that’s become a hot mess in the cloud computing age.

Filed Under: bricked, consumers, ftc, hardware, ownership, smart home, software, subscriptions

Move Over, Software Developers – In The Name Of Cybersecurity, The Government Wants To Drive

from the unconstitutional-camel-noses dept

Earlier this year the White House put out a document laying out a National Cybersecurity Strategy. It articulates five “pillars,” or high-level focus areas where the government should concentrate its efforts to strengthen the nation’s resilience and defense against cyberattacks: (1) Defend Critical Infrastructure, (2) Disrupt and Dismantle Threat Actors, (3) Shape Market Forces to Drive Security and Resilience, (4) Invest in a Resilient Future, and (5) Forge International Partnerships to Pursue Shared Goals. Each pillar also includes several sub-priorities and objectives.

It is a seminal document, and one that has spawned and will continue to spawn much discussion. For the most part what it calls for is too high level to be particularly controversial. It may even be too high level to be all that useful, although there can be value in distilling any sort of policy priorities into words. After all, even if what the government calls for may seem obvious (like “defending critical infrastructure,” which of course we’d all expect it to do), going to the trouble of actually articulating it as a policy priority provides a roadmap for more constructive efforts to follow and may help to marshal resources. It can also help ensure that any more tangible policy efforts the government is inclined to directly engage in are not at cross-purposes with what it wants to accomplish overall.

Which is important because what the rest of this post discusses is how the strategy document itself reveals that there may already be some incoherence among the government’s policy priorities. In particular, it lists as one of the sub-priorities an objective with troubling implications: imposing liability on software developers. This priority is described in a few paragraphs in the section entitled, “Strategic Objective 3.3: Shift Liability for Insecure Software Products and Services,” but the essence is mostly captured in this one:

The Administration will work with Congress and the private sector to develop legislation establishing liability for software products and services. Any such legislation should prevent manufacturers and software publishers with market power from fully disclaiming liability by contract, and establish higher standards of care for software in specific high-risk scenarios. To begin to shape standards of care for secure software development, the Administration will drive the development of an adaptable safe harbor framework to shield from liability companies that securely develop and maintain their software products and services. This safe harbor will draw from current best practices for secure software development, such as the NIST Secure Software Development Framework. It also must evolve over time, incorporating new tools for secure software development, software transparency, and vulnerability discovery.

Despite some equivocating language, it is no small thing that the White House proposes: legislation instructing people on how to code their software and requiring adherence to those instructions. Such a proposal raises a number of concerns, both about the method the government would use to prescribe how software is coded and about the dubious constitutionality of its making such demands at all. While the strategy document itself does not yet prescribe a specific way to code software, it contemplates that the government someday could. And it does so apparently without recognizing how significant a change it is for the government to have the ability to make such demands – and not necessarily for the better.

In terms of method, while the government isn’t necessarily suggesting that a regulator enforce requirements for software code, what it does propose is far from a light touch: enforcement of coding requirements via liability – or, in other words, the ability of people to sue if software turns out to be vulnerable. But regulation via liability is still profoundly heavy-handed, perhaps even more so than regulatory oversight would be. For instance, instead of a single regulator working from discrete criteria, there will be myriad plaintiffs and courts interpreting the statutory language however they understand it. Furthermore, litigation is notoriously expensive, even for a single case, let alone when multiplied across all those myriad plaintiffs. We have seen all too many innovative companies obliterated by litigation, and seen how the mere threat of litigation can chill the investment needed to bring good new ideas into reality. This proposal seems to reflect a naïve expectation that litigation will only follow where truly deserved, but we know from history that such restraint is rarely the rule.

True, the government does contemplate some tuning to dull the edge of the regulatory knife, particularly through the use of safe harbors, such that there would be defenses to protect software developers from being drained dry by unmeritorious litigation threats. But while safe harbors may be a nice idea, they are hardly a panacea: we’ve also seen how, if you have to litigate whether a safe harbor applies, there’s little point to it even if it ultimately does. In addition, even if it were possible to craft an adequately durable safe harbor, given the current appetite among policymakers to tear down the immunities and safe harbors we currently have, like Section 230 or the already porous DMCA, the assumption that policymakers will actually produce a sustainable liability regime with sufficiently strong defenses, one not prone to innovation-killing abuse, is yet another unfortunately naïve expectation.

The way liability would attach under this proposal is also a big deal: through the creation of a duty of care for the software developer. (The cited paragraph refers to it as “standards of care,” but that phrasing implies a duty to adhere to them, and liability for when those standards are deviated from.) But concocting such a duty is problematic both practically and constitutionally, because at its core, what the government is threatening here is alarming: mandating how software is written. Not suggesting how software should ideally be written, nor enabling, encouraging, nor facilitating it to be written well, but instead using the force of law to demand how software be written.

It is so alarming because software is a form of written expression, and it raises a significant First Amendment problem for the government to dictate how anything should be expressed, regardless of how correct or well-intentioned the government may be. Like a book or newspaper, software is expressed through language and expressive choices; there is not just one correct way to write a program that does something, but rather an infinite number of big and little structural and language decisions made along the way. But this proposal basically ignores the creative aspect of software development (indeed, software is even treated as eligible for copyright protection as an original work of authorship). Instead it treats software more like a defectively made toaster than a book or newspaper, replacing the independent expressive judgment of the software developer with the government’s. Courts have also recognized the expressive quality of software, so it would be quite a sea change if the Constitution somehow did not apply to this particular form of expression. And such a change would have huge implications, because cybersecurity is not the only reason the government keeps proposing to regulate software design. The White House proposal would seem to bless all these attempts, no matter how ill-advised or facially censorial, by not even contemplating the constitutional hurdles any legal regime to regulate software design would need to clear.

It would still need to clear them even if the government truly knew best, which is a big if, even here, and not just because the government may lack adequate or sufficiently current expertise. The proposal does contemplate a multi-stakeholder process to develop best practices, and there is nothing wrong in general with the government taking on a facilitating role to help illuminate what those practices are and to make sure software developers are aware of them – it may even be a good idea. The issue is not that there is no such thing as best practices for software development – obviously there are. But they are not necessarily one-size-fits-all or static; a best practice may depend on context and must constantly evolve to address new vectors of attack. A distant regulator, inherently in a reactive posture, may not understand the particular needs of a particular software program’s userbase, nor the evolving challenges facing the developer. Which is a big reason why requiring adherence to any particular practice through the force of law is problematic: it can effectively require software developers to write their code the government’s way rather than in whatever way is ultimately best for them and their users. Or at least it puts them in the position of having to defend choices that, up until now, the Constitution had let them make freely. That would amount to a huge, unprecedented burden that threatens to chill software development altogether.

Such chilling is not an outcome the government should want to invite, and indeed, according to the strategy document itself, does not want. The irony with the software liability proposal is that it is inherently out-of-step with the overall thrust of the rest of the document, and even the third pillar it appears in itself, which proposes to foster better cybersecurity through the operation of more efficient markets. But imposing design liability would have the exact opposite effect on those markets. Even if well-resourced private entities (e.g., large companies) might be able to find a way to persevere and navigate the regulatory requirements, small ones (including those potentially excluded from the stakeholder process establishing the requirements) may not be able to meet them, and individual people coding software are even less likely to. The strategy document refers to liability only on developers with market power, but every software developer has market power, including those individuals who voluntarily contribute to open source software projects, which provide software users with more choices. But those continued contributions will be deterred if those who make them can be liable for them. Ultimately software liability will result in fewer people writing code and consequently less software for the public to use. So far from making the software market work more efficiently through competitive pressure, imposing liability for software development will only remove options for consumers, and with it the competitive pressure the White House acknowledges is needed to prompt those who still produce software to do better. Meanwhile, those developers who remain will still be inhibited from innovating if that innovation can potentially put them out of compliance with whatever the law has so far managed to imagine.

Which raises another concern with the software liability proposal and how it undermines the rest of the otherwise reasonable strategy document. The fifth pillar the White House proposes is to “Forge International Partnerships to Pursue Shared Goals”:

The United States seeks a world where responsible state behavior in cyberspace is expected and rewarded and where irresponsible behavior is isolating and costly. To achieve this goal, we will continue to engage with countries working in opposition to our larger agenda on common problems while we build a broad coalition of nations working to maintain an open, free, global, interoperable, reliable, and secure Internet.

On its face, there is nothing wrong with this goal either, and it, too, may be a necessary one to effectively deal with what are generally global cybersecurity threats. But the EU is already moving ahead to empower bureaucratic agencies to decide how software should be written, yet without a First Amendment or equivalent understanding of the expressive interests such regulation might impact. Nor does there seem to be any meaningful understanding about how any such regulation will affect the entire software ecosystem, including open source, where authorship emerges from a community, rather than a private entity theoretically capable of accountability and compliance.

In fact, while the United States hasn’t yet actually specified design practices a software developer must comply with, the EU is already barreling down the path of prescriptive regulation, proposing a law that would task an agency with dictating the criteria software must meet. (See this post by Bert Hubert for a helpful summary of its draft terms.) Like the White House, the EU confuses its stated goal of helping the software market work more efficiently with an attempt to control what can be in that market. For all the reasons an American attempt stands to be counterproductive, so would the EU’s. It may well turn out to be European bureaucrats who first dictate the rules of the road for how software can be coded, but in that case America’s job will be to try to prevent that damage, not double down on it.

It is of course true that not everything software developers currently do is a good idea or even defensible. Some practices are dreadful and damaging. It isn’t wrong to be concerned about the collateral effects of ill-considered or sloppy coding practices, or for the government to want to do something about them. But how regulators respond to these poor practices is just as important, if not more so, than that they respond, if they are going to make our digital environment better and more secure rather than worse and less so. There are a lot of good ideas in the strategy document for how to achieve this end, but imposing software design liability is not one of them.

Filed Under: 1st amendment, chilling effects, coding, computer security, cybersecurity, duty of care, innovation, liability, national cybersecurity strategy, software, standards of care, white house

EFF Tells Court Defendants Must Be Allowed To Examine The DNA Software Used To Convict Them

from the rolling-dice-with-more-sides-but-they're-still-just-dice dept

A proper adversarial system means the accused can confront the accuser. But that’s rarely the case when crime solving software is involved. The FBI doesn’t allow accused child porn downloaders to examine the malicious software it used to identify their computers. Multiple law enforcement agencies have dropped cases rather than discuss Stingray devices in open court.

All DNA analysis is handled by software. Most DNA analysis utilizes proprietary code created by private companies which license it to government agencies. The analysis may be performed by government agencies and employees, but when it comes to giving defense lawyers and their clients a chance to examine the software used to generate evidence, it suddenly becomes a very private matter.

Companies routinely intercede in criminal cases, telling judges that handing over source code or other information about their algorithms would somehow make it impossible for them to compete in the crime solving market. In most cases, judges are sympathetic to claims about trade secrets and proprietary code, allowing the accused to confront their accuser only by proxy, via a government expert or an employee of the software company.

In rare cases, the court actually finds in favor of the defendant. Earlier this year, a case involving third-party DNA software and the EFF’s intercession went the defendant’s way with a federal judge in Pennsylvania telling the government it couldn’t hide behind third-party trade secret assertions to keep this code out of the accused’s hands. As the court reasoned then, if DNA evidence is central to the case against the defendant, the defendant should have access to the evidence and the software that created it.

The EFF is hoping for a similar outcome in a case being handled in California. It deals with the possibly wrongful conviction of a 70-year-old man for rape. And it involves a DNA software company whose algorithm was the only one that tied the suspect to the crime.

An elderly woman was sexually assaulted and murdered in her home and two witnesses described seeing a black man in his 50s on the property on the day of the murder. Dozens of people had passed through the victim’s home in the few months leading up to the murder, including Mr. Davis and another individual. Mr. Davis is an African American man who was in his 70s at the time of the murder and suffers from Parkinson’s disease. Another individual who met the witnesses’ description had a history of sex crimes including sexual assault with a foreign object.

DNA samples were taken from dozens of locations and items at the crime scene. Mr. Davis’s DNA was not found on many of those, including a cane that was allegedly used to sexually assault the victim. Traditional DNA software was not able to match Mr. Davis to the DNA sample from a shoelace that was likely used to tie up the victim—but STRmix did, and the prosecution relied heavily on the latter before the jury.

As the EFF points out in its brief [PDF], DNA software is anything but infallible. STRmix was caught out a half-decade ago, when a bug in its code possibly led to dozens of false arrests and convictions. Presumably that bug has been patched, but if no one outside of STRmix is allowed to examine the code, it’s impossible to know whether it might be leading prosecutors and government experts to overstate the certainty of DNA matches.

The necessity of independent source code review for probabilistic DNA programs was starkly demonstrated when FST (a counterpart to STRmix that was used in New York crime labs) was finally provided to a defense team for analysis. According to a defense expert, the undisclosed portion of the code could incorrectly tip the scales in favor of the prosecution’s hypothesis that a defendant’s DNA was present in a mixture. Reply Mem. of Law in Supp. as to Kevin Johnson at 19-21, United States v. Kevin Johnson (S.D.N.Y. Feb. 27, 2017) (No. 15-CR-565 (VEC), D.I. 110). In fact, STRmix has suffered from programming errors that created false results in 60 cases in Queensland, Australia.

The problems caused by nondisclosure are especially acute in the context of the latest generation of probabilistic DNA analysis because there is no objective baseline truth against which the output from the program may be evaluated—and thus it is impossible to gauge the accuracy of these programs by examining their results.

If there’s no objective baseline, every DNA analysis program is allowed to grade on its own curve. DNA matches aren’t actually matches. They just reflect the likelihood of a match. With no baseline, the probability of it being an actual match is left to the discernment of prosecutors and their expert witnesses — all of whom come out looking better if they can secure a conviction.
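To make that concrete, here is a deliberately oversimplified sketch of how two probabilistic genotyping models can weigh the exact same evidence differently. Everything in it (the single-allele drop-out model, the allele frequency, the number of loci) is invented for illustration; real programs like STRmix are vastly more complex, which is precisely why seeing their code matters:

```python
# Deliberately oversimplified likelihood-ratio sketch. All numbers and the
# drop-out model are invented; this is not how STRmix or FST actually work.

def likelihood_ratio(observed: list[bool], allele_freq: float, p_dropout: float) -> float:
    """LR = P(evidence | suspect contributed) / P(evidence | random contributor).
    observed[i] says whether the suspect's allele showed up at locus i."""
    lr = 1.0
    for seen in observed:
        if seen:
            p_suspect = 1 - p_dropout                     # present unless it dropped out
            p_random = allele_freq * (1 - p_dropout)      # random carrier, no drop-out
        else:
            p_suspect = p_dropout                         # suspect's allele dropped out
            p_random = 1 - allele_freq * (1 - p_dropout)  # random person shows nothing
        lr *= p_suspect / p_random
    return lr

# One fixed piece of "evidence": the suspect's allele appears at 8 of 10 loci.
evidence = [True] * 8 + [False] * 2

# Two hypothetical programs, differing only in their assumed drop-out rate:
print(f"{likelihood_ratio(evidence, allele_freq=0.1, p_dropout=0.05):,.0f}")  # ~305,000
print(f"{likelihood_ratio(evidence, allele_freq=0.1, p_dropout=0.30):,.0f}")  # ~10,400,000
```

Same evidence, roughly a 34-fold swing in the reported weight, driven entirely by one buried assumption about drop-out rates. That is exactly the kind of choice a defendant cannot challenge without examining the source.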

Unlike breathalyzers, the latest generation of complex DNA analysis tools cannot be measured against an objective truth. Instead, these DNA programs are more akin to probabilistic election forecasting models, such as those designed by FiveThirtyEight and The Economist. The outputted results are based on the calculation of the probability of events—that the defendant, rather than a random person, contributed to the DNA mixture or that person X will win an election—a value that is not an objectively measurable fact. This is why different DNA programs, and even different laboratories using the same program, will generate substantially different results for the same sample.

This is why courts should allow defendants to examine the software that has, for the most part, accused them of committing crimes. If different algorithms produce different outcomes using the same inputs, none are to be trusted until they’re independently examined. And DNA software companies aren’t interested in that happening — not solely because of any trade secrets but because any defendant who successfully casts doubt on the accuracy of test results undermines their business model.

But protecting a business model isn’t the court’s business. The courts are there to serve justice, which means protecting the rights of the accused from accusers utilizing proprietary tech while waving around signed NDAs.

Filed Under: dna, due process, evidence, software

UK Court Overturns 39 Convictions Of Post Office Workers Caused By Buggy Software

from the shittiest-skynet-ever dept

Never underestimate the power of technology to destroy lives. Flawed software used for the last 20 years by the UK postal service resulted in dozens of wrongful criminal convictions which are only just now being overturned.

Judges have quashed the convictions of 39 former postmasters after the UK’s most widespread miscarriage of justice.

They were convicted of stealing money, with some imprisoned, after the Post Office installed the Horizon computer system in branches.

[…]

The clearing of the names of 39 people follows the overturning of six other convictions in December. This means more people have been affected than in any other miscarriage of justice in the UK.

The notoriously buggy software debuted in 1999. Apparently it was unable to do math properly, resulting in reported cash shortages that actually weren’t happening. Some employees attempted to make up these faux shortfalls with their own money by digging into savings or remortgaging their homes. Rather than address the problematic software, the UK Post Office went into prosecutorial overdrive, bringing cases against employees at the rate of one per week… for fourteen years straight. A total of 736 employees were prosecuted by the Post Office from 2000 to 2014.
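The specific Horizon defects varied, but as a contrived illustration of how an accounting bug can manufacture a shortfall out of thin air (this is invented for clarity, not Horizon’s actual code), consider a client that blindly retries a posting when an acknowledgment is lost:

```python
# Contrived sketch of how a sync bug can invent a cash shortfall.
# Invented for illustration; not Horizon's actual code or behavior.

local_till = []      # what the sub-postmaster actually rang up
central_ledger = []  # what head office believes the branch took in

def post_sale(amount: float, ack_lost: bool = False) -> None:
    local_till.append(amount)
    central_ledger.append(amount)
    if ack_lost:
        # Bug: the acknowledgment never arrives, so the client retries the
        # posting with no idempotency key to deduplicate it.
        central_ledger.append(amount)

post_sale(120.00)
post_sale(80.00, ack_lost=True)  # double-counted centrally

expected = sum(central_ledger)  # 280.00, what the system says should be in the drawer
actual = sum(local_till)        # 200.00, what is actually there

print(f"phantom shortfall: £{expected - actual:.2f}")  # £80.00 "missing"
```

The drawer is £80 “short” even though no money ever went missing, and the person blamed is the one standing at the till.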

And yet, the UK Post Office continued to rely on software that was actively destroying lives.

Marriages broke down, and courts have heard how some families believe the stress led to health conditions, addiction and premature deaths.

“The past nine years have been hellish and a total nightmare. This conviction has been a cloud over my life,” said former Oxfordshire sub-postmaster Vipinchandra Patel, whose name was cleared late last year.

Seema Misra was pregnant with her second child when she was convicted of theft and sent to jail in 2010. She said that she had been “suffering” for 15 years as a result of the saga.

By the end of 2019, the Post Office had agreed to settle claims brought by 555 employees. And now the last of the wrongful convictions have been overturned. But, so far, no one at the Post Office or Fujitsu (the software developer) has been held accountable for the nearly 20-year run of destruction they oversaw.

That could change in the near future. The UK court seems completely unimpressed with the Post Office’s actions (or lack thereof).

At the Royal Courts of Justice in London, Lord Justice Holroyde said the Post Office “knew there were serious issues about the reliability of Horizon” and had a “clear duty to investigate” the system’s defects.

But the Post Office “consistently asserted that Horizon was robust and reliable” and “effectively steamrolled over any sub-postmaster who sought to challenge its accuracy”, the judge added.

Sure, everyone at the Post Office seems pretty apologetic now. But that’s after 15 years of ignoring the problem and choosing to believe software rather than the people hired to do the job. Tech can make things better and increase productivity, but it’s rarely flawless and generally shouldn’t be considered more trustworthy than the people who have to interact with it on a daily basis.

Filed Under: criminal convictions, software, uk, uk postal service

EFF, College Student Sue Proctorio Over DMCAs On Fair Use Critique Tweets Of Software

from the failing-grade dept

Late last year, while the COVID-19 pandemic was gearing up to hit its peak here in the States, we wrote about one college student and security researcher taking on Proctorio, a software platform designed to keep remote students from cheating on exams. Erik Johnson of Miami University made a name for himself on Twitter not only for giving voice to a ton of the criticism Proctorio’s software has faced over its privacy implications and its inability to operate correctly for students of varying ethnicities, but also for digging into Proctorio’s available source code, visible to anyone who downloads the software. But because he posted that code on Pastebin to demonstrate his critique of Proctorio, the company cried copyright infringement and initially got Twitter to take his tweets down, before they were later restored.
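It’s worth underscoring how trivial that kind of “digging” is. A Chrome extension’s JavaScript sits unencrypted (if often minified) in the browser’s profile directory once installed. Here’s a rough sketch of listing it, with the caveat that the path below is macOS’s default and will differ on Linux (~/.config/google-chrome) and Windows (under %LOCALAPPDATA%):

```python
# List the source files of every installed Chrome extension.
# The profile path is the macOS default; adjust for your platform.
from pathlib import Path

ext_root = Path.home() / "Library/Application Support/Google/Chrome/Default/Extensions"

if ext_root.exists():
    for js_file in sorted(ext_root.rglob("*.js")):
        print(js_file)  # plain, human-readable (if minified) JavaScript
else:
    print("Chrome extensions directory not found; adjust ext_root for your platform")
```

There is no secret being protected here, which is part of what made the copyright claims that followed look so transparently pretextual.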

But if Proctorio thought that would be the end of the story, it was wrong. The EFF has now gotten involved and has filed a lawsuit against Proctorio in an effort to end any online harassment of Johnson.

The lawsuit intends to address the company’s behavior toward Johnson in September of last year. After Johnson found out that he’d need to use the software for two of his classes, Johnson dug into the source code of Proctorio’s Chrome extension and made a lengthy Twitter thread criticizing its practices — including links to excerpts of the source code, which he’d posted on Pastebin. Proctorio CEO Mike Olsen sent Johnson a direct message on Twitter requesting that he remove the code from Pastebin, according to screenshots viewed by The Verge. After Johnson refused, Proctorio filed a copyright takedown notice, and three of the tweets were removed. (They were reinstated after TechCrunch reported on the controversy.)

In its lawsuit, the EFF is arguing that Johnson made fair use of Proctorio’s code and that the company’s takedown “interfered with Johnson’s First Amendment right.”

“Copyright holders should be held liable when they falsely accuse their critics of copyright infringement, especially when the goal is plainly to intimidate and undermine them,” said EFF Staff Attorney Cara Gagliano in a statement.

Frankly, it’s difficult to understand what Proctorio’s rebuttal to any of that would be. What Johnson did with his tweets and the replication of the source code that was the subject of his criticism is about as square an example of Fair Use as I can imagine. The use was not intended to actually replicate what Proctorio’s software does. Quite the opposite, in fact. It was intended as evidence for why Proctorio’s software should not be used. It was limited in its use as part of a critique of the company’s software. And it was decidedly non-commercial in nature.

In other words, it was clearly an attempt by Proctorio to silence a critic, rather than any legitimate concern over the reproduction of the source code, which is again freely available to anyone who downloads the browser extension. It’s also worth noting that there is a pattern of behavior of this sort of thing by Proctorio.

Proctorio has engaged critics in court before, although more often as a plaintiff. Last October, the company sued a technology specialist at the University of British Columbia who made a series of tweets criticizing the platform. The thread contained links to unlisted YouTube videos, which Proctorio claimed contained confidential information. The lawsuit drew ire from the global education community: hundreds of university faculty, staff, administrators, and students have signed an open letter in the specialist’s defense, and a GoFundMe for his legal expenses has raised $60,000 from over 700 donors.

It’s the kind of behavior that doesn’t end just because some tweets get reinstated or there is a modicum of public outrage. Instead, it takes a concerted effort by groups like the EFF to force a corporate bully to change its ways. Given Proctorio’s bad behavior in all of this, let’s hope the courts don’t let them off the hook.

Filed Under: bogus copyright claims, cheating, copyright, copyright as censorship, criticism, dmca, erik johnson, remote exams, software
Companies: eff, proctorio

Arizona's $24-Million Prison Management Software Is Keeping People Locked Up Past The End Of Their Sentences

from the taking-life,-wasting-tax-dollars dept

The Arizona Department of Corrections is depriving inmates of freedom they’ve earned. Its $24 million tracking software isn’t doing what it’s supposed to when it comes to calculating time served credits. That’s according to whistleblowers who’ve been ignored by the DOC and have taken their complaints to the press. Here’s Jimmy Jenkins of KJZZ, who was given access to documents showing the bug has been well-documented and remains unfixed, more than a year after it was discovered.

According to Arizona Department of Corrections whistleblowers, hundreds of incarcerated people who should be eligible for release are being held in prison because the inmate management software cannot interpret current sentencing laws.

KJZZ is not naming the whistleblowers because they fear retaliation. The employees said they have been raising the issue internally for more than a year, but prison administrators have not acted to fix the software bug. The sources said Chief Information Officer Holly Greene and Deputy Director Joe Profiri have been aware of the problem since 2019.

The management software (ACIS) rolled out during the 2019 Thanksgiving holiday weekend, which is always the best time to debut new systems that might need a lot of immediate tech support. Since its rollout, the software has generated 19,000 bug reports. The one at the center of this ongoing deprivation of liberty arose as the result of a law passed in June of that year. The law gave additional credit days to inmates charged with low-level drug offenses, increasing the credit from one day for every six served to three days for every seven.

Qualified inmates are only supposed to serve 70% of their sentences, provided they also complete some other prerequisites, like earning a GED or entering a substance abuse program. That law hasn’t been implemented in the Arizona prison system because the $24 million software can’t seem to figure out how to do it.
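The underlying arithmetic is simple enough to sketch in a few lines, which makes the failure all the more striking. Here’s a toy model of the two accrual formulas described above (the function and the day-counting model are ours, not anything from ACIS):

```python
# Toy model of the two earned-release-credit formulas (our arithmetic, not ACIS's).
# Old rule: 1 day of credit per 6 days served  -> serve 6 of every 7 sentence days.
# New rule: 3 days of credit per 7 days served -> serve 7 of every 10 (the 70% figure).

def days_behind_bars(sentence_days: int, served: int, credit: int) -> int:
    """Days actually served when every `served` days earns `credit` days of credit."""
    return round(sentence_days * served / (served + credit))

sentence = 5 * 365  # a five-year sentence, in days

old = days_behind_bars(sentence, served=6, credit=1)  # ~85.7% of the sentence
new = days_behind_bars(sentence, served=7, credit=3)  # 70% of the sentence

print(f"old rule: {old} days, new rule: {new} days, difference: {old - new} days")
# old rule: 1564 days, new rule: 1278 days, difference: 286 days
```

Under the new rule, a five-year sentence ends roughly nine and a half months sooner. That difference is what’s at stake in every one of those hand calculations.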

To be sure, legislation that changes time served credits for only a certain percentage of inmates creates problems for prison management systems. But that’s why you spend $24 million buying one, rather than just asking employees if they’re any good at Excel.

But that’s what has actually happened. With the expensive software unable to correctly calculate time served credits, prison employees are doing it by hand.

Department sources said this means “someone is sitting there crunching numbers with a calculator and interpreting how each of the new laws that have been passed would impact an inmate.”

“It makes me sick,” one source said, noting that even the most diligent employees are capable of making math errors that could result in additional months or years in prison for an inmate. “What the hell are we doing here? People’s lives are at stake.”

Hundreds of inmates are affected. A spokesperson for the prison system says the DOC has identified 733 inmates who qualify for the increased time served credits. But that number is still likely on the low end since the software is incapable of accurately identifying qualifying inmates, much less accurately calculating the length of time they have left to serve.

Meanwhile, the bug that’s killing freedom remains unpatched. And it appears the software’s many other bugs are making time spent in prison even more dangerous and miserable than it already is. Medical information goes missing or fails to transfer correctly when inmates are moved. Rival gang members have been placed in the same cells. Head counts are inaccurate. Inmate property and commissary funds are routinely recorded incorrectly.

Prison is already a miserable experience. Those trying to turn their lives around and engage in the rehabilitative process most prisons consider to be ancillary at best are being punished for trying by a system that is failing everyone who uses it or is affected by it.

Filed Under: acis, arizona, arizona department of corrections, early release, holly greene, joe profiri, prison management, software

Australian Cops Are Pre-Criming Students Too, Setting Minors Up For A Lifetime Of Harassment

from the procedural-crime-generation dept

It’s not just American law enforcement agencies turning kids into criminals. They’re doing it in Australia too. In Florida, the Pasco County Sheriff’s Office uses software to mark kids as budding criminals, relying on questionable markers like D-grades, witnessing domestic violence, or being the victim of a crime. The spreadsheet adds it all up and gives deputies a thumbs up to start treating students like criminals, even if they’ve never committed a criminal act.

Over in Australia, the process seems to be a bit more rigorous, but the outcome is the same: non-criminals marked (possibly for life) as potential criminals who should be targeted with more law enforcement intervention.

Victorian police say a secretive data tool that tracked youths and predicted the risk they would commit crime is not being widely used, amid fears it leads to young people from culturally diverse backgrounds being disproportionately targeted.

The tool, which had been used in Dandenong and surrounding suburbs, was only revealed in interviews with police officers published earlier this year.

Between 2016 and 2018, police categorised young people as “youth network offenders” or “core youth network offenders”.

It takes a bit more to be added to this secret list — one police have managed to keep hidden from the general public. Even the program’s name remains a secret. This means parents are never informed when cops decide their kids are criminals-in-development. It also possibly means schools aren’t aware the data they’re feeding the police is being used this way.

According to the research paper detailing the program, Victoria police have classified 40-60 students as “core youth network offenders.” Another 240 students were classified as “youth network offenders.” To get placed on these exclusive lists, students must be charged dozens of times with “offenses,” running from 20 for the 10-14-year-old group to over 60 for 18-year-olds. It’s unclear from the context of the report whether this means criminal offenses or in-school discipline “offenses,” but the latter seems more likely. Someone criminally charged over 60 times before they reached the age of 18 wouldn’t need to be on a secret youth offender list to be on law enforcement’s radar.

The Victoria police appear to believe the tech is actually magic.

“We can run that tool now and it will tell us – like the kid might be 15 – it tells how many crimes he is going to commit before he is 21 based on that, and it is a 95% accuracy,” one senior officer told [researchers]. “It has been tested.”

Actual pre-crime, stripped of all the obfuscating language that normally surrounds statements on profiling/predictive policing programs. This program can actually predict criminal acts… at least according to its proponents and users. Presumably the police aren’t locking up listed students ahead of any wrongdoing, but they’re certainly increasing their interactions and surveillance of students the tool said will commit [x] crimes over the next few years.

And, like every goddamn predictive policing program that exists anywhere, it focuses on minorities and other disadvantaged residents.

In Dandenong, 67% of households spoke a language other than English at home, more than three times the national average, according to the 2016 census. Almost 80% of all residents had parents who were both born overseas, more than double the national average.

The weekly household income was $412 less than the Australian median, and the unemployment rate of 13% was almost double the national figure.

Cheer up. The cops are here to take everything that sucks about life and make it worse. Rather than address the underlying problems, law enforcement appears content to throw a spreadsheet over it and divert resources towards subjecting certain people to a lifetime of harassment. Then, when things inevitably get worse, they can ask for more money to buy more “smart” policing tech garbage that ensures this hideous, regressive loop remains unbroken.

Filed Under: abuse, australia, harassment, police, pre-crime, software

Anti-Cheat Student Software Proctorio Issuing DMCA Takedowns Of Fair Use Critiques Over Its Code

from the fail dept

As we’ve discussed before, the COVID-19 pandemic has forced many educational institutions into remote learning and with it, remote test-taking. One of the issues in all of that is how to ensure students taking exams are doing so without cheating. Some institutions employ humans to watch students over video calls, to ensure they are not doing anything untoward. But many, many others are using software instead that is built to try to catch cheating by algorithmically spotting “clues” of cheating.

Proctorio is one of those anti-cheat platforms. The software has been the subject of some fairly intense criticism from students, many of whom allege both that the software seems to have trouble interpreting what darker-skinned students are doing on the screen and that it requires a ton of bandwidth, which many low-income students simply don’t have access to. Erik Johnson, who is a student and security researcher, wanted to dig into Proctorio’s workings. Given that it’s a browser extension, he simply downloaded it and started digging through the readily available code. He then tweeted out his findings, along with links to Pastebin pages where he had shared the code he references in each tweet. Below are some of the tweets that you can reference for yourself.

Here’s a list of metrics Proctorio looks for & flags:
– Changes in audio levels
– Abnormal clicking
– Abnormal copy & pastes
– Abnormal exam duration
– End times
– Eye movement
– # of faces
– Head movement
– Abnormal movement of mouse
& more https://t.co/iB6zsjvfcB

— Erik Johnson (@ejohnson99) September 8, 2020
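For a rough sense of how this genre of “algorithmic cheating detection” tends to work in general, here’s a toy outlier-flagging sketch. To be clear: this is a hypothetical illustration of the generic technique, not Proctorio’s actual algorithm, which is precisely what nobody outside the company gets to verify:

```python
# Generic sketch of metric-based outlier flagging, the broad technique such
# proctoring tools describe. A hypothetical illustration, NOT Proctorio's code.
from statistics import mean, stdev

def flag_outliers(metrics: dict[str, list[float]], threshold: float = 1.5) -> dict[str, list[int]]:
    """For each metric, return the indices of students more than `threshold`
    standard deviations from the class mean."""
    flags = {}
    for name, values in metrics.items():
        mu, sigma = mean(values), stdev(values)
        if sigma == 0:
            continue  # everyone looks identical on this metric; nothing to flag
        flags[name] = [i for i, v in enumerate(values) if abs(v - mu) > threshold * sigma]
    return flags

# Invented class data: student 3 looks away a lot and finishes unusually fast.
class_metrics = {
    "eye_movement_events": [12, 15, 11, 60, 14, 13],
    "exam_duration_min":   [52, 48, 55, 20, 50, 49],
}

print(flag_outliers(class_metrics))
# {'eye_movement_events': [3], 'exam_duration_min': [3]}
```

Note what a flag like this actually measures: deviation from classmates, not cheating. A student with a slow connection, a shared room, or software that can’t track their face reliably “deviates” just as loudly, which is exactly the criticism Johnson was giving voice to.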

It’s important to note that these tweets are part of a regular string of tweets Johnson has put out critiquing the way Proctorio functions. In other words, amid all the consternation among students over how Proctorio works, this is public criticism from a security researcher showing his work, based on source code that literally anyone who downloads Proctorio can see. And, while you can see the tweets above currently, Proctorio initially had them taken down via DMCA takedown requests.

Those three tweets are no longer accessible on Twitter after Proctorio filed its takedown notices. The code shared on Pastebin is also no longer accessible, nor is a copy of the page available from the Internet Archive’s Wayback Machine, which said the web address had been “excluded.”

A spokesperson for Twitter told TechCrunch: “Per our copyright policy, we respond to valid copyright complaints sent to us by a copyright owner or their authorized representatives.”

Johnson provided TechCrunch a copy of the takedown notice sent by Twitter, which identified Proctorio’s marketing director John Devoy as the person who requested the takedown on behalf of Proctorio’s chief executive Mike Olsen, who is listed as the copyright owner.

When asked to comment at the time, Proctorio noted that just because anyone can see the code by downloading the software doesn’t mean reproducing it is not a copyright violation. And that’s true, although quite a stupid bit of copyright enforcement. What Proctorio didn’t mention is that this sort of critique and use of copyrighted content in furtherance of that critique is precisely what Fair Use is meant to protect. That the company clearly did this as a method for getting some critical tweets taken down also went unmentioned.

“This is really a textbook example of fair use,” said EFF staff attorney Cara Gagliano. “What Erik did — posting excerpts of Proctorio’s code that showed the software features he was criticizing — is no different from quoting a book in a book review. That it’s code instead of literature doesn’t make the use any less fair.”

“Using DMCA notices to take down critical fair uses like Erik’s is absolutely inappropriate and an abuse of the takedown process,” said Gagliano. “DMCA notices should be lodged only when a copyright owner has a good faith belief that the challenged material infringes their copyrighted work — which requires the copyright owner to consider fair use before hitting send.”

Which is probably why Twitter eventually reinstated Johnson’s tweets in their entirety, although the message sent to him was that it did so because Proctorio’s DMCA notice was “incomplete”. Whatever the hell that means. You sort of have to wonder if the incompleteness of the notices would have been discovered if Johnson and the EFF hadn’t kicked up a shitstorm about it.

Meanwhile, because of course, a lot more people know about the criticism of Proctorio thanks to its efforts to try to silence criticism. Isn’t there a moniker for that?

Filed Under: anti-cheat, cheating, dmca, erik johnson, proctoring, remote learning, remote test taking, security research, software
Companies: proctorio, twitter

As We're All Living, Working, And Socializing Via The Internet… MIT Tech Review Says It Proves Silicon Valley Innovation Is A Myth

from the say-what-now? dept

I get that people are getting a bit of cabin fever and perhaps that’s impacting people’s outlook on the world, but a recent piece by David Rotman in the MIT Tech Review is truly bizarre. The title gets you straight to the premise: Covid-19 has blown apart the myth of Silicon Valley innovation. Of course, even the paragraph that explains the thesis seems almost like a modern updating of the famous “what have the Romans ever done for us?” scene from Monty Python’s Life of Brian:

Silicon Valley and big tech in general have been lame in responding to the crisis. Sure, they have given us Zoom to keep the fortunate among us working and Netflix to keep us sane; Amazon is a savior these days for those avoiding stores; iPads are in hot demand and Instacart is helping to keep many self-isolating people fed. But the pandemic has also revealed the limitations and impotence of the world’s richest companies (and, we have been told, the most innovative place on earth) in the face of the public health crisis.

Wait, what? That doesn’t seem “lame” at all. That kinda seems central to keeping much of the world safe, sane, and connected. And the next paragraph seems equally ridiculous:

Big tech doesn’t build anything. It’s not likely to give us vaccines or diagnostic tests. We don’t even seem to know how to make a cotton swab. Those hoping the US could turn its dominant tech industry into a dynamo of innovation against the pandemic will be disappointed.

Leaving aside the hilariously wrong “big tech doesn’t build anything,” this paragraph reads like “how dare pharmaceutical companies not build video conferencing software.” Besides, tons of big tech companies are doing a lot (beyond the admitted list above) to help during the pandemic, including Google’s and Apple’s efforts to help with contact tracing, and then, of course, there are plenty of examples of the big tech companies of Silicon Valley trying to do more to help out in the pandemic as well. No matter what you think of Elon Musk, engineers at Tesla have been working on using a bunch of existing parts to build ventilators.

That effort has already received praise (and some constructive suggestions) from healthcare professionals.

Basically, the entire premise of Rotman’s article makes no sense at all, and he just keeps repeating it like if he says it enough, maybe people will believe him:

The pandemic has made clear this festering problem: the US is no longer very good at coming up with new ideas and technologies relevant to our most basic needs.

Except that we are — as his own article makes clear. The fact that internet companies aren’t magically creating vaccines isn’t a condemnation of Silicon Valley innovation. I mean, at best, you could argue that it’s a failure of big pharma innovation, but it seems a bit early to be saying that one way or another given that we’re just a few months into this thing, and a bunch of the innovations that are helping to rapidly create a vaccine, like genetic testing, were also developed with help from Silicon Valley.

The only way Rotman supports his premise is to argue that software/internet companies are producing software/internet products, rather than manufacturing physical goods. But, again, that’s like saying “why doesn’t Pfizer create videoconferencing software.” It’s not their business. And perhaps Rotman should get out of Cambridge and come to Silicon Valley (well, post-pandemic) to learn about the hardware renaissance happening here, thanks in part to new innovations like 3D printing.

The whole article reads like Rotman had a premise, and then wrote the article despite the near total lack of any actual evidence to support the premise. It’s a bad look for MIT’s Tech Review, but what good has MIT ever brought the world anyway?

Filed Under: david rotman, hardware, innovation, pandemic, remote work, silicon valley, social distancing, software, technology