testing – Techdirt

AI Lawyer Has A Sad: Bans People From Testing Its Lawyering After Being Mocked

from the DoNotTest-DoNotPay dept

Well, a lot has happened since I first started looking into the “World’s First Robot Lawyer,” from DoNotPay. First, Joshua Browder, DoNotPay’s CEO, reached out to me via direct message (DM) and told me he would get me access to my documents by 2 PM the next day – Tuesday, January 24th – saying that the delay was caused by my account being locked for “inauthentic activity,” a term he did not explain or define. Then, Josh claimed he was going to pull out of the industry entirely, canceling his courtroom stunt and saying he would disable all the legal tools on DoNotPay.com. He said he was doing it because it was a distraction, but the fact that he cited exactly the same two documents that I was waiting to receive seemed like a hell of a coincidence.

But plus ça change, plus c’est la même fucking chose, as the poet says. Here’s what hasn’t changed: I still don’t have my documents, and Josh still hasn’t answered the questions I asked him, like he said he would.

In his direct message, Josh said he would be willing to answer any questions I asked in good faith. I took him at his word, and responded to the email he sent me announcing his pullout with the following four questions:

  1. Can you describe for me the process DoNotPay used to identify the relevant law for a demand letter? (Cf. “Based on your location, DoNotPay will generate a formal demand letter on your behalf with the most relevant state legislation regarding defamation,” from here: https://web.archive.org/web/20230127063724/https://donotpay.com/learn/cease-and-desist-order/)
  2. Were humans involved in the generation of any client documents described by anything under your “Legal Tools” section? I don’t mean the creation of the templates, etc., I mean in the production of a document based on client responses to prompts.
  3. Are the articles in the “Learn” section of your website written by ChatGPT or equivalent, or by humans?
  4. Who signed the subpoena for the officer in the traffic case that was referenced in your now-deleted tweet?

I asked all these questions in good faith, and for good reasons. Josh represents — over and over and over and over again — that DoNotPay features a robot lawyer with artificial intelligence, going so far as to say that it uses AI instead of “human knowledge.” The sole document I got was one that didn’t make any kind of promise of customization or that it would contain “the most” relevant legislation for anything; the ones that did require that kind of analysis were the ones that got hung up in multi-hour deadlines and never ultimately delivered. Given how much weight he puts on these claims, I think it’s fair to interrogate and test them.

The articles, or “blog posts” as Josh calls them, are a slightly different situation. There are a TON of them, and they are all published without dates or bylines. But many of those articles have significant errors, both legal and factual, and if someone relied upon them, they could be in big trouble. And while I didn’t actually expect him to answer the question about the subpoena, he opened the door by bragging about it in the first place, as far as I’m concerned.

(Only two sentences in this screenshot above are completely accurate. I’m not going to represent to you which two they are, because I am not a lawyer.)

This email, to my great disappointment, went completely ignored.

It wasn’t because he was too busy taking down all his bots, either. On Thursday night, I logged back into the site to check, and discovered that all the prompts I had accessed before were still available, save the two that Josh mentioned specifically in his tweet — the defamation demand letter and the divorce settlement. But even those were still being advertised; every “blog post” on the site has a signup teaser in it advertising access to the site’s legal and other services with no sign that they’re inaccessible until after DoNotPay has your money. One particularly egregious blog post that advertised “free legal advice” to those looking for help navigating the immigration process to become American citizens finally pushed me over the edge, and so I pinged Josh to remind him that I was still waiting for my documents and an answer to my questions, let that sit for a bit, and then started another thread about all the ways he was being less than forthright with the truth. In the course of writing that thread, I discovered that I was suddenly banned from the site; not only could I not log in, but any attempts to do so gave me an error message that read merely “something went wrong.”

It took Josh less than an hour to get back to me via DM, informing me that my money had been completely refunded and complaining that I was lying about the “bots” still being up, although he later admitted that only 7 bots had come down in the previous 36 hours (out of an advertised 1,000):

When I told him I had tested them myself and even generated new documents and cases, he demanded “Was your usage authentic?” I responded “It certainly complied with every provision of the Terms of Service.” At this point, Josh disappeared from the conversation.

After this pause had stretched out for a while, something was kind of nagging at me. I went back and looked at that question and answer again and thought “what is it about the Terms of Service that suddenly had him needing to leave immediately?”

By sheer coincidence, someone — not me (at least I don’t think so) — had archived the DoNotPay Terms of Service just about exactly when I started tweeting my thread, so I know exactly what they said then.

It also meant that I could spot exactly what had changed between that time and this one, a mere two hours later: Josh added a clause to the TOS prohibiting users from testing the website prior to using it in a live dispute.

If you can’t see that: right after I told him I was not violating his terms, he appeared to add this to his terms:

You represent that any dispute or request submitted is an authentic problem you are having. You are responsible for any damages to DoNotPay or others from fake, inauthentic or test disputes.

So now “test” disputes violate the terms?

This change was made after he banned me, without any warning. He claims he told me to “keep it real,” but he absolutely did not, and his claim that I “triggered his anti-spam” by making 10 or 15 new cases seems to run against his site’s promises that one can create an “unlimited number of documents.”

He didn’t answer my questions outside of saying “no the letters aren’t being hand typed out and no we didn’t write them,” which… didn’t answer my questions in the slightest. And then he blocked me.

So there you go. Joshua Browder, CEO of DoNotPay.com, would rather block me, ban my account, retcon his terms of service to disallow any test usage at all, and claim to pull out of the “Legal Services” industry that his site is PLASTERED with branding for, rather than show me the two documents I generated and tried to buy.

I wonder what he doesn’t want me to see?

Filed Under: ai, ai lawyer, joshua browder, robot lawyer, terms, testing, unauthorized practice of law
Companies: donotpay

Instructors And School Administrators Are Somehow Managing To Make Intrusive Testing Spyware Even Worse

from the by-the-time-they're-done,-no-one's-going-to-want-to-go-back-on-campus dept

The COVIDian dystopia continues. After a brief respite, infections and deaths have surged, strongly suggesting the “we’re not doing anything about it” plan adopted by many states is fattening the curve. With infections spreading once again, the ushering of children back to school seems to have been short-sighted.

But not all the kids are in school. Some are still engaged in distance learning. For many, this means nothing more than logging in and completing posted assignments using suites of tools that slurp up plenty of user data. For others, it feels more like being forced to bring their schools home. In an effort to stop cheating and ensure “attendance,” schools are deploying spyware that makes the most of built-in cameras, biometric scanning, and a host of other intrusions that make staying home at least as irritating as actually being in school.

The EFF covered some of these disturbing developments back in August, when some schools were kicking off their school years. Bad news abounded.

Recorded patterns of keystrokes and facial recognition supposedly confirm whether the student signing up for a test is the one taking it; gaze-monitoring or eye-tracking is meant to ensure that students don’t look off-screen too long, where they might have answers written down; microphones and cameras record students’ surroundings, broadcasting them to a proctor, who must ensure that no one else is in the room.

So much for the sanctity of the home — the location regarded as the most private of private spaces, worthy of the utmost in Fourth Amendment protections. Unfortunately, the tradeoff for distance learning appears to mean students must give up almost all of their privacy in exchange for not being arrested for truancy.

School isn’t out yet. And there’s even more intrusiveness to report. It’s not just the stripping of privacy that’s adding to the dystopian atmosphere hovering oppressively over 2020. It’s also the Kafka+Orwell aspects of at-home monitoring, as Todd Feathers and Janus Rose report for Vice.

The first part of this aligns with the EFF’s earlier reporting: exam software developers are giving school administrators an insane amount of access to students’ devices.

Like its competitors in the exam surveillance industry, Respondus uses a combination of facial detection, eye tracking, and algorithms that measure “anomalies” in metrics like head movement, mouse clicks, and scrolling rates to flag students exhibiting behavior that differs from the class norm.
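The quote above doesn’t say how Respondus computes its “anomalies,” but flagging behavior that deviates from the class norm is, at its simplest, a statistical outlier test. Here is a minimal sketch of that general idea in Python; the metric, the threshold, and the function name are illustrative assumptions, not Respondus internals:

```python
from statistics import mean, stdev

def flag_outliers(class_metrics, threshold=2.0):
    """Flag students whose behavioral metric deviates from the class norm.

    class_metrics: dict mapping student -> measured value (e.g. mouse
    clicks per minute, or scrolling rate). A student is flagged when
    their z-score exceeds `threshold` standard deviations from the
    class mean.
    """
    values = list(class_metrics.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        # Everyone behaved identically; nothing to flag.
        return []
    return [student for student, value in class_metrics.items()
            if abs(value - mu) / sigma > threshold]
```

The obvious weakness is visible in the sketch itself: anything atypical gets flagged, whether it’s cheating or just a student who reads slowly or scrolls oddly.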

Then it just gets surreal.

These programs also often require students to do 360-degree webcam scans of the rooms in which they’re testing to ensure they don’t have any illicit learning material in sight.

Not surreal enough for Respondus and its customers, apparently. Instructions vary from school to school, but Wilfrid Laurier University students are given an entire gauntlet to run through just for the privilege of taking a test. One set of instructions seems to ask students to roll the dice on permanently damaging their ears.

[O]ne WLU professor wrote that anyone who wished to use foam noise-cancelling ear plugs must “in plain view of your webcam … place the ear plugs on your desk and use a hard object to hit each ear plug before putting it in your ear—if they are indeed just foam ear plugs they will not be harmed.”

And there’s so much more! Instructors are taking the intrusiveness baked into Respondus’ exam spyware and adding their own twists. If these weren’t tied to education products, one might assume sexual predators were on the prowl. (One might still assume that, perhaps not even incorrectly. We’ll see how this all shakes out!)

Other instructors required students to buy hand mirrors and hold them up to their webcams prior to beginning a test to ensure they hadn’t written anything on the webcam.

Not every instructor is adding more evil. Some seem to be concerned about the software itself — mainly its reliability and its willingness to see everything unexpected as cheating. But it’s not much less dystopian to advise students on how best to ensure the school’s spyware functions properly during tests. Advice from profs includes telling students to keep everyone else at home off the internet while testing (presumably so no one pings out while submitting answers) and to avoid sitting in front of posters or decorations featuring people or animals so the spyware won’t flag them for having other people in the room during a test.

And it’s not just Canada. An email sent by an instructor at Arkansas Tech told students to engage in a whole bunch of pre-test setup just to assure this small-minded prof they weren’t cheating.

Before beginning an exam, students were required to hold a mirror or their phone’s front-facing camera to reflect the computer screen, and then adjust the webcam so the instructor can “see your face, both hands, your scratch paper, calculator, and the surface of your desk,” according to an email obtained by Motherboard.

If students failed to jump through all these distance learning hoops, the instructor would “set [their] exam score to 0%.”

The coupling of intrusive spyware with increasingly ridiculous demands from instructors has led to open, if mostly remote, revolt. Petitions have been circulated demanding software like this be banned. Feedback sites like ratemyprofessor have been bombed with negative reviews. Unfortunately, the schools have almost all the leverage. It’s not that simple to take your “being educated” business elsewhere, especially in the middle of a global pandemic.

That’s not to say there haven’t been any successes. Blowback from Wilfrid Laurier students forced the Canadian university to withdraw its demand that students set up their own in-home surveillance system by purchasing both an external webcam and a tripod. And some school administrators are at least responding with statements that indicate they recognize the people paying their salaries are unhappy. WLU administrators are promising to “look into” the reported problems, but it seems unlikely it will ditch its proctoring software. What it may do is clarify what instructors can actually ask students to do, which would address at least some of the complaints.

But half-assing it isn’t going to change the intrusive nature of the software itself. And, as noted earlier, students already well on their way to degrees or diplomas can’t just head to the nearest competitor. And there’s a good chance the nearest competitor is using something similar to reduce cheating, which means students will be jumping through one set of hoops just to find themselves jumping through another set at another school.

This pandemic isn’t going to be forever. If it’s in the best interests of everyone to remain as distanced as possible, schools just need to accept the fact that cheating may be a bit more common. Accepting the reality of the situation would be healthier for everyone. Making a bad situation even worse with pervasive surveillance and insane instructions from administrators is the last thing students (and teachers) need right now.

Filed Under: covid, facial recognition, keystrokes, schools, spyware, surveillance, testing
Companies: respondus

England's Exam Fiasco Shows How Not To Apply Algorithms To Complex Problems With Massive Social Impact

from the let-that-be-a-lesson-to-you-all dept

The disruption caused by COVID-19 has touched most aspects of daily life. Education is obviously no exception, as the heated debates about whether students should return to school demonstrate. But another tricky issue is how school exams should be conducted. Back in May, Techdirt wrote about one approach: online testing, which brings with it its own challenges. Where online testing is not an option, other ways of evaluating students at key points in their educational career need to be found. In the UK, the key test is the GCE Advanced level, or A-level for short, taken in the year when students turn 18. Its grades are crucially important because they form the basis on which most university places are awarded in the UK.

Since it was not possible to hold the exams as usual, and online testing was not an option either, the body responsible for running exams in the UK, Ofqual, turned to technology. It came up with an algorithm that could be used to predict a student’s grades. The results of this high-tech approach have just been announced in England (other parts of the UK run their exams independently). It has not gone well. Large numbers of students have had their expected grades, as predicted by their teachers, downgraded, sometimes substantially. An analysis from one of the main UK educational associations has found that the downgrading is systematic: “the grades awarded to students this year were lower in all 41 subjects than they were for the average of the previous three years.”

Even worse, the downgrading turns out to have affected students in poorly performing schools, typically in socially deprived areas, the most, while schools that have historically done well, often in affluent areas, or privately funded, saw their students’ grades improve over teachers’ predictions. In other words, the algorithm perpetuates inequality, making it harder for brilliant students in poor schools or from deprived backgrounds to go to top universities. A detailed mathematical analysis by Tom SF Haines explains how this fiasco came about:

Let’s start with the model used by Ofqual to predict grades (p85 onwards of their 319 page report). Each school submits a list of their students from worst student to best student (it included teacher suggested grades, but they threw those away for larger cohorts). Ofqual then takes the distribution of grades from the previous year, applies a little magic to update them for 2020, and just assigns the students to the grades in rank order. If Ofqual predicts that 40% of the school is getting an A [the top grade] then that’s exactly what happens, irrespective of what the teachers thought they were going to get. If Ofqual predicts that 3 students are going to get a U [the bottom grade] then you better hope you’re not one of the three lowest rated students.

As this makes clear, the inflexibility of the approach guarantees that there will be many cases of injustice, where bright and hard-working students will be given poor grades simply because they were lower down in the class ranking, or because the school did badly the previous year. Twitter and UK newspapers are currently full of stories of young people whose hopes have been dashed by this effect, as they have now lost the places they had been offered at university, because of these poorer-than-expected grades. The problem is so serious, and the anger expressed by parents of all political affiliations so palpable, that the UK government has been forced to scrap Ofqual’s algorithmic approach completely, and will now use the teachers’ predicted grades in England. Exactly the same happened in Scotland, which also applied a flawed algorithm, and caused similarly huge anguish to thousands of students, before dropping the idea.
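The scheme Haines describes (a fixed historical grade distribution filled in strict rank order) can be sketched in a few lines of Python. This is a simplified illustration, not Ofqual’s actual code; among other things, the real model adjusted the historical distribution before applying it to the 2020 cohort:

```python
def assign_grades(ranked_students, historical_distribution):
    """Assign grades purely by rank order against last year's distribution.

    ranked_students: list of names, worst to best (the school's ranking).
    historical_distribution: dict mapping grade -> fraction of students
        who received it in previous years, e.g. {"A": 0.4, "C": 0.3, ...}.
    Teacher-predicted grades play no role at all for larger cohorts.
    """
    n = len(ranked_students)
    results = {}
    idx = 0
    # Walk grades from worst (U) to best (A), filling each quota in rank order.
    for grade in ["U", "E", "D", "C", "B", "A"]:
        quota = round(historical_distribution.get(grade, 0) * n)
        for student in ranked_students[idx:idx + quota]:
            results[student] = grade
        idx += quota
    # Any rounding leftovers land in the top grade.
    for student in ranked_students[idx:]:
        results[student] = "A"
    return results
```

With fixed quotas, a student’s grade depends only on their rank and on how the school did in previous years, which is exactly why individual teacher predictions were overridden wholesale.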

The idea of writing algorithms to solve this complex problem is not necessarily wrong. Other solutions — like using grades predicted by teachers — have their own issues, including bias and grade inflation. The problems in England arose because people did not think through the real-life consequences for individual students of the algorithm’s abstract rules — even though they were warned of the model’s flaws. Haines offers some useful, practical advice on how it should have been done:

The problem is with management: they should have asked for help. Faced with a problem this complex and this important they needed to bring in external checkers. They needed to publish the approach months ago, so it could be widely read and mistakes found. While the fact they published the algorithm at all is to be commended (if possibly a legal requirement due to the GDPR right to an explanation), they didn’t go anywhere near far enough. Publishing their implementations of the models used would have allowed even greater scrutiny, including bug hunting.

As Haines points out, last year the UK’s Alan Turing Institute published an excellent guide to implementing and using AI ethically and safely (pdf). At its heart lie the FAST Track Principles: fairness, accountability, sustainability and transparency. The fact that Ofqual evidently didn’t think to apply them to its exam algorithm means it only gets a U grade for its work on this problem. Must try harder.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.

Filed Under: algorithms, education, exams, grades, predictions, predictive algorithms, protests, testing

As More Students Sit Online Exams Under Lockdown Conditions, Remote Proctoring Services Carry Out Intrusive Surveillance

from the you're-doing-it-wrong dept

The coronavirus pandemic and its associated lockdown in most countries has forced major changes in the way people live, work and study. Online learning is now routine for many, and is largely unproblematic, not least because it has been used for many years. However, online testing is more tricky, since there is a concern by many teachers that students might use their isolated situation to cheat during exams. One person’s problem is another person’s opportunity, and there are a number of proctoring services that claim to stop or at least minimize cheating during online tests. One thing they have in common is that they tend to be intrusive, and show little respect for the privacy of the people they monitor.

As an article in The Verge explains, some employ humans to watch over students using Zoom video calls. That’s reasonably close to a traditional setup, where a teacher or proctor watches students in an exam hall. But there are also webcam-based automated approaches, as explored by Vox:

For instance, Examity also uses AI to verify students’ identities, analyze their keystrokes, and, of course, ensure they’re not cheating. Proctorio uses artificial intelligence to conduct gaze detection, which tracks whether a student is looking away from their screens.

It’s not just in the US that these extreme surveillance methods are being adopted. In France, the University of Rennes 1 is using a system called Managexam, which adds a few extra features: the ability to detect “inappropriate” Internet searches by the student, the use of a second screen, or the presence of another person in the room (original in French). The Vox article notes that even when these systems are deployed, students still try to cheat using new tricks, and the anti-cheating services try to stop them doing so:

it’s easy to find online tips and tricks for duping remote proctoring services. Some suggest hiding notes underneath the view of the camera or setting up a secret laptop. It’s also easy for these remote proctoring services to find out about these cheating methods, so they’re constantly coming up with countermeasures. On its website, Proctorio even has a job listing for a “professional cheater” to test its system. The contract position pays between $10,000 and $20,000 a year.

As the arms race between students and proctoring services escalates, it’s surely time to ask whether the problem isn’t people cheating, but the use of old-style, analog testing formats in a world that has been forced by the coronavirus pandemic to move to a completely digital approach. Rather than spending so much time, effort and money on trying to stop students from cheating, maybe we need to come up with new ways of measuring what they have learnt and understood — ones that are not immune to cheating, but where cheating has no meaning. Obvious options include “open book” exams, where students can use whatever resources they like, or even abolishing formal exams completely, and opting for continuous assessment. Since the lockdown has forced educational establishments to re-invent teaching, isn’t it time they re-invented exams too?

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.

Filed Under: cheating, covid-19, education, surveillance, testing

Those Ex-Theranos Patents Look Really Bad; Contest Opened To Find Prior Art To Get Them Invalidated

from the you'd-think-it-should-be-the-other-way-around dept

A few weeks back we wrote about how Fortress Investment Group — a massive patent trolling operation funded by Softbank — was using old Theranos patents to shake down BioFire, a company that actually makes medical diagnostics tests, including one for COVID-19. Fortress had scooped up the patents as collateral after it issued a loan to Theranos, which Theranos (a complete scam company, whose founders are still facing fraud charges…) could not repay. Fortress then set up a shell company, Labrador Diagnostics, which did not exist until days before it sued BioFire. After it (and the law firm Irell & Manella) got a ton of bad press for suing BioFire over these patents — including the COVID-19 test — Fortress rushed out a press release promising that it would issue royalty-free licenses for COVID-19 tests. However, it has still refused to reveal the terms of that offer, nor has it shared the letter it sent to BioFire with that offer.

And while some have argued that after issuing this “royalty-free license” offer, the whole thing was now a non-story, that’s not true. It appears that the offer only covers half of the test: the pouches that have the test-specific reagents, but not the test device that is used to analyze the tests. And so while the COVID-19 test pouches may get a “free” license, the machines to test them are still subject to this lawsuit.

In the meantime, tons of people have been asking how Theranos — who appeared to never have a working product, despite publicly claiming it did (and convincing Walgreens that it did) — could possibly have received patents on technology that never actually existed. Tragically, the answer is that our patent system (for reasons that make no sense) does not require a working prototype, which results in all sorts of nonsense getting a patent. That said, the good folks at Unified Patents have launched a crowdsourcing contest for prior art about the two Theranos patents in question.

We kindly ask our crowdsourcing community of thousands of prior art searchers to take a few minutes to help identify prior art on these patents that never should have issued and help rid the world of them, in the process improving the world’s chances of testing for and containing COVID-19 and other dangerous public health concerns.

The contest will expire on April 30, 2020. Please visit PATROLL for more information or to submit an entry for this contest.

If you’re looking to help out and would like a place to start, the good folks at M-CAM, who analyze patents for prior art and obviousness, have a fairly remarkable analysis of the Theranos patents, which refers to Fortress/Softbank/Labrador as “graverobbers.” The analysis is worth reading, including this take on the 1st claim in the patent for “a two-way communication system for detecting an analyte in a bodily fluid from a subject…”:

No shit. My tongue is part of a system which detects various “analytes” in food such as salt, sugar, and acids. Don’t tell anyone, but I’m starting to worry that I might be the next target for an infringement lawsuit.

But on a more serious level, the analysis explains why the patents are pretty much exactly as sketchy as you would expect from a company of Theranos’ reputation:

… the claims of the patent they state are being infringed are incredibly mundane and obvious (patents must be non-obvious to be granted). They include gems such as “a) a reader assembly comprising a programmable processor that is operably linked to a communication assembly;” where they point out that BioFire’s machine uses, of all things, an ETHERNET CABLE to export data from its processor. Heathens!

It then notes that M-CAM found at least 416 other patents that appear to be significantly similar to the patents at issue, which makes you wonder why the hell the USPTO approved these patents in the first place…

Filed Under: covid-19, crowdsourcing, diagnostics, patent troll, patents, prior art, testing
Companies: fortress investment group, irell & manella, labrador diagnostics, softbank, theranos

Softbank-Owned Patent Troll Now Promises To Grant Royalty-Free License For Covid-19 Tests; Details Lacking

from the let's-see-the-details dept

Yesterday I wrote up a fairly insane story about how a Softbank-owned patent troll, Fortress Investment Group, through a shell company subsidiary, Labrador Diagnostics (which, despite its name, does not seem to do any diagnostics), using patents that it had bought up from the sham medical testing company Theranos during its fire sale, had sued BioFire Diagnostics/BioMerieux, one of the few companies making a Covid-19 diagnostics test, claiming patent infringement. The patent infringement claims were on all of its diagnostics created using BioFire’s FilmArray 2.0, FilmArray EZ, and FilmArray Torch devices — and the company’s Covid-19 tests were based on that technology. Even worse, the company asked the court to issue an injunction, blocking BioFire from using the tests. As we pointed out, this was not just tone deaf, but destructive and dangerous.

This morning, hours after our article went viral, Labrador Diagnostics issued a press release claiming that once it became aware that BioFire was working on Covid-19 tests, it had offered the company a royalty-free license on those tests (and only those tests):

Labrador Diagnostics LLC (“Labrador”) today announced that it will offer to grant royalty-free licenses to third parties to use its patented diagnostics technology for use in tests directed to COVID-19. Labrador fully supports efforts to assess and ultimately end this pandemic and hopes that more tests will be created, disseminated, and used to quickly and effectively protect our communities through its offer of a royalty-free license during the current crisis.

On March 9, 2020, Labrador, an entity owned by investment funds managed by Fortress Investment Group LLC, filed a patent infringement lawsuit in the District of Delaware to protect its intellectual property. Labrador wants to make clear that the lawsuit was not directed to testing for COVID-19. The lawsuit focuses on activities over the past six years that are not in any way related to COVID-19 testing.

Two days after the lawsuit was filed on March 11, 2020, the defendants issued a press release announcing that they were developing tests for COVID-19. Labrador had no prior knowledge of these activities by the defendants. When Labrador learned of this, it promptly wrote to the defendants offering to grant them a royalty-free license for such tests.

There are still many open questions regarding all of this. It is unclear when Labrador actually sent this letter or what it actually says in the details. It’s notable that it says that it “will offer to grant royalty-free licenses,” rather than just flat out waiving any rights it might hold for such tests. The latter would suggest good faith. The former makes you wonder if there are conditions associated with the “offer” (such as needing to license the patents for other uses, or a recognition of the patents as valid or some such). Labrador and its lawyers at Irell & Manella could clear up this confusion by releasing the letter — including the time stamp when it was actually sent.

Even this bit of last minute ass covering doesn’t change the overall sketchiness of the original lawsuit. Again, we’re talking about questionable patents from Theranos, a firm that was shown to be a sham, with technology that never worked. The patents themselves seem excessively, perhaps ridiculously, broad. Look over the claims in patent 8,283,155 and explain to me how that adds anything new or novel to diagnostic testing machines. Here’s the 1st claim:

1. A two-way communication system for detecting an analyte in a bodily fluid from a subject, comprising:

  a) a reader assembly comprising a programmable processor that is operably linked to a communication assembly;
  b) an external device configured to transmit a protocol to the communication assembly;
  c) a test device configured to be inserted into the reader assembly, said test device comprising:
    i) a sample collection unit configured for collecting a sample of bodily fluid suspected to contain an analyte;
    ii) an assay assembly containing reactants that react with said sample of bodily fluid based on the protocol transmitted from said external device to yield a detectable signal indicative of the presence and/or concentration of said analyte; and
    iii) an identifier that is configured to provide the identity of said test device and is also configured to trigger the transmission of said protocol that is selected based on said identifier;
  wherein the programmable processor of the reader assembly is configured to receive said protocol from said external device, wherein said protocol in turn effects (1) a reaction in said assay assembly for generating said signal, and (2) selection of a detection method for detecting said signal, and wherein said reader further comprises a detection assembly for detecting said signal which is transmitted via said communication assembly to said external device.

It basically describes a mobile testing unit that can collect data from a sample and send what it finds over a network to a computer system for analysis. I’m certainly not an expert in the field, but it seems to me that if you were to ask basically anyone with any knowledge of how these things work “how would you build a mobile medical diagnostics tool,” they’d more or less describe exactly this system. This is not some big breakthrough. This seems to be a broad and obvious idea that never should have received a patent in the first place.
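To see just how generic the claimed design is, here’s a minimal sketch of the architecture the claim describes: a reader fetches a protocol from an external server based on a cartridge identifier, runs the assay, and transmits the detected signal back. All names, data structures, and numbers here are invented for illustration; this is not the Theranos implementation, just the obvious shape any developer would reach for.

```python
# Hypothetical sketch of the '155 claim: reader + external device + test cartridge.
# Every identifier here is made up for illustration.

# External device: maps test-device identifiers to assay protocols.
PROTOCOLS = {
    "cartridge-42": {"reagent_steps": 3, "detection": "fluorescence"},
}

class ExternalDevice:
    def get_protocol(self, device_id):
        # (b)/(c)(iii): protocol is selected based on the cartridge identifier.
        return PROTOCOLS[device_id]

    def receive_result(self, device_id, signal):
        # Signal is transmitted back for server-side analysis.
        return {"device_id": device_id, "analyte_detected": signal > 0.5}

class ReaderAssembly:
    def __init__(self, server):
        self.server = server

    def run_test(self, device_id, sample):
        protocol = self.server.get_protocol(device_id)        # fetch protocol
        signal = self._run_assay(sample, protocol)            # (c)(ii): run assay
        return self.server.receive_result(device_id, signal)  # report result

    def _run_assay(self, sample, protocol):
        # Stand-in for the chemistry: a signal proportional to analyte level.
        return sample["analyte_level"] * protocol["reagent_steps"] / 3

reader = ReaderAssembly(ExternalDevice())
result = reader.run_test("cartridge-42", {"analyte_level": 0.9})
```

Roughly thirty lines of boilerplate client-server code covers every element of the claim, which is exactly the article’s point about obviousness.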

So, sure, it’s great that after Irell & Manella started getting lots of shit for this kind of gross pandemic profiteering, it suddenly got the shell company to issue a press release “offering” (not promising) royalty-free licenses, but that doesn’t clear up what appears to be a fairly gross effort at patent trolling off of a sham company’s questionable patents — and doing so in the midst of a pandemic. While the company claims it didn’t know that BioFire was working on a Covid-19 test, that’s laughable. Pretty much everyone in the space seemed to expect BioFire to be among the diagnostics firms creating a test. Indeed, even the Wall Street Journal wrote about BioFire working on this a full week before the lawsuit was filed. On top of that, this lawsuit was filed when it was already blatantly clear that we were in the midst of a pandemic, where BioFire’s diagnostics would be useful.

So, yes, it’s great that after the terribleness of this decision became clear, the company made it public that it wouldn’t seek to stop Covid-19 testing, but that doesn’t excuse all of the other awful behavior at play here — and, again, the company still has not revealed the details and conditions of its “offer.”

Filed Under: covid-19, diagnostics, license, patent troll, patents, testing
Companies: biofire, biomerieux, fortress investment group, irell & manella, labrador diagnostics, softbank

Company Says It's Built A Marijuana Breathalyzer, Wants To Roll It Out By The Middle Of This Year

from the how-much-'yes'-is-actually-impairment-tho dept

Breathalyzers have been in use for more than 100 years at this point and we still don’t have all the kinks worked out. Testing equipment used by law enforcement frequently isn’t calibrated or maintained correctly. Some devices have been set up improperly, which leads directly to false positives when the tests are deployed.

Unfortunately, impaired driving isn’t going away. And neither are the tools cops like well enough to deploy in the field, but apparently not well enough to engage in routine maintenance or periodic quality control testing. This is already a problem for citizens, who can find themselves behind bars if the testing equipment is faulty. The problem is only going to get worse as marijuana legalization spreads to more states.

There’s currently no field test equipment that detects marijuana impairment. A company in California thinks it has a solution.

By mid-2020, Hound Laboratories plans to begin selling what it says is the world’s first dual alcohol-marijuana breath analyzer, which founder Dr. Mike Lynn says can test whether a user has ingested THC of any kind in the past two to three hours.

“We’re allowed to have this in our bodies,” Lynn said of marijuana, which became legal to use recreationally in California in 2018. “But the tools to differentiate somebody who’s impaired from somebody who’s not don’t exist.”

We won’t know if these claims are true until the testing equipment is deployed. And even then, we still won’t know if the machines are accurate or the drivers they catch are actually impaired. Marijuana doesn’t work like alcohol, so impairment levels vary from person to person. In addition, there’s no baseline for impairment like there is for alcohol. That will have to be sorted out by state legislatures before officers can begin to claim someone is “impaired” just because the equipment has detected THC. At this point, the tech pitched by Hound Labs only provides a yes/no answer.

There’s a very good chance this new tech will go live before the important details — the ones safeguarding people’s rights and freedoms — are worked out. The founder of Hound Labs is also a reserve deputy for the Alameda County Sheriff’s Office. And it’s this agency that’s been test driving the weedalyzer.

The Alameda County Sheriff’s Office agreed to test the Hound Breathalyzer in the field.

“What we’ve seen trending with the addition of the legalization of cannabis in California is that we are coming across more and more marijuana-impaired drivers,” said Alameda County Sheriff spokesperson Sgt. Ray Kelly.

“It’s not hard to determine if there is THC on someone’s breath if they have been smoking it,” Kelly said. “It’s when they’re ingesting it through edibles, which have become much more popular. That’s extremely valuable to law enforcement.”

These tests are completely voluntary and drivers who submit to them won’t be criminally charged even if the device says they’re under the influence. But in a few months — if everyone agrees they’re good enough to be used on civilians — the tests will no longer be voluntary and the consequences will be very real.

Impaired driving that doesn’t involve alcohol is going to increase with the legalization of marijuana. But this new tech should be greeted with the proper amount of skepticism. Breathalyzers that detect alcohol have been around for decades and are still far from perfect. A new device that promises to detect recent marijuana use just because researchers say consumption can be detected for up to three hours shouldn’t be treated as a solution.

The device is stepping into a legal and legislative void with no established baseline for marijuana “intoxication.” It can only say it does or does not detect THC in a person’s breath. It can’t determine whether the amount is a little or a lot, and no one has any guidance stating how much of a THC concentration should be considered impairing or illegal. But it’s pretty much a given these will hit the roads before the law is ready for them, and that should concern drivers in every state where marijuana is legal.

Filed Under: breathalyzer, false positives, field tests, marijuana, police, testing
Companies: hound laboratories

Roadside Breath Tests Are Just As Unreliable As Field Drug Tests

from the magic-8-ball-says... dept

Portable alcohol testing devices (a.k.a. breathalyzers) have been called “magic black boxes” and “extremely questionable” by judges. And yet, they’re still used almost everywhere by almost every law enforcement agency. They’re shiny and sleek and have knobs and buttons and digital readouts, so they’re not as immediately sketchy as the $2 drug-testing labs cops use to turn donut crumbs into methamphetamines. But they’re almost as unreliable as field drug tests.

Even when the equipment works right, it can still be wrong. But it so very rarely works right. Cops buy the equipment, then do almost nothing in terms of periodic testing or maintenance. A new report from the New York Times shows this equipment should probably never be trusted to deliver proof of someone’s intoxication. And the failure begins with the agencies using them.

The machines are sensitive scientific instruments, and in many cases they haven’t been properly calibrated, yielding results that were at times 40 percent too high. Maintaining machines is up to police departments that sometimes have shoddy standards and lack expertise. In some cities, lab officials have used stale or home-brewed chemical solutions that warped results. In Massachusetts, officers used a machine with rats nesting inside.

There are also problems with the machines themselves, even when they’ve been properly maintained. And the problems aren’t limited to a single manufacturer. Dräger, the company that owns the name “Breathalyzer,” produced a machine that experts said was “littered with thousands of programming errors.” This finding was made during a rare examination of breath-testing equipment by defense attorneys, granted by the New Jersey Supreme Court. Those findings resulted in zero lost law enforcement customers for Dräger.

CMI’s breath tester fared no better when examined by toxicology lab experts. CMI’s Intoxilyzer gave inaccurate results on “almost every test,” according to the report. This didn’t stop multiple law enforcement agencies from purchasing CMI’s tech, even when a Florida court had this to say about the Intoxilyzer.

The Intoxilyzer 8000 is a magic black box assisting the prosecution in convicting citizens of DUI. A defendant is required to blow into the box. The defense has shown significant and continued anomalies in the operation of the Intoxilyzer 8000’s operation. The prosecution argues most of the tests do not show anomalies. In fact, a high percentage of the tests may show no anomalous operation. That the Intoxilyzer 8000 mostly works is an insufficient response when a citizen’s liberty is at risk.

In the state of Washington, state police spent $1 million on Dräger breath testers to replace aging machines. Rather than roll this out in a controlled fashion with proper testing, state officials opted to bypass outside testing of the software to speed up deployment to police officers. Six years after this speedy rollout, the device’s reliability was called into question in court. The court allowed defense lawyers to review the software. The result of this testing was never made public… at least not officially. The experts who reviewed Dräger’s equipment had this to say about it:

The report said the Alcotest 9510 was “not a sophisticated scientific measurement instrument” and “does not adhere to even basic standards of measurement.” It described a calculation error that Mr. Walker and Mr. Momot believed could round up some results. And it found that certain safeguards had been disabled.

Among them: Washington’s machines weren’t measuring drivers’ breath temperatures. Breath samples that are above 93.2 degrees — as most are — can trigger inaccurately high results.

Dräger sent the researchers a cease-and-desist order, forbidding them from discussing their findings with anyone and demanding they destroy all copies of their report. Unfortunately for the company, the preliminary report had already been distributed to other defense lawyers and made its way to a number of websites.

The rush to deploy equipment wasn’t just a problem in Washington. The same thing happened in Colorado, but the rollout was even more haphazard and borderline illegal. There weren’t enough techs available to set up the purchased equipment, so the manufacturer (CMI) sent a salesperson and one of its lawyers to help with the initial calibration and certification. Only one actual lab supervisor was involved. The rest of the workforce was composed of assistants and interns. At best, one person was qualified to do this job. Since this seemed likely to pose a problem down the road if the equipment or readings were challenged in court, the state lab used an extremely-questionable workaround.

[T]he lab’s former science director said in a sworn affidavit that her digital signature had been used without her knowledge on documents certifying that the Intoxilyzers were reliable. The lab kept using her signature after she left for another job.

In Washington, DC, faulty equipment wasn’t taken out of service by the person overseeing the program. Instead, the person meant to ensure the equipment was working properly was making everything worse when not acting as a freelance chemist.

Mr. Paegle’s predecessor, Kelvin King, who oversaw the program for 14 years, had routinely entered incorrect data that miscalibrated the machines, according to an affidavit by Mr. Paegle and a lawsuit brought by convicted drivers.

In addition, Mr. Paegle found, the chemicals the department was using to set up the machines were so old that they had lost their potency — and, in some cases, Mr. King had brewed his own chemical solutions. (Mr. King still works for the Metropolitan Police Department. A department spokeswoman said he was unavailable for comment.)

Once the courts were informed, 350 convictions were tossed. The damage here was relatively minimal. In Massachusetts, where crime lab misconduct is an everyday occurrence, 36,000 tests were ruled inadmissible. In New Jersey, 42,000 cases were affected by breath test equipment that had never been set up properly.

Very few corrective measures have been implemented by law enforcement agencies. The fixes that are being made have almost all been the result of court orders. The unreliability of roadside breath tests has led several courts to reject pleas or prosecutions based solely on tests performed by officers during traffic stops. But none of this seems to have had an effect on the law enforcement agencies that continue to purchase unreliable equipment and deploy it without proper safeguards or testing.

Breathalyzers are field tests in more than one sense of the word. Drivers are merely lab rats who face the possibility of losing their freedom, their licenses, and possibly their vehicles because a “magic black box” told a cop the driver was under the influence.

Filed Under: accuracy, breathalyzers, dui, evidence, police, roadside breath tests, testing

Daily Deal: The Ultimate Software Testing Bundle

from the good-deals-on-cool-stuff dept

For $59, the Ultimate Software Testing Bundle has 11 courses covering everything you need to know to get started as a software tester. You will learn the basics of software testing, how to debug issues, and much more through the 84 hours of lectures and exercises.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Filed Under: daily deal, software, testing

DailyDirt: Does It Take A Village Or A Japanese Metropolis?

from the urls-we-dig-up dept

Raising kids is an adventure filled with all sorts of imperfect decisions. A butterfly flapping its wings on your kid’s iPad could initiate a cascade of events, leading to his/her eventual life of crime or triumph. Or maybe that butterfly has no effect whatsoever — how did that unusual insect get into the house, anyway? Common core standards might be crushing young spirits with “new math” — or just frustrating parents who don’t remember how to do long division. Is there an optimal way to parent that leads to a society where every child is above average and no one graduates in the bottom half of the class? Maybe the best path is just to let kids figure it all out themselves. (But probably not.)

Hold on. If you’re still reading this, head over to our Daily Deals to save an additional 10% on any item in our Black Friday collection — using the code: ‘EARLY10’ — just through this Sunday, November 22nd.

Filed Under: common core, education, free range kids, parenting, standards, testing, unschooling