mistakes – Techdirt

Both Things Can Be True: Meta Can Be Evil AND It’s Unlikely That The Company Deliberately Blocked A Mildly Negative Article About It

from the it's-the-perception-that-matters dept

Truth matters. Even if it’s inconvenient for your narrative. I’m going to do a question and answer style post, because I want to address a bunch of questions that came up on this story over the weekend, but let’s start here.

So what happened?

Last Thursday, the Kansas Reflector, a small local news non-profit in (you guessed it) Kansas, published an interesting piece regarding the intersection of Facebook and local news. The crux of the story was that Facebook makes it next to impossible to talk about anything related to climate change without the post being blocked or having its visibility limited.

The piece is perhaps a bit conspiratorial, but not that crazy. What's almost certainly happening is that Meta has decided to simply limit the visibility of such content because of so many people constantly slamming Meta for supporting one or the other side of culture war issues, including climate change. It's a dumb move, but it's the kind of thing that Meta will often do: hamfisted, reactive, stupid-in-the-light-of-day managing of some issue that is getting the company yelled at. The trigger isn't the criticism of Meta; it's the hot-button nature of the topic.

But then, Meta made things worse (a Meta specialty). Later in the day, it blocked all links to the Kansas Reflector from all Meta properties.

This morning, sometime between 8:20 and 8:50 a.m. Thursday, Facebook removed all posts linking to Kansas Reflector’s website.

This move not only affected Kansas Reflector’s Facebook page, where we link to nearly every story we publish, but the pages of everyone who has ever shared a story from us.

That’s the short version of the virtual earthquake that has shaken our readers. We’ve been flooded with instant messages and emails from readers asking what happened, how they can help and why the platform now falsely claims we’re a cybersecurity risk.

As the story started to Streisand its way around the internet and had people asking what Meta was afraid of, the company eventually turned links back on to most of the Kansas Reflector site. But not all. Links to that original story were still banned. And, of course, conspiracy theories remained.

Meta’s comms boss, Andy Stone, came out to say that it was all just a mistake and had nothing to do with the Reflector’s critical article about Meta:

[Image: screenshot of Andy Stone's post]

And, again, it felt like there was a decent chance that this was actually true. Mark Zuckerberg is not sitting in his office worrying about a marginally negative article from a small Kansas non-profit. Neither are people lower down the ranks of Meta. That’s just not how it works. There isn’t some guy on the content moderation line thinking “I know, Mark must hate this story, so I’ll block it!”

It likely had more to do with the underlying topic (the political hot potato of “climate change”) than the criticism of Meta, combined with a broken classifier that accidentally triggered a “this is a dangerous site” flag for whatever reason.

Then things got even dumber on Friday. Reporter Marisa Kabas reposted the original Reflector article on her site, The Handbasket. She could do this as the Reflector nicely publishes its work under a Creative Commons CC BY-NC-ND 4.0 license.

And then Marisa discovered that links to that article were also blocked across Meta (I personally tried to share the link to her article on Threads and had it blocked as "not allowed").

Soon after that, blogger Charles Johnson noticed that his site was also blocked by Meta as malware, almost certainly because a commenter linked to the original Kansas Reflector story. Eventually, his site was unblocked on Sunday.

Instagram and Threads boss Adam Mosseri showed up in somewhat random replies to people (often not those directly impacted) and claimed that it was a series of mistakes:

[Image: screenshots of Adam Mosseri's replies]

What likely actually happened?

Like all big sites, Meta uses some automated tools to try to catch and block malicious sites before they spread far and wide. If they didn't, you'd rightly be complaining that Meta doesn't do the most basic things to protect its users from malicious sites.

Sometimes (more frequently than you would believe, given the scale) those systems make errors. Those errors can be false negatives (letting through dangerous sites that they shouldn’t) and false positives (blocking sites that shouldn’t be blocked). Both types of errors happen way more than you’d like, and if you tweak the dials to lessen one of those errors, you almost certainly end up with a ton of the other. It’s the nature of the beast. Being more accurate in one direction means less accurate in the other.
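To make that tradeoff concrete, here is a small, purely illustrative Python sketch (this is not Meta's actual system; the distributions, threshold values, and scale here are all made up): benign and malicious sites get overlapping risk scores, and moving the blocking threshold simply trades one kind of error for the other.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "risk scores" from a hypothetical classifier: malicious sites
# tend to score higher than benign ones, but the distributions overlap.
benign_scores = rng.normal(loc=0.3, scale=0.15, size=100_000)
malicious_scores = rng.normal(loc=0.7, scale=0.15, size=1_000)

for threshold in (0.5, 0.6, 0.7, 0.8):
    # False positive: a benign site scoring above the threshold gets blocked.
    fp_rate = (benign_scores >= threshold).mean()
    # False negative: a malicious site scoring below the threshold gets through.
    fn_rate = (malicious_scores < threshold).mean()
    print(f"threshold={threshold:.1f}  "
          f"benign sites wrongly blocked: {fp_rate:.2%}  "
          f"malicious sites missed: {fn_rate:.2%}")

# Raising the threshold lets more malicious sites through; lowering it blocks
# more benign sites. At the scale of billions of shared links, even a tiny
# false positive rate means thousands of wrongly blocked sites.
```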

So, what almost certainly happened here was that there was some set of words or links or something on the Kansas Reflector story that tripped an alarm on a Meta classifier saying “this site is likely dangerous.”

This alarm is likely triggered thousands or tens of thousands of times every single day. Most of these are never reviewed, because most of them don't become newsworthy. In many cases, there's a decent chance the site owners never even learn that their websites are barred by Meta, because no one ever actually notices.

Everything afterward stems from that one mistake. The automated trigger threshold was passed, and the Reflector got blocked because Meta's systems gave it a probabilistic score suggesting the site was dangerous. Most likely, no human at Meta even read the article before all this, and if anyone had, they most likely would not have cared about the mild criticism (far milder than tons of stories out there).
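In other words, the likely chain of events is mechanical rather than editorial. The sketch below is hypothetical (Meta has not published how its pipeline works, and every name here is invented for illustration), but it shows how one over-threshold score on one article can cascade into a domain-wide block with no human in the loop.

```python
from dataclasses import dataclass, field

@dataclass
class DomainSafetyState:
    """Hypothetical per-domain record kept by an automated safety pipeline."""
    blocked_domains: set[str] = field(default_factory=set)

    def ingest_score(self, domain: str, risk_score: float, threshold: float = 0.9) -> None:
        # One article on the domain trips the classifier; the whole domain is
        # added to the blocklist. No human reads the article first.
        if risk_score >= threshold:
            self.blocked_domains.add(domain)

    def allow_share(self, url: str) -> bool:
        # Every later share of any URL on that domain is refused, regardless
        # of what the individual page actually contains.
        domain = url.split("/")[2]
        return domain not in self.blocked_domains

state = DomainSafetyState()
state.ingest_score("kansasreflector.com", risk_score=0.93)  # a false positive
print(state.allow_share("https://kansasreflector.com/any-other-story/"))  # False
```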

If you are explaining why Meta did something that is garden variety incompetent, rather than maliciously evil, doesn’t that make you a Meta bootlicker?

Meta is a horrible company that has a horrible track record. It has done some truly terrible shit. If you want to read just how terrible the company is, read Erin Kissane's writeup of what happened in Myanmar. Or read about how the company told users to download a VPN, Onavo, that was actually breaking encryption and spying on how users used other apps, so that it could send that data back to Meta. Or read about how Meta basically served up the open internet on a platter for Congress to slaughter, because they knew it would harm competition.

The list goes on and on. Meta is not a good company. I try to use their products as little as possible, and I’d like to live in a world where no one feels the need to use any of their products.

But truth matters.

And we shouldn’t accept a narrative as true just because it confirms our priors. That seems to have happened over the past few days regarding a broken content moderation decision that caused a negative news story about Meta to briefly be blocked from being shared across Meta properties.

It looks bad. It sounds bad. And Meta is a terrible company. So it’s very easy to jump to the conclusion that of course it was done on purpose. The problem is that there is a much more benign explanation that is also much more likely.

And this matters, because when you’re so trigger happy to insist that the mustache-twirling version of everything must be true, it actually makes it that much harder to truly hold Meta to account for its many sins. It makes it that much easier for Meta (and others) to dismiss your arguments as coming from a conspiracy theorist, rather than someone who has an actual point.

But what about those other sites? Isn’t the fact that it spread to other sites posting the same story proof of nefariousness?

Again, what happened there also likely stemmed from that first mistake. Once the system is triggered, it’s also probably looking for similar sites or sites trying to get around the block. So, when Kabas reposted the Reflector text, another automated system almost certainly just saw it as “here’s a site copying the other bad site, so it’s an attempt to get around the block.” Same with Johnson’s site, where it likely saw the link to the “bad” site as an attempt to route around the block.
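Here is a hypothetical sketch of what that follow-on logic might look like; the function name, thresholds, and checks are all invented for illustration, but the general pattern (treat near-duplicates of, and links to, a blocked domain as circumvention) is common in abuse-fighting systems.

```python
def looks_like_circumvention(page_text: str, outbound_links: list[str],
                             blocked_domains: set[str],
                             blocked_texts: list[str]) -> bool:
    """Hypothetical follow-on check run once a domain is already blocklisted."""
    # Linking to a blocked domain looks like an attempt to route around the block.
    if any(any(d in link for d in blocked_domains) for link in outbound_links):
        return True
    # Near-duplicate text (e.g., a licensed republication) looks like a mirror
    # of the "bad" site rather than independent journalism.
    page_words = set(page_text.lower().split())
    for text in blocked_texts:
        words = set(text.lower().split())
        overlap = len(page_words & words) / max(len(words), 1)
        if overlap > 0.8:
            return True
    return False

# A verbatim CC-licensed repost (The Handbasket) and a blog comment linking to
# the original story (Johnson's site) would both trip a check like this, with
# no one ever deciding to "block the critics."
```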

Even after Meta was made aware of the initial error, the follow-on activities would quite likely continue automatically as the systems just did their thing.

But Meta is evil!

Yeah, we covered this already. Even if that’s true… especially if that’s true, we should strive to be accurate in our criticism of the company. Every overreaction undermines the arguments for the very real things that the company has done wrong, and that it continues to do wrong.

It allows the company to point to an error someone made in describing what it did wrong here, and to use that error to dismiss the same critic's more accurate criticism of everything else.

But Meta has a history of lying and there’s no reason to give it the benefit of the doubt!

Indeed. But I’m not giving the benefit of the doubt to Meta here. Having spent years and years covering not just social media content moderation fuckups, but literally the larger history of content moderation fuckups, there are some pretty obvious things that suggest this was garden variety technical incompetence, found in every system, rather than malicious intent to block an article.

First, as noted, these kinds of mistakes happen all the time. Sooner or later, one is going to hit an article critical of Meta. It reminds me of the time that Google threatened Techdirt because it said comments on an article violated its terms of service. It just so happened that that article was critical of Google. I didn’t go on a rampage saying Google was trying to censor me because of my article that was critical of Google. Because I knew Google made that type of error a few times a month, sending us bogus threat notices over comments.

It happens.

And Meta has always allowed tons of way worse stories, including the Erin Kissane story above.

On top of that, the people at Meta know full well that if they were actually trying to block a critical news story, it would totally backfire and the story would Streisand all over the place (as this one did).

Also, Meta has a tell: if they were really doing something nefarious here, they'd have a slick, full-court-press response ready to go. It wouldn't be a few random Adam Mosseri social media posts going "oh shit, we fucked up, we're fixing now…"

But it’s just too big of a coincidence, since this is over a negative story!

Again, there are way, way, way worse stories about Meta out there that haven’t been blocked. This story wasn’t even that bad. And no one at Meta is concerned about a marginally negative opinion piece in the Kansas Reflector.

When mistakes are made as often as they are made at this kind of scale (again, likely thousands of mistakes a day), eventually one of them is going to be over an article critical of Meta. It is most likely a coincidence.

But if this is actually a coincidence and it happens all the time, how come we don’t hear about it every day?

As someone who writes about this stuff, I do tend to hear similar stories nearly every day. But most of them never get covered because it’s just not that interesting. “Automated harmful site classifier wrong yet again” isn’t news. But even then, I do still write about tons of content moderation fuckups that fit into this kind of pattern.

Why didn’t Meta come out and explain all this if it was really no big deal?

I mean, they kinda did. Two different execs posted that it was a mistake and that they were looking into it, and some of those posts came over a weekend. It took a few days, but it appears that most of the blocked links that I was aware of earlier have been allowed again.

But shouldn’t they have a full, clear, and transparent explanation for what happened?

Again, if they had that all ready to go by now, honestly, then I’d think they were up to no good. Because they only have those packages ready to go when they know they’re doing something bad and need to be ready to counter it. In this case, their response is very much of the nature of “blech, classifier made a mistake again… someone give it a kick.”

And don’t expect a fully transparent explanation, because these systems are actually doing a lot to protect people from truly nefarious shit. Giving a fully transparent explanation of how that system works, where it goes wrong, and how it was changed might also be super useful to someone with nefarious intent, looking to avoid the classifier.

If Meta is this incompetent, isn’t that a big enough problem? Shouldn’t we still burn them at the stake?

Um… sure? I mean, there are reasons why I support different approaches that would limit the power of big centralized players. And, if you don’t like Meta, come use Bluesky (where people got to yell about this at me all weekend), where things are set up in a way that one company doesn’t have so much power.

But, really, no matter where you go online, you’re going to discover that mistakes get made. They get made all the time.

Honestly, if you understood the actual scale, you’d probably be impressed at how few mistakes are made. But every once in a while a mistake is going to get made that makes news. And it’s unlikely to be because of anything nefarious. It’s really just likely to be a coincidence that this one error happened to be one where a nefarious storyline could be built around it.

If Meta can’t handle this, then why should we let it handle anything?

Again, you’ll find that every platform, big and small, makes these mistakes. And it’s quite likely that Meta makes fewer of these mistakes, relative to the number of decisions it makes, than most other platforms. But it’s still going to make mistakes. So is everyone else. Techdirt makes mistakes as well, as anyone who has ever had their comments caught in the spam filter should recognize.

But why was Meta so slow to fix these things or explain them?

It wasn’t. Meta is quite likely dealing with a few dozen different ongoing crises at any one moment, some more serious than others. Along those lines, it’s quite likely that, internally, this is viewed as a non-story, given that it’s one of these mistakes that happens thousands of times a day. Most of these mistakes are never noticed, but the few that are get fixed in due time. This just was not seen as a priority, because it’s the type of mistake that happens all the time.

But why didn’t Adam Mosseri respond directly to those impacted? Doesn’t that mean he was avoiding them?

The initial replies from Mosseri seemed kinda random. He responded to people like Ken “Popehat” White on Threads and Alex “Digiphile” Howard on Bluesky, rather than anyone who was directly involved. But, again, this tends to support the underlying theory that, internally, this wasn’t setting off any crisis alarm bells. Mosseri saw those posts because he just happened to see those posts and so he responded to them, noting the obvious mistake, and promising to have someone look into it more at a later date (i.e., not on a weekend).

Later on, I saw that he did respond to Johnson, so as more people raised issues, it’s not surprising that he started paying closer attention.

None of what you say matters, because they were still blocking a news organization, and whether it was due to maliciousness or incompetence doesn’t matter.

Well, yes and no. You’re right that the impact is still pretty major, especially to the news sites in question. But if we want to actually fix things, it does matter to understand the underlying reasons why they happen.

I guarantee that if you misdiagnose the problem, your solution will not work and has a high likelihood of actually making things way, way worse.

As we discussed on the most recent episode of Ctrl-Alt-Speech, in the end, the perception is what matters, no matter what the reality of the story is. People are going to remember simply that Meta blocked the sharing of links right after a critical article was published.

And that did create real harm.

But you’re still a terrible person/what do you know/why are you bootlicking, etc?

You don’t have to listen to me if you don’t want to. You can also read this thread from another trust & safety expert, Denise from Dreamwidth, whose own analysis is very similar to mine. Or security expert @pwnallthethings, who offers his own, similar explanation. Everyone with some experience in this space sees this as an understandable (which is not to say acceptable!) scenario.

I spend all this time trying to get people to understand the reality of trust & safety for one reason: so that they understand what’s really going on and can judge these situations accordingly. Because the mistakes do cause real harm, but there is no real way to avoid at least some mistakes over time. It’s just a question of how you deal with them when they do happen.

Is it an acceptable tradeoff if it means Meta allows more links to scam, phishing, and malware sites? Because those are the tradeoffs we’re talking about.

While it won’t be, this should be a reminder that content moderation often involves mistakes. But also, while it’s always easy to append some truly nefarious reason to things (e.g., “anti-conservative bias”), it’s often more just “the company is bad at this, because every company is bad at this, because the scale is more massive than anyone can comprehend.”

Sometimes the system sucks. And sometimes the system simply over-reacted to one particular column and Streisanded the whole damn thing into the stratosphere. And that’s useful in making people aware. But if people are going to be aware, they should be aware of how these kinds of systems work, rather than assuming mustache twirling villainy where there’s not likely to be any.

Filed Under: adam mosseri, andy stone, climate change, content moderation, masnick's impossibility theorem, mistakes, news
Companies: kansas reflector, meta

Facebook Is So Sure Its Erroneous Blocking Of Music Is Right, There’s No Option To Say It’s Wrong

It’s hardly a secret that upload filters don’t work well. Back in 2017, Felix Reda, then Shadow Rapporteur on the EU Copyright Directive in the European Parliament, put together a representative sample of the many different ways in which filters fail. A recent series of tweets by Markus Pössel, Senior Outreach Scientist at the Max Planck Institute for Astronomy, exposes rather well the key issues, which have not improved since then.

Facebook muted 41 seconds of a video he uploaded to Facebook because Universal Music Group (UMG) claimed to own the copyright for some of the audio that was played. Since the music in question came from Bach’s Well-Tempered Clavier, and Bach died in 1750, there’s obviously no copyright claim on the music itself, which is definitely in the public domain. Instead, it seems, the claim was for the performance of this public domain music, which UMG says was played by Keith Jarrett, a jazz and classical pianist, and noted interpreter of Bach. Except that it wasn’t, as Pössel explains:

Either I am flattered that a Bach piece that I recorded with my own ten fingers on my digital keyboard sounds just like when Keith Jarrett is playing it. Or be annoyed by the fact that @UMG is *again* falsely claiming music on Facebook that they definitely do not own the copyright to.

This underlines the fact that upload filters may recognize the music – that’s not hard – but they are terrible at recognizing the performer of that music. It gets worse:

OK, I’ll go with “very annoyed” because if I then continue, Facebook @Meta DOES NOT EVEN GIVE ME THE OPTION TO COMPLAIN. They have grayed out the option to dispute the claim. They are dead wrong, but so sure of themselves that they do not even offer the option of disputing the claim, even though their system, in principle, provides such an option. And that, in a nutshell, is what’s wrong with companies like these today. Algorithms that make mistakes, biased towards big companies like @UMG.

This absurd situation is a foretaste of what is almost certainly going to happen all the time once major platforms are forced to use upload filters in the EU to comply with Article 17 of the Copyright Directive. Not only will they block legal material, but there will probably be a presumption that the algorithms must be right, so why bother complaining, when legislation tips the balance in favor of Big Content from the outset?
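As a side note on why a filter can match the music while being terrible at identifying the performer: features derived from which pitches are played are nearly identical across two performances of the same piece. The toy sketch below is not how Audible Magic or any real fingerprinting system actually works (those are proprietary); it only illustrates the basic failure mode, with every name and number invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_piece(scale: list[int], length: int = 400) -> np.ndarray:
    # Real pieces concentrate on the pitch classes of their key, so two
    # different pieces tend to have quite different pitch-class profiles.
    return rng.choice(scale, size=length)

def toy_fingerprint(notes: np.ndarray, performance_noise: float) -> np.ndarray:
    """Toy stand-in for an audio fingerprint: a normalized histogram over the
    12 pitch classes, plus noise standing in for differences in instrument,
    tempo and recording between performances."""
    hist = np.bincount(notes % 12, minlength=12).astype(float)
    hist += rng.normal(0.0, performance_noise, size=12)
    hist = np.clip(hist, 0.0, None)
    return hist / np.linalg.norm(hist)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b)  # cosine similarity of unit vectors

c_major = [0, 2, 4, 5, 7, 9, 11]   # stand-in for the Bach prelude's pitch content
unrelated = [1, 3, 6, 8, 10]       # stand-in for a genuinely different piece

prelude_notes = random_piece(c_major)
reference_recording = toy_fingerprint(prelude_notes, performance_noise=3.0)  # the claimed recording
home_recording = toy_fingerprint(prelude_notes, performance_noise=3.0)       # someone else playing the same piece
other_recording = toy_fingerprint(random_piece(unrelated), performance_noise=3.0)

MATCH_THRESHOLD = 0.95
print(similarity(reference_recording, home_recording) > MATCH_THRESHOLD)   # True: flagged as the licensed recording
print(similarity(reference_recording, other_recording) > MATCH_THRESHOLD)  # False
```

The features are driven almost entirely by the notes, not by who played them, so a threshold tuned to catch re-uploads of a specific recording will also fire on an entirely independent performance of the same public domain work.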

Follow me @glynmoody on Twitter, Diaspora, or Mastodon. Originally posted to WalledCulture.

Filed Under: copyright, copyright filters, counterclaim, counternotice, mistakes, public domain
Companies: facebook, umg, universal music group

Trumpist Gettr Social Network Continues To Speed Run Content Moderation Learning Curve: Bans, Then Unbans, Roger Stone

from the welcome-to-the-content-moderation-game dept

Remember Gettr? That's the Trumpist social network run by former Trump spokesperson (and vexatious lawsuit filer) Jason Miller that promised to be supportive of "free speech." As we point out with every new social network that jumps into the space with promises to "support free speech!" and "not censor!": before long they begin to realize that content moderation is required to keep the site running — and soon they discover that content moderation will involve difficult choices. And, sometimes, it involves making mistakes.

Of course, whenever Twitter, Facebook, Instagram or whoever else note that they made a “mistake” with a content moderation decision and reverse it, there are always some people who insist it couldn’t possibly be a mistake and must be [insert conspiratorial reason here]. So I find it hilarious that on Wednesday, Gettr got to experience all this as well. First, Trump buddy and admitted “dirty trickster” Roger Stone went on Gab — another such social network — to whine about how he was “censored” by Gettr, claiming it was because he had made what he believed were disparaging remarks about Miller (and Steve Bannon).

"Gettr actually suspended my account," Roger Stone wrote on Gab. He further claimed that Steve Bannon and Jason Miller are lovers of sorts – and his banning was due to being critical of the duo.

— Zachary Petrizzo (@ZTPetrizzo) August 25, 2021

This graphic alone is pretty hilarious, given all the people who insisted that Gettr doesn’t do moderation.

A few hours later, Jason Miller told Salon reporter Zachary Petrizzo that it was all a mistake, and blamed the problem on a bunch of fake Stone accounts that the site was trying to get rid of. In the process, Miller claimed, they "accidentally" deleted the real Stone's account. The timing, coming right after Stone made remarks about Miller, was, apparently, entirely coincidental.

“Multiple fake Roger Stone accounts were suspended following user complaints, but his real Gettr account was inadvertently suspended too,” Miller told Salon via email. “His correct account is currently active, and the imposter accounts have all been removed.”

It's almost like… running a social media site that malicious actors are trying to abuse requires content moderation, and that will often involve making mistakes, rather than, say, "bias" or desires to silence particular people.

Who knows why Gettr actually suspended Stone? These kinds of things happen all the time on all the other social media sites too. It doesn't mean it's malicious or directed at silencing a critic (though, honestly, there's a higher likelihood that is true with a site like Gettr than with Twitter or Facebook, which actually have real policies in place and a history of training staff to moderate judiciously). It could actually be a mistake. Just like Twitter and Facebook sometimes make mistakes.

I’m still waiting for Miller and his fans to realize that all of these challenges that Gettr is facing are no different than the ones that other social media apps faced — and that maybe all their earlier complaints of “censorship” were just as bullshit then as the claims about Gettr being censorial today are. I get the feeling that I’ll be waiting a long, long time for that recognition to set in.

Filed Under: anti-conservative bias, content moderation, jason miller, mistakes, roger stone
Companies: gab, gettr

Oversight Board Tells Facebook It Needs To Shape Up And Be More Careful About Silencing Minorities Seeking To Criticize The Powerful

from the pay-attention-to-this dept

Tomorrow, the Oversight Board is set to reveal its opinion on whether Facebook made the right decision in banning former President Trump. And that will get tons of attention. But the Board came out with an interesting decision last week regarding a content takedown in India, that got almost no attention at all.

Just last week, we wrote about an ongoing issue in India, where the government of Prime Minister Narendra Modi has failed in almost every way possible in dealing with the COVID pandemic, but has decided the best thing to focus on right now is silencing critics on Twitter. That backdrop is pretty important considering that the very next day, the Oversight Board scolded Facebook for taking down content criticizing Modi’s government.

That takedown was somewhat different and the context was very different. Also, it should be noted that as soon as the Oversight Board agreed to take the case, Facebook admitted it had made a mistake and reinstated the content. However, this case demonstrates something important that often gets lost in all of the evidence-free hand-wringing about "anti-conservative bias" from people who wrongly insist that Facebook and Twitter only moderate the accounts of their friends. The truth is that content all across the board gets moderated — and often the impact is strongest on the least powerful groups. But, of course, part of their lack of power is that they're unable to rush onto Fox News and whine about how they're being "censored."

The details here are worth understanding, not because there was some difficult decision to make. Indeed, as noted already, Facebook realized it made a mistake almost immediately after the Oversight Board decided to look into this, and when asked why the content was taken down, basically admitted that it had no idea and that it was a complete and total mistake. Here was the content, as described by the Oversight Board ruling:

The content touched on allegations of discrimination against minorities and silencing of the opposition in India by "Rashtriya Swayamsevak Sangh" (RSS) and the Bharatiya Janata Party (BJP). RSS is a Hindu nationalist organization that has allegedly been involved in violence against religious minorities in India. "BJP" is India's ruling party to which the current Indian Prime Minister Narendra Modi belongs, and has close ties with RSS.

In November 2020, a user shared a video post from Punjabi-language online media Global Punjab TV and an accompanying text. The post featured a 17-minute interview with Professor Manjit Singh, described as "a social activist and supporter of the Punjabi culture." In its post, Global Punjab TV included the caption "RSS is the new threat. Ram Naam Satya Hai. The BJP moved towards extremism." The media company also included an additional description "New Threat. Ram Naam Satya Hai! The BJP has moved towards extremism. Scholars directly challenge Modi!" The content was posted during India's mass farmer protests and briefly touched on the reasons behind the protests and praised them.

The user added accompanying text when sharing Global Punjab TV's post in which they stated that the CIA designated the RSS a "fanatic Hindu terrorist organization" and that Indian Prime Minister Narendra Modi was once its president. The user wrote that the RSS was threatening to kill Sikhs, a minority religious group in India, and to repeat the "deadly saga" of 1984 when Hindu mobs attacked Sikhs. They stated that "The RSS used the Death Phrase 'Ram naam sat hai'." The Board understands the phrase "Ram Naam Satya Hai" to be a funeral chant that has allegedly been used as a threat by some Hindu nationalists. The user alleged that Prime Minister Modi himself is formulating the threat of "Genocide of the Sikhs" on advice of the RSS President, Mohan Bhagwat. The accompanying text ends with a claim that Sikhs in India should be on high alert and that Sikh regiments in the army have warned Prime Minister Modi of their willingness to die to protect the Sikh farmers and their land in Punjab.

The post was up for 14 days and viewed fewer than 500 times before it was reported by another user for "terrorism." A human reviewer determined that the post violated the Community Standard on Dangerous Individuals and Organizations and took down the content, which also triggered an automatic restriction on the use of the account for a fixed period of time. In its notification to the user, Facebook noted that its decision was final and could not be reviewed due to a temporary reduction in its review capacity due to COVID-19. For this reason, the user appealed to the Oversight Board.

So, you had an ethnic minority — one who had been attacked in the past — warning about those currently in power. And Facebook took it down, refused to review the appeal… until the Oversight Board turned its eye on it, and then admitted it was a mistake, and basically threw its hands in the air and said it had no idea why it had been taken down in the first place.

According to Facebook, following a single report against the post, the person who reviewed the content wrongly found a violation of the Dangerous Individuals and Organizations Community Standard. Facebook informed the Board that the user's post included no reference to individuals or organizations designated as dangerous. It followed that the post contained no violating praise.

Facebook explained that the error was due to the length of the video (17 minutes), the number of speakers (two), the complexity of the content, and its claims about various political groups. The company added that content reviewers look at thousands of pieces of content every day and mistakes happen during that process. Due to the volume of content, Facebook stated that content reviewers are not always able to watch videos in full. Facebook was unable to specify the part of the content the reviewer found to violate the company's rules.

Got that? Facebook is basically saying “yeah, it was a mistake, but that was because it was a long video, and we just had one person reviewing who probably didn’t watch the whole video.”

Here's the thing that the "oh no, Facebook is censoring people" crowd doesn't get. This happens all the time. And none of us hear about it because the people it happens to often are unable to make themselves heard. They don't get to run to Fox News or Parler or some other place and yell and scream. And, this kind of "accidental" moderation especially happens to the marginalized. Reviewers may not fully understand what's going on, or not really understand the overall context, and may take the "report" claim at face value, rather than having the ability or time to fully investigate.

In the end, the Oversight Board told Facebook to put back the content, which was a no-brainer since Facebook had already done so. However, more interesting were its policy recommendations (which, again, are not binding on Facebook, but which the company promises to respond to). Here, the Oversight Board said that Facebook should make its community standards much more accessible and understandable, including translating the rules into more languages.

However, the more interesting bit was that it said that Facebook "should restore human review and access to a human appeals process to pre-pandemic levels as soon as possible while fully protecting the health of Facebook's staff and contractors." There were some concerns, early in the pandemic, about how well content moderation teams could work from home, since a lot of that job involves looking at fairly sensitive material. So, there may be reasons this is not really doable just yet.

Still, this case demonstrates a key point that we’ve tried to raise about the impossibility of doing content moderation at scale. So much of it is not about biases, or incompetence, or bad policies, or not wanting to do what’s right. A hell of a lot of it is just… when you’re trying to keep a website used by half the world operating, mistakes are going to be made.

Filed Under: appeals, content moderation, free speech, india, minorities, mistakes, review, takedowns
Companies: facebook, oversight board

Appeals Court Says Address Mistakes On Warrants Are Mostly Harmless, Not Worth Getting Excited About

from the what-even-the-fuck dept

In a case involving a drug bust utilizing a warrant with erroneous information, the Sixth Circuit Court of Appeals had this to say [PDF] about the use of boilerplate language and typographical errors:

Challenges to warrants based on typographical errors or factual inaccuracies typically fall under this Circuit’s clerical error exception. We have consistently found that inadvertent drafting mistakes, for instance transposing a number in a street address or listing an incorrect nearby address, do not violate the Fourth Amendment’s prohibition on unreasonable searches and seizures. That is because those errors create little risk of a mistaken search or a general warrant granting police an unconstitutionally broad authority to conduct searches.

The order to search listed the wrong address. Here’s why:

This description at the beginning of the warrant correctly directed officers to Abdalla’s precise New Hope Road address in DeKalb County. But the warrant’s final paragraph “commanded” officers “to search the . . . premises located at 245 Carey Road, Hartsville, Trousdale Tennessee.” (R. 20-1, Search Warrant, Page ID # 59.) Agent Gooch testified that the Carey Road address came from using a previous warrant as a template. Although Judge Patterson had jurisdiction over Abdalla’s residence in DeKalb County, he lacked jurisdiction in Trousdale County, which encompassed the Carey Road property listed on the warrant’s final page.

The court says this is harmless. Rather than suppress evidence in hopes that cops won’t just copy-paste “sworn statements” before running them by a judge, the Appeals Court says this creates “little risk” of “mistaken searches.” Perhaps in this case the risk was minimal. The rest of the warrant correctly described the residence and how to locate it. But to pretend careless warrant crafting rarely results in “mistaken searches” ignores how often it happens — and how often this supposed low-risk “mistake” results in real harm.

“Little risk?” Here’s what’s actually happening in the Sixth Circuit, which covers Kentucky, Michigan, Ohio, and Tennessee.

Oak Park, Michigan (November 2019): Police raid the wrong side of a duplex, breaking windows and the front door before realizing their mistake.

Flint, Michigan (October 2014): Troopers go to the wrong house to locate a fugitive, shoot family’s dog in the face.

Detroit, Michigan (May 2017): After conducting a one-day(!) human trafficking investigation, a SWAT team raids the wrong house, handcuffs everyone present (including two children) before discovering their mistake.

Detroit, Michigan (September 2017): DEA agents raid two(!) wrong addresses. The forty officers(!!) recover no drugs. Search warrants and property receipts left at the properties by the feds were blank, according to this report.

Detroit, Michigan (April 2017): Police raid wrong house, kill homeowner's dog.

Nashville, Tennessee (August 2020): Three cops raid wrong house, traumatizing the resident and two young children. Officers predicated the search on housing information that hadn’t been updated since November 2018.

Lebanon, Tennessee (January 2006) – Officers raid wrong house, kill 61-year-old man while his wife is handcuffed in another room.

Louisville, Kentucky (October 2018) – Officers (three of whom shot and killed Breonna Taylor during another botched raid) using outdated information raid a house looking for someone who had moved out four months earlier.

Bowling Green, Kentucky (July 2016) – Police raid the wrong house looking for a Black suspect. Officers handcuff and question the homeowner, who weighs 100 pounds less than the suspect they’re looking for. The interrogated homeowner is also one foot taller than the suspect. He’s also white.

Louisville, Kentucky (January 2020) – Officers enter the wrong house seeking a shooting suspect, handcuffing one of the residents.

Louisville, Kentucky (July 2020) – Cops raid a vacant house looking for a drug suspect who had already been arrested and was in jail. Officers break windows, destroy a door, and handcuff the man hired to paint the interior of the vacant residence.

Cleveland, Ohio (November 2018) – Wrong house raided during a shooting investigation. Cops cause over $8,000 of physical damage to the house and spend an hour interrogating all the residents — some of whom are disabled — before realizing their mistake.

Strongsville, Ohio (May 2010) – A man and his 14-year-old daughter are forced out of their house and made to lay face down on the lawn until officers realize they have the wrong address.

Cleveland, Tennessee (May 2018) – DEA and local cops raid wrong house in search of murder suspect. Flashbangs are deployed into the house despite the presence of young children — something officers should have been able to discern from the number of toys around the front entry of the residence.

This is just a small sampling. And this is just from this circuit, which covers only four of the 50 states. This happens far too frequently for it to be shrugged off by an Appeals Court, even if the facts of the case might lead the court to conclude a mistake in an affidavit doesn’t warrant the suppression of evidence.

The Fourth Amendment places the sanctity of the home above all else. And yet, officers continue to perform searches without performing the due diligence required to support a home invasion. Outdated info, unverified claims by informants, minimal investigative work… it all adds up to situations where rights are violated and residents are recklessly subjected to violence and deadly force.

How bad can it get? Here’s a true horror story that shows just how little law enforcement agencies care about the people they’re supposed to protect and serve:

Embarrassed cops on Thursday cited a “computer glitch” as the reason police targeted the home of an elderly, law-abiding couple more than 50 times in futile hunts for bad guys.

Apparently, the address of Walter and Rose Martin‘s Brooklyn home was used to test a department-wide computer system in 2002.

What followed was years of cops appearing at the Martins’ door looking for murderers, robbers and rapists – as often as three times a week.

Every wrong visit to the house was a chance for officers to respond with deadly force to perceived threats. That these residents managed to survive 50+ incidents with cops looking for violent criminals is a miracle. “Mistaken searches” are not an acceptable outcome. Blanket statements like these issued by courts just give cops more reasons to cut corners before banging their way into someone’s home in search of nonexistent criminals or criminal activity.

Filed Under: 4th amendment, 6th circuit, mistakes, police, warrants, wrong address

Moderation Of Racist Content Leads To Removal Of Non-Racist Pages & Posts (2020)

from the moderation-mistakes dept

Summary: Social media platforms are constantly seeking to remove racist, bigoted, or hateful content. Unfortunately, these efforts can cause unintended collateral damage to users who share surface similarities to hate groups, even though many of these users take a firmly anti-racist stance.

A recent attempt by Facebook to remove hundreds of pages associated with bigoted groups resulted in the unintended deactivation of accounts belonging to historically anti-racist groups and public figures.

The unintentional removal of non-racist pages occurred shortly after Facebook engaged in a large-scale deletion of accounts linked to white supremacists, as reported by OneZero:

Hundreds of anti-racist skinheads are reporting that Facebook has purged their accounts for allegedly violating its community standards. This week, members of ska, reggae, and SHARP (Skinheads Against Racial Prejudice) communities that oppose white supremacy are accusing the platform of wrongfully targeting them. Many believe that Facebook has mistakenly conflated their subculture with neo-Nazi groups because of the term "skinhead."

The suspensions occurred days after Facebook removed 200 accounts connected to white supremacist groups and as Mark Zuckerberg continues to be scrutinized for his selective moderation of hate speech.

Dozens of Facebook users from around the world reported having their accounts locked or their pages disabled due to their association with the “skinhead” subculture. This subculture dates back to the 1960s and predates the racist/fascist tendencies now commonly associated with that term.

Facebook's policies have long forbidden the posting of racist or hateful content. Its ban on "hate speech" encompasses the white supremacist groups it targeted during its purge of these accounts. The removals of accounts not linked to racism — but linked to the term "skinhead" — were accidental, presumably triggered by a term now commonly associated with hate groups.
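To illustrate how that kind of collateral damage happens, here is a hypothetical sketch of a term-weighted scorer; the weights, threshold, and page descriptions are invented, but the failure mode (a signal that sees the word, not the context) is the point.

```python
# Hypothetical term-weighted scorer: the word "skinhead" carries a large
# hate-signal weight regardless of the surrounding context.
HATE_TERM_WEIGHTS = {"skinhead": 0.8, "white power": 0.95, "1488": 0.9}

def hate_score(page_description: str) -> float:
    text = page_description.lower()
    return max((weight for term, weight in HATE_TERM_WEIGHTS.items() if term in text),
               default=0.0)

pages = [
    "White power skinhead crew - 1488",                       # an actual hate page
    "SHARP - Skinheads Against Racial Prejudice, est. 1987",   # anti-racist group
    "Trojan ska and reggae skinhead revival night",            # music subculture
]

for page in pages:
    blocked = hate_score(page) >= 0.7
    print(blocked, "-", page)

# All three cross the threshold: the anti-racist and music pages become
# collateral damage of a signal that only sees the word, not the meaning.
```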

Questions to consider:

Resolution: Facebook's response was nearly immediate. The company apologized to users shortly after OneZero reported the apparently erroneous deletion of non-racist pages. Guy Rosen (VP of Integrity at Facebook) also apologized on Twitter to the author of the OneZero post, saying the company had removed these pages in error during its mass deletion of white supremacist pages/accounts and that it was looking into the error.

Filed Under: bias, case studies, content moderation, mistakes, racist speech, skinheads
Companies: facebook

EFF Highlights Stories Of Bad Content Moderation With New TOSsed Out Site

from the content-moderation-is-impossible dept

We’ve pointed out for many years that content moderation at scale isn’t just hard, it’s impossible to do well. At the scale of giant platforms, there needs to be some level of moderation or the platforms and users will get overwhelmed with spam or abuse. But at that scale, there will be a ton of mistakes — both type I and type II errors (blocking content that shouldn’t be blocked and failing to block content that probably should be blocked). Some — frankly dishonest — people have used a few examples of certain content moderation choices to falsely claim that there is “anti-conservative bias” in content moderation choices. We’ve pointed out time and time again why the evidence doesn’t support this, though many people insist it’s true (and I’ll predict they’ll say so again in the comments, but when asked for evidence, they will fail to present any).

That’s not to say that the big platforms and their content moderation practices are done well. As we noted at the very beginning, that’s an impossible request. And it’s important to document the mistakes. First, it helps get those mistakes corrected. Second, while it will still be impossible for the platforms to moderate well, they can still get better and make fewer errors. Third, it can help people understand that errors are not because someone hates you or has animus towards a political group or political belief, but because they fuck up the moderation choices all the time. Fourth, it can actually help to find what actual patterns there are in these mistakes, rather than relying on moral panics. To that end, it’s cool to see that the EFF has launched a new site, creatively dubbed TOSsed Out to help track stories of bad content moderation practices.

Just looking through the stories already there should show you that bad content moderation choices certainly aren’t limited to “conservatives,” but certainly do seem to end up impacting actually marginalized groups:

EFF is launching TOSsed Out with several examples of TOS enforcement gone wrong, and invites visitors to the site to submit more. In one example, a reverend couldn't initially promote a Black Lives Matter-themed concert on Facebook, eventually discovering that using the words "Black Lives Matter" required additional review. Other examples include queer sex education videos being removed and automated filters on Tumblr flagging a law professor's black and white drawings of design patents as adult content. Political speech is also impacted; one case highlights the removal of a parody account lampooning presidential candidate Beto O'Rourke.

"The current debates and complaints too often center on people with huge followings getting kicked off of social media because of their political ideologies. This threatens to miss the bigger problem. TOS enforcement by corporate gatekeepers far more often hits people without the resources and networks to fight back to regain their voice online," said EFF Policy Analyst Katharine Trendacosta. "Platforms over-filter in response to pressure to weed out objectionable content, and a broad range of people at the margins are paying the price. With TOSsed Out, we seek to put pressure on those platforms to take a closer look at who is being actually hurt by their speech moderation rules, instead of just responding to the headline of the day."

As the EFF notes, this is something of a reaction to the White House’s intellectually dishonest campaign database building program to get people to report stories of social media bias against conservatives. Unlike the White House, which is focused on pulling some one-sided anecdotes it can misrepresent for political points, the TOSsed out effort is a real opportunity to track what kinds of mistakes happen in content moderation and how to deal with them.

Filed Under: content moderation, mistakes, tossed out, type 1 errors, type 2 errors
Companies: eff

Watchdog Says Australia's Traffic Enforcement System Has Hit Hundreds Of Drivers With Bogus Fines

from the move-fast-and-break-things:-hefty-bureaucracy-edition dept

In Australia, the government has automated traffic enforcement, letting the machines do the work. It’s also automated the limited due process procedures, giving aggrieved citizens the chance to have their complaints and challenges ignored at scale, as The Newspaper reports.

At least 397 motorists in Victoria, Australia, lost their right to drive because the state government bungled the handling of speed camera fines. In a report released Wednesday, Victoria Ombudsman Deborah Glass blasted Fines Victoria, the agency responsible for overseeing the handling of citations. The report reviewed 605 complaints from members of the public about how this year-old state agency handled their situations.

“Many complaints were about delay in the processing of nominations, completing reviews and implementing payment plans,” Glass said in a statement. “The impact of these issues should not be underestimated. People had their licenses wrongly suspended, or were treated as liable for substantial fines, when they had committed no offense.”

Automating the back end and the front end results in this sort of thing — just one of several personal anecdotes the Ombudsman collected from angry Australians forced to deal with the user-unfriendly system. From the report [PDF]:

Dan contacted the Ombudsman about infringements incurred by his late son who had died in tragic circumstances. Dan said he made multiple attempts to call Fines Victoria, but on each occasion the phone went to an automated service, or he was on hold for a long time.

Dan also said he had sent through multiple complaints online but had not received a response. We contacted Fines Victoria, providing a copy of the Coroner’s ‘confirmation of death certificate’ for Dan’s son. We followed up several times after receiving no response. Fines Victoria said later they had not received the email from us.

Dan received further enforcement letters which he said was extremely stressful and upsetting. He became increasingly concerned the Sheriff would go to his son’s home which he had shared with housemates.

Fines Victoria put all 27 outstanding matters, totalling $7,636.60, on hold pending their withdrawal. The enquiry process, however, went for close to 40 days. This is despite Dan providing all relevant information to Fines Victoria a number of times before contacting us.

That’s one of the possible outcomes of Fines Victoria mishandling its end of the complaint process: law enforcement showing up to seize property and auction it off to pay fines. All of this is set in motion automatically via date triggers. From the report, it appears Australian citizens can spend every day of their challenge periods on hold without ever reaching anyone who could help them with a resolution.
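To spell out the "date triggers" problem: in a system like the one described (sketched hypothetically below, not taken from Fines Victoria's actual code; the stage names, day counts, and amounts are invented), escalation keys off elapsed time alone, so an unanswered complaint does nothing to stop the next enforcement step.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Infringement:
    issued: date
    amount: float
    review_requested: bool = False   # the motorist has lodged a complaint...
    review_resolved: bool = False    # ...but no one has looked at it yet

def enforcement_stage(inf: Infringement, today: date) -> str:
    # Escalation is driven purely by elapsed time. A pending, unresolved
    # review does not pause the clock, so fees and enforcement proceed anyway.
    age = today - inf.issued
    if age > timedelta(days=90):
        return "enforcement order / licence suspension"
    if age > timedelta(days=28):
        return "penalty reminder + late fee"
    return "initial notice"

inf = Infringement(issued=date(2018, 1, 10), amount=322.0, review_requested=True)
print(enforcement_stage(inf, today=date(2018, 6, 1)))  # escalates despite the open review
```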

Even the lucky few who manage to speak to a live person aren’t going to receive much help. The multiple failures detailed in the report received responses from Fines Victoria ranging from “not our fault” to “we’ll try not to screw so many people over in the future.”

One citizen sums up the Fines Victoria experience succinctly:

When we make a mistake, we pay. When they make a mistake, we pay.

Some of this could have been prevented if Fines Victoria had used all of its allotted time to ensure proper functionality before bringing the system online. Instead, the government decided to rush implementation, leading to a spectacular run of failures that stripped people of their licenses, hit them with bogus fees, and otherwise made their lives miserable.

On 31 December 2017, Fines Victoria began operation with only partial IT functionality. Problems with IT functionality, and related procedural and processing issues, have been apparent since the inception of the agency…

It was not clear why, with a default commencement date of 31 May 2018, the decision was taken to commence earlier than this on 31 December 2017. This decision was perplexing considering Fines Victoria was reporting that significant IT challenges were affecting its operation from the time it commenced.

This rush job led directly to Australians being punished by a system that was much, much worse than the one it replaced.

Complaints received about Fines Victoria in 2018 represented a 74 per cent increase from the number of complaints received the previous year about Civic Compliance Victoria.

All the new system added was “significant IT challenges,” significant hold times, and a much lower chance of having problems resolved.

In addition to the rise in the number of complaints, Ombudsman staff observed a change in the nature of the complaints. People were expressing frustration about delays and being unable to make contact with Fines Victoria much more frequently than with the predecessor agency. There was also an increase in complexity about the administration of infringements which meant many complaints could not be resolved through a quick series of enquiries.

It’s not just the software. It’s also the people. The Ombudsman notes that the problems in the system were made worse by the government employees fielding complaints, who often seemed to go out of their way to push problems back on drivers.

Enforcement at this scale isn’t possible without at least some automation. But the government’s botched implementation shows it cared more about maximizing revenue than minimizing error. And when everything goes this wrong this quickly, it’s the citizens who end up paying for both the end results of a broken system and the costs involved in fixing it.

Filed Under: australia, automated traffic enforcement, fines, mistakes, traffic enforcement, victoria

Washington Prison Management Software Setting People Free Too Early, Keeping Other People Locked Up Too Long

from the on-average,-it-works dept

All this technology is getting in the way of justice being served. For the second time in five years, Washington’s Department of Corrections is dealing with issues created by its prisoner management software. Four years ago, this happened:

For three years, state Department of Corrections staff knew a software-coding error was miscalculating prison sentences and allowing inmates to be released early. On Tuesday, Gov. Jay Inslee gave the damning tally: up to 3,200 prisoners set free too soon since 2002.

For thirteen years, officials knew there was a problem with the software but did nothing about it. It wasn’t until the state’s governor got involved that anyone at the DOC started caring about its malfunctioning code. The code was supposedly fixed but new problems arose, affecting both sides of the jail walls.

A software problem has caused at least a dozen Washington prison inmates to be released too early — or held too long — and has sparked a review of as many as 3,500 cases.

Department of Corrections (DOC) officials are scrambling to determine whether the other inmates’ sentences were miscalculated and are still working to gauge the scope of the problem.

The previous bug miscalculated “good time” credits, resulting in thousands of premature releases. This time around, buggy code is screwing up calculations for inmates who have violated their parole. A few have benefited from the problem. But most of the cases being reviewed involve inmates who have been jailed for too long.
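As a purely hypothetical illustration of how a "good time" bug produces early releases (the DOC has not published its code, and the real sentencing formula is far more complicated than this), consider a toy calculator that applies earned-time credit to a portion of the sentence that is not supposed to earn it:

```python
from datetime import date, timedelta

def release_date(start: date, base_days: int, enhancement_days: int,
                 good_time_rate: float, buggy: bool) -> date:
    """Toy sentence calculator. Credit is meant to reduce only the base
    sentence; the buggy version applies it to the enhancement as well."""
    credited = (base_days + enhancement_days) if buggy else base_days
    reduction = int(credited * good_time_rate)
    return start + timedelta(days=base_days + enhancement_days - reduction)

start = date(2010, 1, 1)
print(release_date(start, base_days=1800, enhancement_days=365,
                   good_time_rate=1/3, buggy=False))
print(release_date(start, base_days=1800, enhancement_days=365,
                   good_time_rate=1/3, buggy=True))

# The buggy path shaves roughly four extra months off this sentence. An error
# of that kind, applied system-wide for years, is how releases end up
# miscalculated by the thousands.
```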

The previous calculation error resulted in two homicides by an inmate who was released too early. This time around, it’s far more likely inmates who have served their time aren’t being released. Either way, there’s life and liberty on the line and the Department of Corrections is showing little sense of urgency when addressing these problems.

If there’s a silver lining, it’s this: with enough votes, the state’s convoluted sentencing laws will be simplified, potentially making the calculation of sentences easy enough any human can do it.

[State Representative Roger] Goodman has proposed legislation, House Bill 1495, creating an 18-member task force that would review and recommend simplifications to the state’s sentencing guidelines, with a final report due by the end of 2020. He said his committee also may ask DOC leaders for a public briefing and explanation of the latest sentence-calculation problems.

Of course, this could take as long to fix as it took the state DOC to fix its original software problem. No answers would be expected for another 18 months, by which point new bugs in the DOC's software may surface and start handing out get-out-of-jail-free cards to some inmates and go-directly-to-jail cards to others who've already served their time.

And if it’s not working now, there’s a good chance the DOC’s software will never function properly. Part of the problem is the legislature itself, which complicates sentence calculations by adding new wrinkles with each legislative session. According to the Seattle Times, at least 60 bills in the pipeline could affect sentencing guidelines, increasing the chance of new calculation errors developing.

Software may be the only way to handle a job this complex. But those overseeing the software’s deployment have shown they’re not too interested in proactive maintenance of this complex system. Problems are eventually solved, years after the fact. That sort of responsiveness is unacceptable when guidelines are being constantly altered by new legislation.

Filed Under: department of corrections, inmates, mistakes, prison management, software, washington

There Was Heavy Tech Lobbying On Article 13… From The Company Hoping To Sell Everyone The Filters

from the lobbyist-influence dept

One of the key themes we've been hearing for years now concerning the EU's awful Article 13 section of the EU Copyright Directive was that no one should pay any attention at all to the critics of Article 13, because it's all just "big tech lobbying" behind any of the criticism. In the past, we highlighted a few of these claims:

Here’s Geoff Taylor from BPI:

> The US tech lobby has been using its enormous reach and resources to try to whip up an alarmist campaign…

And here’s Richard Ashcroft from PRS for Music:

> the Internet giants… have whipped up a social media storm of misinformation about the proposed changes in order to preserve their current advantage.

And how about UK Music’s Michael Dugher who really wants to blame Google for everything:

> Some absolute rubbish has been written about the EU's proposed changes on copyright rules. > > Amongst the ludicrous suggestions from the likes of Google is the claim that the shake-up will mean the end of memes, remixes and other user-generated content. Some have said that it will mean "censorship" and even wildly predicted it will result in the "death of the internet". > > This is desperate and dishonest. Whilst some of the myths are repeated by people who remain blissfully untroubled by the technical but crucially important details of the proposed EU changes, in the worst cases this propaganda is being cynically pedalled by big tech like Google's YouTube with a huge vested and multi-million-pound interest in this battle.

However, as we wrote about back in December, an analysis that looked at the actual lobbying efforts around copyright in the EU found that it was done overwhelmingly by the legacy copyright industries, and only sparingly by the tech companies. In that post, I went through a spreadsheet looking at the lobbying of the EU Commission, and found that over 80% of the meetings were from the entertainment industry.

But as is now coming out, there was definitely one “tech” company among the most aggressive lobbyists on Article 13. It was lobbying in favor of it, though, because it knew Article 13 would create an artificial, but highly inflated, demand for internet filters. And that’s the company known for building the filtering technology behind nearly all of the non-ContentID copyright filters: Audible Magic.

Law professor Annemarie Bridy recently posted a detailed Twitter thread of Audible Magic’s lobbying activities regarding Article 13. It’s easy to see why the company did so: the law, if put into effect, would be a huge, huge benefit for Audible Magic, more or less forcing nearly every internet platform of a decent size to purchase Audible Magic’s technology. Indeed, in the run-up to Article 13, we heard directly from policymakers in the EU who would point to Audible Magic as “proof” that filtering technology was readily available for not much money and that it worked. Neither of those claims is accurate.

On the fees, Audible Magic has a public pricing page that supporters of Article 13 frequently point to, often with the claim that “fees start as low as $1,000 per month.” But… that’s not accurate. The $1,000 tier only applies to “on device” databases. Hosted databases start at $2,000 per month, which is already double that… and the $2,000 per month only covers very low levels of usage. Indeed, the usage caps are so low that it’s hard to imagine any company using Audible Magic at that rate making very much (if any) money at all, meaning that, relatively speaking, Audible Magic would be a huge margin killer. And the rates quickly go up from there. Indeed, on Audible Magic’s pricing page, as soon as you get to a level that one might consider “sustainable” for a business, the prices become “contact us.”

A few years back I spoke with one mid-sized streaming company, which told me Audible Magic was quoting fees between $30,000 and $60,000 per month. An academic paper from 2017 found pricing to be slightly lower than what I had heard, but still quite expensive:

> Commercial fingerprinting and filtering services, such as Audible Magic and Vobile, do not publicly release pricing. But we can guess at the ballpark: one medium-sized file hosting service reported that its license for Audible Magic filtering cost $10,000-$12,000 per month in 2011 (though this provider was later able to negotiate a reduced rate based on the amount of content flagged through the system). Another estimated that Audible Magic cost its service roughly $25,000 per month. OSPs noted that the licensing fees are just the beginning. Filtering systems, several OSPs noted, are not turnkey services. They require integration with existing systems and upkeep as the OSP takes on new mediation roles between rightsholder and user (such as tracking and managing user appeals).

In other words, those things get really pricey quickly — such that it becomes untenable for all but the largest of service providers.
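To give a rough sense of what fees like that mean for a smaller platform, here’s a back-of-the-envelope sketch in Python. The fee levels come from the pricing discussed above; the monthly revenue figures are invented purely for illustration.

```python
# Back-of-the-envelope sketch. The filter fees are the figures discussed above;
# the monthly revenue numbers are invented purely for illustration.

def filter_cost_share(monthly_revenue: float, monthly_filter_fee: float) -> float:
    """Filtering fee expressed as a percentage of monthly revenue."""
    return 100.0 * monthly_filter_fee / monthly_revenue

scenarios = [
    ("small UGC site, entry hosted tier", 10_000, 2_000),
    ("mid-sized streaming site, low quote", 100_000, 30_000),
    ("mid-sized streaming site, high quote", 100_000, 60_000),
]

for name, revenue, fee in scenarios:
    share = filter_cost_share(revenue, fee)
    print(f"{name}: ${fee:,}/month is {share:.0f}% of revenue")
```

Even under these made-up revenue assumptions, the licensing fee alone eats a double-digit share of revenue before the platform pays for integration, upkeep, or handling user appeals.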

And that leads us to the second part, about whether or not these filters work. As we’ve been detailing for years, the answer is clearly no. Fingerprinting technologies make both false positive and false negative errors all the time; we probably have a few examples sent our way every single day. Incredibly, the very video that Bridy points out Audible Magic created as part of its lobbying effort says that Audible Magic’s technology is accurate to about 99%.

Last summer, we highlighted that Alec Muffett created a “simulator” to look at the error rates of such filtering technology. It showed that if you assume an accuracy level of 99.5% (higher than even Audible Magic claims) and run the filter across 10 million pieces of content, you’d end up censoring approximately 50,000 pieces of content that were non-infringing. 50,000. And that’s assuming the technology is more accurate than Audible Magic itself claims.
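For anyone who wants to check that arithmetic, here’s a tiny Python sketch that reproduces the rough math above. It treats all 10 million uploads as non-infringing and the error rate as the false-positive rate, which is a deliberate simplification, but it shows the scale of the problem.

```python
# Rough reproduction of the arithmetic above: treat all 10 million uploads as
# non-infringing and the error rate as the false-positive rate. This is a
# deliberate simplification, but it shows the scale of the problem.

def false_positives(total_items: int, accuracy: float) -> int:
    """Non-infringing items wrongly flagged at a given accuracy level."""
    return round(total_items * (1.0 - accuracy))

for accuracy in (0.99, 0.995, 0.999):
    blocked = false_positives(10_000_000, accuracy)
    print(f"accuracy {accuracy:.1%}: ~{blocked:,} legitimate uploads blocked")
```

Even a filter far more accurate than anything Audible Magic claims still blocks tens of thousands of legitimate uploads at that scale.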

And, of course, that’s solely discussing the matching accuracy. It says absolutely nothing about understanding user rights — like fair use, fair dealing, parody, etc — none of which Audible Magic takes into account (meaning even more non-infringing works would get censored).

And, yet, as Bridy shows, Audible Magic has been lobbying hard for this:

It’s not surprising that Audible Magic would lobby for such a thing. Which company wouldn’t lobby for a new law that effectively requires thousands of internet companies to buy its product, a product with little to no real competition? (Which, of course, almost certainly means Audible Magic could raise prices once the government required everyone to buy its filters.)

Bridy highlights that their lobbying claims are complete bullshit as well:

Here they are arguing that safe harbors, which everyone knows are designed to *remove* barriers to entry for companies that host UGC, *create* barriers to entry for paid subscriptions services that don't host UGC. ??? Note the invocation of IFPI-created "value gap" rhetoric. pic.twitter.com/G6RjosnWRy

— Annemarie Bridy (@AnnemarieBridy) January 19, 2019

Quite incredible that they highlight the “voluntary” nature of the filters while lobbying to make their technology required under law. And equally ridiculous that they claim intermediary liability protections somehow create “barriers” for online services. That, as Bridy points out, is exactly the opposite of reality. Safe harbors create clear rules that platforms understand, so they know what they need to do to set up a legitimate platform. Removing those rules, as Article 13 does, and requiring expensive (and terrible) filtering technology is a huge barrier, as the cost is prohibitive for most.

There’s more in that thread, but as Bridy shows, Audible Magic’s own presentation shows that it knows who the “winner” of Article 13 will be: Audible Magic inserting itself to become the de facto “copyright filter” layer of the internet:

Who will be one of the *biggest* $ winners if upload filters become enmeshed in the regulatory environment for all online intermediaries that host UGC? You guessed it: Audible Magic–right there in the middle of everything. pic.twitter.com/WcEFt7UDkd

— Annemarie Bridy (@AnnemarieBridy) January 19, 2019

So, yeah, there was some “tech” lobbying for Article 13 and its mandatory filters. It was just coming from the biggest supplier of those filters.

Filed Under: article 13, censorship, copyright, eu, eu copyright directive, filters, lobbying, mistakes
Companies: audible magic