algorithm – Techdirt

Lawsuit Claiming YouTube’s Algorithms Discriminate Against Minorities Tossed By Federal Court

from the not-your-usual-YouTube-lawsuit dept

This is a bit of an oddity. We’ve seen lots of lawsuits against social media services filed by bigots who are upset their accounts have been banned or their content removed because, well, they’re loaded with bigotry. You know, “conservative views,” as they say.

This one goes the other direction. It claims YouTube’s moderation algorithm is the real bigot here, supposedly suppressing content uploaded by non-white users. The potential class action lawsuit was filed in 2020, alleging YouTube suppressed certain content based on the creators’ racial or sexual identity, as Cal Jeffrey reports for Techspot (which, unlike far too many major news outlets, embedded the decision in its article).

“Instead of ‘fixing’ the digital racism that pervades the filtering, restricting, and blocking of user content and access on YouTube, Defendants have decided to double down and continue their racist and identity-based practices because they are profitable,” the lawsuit stated.

Whew, if true. YouTube has taken a lot of heat over the years for a lot of things related to its content moderation and recommendation algorithms (ranging from showing kids stuff kids shouldn’t see to the previously mentioned “censorship” of “conservatives”), but rarely has it been accused of being racist or bigoted. (That’s Facebook’s territory.)

While it’s not the first lawsuit of this particular kind we’ve covered here at Techdirt (that one was tossed in early July), this case certainly isn’t going to encourage more plaintiffs to make this sort of claim in court. (Well, one would hope, anyway…) While the plaintiffs do credibly allege something weird seems to be going on (at least in terms of the five plaintiffs), they fail to show that this handful of anecdotal observations is evidence of anything capable of sustaining a federal lawsuit.

From the decision [PDF]:

The plaintiffs in this proposed class action are African American and Hispanic content creators who allege that YouTube’s content-moderating algorithm discriminates against them based on their race. Specifically, they allege that their YouTube videos are restricted when similar videos posted by white users are not. This differential treatment, they believe, violates a promise by YouTube to apply its Community Guidelines (which govern what type of content is allowed on YouTube) “to everyone equally—regardless of the subject or the creator’s background, political viewpoint, position, or affiliation.” The plaintiffs thus bring a breach of contract claim against YouTube (and its parent company, Google). They also bring claims for breach of the implied covenant of good faith and fair dealing, unfair competition, accounting, conversion, and replevin.

YouTube’s motion to dismiss is granted. Although the plaintiffs have adequately alleged the existence of a contractual promise, they have not adequately alleged a breach of that promise. The general idea that YouTube’s algorithm could discriminate based on race is certainly plausible. But the allegations in this particular lawsuit do not come close to suggesting that the plaintiffs have experienced such discrimination.

As the court notes, the plaintiffs have been given several chances to salvage this suit. It was originally presented as a First Amendment lawsuit and was dismissed because YouTube is not a government entity. Amended complaints were submitted as the lawsuit ran its course, but none of them managed to overcome the lack of evidence supporting the plaintiffs’ allegations.

Shifting the allegations to involve California contract law hasn’t made the lawsuit any more winnable. Plaintiffs must show courts there’s something plausible about their allegations, and what was presented in this case simply doesn’t cut it. Part of the problem is the sample size. And part of the problem is whatever the hell this is:

The plaintiffs rely primarily on a chart that purports to compare 32 of their restricted videos to 58 unrestricted videos posted by white users. To begin with, the plaintiffs have dug themselves into a bit of a hole by relying on such a small sample from the vast universe of videos on YouTube. The smaller the sample, the harder it is to infer anything other than random chance. But assuming a sample of this size could support a claim for race discrimination under the right circumstances, the chart provided by the plaintiffs is useless.

As a preliminary matter, 26 of the 58 comparator videos were posted by what the complaint describes as “Large Corporations.” The complaint alleges that “Large Corporation” is a proxy for whiteness. See Dkt. No. 144 at 26–31; Dkt. No. 144 at 4 (defining, without support or elaboration, “users who Defendants identify or classify as white” as “including large media, entertainment, or other internet information providers who are owned or controlled by white people, and for whom the majority of their viewership is historically identified as white”). The plaintiffs have offered no principled basis for their proposition that corporations can be treated as white for present purposes, nor have they plausibly alleged that YouTube actually identifies or classifies corporations as white.

Drilling down into the specifics doesn’t help the plaintiffs either.

In terms of content, many of the comparisons between the plaintiffs’ restricted videos and other users’ unrestricted videos are downright baffling. For example, in one restricted video, a plaintiff attributes his recent technical difficulties in posting videos on YouTube to conscious sabotage by the company, driven by animus against him and his ideas. The chart in the complaint compares this restricted video with a tutorial on how to contact YouTube Support. In another example, the chart compares a video where a plaintiff discusses the controversy surrounding Halle Bailey’s casting as the Little Mermaid with a video of a man playing—and playfully commenting on—a goofy, holiday-themed video game.

Other comparisons, while perhaps not as ridiculous as the previous examples, nonetheless hurt the plaintiffs. For instance, the chart compares plaintiff Osiris Ley’s “Donald Trump Makeup Tutorial” with tutorials posted by two white users likewise teaching viewers how to create Trump’s distinctive look. But there is at least one glaring difference between Ley’s video and the comparator videos, which dramatically undermines the inference that the differential treatment was based on the plaintiff’s race. About a minute and a half into her tutorial, Ley begins making references to the Ku Klux Klan and describing lighter makeup colors as white supremacy colors. Ley certainly appears to be joking around, likely in an effort to mock white supremacists, but this would readily explain the differential treatment by the algorithm.

And so it goes for other specific examples offered by the plaintiffs:

Only a scarce few of the plaintiffs’ comparisons are even arguably viable. For example, there is no obvious, race-neutral difference between Andrew Hepkins’s boxing videos and the comparator boxing videos. Both sets of videos depict various boxing matches with seemingly neutral voiceover commentary. The same goes for the comparisons based on Ley’s Halloween makeup tutorial. It is no mystery why Ley’s video is restricted—it depicts graphic and realistic makeup wounds. But it is not obvious why the equally graphic comparator videos are not also restricted. YouTube suggests the difference lies in the fact that one of the comparator videos contains a disclaimer that the images are fake, and the other features a model whose playful expressions reassure viewers that the gruesome eyeball dangling from her eye socket is fake.

But the content is sufficiently graphic to justify restricting impressionable children from viewing it. These videos are the closest the plaintiffs get to alleging differential treatment based on their race. But the complaint provides no context as to how the rest of these users’ videos are treated, and it would be a stretch to draw an inference of racial discrimination without such context. It may be that other similarly graphic makeup videos by Ley have not been restricted, while other such videos by the white comparator have been restricted. If so, this would suggest only that the algorithm does not always get it right. But YouTube’s promise is not that its algorithm is infallible. The promise is that it abstains from identity-based differential treatment.

And that’s part of the unsolvable problem. Content moderation at this scale can never be perfect. What these plaintiffs see as discrimination may be nothing more than imperfections in a massive system. A sample of 32 videos, set against the hundreds of hours of video uploaded to YouTube every minute, isn’t even large enough to be charitably viewed as a rounding error.
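To see why such a tiny, cherry-picked sample says so little, here is a quick simulation sketch. The 5% restriction rate, the group sizes, and the two-video “gap” threshold are invented for illustration; they say nothing about YouTube’s actual numbers.

```python
# Purely illustrative simulation with invented numbers; not an analysis of YouTube data.
# The point: a creator-blind filter that restricts videos at random will still routinely
# produce apparent "gaps" between two groups when each group is only a few dozen videos.
import random

random.seed(0)
RESTRICTION_RATE = 0.05   # assumption: the filter flags 5% of all videos, creator-blind
SAMPLE_SIZE = 32          # roughly the number of videos in the plaintiffs' chart
TRIALS = 10_000

big_gaps = 0
for _ in range(TRIALS):
    group_a = sum(random.random() < RESTRICTION_RATE for _ in range(SAMPLE_SIZE))
    group_b = sum(random.random() < RESTRICTION_RATE for _ in range(SAMPLE_SIZE))
    if abs(group_a - group_b) >= 2:   # a gap someone might read as differential treatment
        big_gaps += 1

print(big_gaps / TRIALS)  # with these invented numbers, roughly a third of the runs
```

Even a filter that ignores race entirely will regularly show gaps of that size between two small groups, which is exactly the court’s point about random chance.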

Because of the multitude of problems with the lawsuit, the court doesn’t even reach the Section 230 immunity arguments raised by YouTube. The final nail in the lawsuit’s coffin involves the timing of the accusations, some of which predate the YouTube Community Guidelines updates that specifically addressed non-discriminatory moderation efforts.

[T]hese alleged admissions [by YouTube executives] were made in 2017, four years before YouTube added its promise to the Community Guidelines. In machine-learning years, four years is an eternity. There is no basis for assuming that the algorithm in question today is materially similar to the algorithm in question in 2017. That’s not to say it has necessarily improved—for all we know, perhaps it has worsened. The point is that these allegations are so dated that their relevance is, at best, attenuated.

Finally, these allegations do not directly concern any of the plaintiffs or their videos. They are background allegations that could help bolster an inference of race-based differential treatment if it were otherwise raised by the complaint. But, in the absence of specific factual content giving rise to the inference that the plaintiffs themselves have been discriminated against, there is no inference for these background allegations to reinforce.

As the court notes, there is the possibility YouTube’s algorithm behaves in a discriminatory manner, targeting non-white and/or non-heterosexual users. But an incredibly tiny subset of allegedly discriminatory actions that may demonstrate nothing more than a perception of bias is not enough to sustain allegations that YouTube routinely suppresses content created by users like these.

Relying on a sample this small is like looking out your window and deciding that because it’s raining outside your house, it must be raining everywhere. And that’s the sort of thing courts aren’t willing to entertain as plausible allegations.

Filed Under: algorithm, class action, discrimination, section 230
Companies: youtube

Engineers Gave Elon’s Tweets Special Treatment Because Elon Freaked Out That A Joe Biden Tweet Got More Engagement

from the catering-to-the-bossman's-ego dept

What’s the opposite of shadowbanning? Maxboosting? I dunno, but whatever it is, that’s what Twitter’s frustrated and exhausted engineers gave Elon Musk after he whined (not for the first time) that people might like someone more than they like Elon. By now you know the basics: last week it was reported that Elon was getting frustrated that views on his tweets were dropping, and he apparently fired an engineer who suggested that maybe, just maybe, Elon wasn’t quite so popular any more. Then, on Monday, lots of people suddenly found that their “For You” algorithmic feed (something Musk insisted was evil before he took over, but which he is now pressuring people to use) was basically just The Elon Musk Show, with every tweet being something from Elon.

Zoe Schiffer and Casey Newton are back with the inside scoop on what happened. Basically, it sounds like Elon threw yet another tantrum, this time because a Joe Biden Super Bowl tweet got more engagement than an Elon Musk tweet. So, in the middle of the night after the Super Bowl, Mr. Nepotism had his cousin send a message to everyone at Twitter, saying this was a “high urgency” issue.

At 2:36 on Monday morning, James Musk sent an urgent message to Twitter engineers.

“We are debugging an issue with engagement across the platform,” wrote Musk, a cousin of the Twitter CEO, tagging “@here” in Slack to ensure that anyone online would see it. “Any people who can make dashboards and write software please can you help solve this problem. This is high urgency. If you are willing to help out please thumbs up this post.”

When bleary-eyed engineers began to log on to their laptops, the nature of the emergency became clear: Elon Musk’s tweet about the Super Bowl got less engagement than President Joe Biden’s.

Of course, anyone who understands basic things like “what people like” can kinda see why Biden’s tweet about the Super Bowl got more attention than Musk’s. Biden posted a sweet message noting that while he wasn’t taking sides, he had to root for the Eagles because Jill Biden apparently is a huge Eagles fan. It’s a cute tweet.

Musk’s tweet, on the other hand, was just straight up “Go @Eagles” with a bunch of American flags, and there was little reason to interact with it.

And, I mean, even funnier is that after the Eagles lost (despite leading for much of the game) Musk… deleted his tweet. Like a true fan. Hardcore.

Still, most normal human beings would recognize that one of those tweets is endearing, and one is just “Look at me, I am embracing your sports team. Love me.” So, it’s not really a surprise that one got more engagement than the other. It wasn’t “the algorithm.” It wasn’t even who is popular and who is not. One is just clearly a more engagement-worthy tweet.

But Musk’s always-hungry ego must be sated, so his cousin sent out the “high urgency” message, and Musk allegedly threatened to fire his remaining engineers if they didn’t solve the problem of his tweets not getting enough engagement:

Platformer can confirm: after Musk threatened to fire his remaining engineers, they built a system designed to ensure that Musk — and Musk alone — benefits from previously unheard-of promotion of his tweets to the entire user base.

[….]

His deputies told the rest of the engineering team this weekend that if the engagement issue wasn’t “fixed,” they would all lose their jobs as well.

Musk told them directly that making his tweets popular again was the top priority project. This is entering mad king territory:

Late Sunday night, Musk addressed his team in-person. Roughly 80 people were pulled in to work on the project, which had quickly become priority number one at the company. Employees worked through the night investigating various hypotheses about why Musk’s tweets weren’t reaching as many people as he thought they should and testing out possible solutions.

The solution: basically, hard-code into the system that every tweet Elon Musk sends must be treated by the algorithm as wildly popular, so popular that everyone must want to see it, and therefore everyone will:

By Monday afternoon, “the problem” had been “fixed.” Twitter deployed code to automatically “greenlight” all of Musk’s tweets, meaning his posts will bypass Twitter’s filters designed to show people the best content possible. The algorithm now artificially boosted Musk’s tweets by a factor of 1,000 – a constant score that ensured his tweets rank higher than anyone else’s in the feed.

Internally, this is called a “power user multiplier,” although it only applies to Elon Musk, we’re told. The code also allows Musk’s account to bypass Twitter heuristics that would otherwise prevent a single account from flooding the core ranked feed, now known as “For You.”
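To make the mechanics concrete, here is a minimal sketch of what a hardcoded author boost of the kind Platformer describes might look like. The function, the placeholder account ID, and the way the 1,000x constant is applied as a simple score multiplier are assumptions for illustration, not Twitter’s actual code.

```python
# Illustrative sketch of an author-specific "power user multiplier"; not Twitter's real code.
BOOSTED_AUTHOR_ID = 1          # placeholder for the single special-cased account
POWER_USER_MULTIPLIER = 1_000  # the reported constant, applied here as a score multiplier

def feed_score(base_relevance: float, author_id: int) -> float:
    """Return a ranking score, special-casing one account so its posts outrank everything."""
    if author_id == BOOSTED_AUTHOR_ID:
        # Skip the normal ranking signals entirely for this one author.
        return base_relevance * POWER_USER_MULTIPLIER
    return base_relevance

# A mediocre post from the boosted account still beats a strong post from anyone else.
print(feed_score(0.2, BOOSTED_AUTHOR_ID))  # 200.0
print(feed_score(0.9, 42))                 # 0.9
```

The quote also mentions bypassing heuristics that normally keep one account from flooding the feed; in a sketch like this, that would simply be another special-cased branch keyed to the same account ID.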

For a guy who insisted he was going to “open source” the Twitter algorithm to stop it from artificially promoting one story over another, he’s literally done the opposite. All because he can’t admit that maybe someone else’s tweet was better than his? What a pathetic insecure little brat.

There’s a lot more in the Platformer/Verge piece, but the closing quote from an engineer working on this is the most telling by far:

This is the system that Twitter engineers, terrified of losing their jobs, are now building.

“He bought the company, made a point of showcasing what he believed was broken and manipulated under previous management, then turns around and manipulates the platform to force engagement on all users to hear only his voice,” said a current employee. “I think we’re past the point of believing that he actually wants what’s best for everyone here.”

Elon is, of course, free to do whatever nonsense he wants with the site. He owns it. But people need to realize that he’s been incredibly hypocritical, going back on nearly every single promise he’s made in running the site. And each time he goes back on a promise, rather than doing so in a way that benefits all users, he does it in a way that benefits him, and him alone.

Of course, to give credit where credit is due, Matt Levine totally called this back when Elon first bought his original 9% stake in Twitter. Levine predicted how the first meeting between Musk (as Twitter’s largest shareholder) and then-CEO Parag Agrawal would go:

Twitter’s relatively new chief executive officer, Parag Agrawal: Welcome, Mr. Musk. We’re so glad that you are our biggest shareholder. We have prepared a presentation showing how we are executing on our strategy of being more technically nimble, building new products and growing revenue and active users. Here on slide 1 you can see—

Elon Musk: Make the font bigger when I tweet.

Agrawal: What?

Musk: I am your biggest shareholder, I want the font on my tweets to be bigger than the font on everyone else’s tweets.

Agrawal: That’s not really how we—

Musk: And I want 290 characters. Again, just for me.

Agrawal:

Musk: And it should play a little sound when I tweet so everyone knows.

Agrawal: I just feel like we want to make a good product for all of our millions of users? I feel like that is going to improve profitability in the long run and, as our largest shareholder, you in particular stand to benefit from—

Musk: Oh I don’t care even a little bit about that, if your stock doubles that is rounding error on my net worth, I just love tweeting and want to meddle a bit to optimize it for my personal needs.

I honestly didn’t think Musk could possibly be that vain and that petty. But I guess I was wrong.

Filed Under: algorithm, elon musk, engagement, for you, joe biden, tweets
Companies: twitter

Reality Check: Twitter Actually Was Already Doing Most Of The Things Musk Claims He Wants The Company To Do (But Better)

from the with-a-very-narrow-exception dept

So there has been lots of talk about Elon Musk and his takeover of Twitter. I’ve written multiple things about how little he understands about free speech and how little he understands content moderation. I’ve also written (with giant caveats) about ways in which his takeover of Twitter might improve some things. Throughout this discussion, in the comments here, and on Twitter, a lot of people have accused me of interpreting Musk’s statements in bad faith. In particular, people get annoyed when I point out that the two biggest points he’s made — that (1) Twitter should allow all “legal” speech, and (2) getting rid of spambots is his number one priority — contradict each other, because spambots are protected speech. People like to argue that’s not true, but they’re wrong, and anyone arguing that expression by bots is not protected doesn’t understand the 1st Amendment at all.

Either way, I am always open to rethinking my position, and if people are claiming that I’m interpreting Musk in bad faith, I can try to revisit his statements in a more forgiving manner. Let’s, as the saying goes, take him figuratively, rather than literally.

But… here’s the thing. If you interpret Musk’s statements in the best possible light, it’s difficult to see how Twitter is not already doing pretty much everything he wants it to do. Now, I can already hear the angry keyboard mashing of people who are very, very sure that’s not true, and are very, very sure that Twitter is an evil company “censoring political views” and “manipulating elections” and whatever else the conspiracy theory of the day is. But it’s funny that the same people who insist that I’m not being fair to Musk refuse to offer the same courtesy, or any willingness to understand why and how Twitter actually operates.

So, let’s look at Musk’s actual suggestions, phrased in the best possible light, and compare them to what Twitter has actually done and is doing. Again, you’ll realize that Twitter is (by far!) the social media service that has gone the farthest to make what he wants real. And in the few areas where he seems to think the company has fallen short, the reality is that it has had to balance difficult competing interests, and has concluded that its approach is the most likely to get to the larger goal of providing a platform for global conversation.

Musk has repeatedly said that he sees free speech on Twitter as an important part of democracy. So do many people at Twitter. They were the ones who framed themselves as the “free speech wing of the free speech party.” But as any actual expert in free speech will tell you, free speech does not mean that private websites should allow all free speech. And I know people — including Musk — will argue against this point, but it’s just fundamentally wrong. We’ve gone over this over and over again. The internet itself (which is not owned by any entity) is the modern public square, and anyone is free to set up shop on it. But that does not mean that they get to commandeer private property for their own screaming fits.

If it did, you would not have free speech, because (1) you would just get inundated with spam and garbage, and (2) only the loudest, most obnoxious voices would ever be heard. The team at Twitter actually understands the tradeoffs here, and while they don’t always get it “right” (in part because there is no “right”), Twitter’s team is so far above and beyond any other social media website that it’s just bizarre the public narrative insists the opposite.

Twitter has long viewed its mission as enabling more free speech and more conversation in the world, and has taken steps to actually make that possible. Opening up the platform to people who violate the rules, abuse and harass others, and generally make a mess of things, does not aid free speech or “democracy.” You can disagree with where Twitter draws the lines (and clearly, Musk does), but Musk has shown little to no understanding of why and how the line drawing is done in the first place, and if he moves in the direction he claims, will quickly realize that Twitter’s lines are drawn much much much more permissively than nearly any other website (including, for what it’s worth, Trump’s Truth Social), and that there are actually clear reasons for why it drew the lines it did — and those lines are often to enable more ability for there to be communication and conversation on the platform.

Twitter has long allowed all sorts of dissenting viewpoints and arguments on its platform. Indeed, there are many activists who insist that the problem is that Twitter doesn’t do enough moderation. Instead, Twitter has put in place some pretty clear rules, and it tries to only take down accounts that really break those rules. It doesn’t always get that right. It misses some accounts, and takes down others it shouldn’t. But on the whole, it’s way more permissive than most other sites, which are much quicker to ban users.

Second, even as it contradicts his first point, Musk has claimed that he wants to get rid of spambots and scambots. This is a good goal. And, again, it’s one that Twitter has been working on for ages, with really good, really smart people on the problem (some of the best out there). And, in part because the company is so open and so permissive (again, much more so than other platforms), this is an extraordinarily difficult problem to solve, especially at Twitter’s scale. People assume, falsely, that Twitter doesn’t care about spammers, but if you want an “open” platform for “free speech,” people will take advantage of that openness. Musk is going to find that Twitter already has some of the best people working on this issue — that is, if they don’t rush out the door (or get pushed out by him).

Third, Musk has talked about redoing the verification system. He’s said that Twitter should “authenticate all real humans.” This appears to be (at least partly) his method for dealing with the bots and spam he’d like to eradicate. For years we’ve discussed the dangers of a “real names” policy that requires people to post under their own names: studies have shown that trolling is often worse under real names, and such policies are especially dangerous for marginalized people, those who have stalkers, and anyone else who is at risk.

But, some people respond, it’s unfair to assume he means a real names policy. Perhaps he just means that Twitter will keep a secret database of your verified details, and you can still be pseudonymous on the site. Except, as experts will tell you, that still is massively problematic, especially for marginalized groups, at-risk individuals, and those in countries with authoritarian regimes. Because now that database becomes a massive target. You get extremely questionable subpoenas, seeking to unmask users all the time. Or, you get the government demanding you cough up info on your users. Or you get hackers trying to get into the database. Or, you get authoritarian countries getting employees into these companies to seek out info on critics of the regime.

All of these things have happened with Twitter. And Twitter was in a position to push back. But it sure helped that in many of those cases Twitter didn’t actually have their “verification,” but much less information, like an IP address and an email.

Or, to take it another level, perhaps Musk really just means that Twitter should offer verification to those who want it. That’s not at all what he said, but it’s how some of his vocal supporters have interpreted this. Well, once again, Twitter has tried that. And it didn’t work. Back in 2016, Twitter opened up verification to everyone, and the company quickly realized it had a huge mess on its hands. First, people gamed the system. Second, even though the program was only meant to verify that the name on the account matched the real person behind it, people took verification to be an “endorsement” by Twitter, which created a bunch of other headaches. Given that, Twitter paused the program.

It then spent years trying to figure out a way to open up verification to anyone without running into more problems. Indeed, Jack Dorsey made it clear that the plan has always been to “open verification to everyone.” But it turns out that, like dealing with spam and like dealing with content moderation, this is a much harder problem to solve at scale than most people think. It took Twitter almost four years to finally relaunch its verification program in a much more limited fashion, which they hoped would allow the company to test out the new process in a way that would avoid abuse.

But even in that limited fashion, the program ran into all sorts of problems. Twitter had to shut the program down a week after launching it, to sort out some of the issues. Then it had to do so again three months later, after finding more problems — specifically, that fake accounts were able to game the verification process.

But, again, Twitter has been trying to do exactly what Musk’s fans insist he wants to do. And they’ve been doing so thoughtfully, and recognizing the challenges of actually doing it right, and realizing that it involves a lot of careful thought and tradeoffs.

Next, Musk said that Twitter DMs should have end-to-end encryption, and on this I totally agree. It should. And lots of others have been asking for this as well. Including… people within Twitter who have been working on it. But there are a lot of issues in making that actually work. It’s not something where you can just flip a switch. There are some technical challenges… but also some social issues as well. All you have to do is look at how long it’s taken Facebook to do the same thing — in part because as soon as the company planned to do this, it was accused of not caring about child safety. Maybe a privately owned Twitter, controlled by Musk, just ignores all that, but there are real challenges here, and it’s not quite as easy as he seems to think. But, once again, it’s not an issue that’s never occurred to Twitter either.

Another recent Musk “idea” was that content moderation should be “politically neutral,” which he (incorrectly) claims “means upsetting the far right and far left equally.” For a guy who’s apparently so brilliant, you’d think he’d understand that there is no fundamental law saying political viewpoints are distributed evenly across a bell curve, and that there’s a difference between neutrality of inputs and neutrality of outputs. Every single study has shown that, if anything, Twitter’s content moderation practices greatly favor the right. It’s just that (right now) the right is much, much, much more prone to sharing misinformation. But if you have an unequal distribution of troublemakers, then a “neutral” policy will lead to unequal outcomes. Musk seems to want equal outcomes, which literally would require a non-neutral policy that gives much, much, much more leeway to troublemakers on the right. You can’t have equal outcomes with a neutral policy if the distribution is unequal.
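A tiny worked example, with invented numbers, makes the inputs-versus-outputs point concrete:

```python
# Invented numbers, purely illustrative: rule-breaking content is not evenly distributed.
violating_posts = {"group_a": 800, "group_b": 200}  # assumption: group_a breaks rules 4x as often

# A "neutral" policy judges only the content, so removals mirror the violations.
neutral_removals = dict(violating_posts)
print(neutral_removals)  # {'group_a': 800, 'group_b': 200} -> unequal outcomes from a neutral rule

# Forcing equal outcomes (the same number of removals per group) means abandoning neutrality:
# either leave 600 of group_a's violations up, or remove 600 rule-abiding posts from group_b.
equal_outcome_quota = min(violating_posts.values())
print({group: equal_outcome_quota for group in violating_posts})  # {'group_a': 200, 'group_b': 200}
```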

Finally, the only other idea that Musk has publicly talked about is “open sourcing” the algorithm. At a first pass, this doesn’t make much sense, because it’s not like you can just put the code on Github and let everyone figure it out. It’s a lot more complicated than that. In order to release such code, you first have to make sure that it doesn’t reveal anything sensitive, or reveal any kind of vulnerabilities. The process for securing production code that was built in a closed source environment to make it open source… is not easy. Having dealt with multiple projects attempting to do that, it almost always fails.

In addition, if they were open sourcing the algorithm, the people it would benefit the most are the spammers and scammers — the very accounts Musk claims are his very first priority to stomp out. So once again, his stated plans contradict his other stated plans.

But… Twitter has actually again been making moves in this general direction all along anyway. Jack Dorsey, for years, has talked about why there should be “algorithmic choice” on Twitter, where others can build up their own algorithms, and users can pick and choose whose algorithm to use. That’s not the same as open sourcing it, but actually seems like it would be a hell of a lot closer to what Musk actually wants — a more open platform where people aren’t limited to just Twitter’s content moderation choices. And, as Dorsey has pointed out, Twitter is also the only platform that allows you to turn off the algorithm if you don’t want it.

So, as we walk down the list of each of the “ideas” that Musk has publicly talked about, taking them in the most generous light, it’s difficult to argue that Twitter isn’t (1) already doing most of it, but in a more thoughtful and useful manner, (2) much further along in trying to meet those goals than any other social media platform, and (3) already explored, tested, and rejected some of his ideas as unworkable.

Indeed, about the only actual practical point that Musk seems to disagree with Twitter about is a few specific content moderation decisions that he believes should have gone in a different direction. And this is, as always, the fundamental disconnect in any conversation about content moderation. Every individual — especially those with no experience doing any actual moderation — insists that they have the perfect way to do content moderation: just get rid of the content they don’t want and keep the content they do want.

But the reality is that it’s ridiculously more complicated than that, especially at scale. And no company has internalized that more than Twitter (though, I expect many of the people who understand this the best will not be around very long).

Now, I’m sure that Musk fans (and Techdirt haters, some of whom overlap), will quickly rush out the same tired talking points that have already been debunked. Studies have shown, repeatedly, that, no, Twitter does not engage in politically biased moderation. Indeed, the company had to put in place special safe space rules to protect prominent Republican accounts that violated its rules. Lots of people will point to individual examples of specific moderation choices that they personally don’t like, but refuse to engage on why or how they happened. We’ve already explained the whole “Biden Laptop” thing so it doesn’t help your case to bring it up again — not unless you’re able to explain why you’re not screaming about Twitter’s apparently anti-BLM bias for shutting down an account for leaking internal police files.

The simple fact is that content moderation at scale is impossible to do well, but Twitter actually does it better than most. That doesn’t mean you’ll agree with every decision. You won’t. People within the company don’t either. I don’t. I regularly call the company out for bad content moderation decisions. But I actually recognize that it’s not because of bias or a desire to be censorial. It’s because it’s impossible for everyone to agree on all of these decisions, and one thing the company absolutely needs to do is craft policies that can be understood by a large, global content moderation team that has to make decisions at an astounding speed. And that leads to (1) a lot of scenarios that don’t neatly fit inside or outside of a policy, and (2) a lot of edge case judgment calls.

Indeed, so much of what people on the outside wrongly assume is “inconsistent” enforcement of policy is actually the exact opposite. A company like Twitter can’t keep changing policy on every decision. It needs to craft policy and stick with it for a while. So, something like the Biden laptop story comes along and someone points out that it seems pretty similar to the Blueleaks case, so if the company is being consistent, shouldn’t it block the NY Post’s account as well? And you can make an argument as to how it’s different, but there’s also a strong argument as to how it’s the same. And, so then you begin to realize that not blocking the NY Post in that scenario would actually be the “inconsistent” approach, since the “hacked materials” policy existed, and had been enforced against others before.

Now, some people like to claim that the Biden laptop didn’t involve “hacked” materials, but that’s great to be able to say in retrospect. At the time, it was extremely unclear. And, again, as described above, Twitter has to make these decisions without the benefit of hindsight. Indeed, they need to be made without the benefit of very much time to investigate at all.

These are all massive challenges, and even if you disagree with some of the decisions, it’s simply wrong to assume that the decisions are driven by bias. I’ve worked with people doing content moderation work at tons of different internet companies. And they do everything they can to avoid allowing bias to enter into their work. That doesn’t mean it never does, because of course, everyone is human. But on the whole, it’s incredible how much effort people put into being truly agnostic about political views, even ridiculous or abhorrent ones. And Twitter, pretty much above all others, is incredibly good at taking the politics out of its trust and safety efforts.

So, again, once Musk owns Twitter, he is free to do whatever he wants. But it truly is incredible to look over his stated goals, and to look at what Twitter has actually done and what it’s trying to do, and to realize that… Twitter already is basically the company Musk insists it needs to be. Only it’s been doing so in a more thoughtful, more methodical, more careful manner than he seems interested in. And that means we seem much more likely to lose the company that actually has done the most towards enabling free speech in support of democratic values. And that would be unfortunate.

Filed Under: 1st amendment, algorithm, content moderation, elon musk, free speech, spam
Companies: twitter

Let Me Rewrite That For You: Washington Post Misinforms You About How Facebook Weighted Emoji Reactions

from the let's-clean-it-up dept

Journalist Dan Froomkin, who is one of the most insightful commentators on the state of the media today, recently began a new effort, which he calls “let me rewrite that for you,” in which he takes a piece of journalism that he believes misled readers, and rewrites parts of them — mainly the headline and the lede — to better present the story. I think it’s a brilliant and useful form of media criticism that I figured I might experiment with as well — and I’m going to start it out with a recent Washington Post piece, one of many the Post has written about the leaked Facebook Files from whistleblower Frances Haugen.

The piece is written by reporters Jeremy Merrill and Will Oremus — and I’m assuming that, as at many mainstream news orgs, editors write the headlines and subheads rather than the reporters. I don’t know Merrill, but I will note that I find Oremus to be one of the most astute and thoughtful journalists out there today, and not one prone to fall into some of the usual traps that journalists fall for — so this one surprised me a bit (though, I’m also using this format on an Oremus piece because I’m pretty sure he’ll take the criticism in the spirit intended — to push for better overall journalism on these kinds of topics). The article’s headline tells a story in and of itself: “Five points for anger, one for a ‘like’: How Facebook’s formula fostered rage and misinformation,” with a subhead that implies something similar: “Facebook engineers gave extra value to emoji reactions, including ‘angry,’ pushing more emotional and provocative content into users’ news feeds.” There’s also a graphic that reinforces this suggested point: Facebook weighted “anger” much more than happy reactions. And it’s all under the “Facebook under fire” designation:

Seeing this headline and image, it would be pretty normal for you to assume the pretty clear implication: people reacting happily (e.g. with “likes”) on Facebook had those shows of emotions weighted at 1/5th the intensity of people reacting angrily (e.g. with “anger” emojis) and that is obviously why Facebook stokes tremendous anger, hatred and divisiveness (as the story goes).

But… that’s not actually what the details show. The actual details show that when Facebook initially introduced its five different “emoji” reactions (added alongside the long-iconic “like” button), it weighted all five of them as five times as impactful as a like. That means that “love,” “haha,” “wow,” and “sad” were also weighted at five times a single like, identically to “angry.” And while the article does mention this in the first paragraph, it immediately pivots to focus only on the “angry” weighting and what that means. When combined with the headline and the rest of the article, it’s entirely possible to read the article and not even realize that “love,” “sad,” “haha,” and “wow” were also ranked at 5x a single “like,” and to believe that Facebook deliberately chose to ramp up promotion of “anger”-inducing content. It’s not only possible, it’s quite likely. Hell, it’s how I read the article the first time through, completely missing the fact that it applied to the other emojis as well.
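To make the weighting concrete, here is a minimal sketch of how those numbers would play out, assuming (as an illustration only) that the per-reaction weights simply sum into a single engagement score; the posts are invented and this is not Facebook’s actual ranking code. The second set of weights reflects the late-2020 revisions described further down.

```python
# Illustrative only: a toy weighted-sum engagement score, not Facebook's real ranking code.
# 2017-era weights described in the article: every reaction emoji counted 5x a "like".
WEIGHTS_2017 = {"like": 1, "love": 5, "haha": 5, "wow": 5, "sad": 5, "angry": 5}
# Late-2020 weights reported in the documents: "angry" zeroed out, others rebalanced.
WEIGHTS_LATE_2020 = {"like": 1, "love": 2, "haha": 1.5, "wow": 1.5, "sad": 2, "angry": 0}

def engagement_score(reaction_counts: dict, weights: dict) -> float:
    """Sum each reaction count times its weight (a stand-in for one ranking signal)."""
    return sum(weights.get(kind, 0) * count for kind, count in reaction_counts.items())

divisive_post = {"like": 100, "angry": 80}   # hypothetical rage-bait post
pleasant_post = {"like": 300, "love": 20}    # hypothetical feel-good post

print(engagement_score(divisive_post, WEIGHTS_2017))       # 100*1 + 80*5 = 500
print(engagement_score(pleasant_post, WEIGHTS_2017))       # 300*1 + 20*5 = 400
print(engagement_score(divisive_post, WEIGHTS_LATE_2020))  # 100*1 + 80*0 = 100
print(engagement_score(pleasant_post, WEIGHTS_LATE_2020))  # 300*1 + 20*2 = 340
```

Under the 2017 weights the angrier post outranks the more pleasant one despite getting fewer likes; under the late-2020 weights it no longer does.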

The article also completely buries how quickly Facebook realized this was an issue and adjusted the policy. While it does mention it, it’s very much buried late in the story, as are some other relevant facts that paint the entire story in a very different light than the way many people are reading it.

As some people highlighted this, Oremus pointed out that the bigger story here is “how arbitrary initial decisions, set by humans for business reasons, become reified as the status quo.” And he’s right. That is the more interesting story and one worth exploring. But that’s not how this article is presented at all! And, his own article suggested the “reified as the status quo” part is inaccurate as well, though, again, that’s buried further down in the story. The article is very much written in a way where the takeaway for most people is going to be “Facebook highly ranks posts that made you angry, because stoking divisiveness was good for business, and that’s still true today.” Except none of that is accurate.

So… let’s rewrite that, and try to better get across the point that Oremus claims was the intended point of the story.

The original title, again is:

Five points for anger, one for a “like”: How Facebook’s formula fostered rage and misinformation

Let’s rewrite that:

Facebook weighted new emojis much more than likes, leading to unintended consequences

Then there’s the opening of the piece, which does mention very quickly that it applied to all five new emojis, but quickly pivots to just focusing on the anger:

Five years ago, Facebook gave its users five new ways to react to a post in their news feed beyond the iconic “like” thumbs-up: “love,” “haha,” “wow,” “sad” and “angry.”

Behind the scenes, Facebook programmed the algorithm that decides what people see in their news feeds to use the reaction emoji as signals to push more emotional and provocative content – including content likely to make them angry. Starting in 2017, Facebook’s ranking algorithm treated emoji reactions as five times more valuable than “likes,” internal documents reveal. The theory was simple: Posts that prompted lots of reaction emoji tended to keep users more engaged, and keeping users engaged was the key to Facebook’s business.

Facebook’s own researchers were quick to suspect a critical flaw. Favoring “controversial” posts – including those that make users angry – could open “the door to more spam/abuse/clickbait inadvertently,” a staffer, whose name was redacted, wrote in one of the internal documents. A colleague responded, “It’s possible.”

The warning proved prescient. The company’s data scientists confirmed in 2019 that posts that sparked angry reaction emoji were disproportionately likely to include misinformation, toxicity and low-quality news.

Let’s rewrite that, both using what Oremus claims was the “bigger story” in the article, and some of the information that is buried much later.

Five years ago, Facebook expanded the ways that users could react to posts beyond the iconic “like” thumbs-up, adding five more emojis: “love,” “haha,” “wow,” “sad,” and “angry.” With this new addition, Facebook engineers needed to determine how to weight these new engagement signals. Given the stronger emotions portrayed in these emojis, the engineers made a decision that had a large impact on how the use of those emojis would be weighted in determining how to rank a story: each of those reactions would count for five times the weight of the classic “like” button. While Facebook did publicly say at the time that the new emojis would be weighted “a little more” than likes, and that all the new emojis would be weighted equally, it did not reveal that the weighting was actually five times as much.

This move came around the same time as Facebook’s publicly announced plans to move away from promoting clickbait-style news to users, and to try to focus more on engagement with content posted by friends and family. However, it turned out that friends and family don’t always post the most trustworthy information, and by overweighting the “emotional” reactions, this new move by Facebook often ended up putting the most emotionally charged content in front of users. Some of that content was joyful — people reacting with “love” to engagements and births — but some of it was disruptive and divisive, people reacting with “anger” to false or misleading content.

Facebook struggled internally with this result — while also raising important points about how “anger” as a “core human emotion” is not always tied to bad things, and could be important for giving rise to protest movements against autocratic and corrupt governments. However, since other signals were weighted significantly more than even these emojis — for example, replies to posts had a weight up to 30 times a single “like” click — not much was initially done to respond to the concerns about how the weighting on anger might impact the kinds of content users were prone to see.

However, one year after launch, in 2018, Facebook realized weighting “anger” so highly was a problem, and downgraded the weighting on the “anger” emoji to four times a “like” while keeping the four other emoji, including “love,” “wow,” and “haha” at five times a like. A year later, the company realized this was not enough and even though “anger” is the least used emoji, by 2019 the company had put in place a mechanism to “demote” content that was receiving a disproportionate level of “anger” reactions. There were also internal debates about reranking all of the emoji reactions to create better news feeds, though there was not widespread agreement within the company about how best to do this. Eventually, in 2020, following more internal research on the impact of this weighting, Facebook reweighted all of the emoji. By the end of 2020 it had cut the weight of the “anger” emoji to zero — taking it out of the equation entirely. The “haha” and “wow” emojis were weighted to one and a half times a like, and the “love” and “sad” were weighted to two likes.

From there, the article could then discuss a lot of what other parts of the article does discuss, about some of the internal debates and research, and also the point that Oremus raised separately, about the somewhat arbitrary nature of some of these ranking systems. But I’d argue that my rewrite presents a much more accurate and honest portrayal of the information than the current Washington Post article.

Anyone know how I can send the Washington Post an invoice?

Filed Under: algorithm, emoji, facebook papers, framing, frances haugen, journalism, let me rewrite that for you, ranking, reactions
Companies: facebook

Parler's CEO Promises That When It Comes Back… It'll Moderate Content… With An Algorithm

from the are-you-guys-serious? dept

Parler, Parler, Parler, Parler. Back in June of last year, when Parler was getting lots of attention for being the new kid on the social media scene with a weird (and legally nonsensical) claim that it would only moderate “based on the 1st Amendment and the FCC,” we noted just how absolutely naive this was, and how the company would have to moderate and would also have to face the same kinds of impossible content moderation choices that every other website eventually faces. In fact, we noted that the company (in part due to its influx of users) was seemingly speedrunning the content moderation learning curve.

Lots of idealistic, but incredibly naive, website founders jump into the scene and insist that, in the name of free speech they won’t moderate anything. But every one of them quickly learns that’s impossible. Sometimes that’s because the law requires you to moderate certain content. More often, it’s because you recognize that without any moderation, your website becomes unusable. It fills up with garbage, spam, harassment, abuse and more. And when that happens, it becomes unusable by normal people, drives away many, many users, and certainly drives away any potential advertisers. And, finally, in such an unusable state it may drive away vendors — like your hosting company that doesn’t want to deal with you any more.

And, as we noted, Parler’s claims not to moderate were always a part of the big lie. The company absolutely moderated, and the CEO even bragged to a reporter about banning “leftist trolls.” The whole “we’re the free speech platform” was little more than a marketing ploy to attract trolls and assholes, with a side helping of “we don’t want to invest in content moderation” like every other site has to.

Of course, as the details have come out in the Amazon suit, the company did do some moderation. Just slowly and badly. Last week, the company admitted that it had taken down posts from wacky lawyer L. Lin Wood in which he called for VP Mike Pence to face “firing squads.”

Amazon showed, quite clearly, that it gave Parler time to set up a real content moderation program, but the company blew it off. But now, recognizing it has to do something, Parler continues to completely reinvent all the mistakes of every social media platform that has come before it. Parler’s CEO, John Matze, is now saying it will come back with “algorithmic” content moderation. This was in an interview done on Fox News, of course.

“We’re going to be doing things a bit differently. The platform will be free speech first, and we will abide by and we will be promoting free speech, but we will be taking more algorithmic approaches to content but doing it to respect people’s privacy, too. We want people to have privacy and free speech, so we don’t want to track people. We don’t want to use their history and things of that nature to predict possible violations, but we will be having algorithms look at all the content – to try and predict whether it’s a terms-of-service violation so we can adjust quicker and the most egregious things can get taken down,” Matze said. “So calls for violence, incitements, things of that nature, can be taken down immediately.”

This is… mostly word salad. The moderation issue and the privacy question are separate. So is the free speech issue. Just because people have free speech rights, it doesn’t mean that Parler (or anyone) has to assist them.

Also, Matze is about to learn (as every other company has) that algorithms can help a bit, but really won’t be of much help in the long run. Companies with much more resources, including Google and Facebook, have thrown algorithmic approaches to content moderation at their various platforms, and they are far from perfect. Parler will be starting from a much weaker position, and will almost certainly find that the algorithm doesn’t actually replace a true trust and safety program like most companies have.

In that interview, Matze is also stupidly snarky about Amazon’s tool, claiming:

“We even offered to Amazon to have our engineers immediately use Amazon services – Amazon Rekognition and other tools – to find that content and get rid of it quickly and Amazon said, ‘That’s not enough,’ so apparently they don’t believe their own tools can be good enough to meet their own standards,” he said.

That’s incredibly misleading, and makes Matze look silly. Amazon Rekognition is an image and video analysis service, best known for facial recognition. What does that have to do with moderating the largely text-based harassment, death threats, and abuse on Parler? Next to nothing.

Instead of filing terrible lawsuits and making snarky comments, it’s stunning that Parler doesn’t shut up, find an actual expert on trust and safety to hire, and learn from what every other company has done in the past. That’s not to say it needs to handle the moderation in the same way. More variation and different approaches are always worth testing out. The problem is that you should do that from a position of knowledge and experience, not ignorance. Parler has apparently chosen the other path.

Filed Under: algorithm, content moderation, john matze
Companies: amazon, parler

Yelp's Newest Campaign: Asking Google To Do The Right Thing

from the don't-be-evil,-guys dept

Back in 2014, we wrote about a campaign by Yelp which it called “Focus on the User,” in which it made a very compelling argument that Google was treating Yelp (and TripAdvisor) content unfairly. Without going into all of the details, Yelp’s main complaint was that while Google uses its famed relevance algorithm to determine which content to point you to in its main search results, when it came to the top “One Box” on Google’s site, it only used Google’s own content. Four years ago, the Focus on the User site presented compelling evidence that users of Google actually had a better overall experience if the answers for things like local content (such as retailer/restaurant reviews) in the One Box were ranked according to Google’s algorithm, rather than just using Google’s own “Local” content (or whatever they call it these days).

As we noted at the time, this argument was pretty compelling, but we worried about Yelp using the site to ask the EU to then force Google to change how its site functioned. As we wrote at the time:

… the results are compelling. Using Google’s own algorithm to rank all possible reviews seems like a pretty smart way of doing things, and likely to give better results than just using Google’s (much more limited) database of reviews. But here’s the thing: while I completely agree that this is how Google should offer up reviews in response to “opinion” type questions, I still am troubled by the idea that this should be dictated by government bureaucrats. Frankly, I’m kind of surprised this isn’t the way Google operates, and it’s a bit disappointing that the company doesn’t just jump on this as a solution voluntarily, rather than dragging it out and having the bureaucrats force it upon them.

So while the site is fascinating, and the case is compelling, it still has this problem of getting into a very touchy territory where we’re expecting governments to design the results of search engines. It seems like Yelp, TripAdvisor and others can make the case to Google and the public directly that this is a better way to do things, rather than having the government try to order Google to use it.

It took four years, but it looks like Yelp is at least taking some of my advice. The company has relaunched the “Focus on the User” site, but positioned it more towards convincing Google employees to change how the site handles One Box content, rather than just asking the government for it. This is a good step, and I’m still flabbergasted that Google hasn’t just done this already. Not only would it give users better overall results, but it would undercut many of the antitrust arguments being flung at Google these days (mainly in the EU). It’s a simple solution, and Google should seriously consider it.

That said, while Yelp has shifted the focus of that particular site, it certainly has not given up on asking the government to punish Google. Just as it was relaunching the site, it was also filing a new antitrust complaint in the EU and again, I’m still concerned about this approach. It’s one thing to argue that Google should handle aspects of how its website works in a better way. It’s another to have the government force the company to do it that way. The latter approach creates all sorts of potential consequences — intended or unintended — that could have far reaching reverberations on the internet, perhaps even the kind that would boomerang around and hurt Yelp as well.

Yelp makes a strong argument for why Google’s approach to the One Box is bad and not the best overall results for its users. I’m glad that it’s repurposed its site to appeal to Google employees, and am disappointed that Google hasn’t made this entire issue go away by actually revamping how the One Box works. But calling on the government to step in and determine how Google should design its site is still a worrisome approach.

Filed Under: algorithm, answers, antitrust, competition, eu, local content, one box
Companies: google, yelp

Techdirt Podcast Episode 85: Is Your Algorithm Racist?

from the and-what-can-be-done-about-it? dept

Algorithms have become a powerful force in the world, but for all the impressive good they do, they sometimes show some worrying tendencies. Algorithms that discriminate are a problem that nobody’s found a solution for yet. This week, we discuss why some algorithms appear to be racist, and whether there’s anything that can be done about it.

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.

Filed Under: ai, algorithm, biases, inputs, racist, sexist

Facebook Has Lost The War It Declared On Fake News

from the fakey-fakey dept

Fake news stories are a scourge. Unlike parody outlets such as The Onion, these are operations that produce false news stories simply to get clickthroughs and generate advertising revenue. And it isn't just a couple of your Facebook friends and that weird uncle of yours that get fooled by these things; even incredibly handsome and massively-intelligent writers such as myself are capable of getting completely misled into believing that a bullshit news story is real.

Facebook is generally seen as a key multiplier of this false force of non-news, which is probably what led the social media giant to declare war on fake news sites a year or so back. So how'd that go? Well, the results as analyzed over at BuzzFeed seem to suggest that Facebook has either lost this war it declared or is losing it badly enough that it might as well give it up.

To gauge Facebook’s progress in its fight, BuzzFeed News examined data across thousands of posts published to the fake news sites’ Facebook pages, and found decidedly mixed results. While average engagements (likes + shares + comments) per post fell from 972.7 in January 2015 to 434.78 in December 2015, they jumped to 827.8 in January 2016 and a whopping 1,304.7 in February.

Some of the posts on the fake news sites’ pages went extremely viral many months after Facebook announced its crackdown. In August, for instance, an Empire News story reporting that Boston Marathon bombing suspect Dzhokhar Tsarnaev sustained serious injuries in prison received more than 240,000 likes, 43,000 shares, and 28,000 comments on its Facebook page. The incident was pure fiction, but still spread like wildfire on the platform. An even less believable September post about a fatal gang war sparked by the “Blood” moon was shared over 22,000 times from the Facebook page of Huzlers, another fake news site.

So, how did this war go so wrong for Facebook? Well, to start, it relied heavily on user-submitted reports that a link or site was fake news. That sounds great, since aggregating user feedback has worked quite well in other arenas. Here, however, it was doomed from the start: the whole purpose of fake news sites is to fool people, and fooled people obviously don't report the links as fake. Even when readers eventually figure out that a link was fake, perhaps after sharing it and seeing comments proving it false, how many of them take the extra step of reporting it? Not enough, clearly, as the fake news scourge marches on.
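
To illustrate the failure mode, here's a minimal sketch with invented numbers and a made-up threshold (this is not Facebook's actual system): any report-driven rule only demotes a link once enough readers flag it, and the most convincing hoaxes are precisely the ones that generate the fewest reports relative to their reach.

```python
def should_demote(shares, reports, report_rate_threshold=0.01):
    """Illustrative rule: demote a link once at least 1% of sharers have reported it."""
    if shares == 0:
        return False
    return reports / shares >= report_rate_threshold

# An obvious hoax gets reported often enough to trip the threshold...
print(should_demote(shares=1_000, reports=40))     # True (4% report rate)

# ...but a convincing hoax spreads widely and is almost never reported.
print(should_demote(shares=240_000, reports=300))  # False (0.125% report rate)
```

The better the hoax, the lower the report rate, so the signal the system depends on weakens exactly when it's needed most.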

Another layer of the problem appears to be the faith and trust the general public places in some of the famous people they follow, who have also been fooled with startling regularity.

Take D.L. Hughley, for example. The comedian, whose page is liked by more than 1.7 million people, showed up twice in the Huzlers logs. One fictitious Huzlers story he posted, about Magic Johnson donating blood, garnered more than 10,000 shares from his page. Hughley, who did not respond to BuzzFeed News’ request for comment, also shared four National Report links in 2015.

Radio stations also frequently post fake news. The Florida-based 93XFM was one of a number of radio stations BuzzFeed News discovered sharing Huzlers posts in 2015. Asked about one April post linking to a Huzlers story about a woman smoking PCP and chewing off her boyfriend’s penis, a 93XFM DJ named Sadie explained that fact-checking Facebook posts isn’t exactly a high priority.

In other words, people and organizations that the public assumes to be credible sources of information are sharing these fake news articles, and the public turns off its collective brain and assumes them to be true. After all, if we can't trust D.L. Hughley then, really, who can we trust? And when even major outlets such as the New York Times have included links to The National Report in their posts, do we really expect people to cast a wary eye at such an established news peddler?

Well, we should, because the ultimate problem here is equal parts a polarized American public and a terrifying level of credulity. Many of these fake news pieces carry headlines for stories that some people want to believe, typically for ideological reasons. This is why a family party recently saw me trying to explain to my grandmother that, no, Michelle Obama probably does not in fact have a penis. That's a true story, friends, and it stemmed from a fake news article. The willingness to believe such a thing is extreme, certainly, but stories of the Boston Bomber getting beaten in prison feed the same desire for a story to be true.

The war is lost. Fake news goes on unabated. Long live Michelle Obama’s penis.

Filed Under: algorithm, fake news, journalism, parody, reality
Companies: facebook

Algorithm Might Protect Non-Targets Caught In Surveillance, But Only If The Government Cares What Happens To Non-Targets

from the something-it-has-yet-to-show dept

Akshat Rathi at Quartz points to an interesting algorithm developed by Michael Kearns of the University of Pennsylvania — one that might give the government something to consider when conducting surveillance. It gauges the possibility of non-targets being inadvertently exposed during investigations, warning intelligence and investigative agencies that perhaps other tactics should be deployed.

Rathi provides a hypothetical situation in which this algorithm might prove useful. A person with a rare medical condition they'd like to keep private visits a clinic that happens to be under investigation for fraud. This person often calls a family member for medical advice (an aunt who works at another clinic). The aunt's clinic is also under investigation.

When the investigation culminates in a criminal case, there’s a good chance the patient — a “non-target” — may have their sensitive medical information exposed.

If the government ends up busting both clinics, there’s a risk that people could find out about your disease. Some friends may know about your aunt and that you visit some sort of clinic in New York; government records related to the investigation, or comments by officials describing how they built their case, may be enough for some people to draw connections between you, the specialized clinic, and the state of your health.

Even though this person isn’t targeted by investigators, the unfortunate byproduct is diminished privacy. This algorithm, detailed in a paper published by the National Academy of Sciences, aims to add a layer of filtering to investigative efforts. As Kearns describes it, the implementation would both warn of potential collateral damage and inject “noise” to keep accidental exposure of non-targets to a minimum.

For such cases where there are only a few connections between people or organizations under suspicion, Kearns’s algorithm would warn investigators that taking action could result in a breach of privacy for selected people. If a law were to require a greater algorithmic burden of proof for medical-fraud cases, investigators would need to find alternative routes to justify going after the New York clinic.

But if there were lots of people who could serve as links between the two frauds, Kearns’s algorithm would let the government proceed with targeting and exposing both clinics. In this situation, the odds of compromising select individuals’ privacy are lower.
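
For the flavor of how such a check might work, here's a toy sketch in Python. It is not the algorithm from the PNAS paper; the threshold, the noise, and the names are all invented. It just captures the two ideas described above: count how many distinct people bridge the two targets, warn when that bridge is small enough that enforcement would effectively expose them, and report only a noise-perturbed count.

```python
import random

def linking_people(connections_a, connections_b):
    """People connected to both organizations under suspicion."""
    return set(connections_a) & set(connections_b)

def review_action(connections_a, connections_b, k_threshold=20, noise_scale=5):
    """Warn when only a handful of people bridge the two targets; report only a noisy count."""
    bridge = linking_people(connections_a, connections_b)
    noisy_count = max(0, len(bridge) + random.randint(-noise_scale, noise_scale))
    if len(bridge) < k_threshold:
        return f"warn: only ~{noisy_count} people link the targets; find another route"
    return f"proceed: ~{noisy_count} people link the targets; individual exposure risk is low"

clinic_patients = {"alice", "bob", "carol", "dave"}
aunt_clinic_staff = {"carol", "erin", "frank"}
print(review_action(clinic_patients, aunt_clinic_staff))  # warns: a single person bridges the clinics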

Potentially useful, but it suffers from a major flaw: the government.

Of course, if an investigation focused on suspected terrorism instead of fraud, the law may allow the government to risk compromising privacy in the interest of public safety.

Terrorism investigations trump almost everything else, including privacy protections supposedly guaranteed by our Constitution. Courts have routinely deferred to the government when it sacrifices citizens’ privacy in the name of security.

It’s highly unlikely investigative or intelligence agencies have much of an interest in protecting the privacy of non-targeted citizens, even in non-terrorist-related surveillance — not if it means using alternate (read: “less effective”) investigative methods or techniques. It has been demonstrated time and time again that law enforcement is more interested in the most direct route to what it seeks, no matter how much collateral damage is generated.

The system has no meaningful deterrents built into it. Violations are addressed after the fact, utilizing a remedy process that can be prohibitively expensive for those whose rights have been violated. On top of that, multiple layers of immunity shield government employees from the consequences of their actions and, in some cases, completely thwart those seeking redress for their grievances.

The algorithm may prove useful in other areas — perhaps in internal investigations performed by private, non-state parties — but our government is generally uninterested in protecting the rights it has granted to Americans. Too many law enforcement pursuits (fraud, drugs, terrorism, etc.) are considered more important than the rights (and lives) of those mistakenly caught in the machinery. If the government can’t be talked out of firing flashbangs through windows or predicating drug raids on random plant matter found in someone’s trash can, then it’s not going to reroute investigations just because a piece of software says a few people’s most private information might be exposed.

Filed Under: algorithm, michael kearns, non-targets, privacy, surveillance, targets, warrants

DailyDirt: Skynet Is Just A Little Behind Schedule…

from the urls-we-dig-up dept

Artificial intelligence projects are making significant progress (even though humans seem to keep moving the goalposts for what qualifies as AI). We haven’t created any self-conscious computers yet, but some chips and software are more closely mimicking how the human brain works. There still isn’t much agreement on how to measure intelligence, but if researchers keep working on different approaches to creating thinking machines, maybe we’ll learn more both about ourselves and about how to make computers learn and interact like people.

If you’d like to read more awesome and interesting stuff, check out this unrelated (but not entirely random!) Techdirt post via StumbleUpon.

Filed Under: ai, algorithm, artificial intelligence, biomimicry, brain, chatbot, eliza, ellie, levan, neural networks, neuromorphic chips, neuron, synapse
Companies: ibm