election misinformation – Techdirt
Possible Reasons Why YouTube Has Given Up Trying To Police 2020 Election Misinfo
from the maybe-not-the-end-of-the-world dept
Judging by the number of very angry press releases that landed in my inbox this past Friday, you’d think that YouTube had decided to personally burn down democracy. You see, that day the company announced an update to its approach to moderating election misinformation, effectively saying that it would no longer try to police most such misinformation regarding the legitimacy of the 2020 election:
We first instituted a provision of our elections misinformation policy focused on the integrity of past US Presidential elections in December 2020, once the states’ safe harbor date for certification had passed. Two years, tens of thousands of video removals, and one election cycle later, we recognized it was time to reevaluate the effects of this policy in today’s changed landscape. In the current environment, we find that while removing this content does curb some misinformation, it could also have the unintended effect of curtailing political speech without meaningfully reducing the risk of violence or other real-world harm. With that in mind, and with 2024 campaigns well underway, we will stop removing content that advances false claims that widespread fraud, errors, or glitches occurred in the 2020 and other past US Presidential elections. This goes into effect today, Friday, June 2. As with any update to our policies, we carefully deliberated this change.
The company insists that its overall election misinfo policies remain in place, including those targeting more direct forms of misinformation, such as trying to trick people into not voting:
All of our election misinformation policies remain in place, including those that disallow content aiming to mislead voters about the time, place, means, or eligibility requirements for voting; false claims that could materially discourage voting, including those disputing the validity of voting by mail; and content that encourages others to interfere with democratic processes.
The company seems to be trying to walk a fine line here, though it’s unclear whether it will work. But in talking this over with a few people, I came up with a few reasons why YouTube may have gone down this path, and it seemed worth discussing those possibilities:
- Realizing the moderation had gone too far. Basically, a version of what the company was saying publicly: it realized that trying to enforce a ban on 2020 election misinfo was, in fact, catching too much legitimate debate. While many are dismissing this, it seems like a very real possibility. Remember, content moderation at scale is impossible to do well, and it frequently involves mistakes. And mistakes seem even more likely with video, where legitimate political discourse can be mistaken for disinformation and removed. This could include things like legitimate discussions of the problems with electronic voting machines, or questions about building more resilient election systems, which could be accidentally flagged as disinfo.
- Realizing that removing false claims wasn’t making a difference. This is something of a corollary to the first item, and is hinted at in the statement above. Unfortunately, this remains a very under-studied area of content moderation (there are some studies, but much more research is needed): how effective bans and removals actually are at stopping the spread of malicious disinformation. As we’ve discussed in a somewhat different context, it’s really unclear that online disinformation is actually as powerful as some make it out to be. And if removing that information is not having much of an impact, then it may not be worth the overall effort.
- The world has moved on. To me, this seems like the most likely actual reason. Most folks in the US have basically decided to believe what they believe: either that (as all of the actual evidence shows) the 2020 election was perfectly fair and Joe Biden was the rightful winner, or that (as no actual evidence supports) the whole thing was “rigged” and Trump should have won. No one’s changing their mind at this point, and no YouTube video is going to convince people one way or the other. And, at this point, this particular issue is so far in the rearview mirror that the cost of continuing to monitor for this bit of misinfo just isn’t worth it for the lack of any benefit or movement in people’s beliefs.
- YouTube is worried about a Republican government in 2025. This is the cynical take. Since 2020 election denialism is now a key plank of the GOP platform, the company may be deciding to “play nice” with the disinformation peddling part of the GOP (which has moved from the fringe to the mainstream) and has decided that this is a more defensible position for inevitable hearings/bad legislation/etc.
In the end, it’s likely to be some combination of all four of those, and even the people within YouTube may not agree on which one is the “real” reason for doing this.
But it does strike me that the out-and-out freakout among some, claiming that this proves the world is ending, may not be warranted. I’m all for companies deciding they don’t want to host certain content because they don’t want to be associated with it, but we’re still learning whether or not bans are the most effective tool in dealing with blatant misinformation and disinformation, and it’s quite possible that leaving certain claims alone is actually a reasonable policy in some cases.
It would be nice if YouTube actually shared some of the underlying data on this, rather than just asking people to trust them, but I guess that’s really too much to ask these days.
Filed Under: 2020 election, content moderation, election denialism, election misinformation, masnick's impossibility theorem, misinformation
Companies: youtube
Thankfully, Jay Inslee's Unconstitutional Bill To Criminalize Political Speech Dies In The Washington Senate
from the don't-criminalize-free-speech dept
Over the last few years, it’s been depressing to see politicians from both major political parties attacking free speech. As we noted last month, Washington state governor Jay Inslee started pushing a bill that would criminalize political speech. He kept insisting that it was okay under the 1st Amendment because he got a heavily biased constitutional lawyer, Larry Tribe, to basically shrug and say “maybe it could be constitutional?” But the bill was clearly problematic — and would lead to nonstop nonsense lawsuits against political candidates.
Thankfully, cooler heads have prevailed in the Washington Senate and the bill has died. The bill’s main sponsor is still insisting that it would survive 1st Amendment scrutiny, but also recognized that it just didn’t have enough political support:
State Sen. David Frockt (D), who sponsored the bill, said, “We have to respect that the bill in its current form did not have enough support to advance despite the care we took in its drafting through our consultation with leading First Amendment scholars.”
Inslee, for his part, still insists something must be done:
After the bill was defeated on Tuesday, Inslee said in a statement, “We all still have a responsibility to act against this Big Lie … we must continue to explore ways to fight the dangerous deceptions politicians are still promoting about our elections.”
And, look, I don’t disagree that the Big Lie about the 2020 election is a problem. But you don’t solve problems by censoring 1st Amendment protected speech. That never ends well. At all.
Filed Under: 1st amendment, election misinformation, free speech, jay inslee, lies, misinformation, political speech, washington
Brazilian President Bans Social Media Companies From Removing Disinformation & Abuse
from the well-that-will-work-out-just-great dept
Ah, great. Just after Australia made it clear that media organizations are liable for comments on social media (demonstrating one aspect of a world without intermediary liability protections), Brazil’s President Jair Bolsonaro has announced new social media rules that effectively force social media sites to keep all content online (demonstrating the flipside of a world without intermediary liability protections). The two most important things that Section 230 does — limiting liability for 3rd party intermediaries and freeing websites of liability for moderation choices — are each going away completely in two separate countries in the same week.
To be clear, the rules in Brazil can only stay on the books for 120 days — since they’re a “provisional measure” from the President — and if they’re not enacted into law by the Brazilian Congress by then, they’ll expire (and there’s at least some suggestion that the Brazilian Congress has no interest). But, still, these rules are dangerous.
Under the new policy, tech companies can remove posts only if they involve certain topics outlined in the measure, such as nudity, drugs and violence, or if they encourage crime or violate copyrights; to take down others, they must get a court order. That suggests that, in Brazil, tech companies could easily remove a nude photo, but not lies about the coronavirus. The pandemic has been a major topic of disinformation under Mr. Bolsonaro, with Facebook, Twitter and YouTube all having removed videos from him that pushed unproven drugs as coronavirus cures.
Imagine needing to get a court order to remove content. That’s ridiculous. Bolsonaro’s government put out a Twitter thread (in English) claiming that this is the country “taking the global lead in defending free speech” when the reality is exactly the opposite. Compelled speech, which includes the compelled hosting of speech, is the exact opposite of “defending free speech.” Here’s what the government is saying about this dangerous proposal:
Brazilian government is taking the global lead in defending free speech on social networks and protecting the right of citizens to freedom of thought and expression.
As noted, compelled speech is not defending free speech. And freedom of expression does not mean you get to force someone else to let you speak on their property.
The Provisional Measure issued today by the Brazilian government forbids the removal of content that may result in any kind of “censorship of political, ideological, scientific, artistic or religious order”.
Hilariously, though, it does allow for the removal of copyright infringing material, which already undermines the idea that it does not allow for censorship of “artistic” works.
It is also guaranteed that the social network will have to justify the removal of content under the terms of Brazilian laws. Without a just cause, the social network will have to restore the removed content.
This may be the most pernicious part here, because (as we’ve discussed before), the most bad faith, abusive trolls are the ones who are most likely to demand justification — and then to act shocked and pretend to be offended at claims that they were acting as bad faith, abusive trolls. This provision just allows trolls to make an even bigger nuisance of themselves. Which seems to be exactly what Bolsonaro wants, knowing he has more bad faith, abusive trolls on his side.
This measure forbids selective deplatforming by requiring that social media provide a just cause for the suspension of services and restore access should the suspension be considered unlawful. This measure is based on precedent in Brazilian law and freedom of expression.
Compelled speech is never freedom of expression. As for “precedent,” Brazil once led the way in passing smart internet rules. This makes a mockery of them.
Attention: the law does not prevent the social network from removing content that violates Brazilian law, such as child pornography. The Brazilian government stands for both freedom and democracy!
That’s a pretty damn Orwellian statement right there.
This measure has been emitted as a result of ongoing concern with actions taken by social media groups that have been perceived as harmful to healthy debate and freedom of expression in Brazil, and hope it will serve to help restore online political dialogue in the country.
Basically, we’re doing this because our President is not doing well in the polls leading up to a national election, and has been taking a page from Trump’s playbook and talking about how he can only lose by election fraud — and we need to make sure that social media companies don’t actually stop our bullshit propaganda from spreading widely.
The real question now is how will the social media companies respond. According to the NY Times:
Facebook said that the “measure significantly hinders our ability to limit abuse on our platforms” and that the company agrees “with legal experts and specialists who view the measure as a violation of constitutional rights.”
Twitter said the policy transforms existing internet law in Brazil, “as well as undermines the values and consensus it was built upon.”
YouTube said it was still analyzing the law before making any changes. “We’ll continue to make clear the importance of our policies, and the risks for our users and creators if we can’t enforce them,” the company said.
There’s also this:
In July, YouTube removed 15 of Mr. Bolsonaro’s videos for spreading misinformation about the coronavirus. And late last month, YouTube said that, under orders from a Brazilian court, it had stopped payments to 14 pro-Bolsonaro channels that had spread false information about next year’s presidential elections.
Brazil’s Supreme Court has also been investigating disinformation operations in the country. Mr. Bolsonaro became a target of those investigations last month, and several of his allies have been questioned or detained.
Of course, it seems notable that it was just a few years ago that Brazil arrested a Facebook exec because the company refused to hand over information on WhatsApp users, and has, at times, blocked the entirety of WhatsApp in the country. So, it will be interesting to see what happens if the companies refuse to follow these ridiculous rules.
But, between this and what happened in Australia, we’ll now get to see two examples of the dangerous situation that arises when you don’t have strong intermediary liability protections, including the ability to moderate websites as you see fit.
Filed Under: brazil, compelled speech, content moderation, election misinformation, free speech, jair bolsonaro, marco civil
Companies: facebook, twitter, youtube