social media – Techdirt
Ctrl-Alt-Speech: Move Fast And Mistake Things
from the ctrl-alt-speech dept
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- Jack Cable calls out Cluely over bogus DMCA (X)
- The cofounder of the viral AI ‘cheating’ startup Cluely says he only hires people for 2 jobs (Business Insider)
- Missouri AG: Any AI That Doesn’t Praise Donald Trump Might Be “Consumer Fraud” (No, Really) (Techdirt)
- Instagram wrongly accuses some users of breaching child sex abuse rules (BBC)
- Elon Musk’s Grok AI chatbot praises Adolf Hitler on X (FT)
- Grok Becomes ‘MechaHitler,’ Twitter Becomes X: How Centralized Tech Is Prone To Fascist Manipulation (Techdirt)
- See the leaked teen social media ban tech trial report that has experts worried (Crikey)
- New anti-fraud system is labelling hospital texts and other legitimate messages as ‘likely scam’ (Irish Independent)
- Why Ireland’s New “Scam Likely” Labels Might Actually Make SMS Fraud Worse for Banks and the Public (Medium)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.
Filed Under: age verification, ai, artificial intelligence, content moderation, social media
Companies: meta, microsoft, openai, twitter, x
Virginia Enacts Stupid, Completely Unworkable ‘Social Media Time Limit’ Law
from the mike-wallace-must-be-rolling-over-his-eyes-in-his-grave dept
Lawmakers seem to think they’re capable of solving every perceivable social media problem via legislation. Sometimes the intent is pure but the execution is lacking. In many more cases — especially recently — the intent is to harm social media companies with legislation, all while pretending it’s about protecting “free speech” or the “children” or “stopping China” or whatever.
While this country is lacking in privacy protection laws, that’s probably not an entirely bad thing. Look anywhere stringent privacy protections have been put in place and you’ll see a ton of collateral damage.
There’s less subtlety here in the US, thanks to our exceptionalism — something that allows lawmakers to target services they don’t like while pretending it’s all about something else.
As usual, it’s being pushed by people who just want to punish social media services and lawmakers who not only don’t understand the subject matter, but also strongly feel that their ignorance strengthens their arguments.
Somehow, a bill forcing social media services (if they fit the very vague description) to limit non-adults (how?) to one hour a day of access managed to make its way to the governor’s desk. And Governor Glenn Youngkin, despite his lack of relevant expertise in such matters, signed it.
Here are the cold, hard facts, as reported by WBOC:
New Virginia legislation requiring social media platforms to limit screen time for minors took effect Tuesday.
The law, signed by Gov. Glenn Youngkin in May, mandates that social media companies set a default limit of one hour per day for users under 16 years old.
First off, how? Second, also how?
The law demands things that have never been demanded of social media services. First, social media platforms must implement some sort of timer. Whether that limit counts only time the app is actively in use, or any time the service is open at all (even in a tab or app idling in the background), is never discussed.
Nor are the difficulties of ascertaining the actual age of users in order to set this one-hour timer. Does the Virginia government want social media services to collect even more personal information about underage users? Because that seems like the sort of thing lawmakers shouldn’t encourage, even inadvertently.
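To make the timer ambiguity concrete, here’s a minimal sketch of what the charitable reading might look like, where only foregrounded time counts against the cap. Everything below (the class, the method names, the reading of the law itself) is a hypothetical illustration, since the statute specifies none of it:

```python
from dataclasses import dataclass
import time

DAILY_LIMIT_SECONDS = 60 * 60  # the law's default one-hour cap

@dataclass
class UsageMeter:
    # Per-user daily meter under the charitable reading: only time the
    # app is actually in the foreground counts against the cap.
    active_seconds: float = 0.0
    _foregrounded_at: float | None = None  # monotonic mark of the current session

    def on_foreground(self) -> None:
        self._foregrounded_at = time.monotonic()

    def on_background(self) -> None:
        if self._foregrounded_at is not None:
            self.active_seconds += time.monotonic() - self._foregrounded_at
            self._foregrounded_at = None

    def over_limit(self) -> bool:
        # Under the other reading (any time the service is merely open),
        # a tab idling overnight would blow through the cap on its own.
        return self.active_seconds >= DAILY_LIMIT_SECONDS
```

Same law, wildly different compliance obligations, and nothing in the text to say which one Virginia meant.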
Then there’s the definition of social media services in the law itself, which means a whole lot of services used by teens either won’t be affected or will be affected inadvertently to the detriment of teens who aren’t just spending hours a day doomscrolling their way into performative speeches given by representatives they’re not even old enough to vote for (or against!).
“Social media platform” means a public or semipublic Internet-based service or application that has users in the Commonwealth and that meets the following criteria:
1. Connects users in order to allow users to interact socially with each other within such service or application. No service or application that exclusively provides email or direct messaging services shall be considered to meet this criterion on the basis of that function alone; and
2. Allows users to do all of the following:
a. Construct a public or semipublic profile for purposes of signing into and using such service or application;
b. Populate a public list of other users with whom such user shares a social connection within such service or application; and
c. Create or post content viewable by other users, including content on message boards, in chat rooms, or through a landing page or main feed that presents the user with content generated by other users. No service or application that consists primarily of news, sports, entertainment, ecommerce, or content preselected by the provider and not generated by users, and for which any chat, comments, or interactive functionality is incidental to, directly related to, or dependent on the provision of such content, or that is for interactive gaming, shall be considered to meet this criterion on the basis of that function alone.
Given this definition, the usual suspects (Facebook, ExTwitter, etc.) are, well, the usual suspects. But minors can access DraftKings without a time limit: even if DraftKings allows minors to use the service to make bets they’re not legally allowed to make, its “interactive functionality is incidental to” the provision of that content. And the carve-out for interactive gaming seems especially weird, since that’s probably where the worst people a teen could ever meet reside.
On the flip side, services utilized by schools contain plenty of social media add-ons and interactivity which isn’t entirely “incidental” by design, like Teams meetings or Google Workspace hangouts where students work together on projects and interact socially. And that last part — the necessary interaction — might be enough to trigger a one-hour time limit on everyone involved.
Being denied access to school-related projects because of a badly-written, entirely stupid law obviously isn’t the intent of the law. But the intent doesn’t matter much when it’s doing real-world damage to online spaces shared by minors.
On top of that, there are the positive aspects of interaction, which allow people who feel alienated in their own immediate social groups to find support elsewhere. Should they only be allowed one hour of positive interaction per day just because a bunch of people with lawmaking power mistakenly believe too much internet is always a bad thing?
Then there’s this part of the law, which legislators apparently felt solved the whole “who is a minor” thing:
For purposes of this section, any controller or processor that operates a social media platform shall treat a user as a minor if the user’s device communicates or signals that the user is or shall be treated as a minor, including through a browser plug-in or privacy setting, device setting, or other mechanism.
The fuck does this even mean? If I spend a lot of my time playing games on my phone and searching for HBO-buried Looney Tunes, does that “signal” that I’m a minor? This is the least likely way to find minors using social media services. Anyone “signaling” that they’re a minor is either a cop or the current host of “To Catch a Predator.” Minors already know limits are placed on their interactions, thanks to efforts by most social media companies to comply with federal law. Anyone broadcasting their underage bona fides on main probably works for Sheriff Grady Judd.
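For what it’s worth, here’s roughly what honoring such a “signal” might look like on the server side. To be clear, no standardized signal exists today; the header name below is invented purely for illustration, which is rather the point:

```python
# Hypothetical handling of the law's "device signal." No standardized
# signal exists; the "X-Treat-As-Minor" header is invented here purely
# for illustration.
def is_signaled_minor(headers: dict[str, str]) -> bool:
    # Treat the user as a minor only if the device affirmatively says so.
    return headers.get("X-Treat-As-Minor", "").strip().lower() in {"1", "true", "yes"}

print(is_signaled_minor({"X-Treat-As-Minor": "true"}))  # True
print(is_signaled_minor({}))  # False: a minor who sends nothing goes undetected
```

The entire scheme depends on the device volunteering the flag, and any minor who simply doesn’t send it is invisible to it.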
In theory, the law being amended allows the state attorney general to attempt to collect $7,500 per violation from social media companies the state thinks have violated this extremely stupid law. In reality, though, it’s nothing more than this: something for people like this lawmaker to point to when seeking re-election.
“We need to start thinking through kind of what are some proper regulations and guardrails that ensure that they’re using it, but it’s not tuning out these other things. That it’s not tuning out their academics, it’s not tuning out time with their friends and family,” said Sen. Schuyler VanValkenburg, who introduced SB854.
He’s also a teacher and said he’s seen those negative impacts on some students firsthand.
“They spent 45 minutes in study hall just watching videos on TikTok, and in the meantime, they haven’t talked to anybody, they haven’t done any work,” he said.
Sen. VanValkenburg is eight years younger than I am and sounds 50 years older. Just because you’re not on the same wavelength as the young people doesn’t mean they’re wrong. Plenty of social interaction now comes via social media services, as do other things like discussions with family members, assistance with school work, and healthy interactions with people teens actually know in person. This is nothing more than a frustrated teacher trying to legislate kids into putting their phones down because he thinks that’s the way things should be.
Filed Under: 1st amendment, for the children, free speech, glenn youngkin, social media, stupidity, virginia
State Dept. Tells Student Visa Applicants To Set Their Social Media Profiles To ‘Public’ If They Want To Come To The US
from the gotta-stop-all-the-thoughtcrime dept
Way back in the day of EARLIER THIS YEAR, people could expect to be subjected to warrantless, invasive device searches only at US borders and international airports. Visa applicants, however, just needed to fill out some paperwork and wait for permission to head abroad to find work and/or continue their education.
Now, you don’t even have to enter the United States to be subjected to rigorous vetting that opens every digital drawer and roots around in your unmentionables/mentions. And pay no mind to Lady Liberty. She’s come a long way, baby.
“Give me your tired, your poor,
Your huddled masses yearning to breathe free,
The wretched refuse of your teeming shore.
Send these, the homeless, tempest-tost to me,
I lift my lamp beside the golden door!”
A U.S. visa is a privilege, not a right.
[…]
Under new guidance, we will conduct a comprehensive and thorough vetting, including online presence, of all student and exchange visitor applicants in the F, M, and J nonimmigrant classifications.
To facilitate this vetting, all applicants for F, M, and J nonimmigrant visas will be instructed to adjust the privacy settings on all of their social media profiles to “public.”
That’s from Marco Rubio’s State Department, an announcement that makes it clear Trump’s anti-migrant actions aren’t just about ejecting foreigners of the browner-skinned persuasion, but about preventing foreigners from setting foot in the US for any reason at all.
F, M, and J visas are all related to seeking higher education and/or learning trade skills. There’s no free riding here. These aren’t people sneaking across the borders and lying low until they secure permanent residence. These are people who are here for a single purpose and willing to pay for the (actual) privilege of accessing educational and trade services.
But this administration’s inherent xenophobia means even people seeking nothing more than temporary stays in the United States must keep their online presence free of any expressed thoughts that aren’t fiercely patriotic toward a country they’re only seeking to visit.
The State Department is now in the business of rooting out wrongthink, something it made clear a few months ago:
The cable… states that applicants can be denied a visa if their behavior or actions show they bear “a hostile attitude toward U.S. citizens or U.S. culture (including government, institutions, or founding principles).”
That’s why visa applicants are now “instructed” to set their social media profiles to “public.” “Instructed” is a heavy word. The federal government isn’t asking. This is a mandate. If you want to come to the United States, you have to subject yourself to a thorough vetting of your social media profiles by State Department staff, who will then subjectively decide whether or not you’re pro-America enough to be granted a visa.
It’s always been true that visas are a privilege and not a right. But it’s only since Trump’s been in office that the State Department has decided to be a hard-ass about it. Generally speaking, if someone meets the requirements, they get a visa. While some vetting does happen, it’s usually been done to prevent actual criminals or terrorists from entering the country. Now it’s just one more way the federal government can keep foreigners out, treating anything less than complete support for Trump as a reason to reject a visa application.
The United States was once proud of its melting pot status. Now, we’ve got more in common with the Confederacy than the Union that defeated it two decades before the Statue of Liberty was erected as a beacon of hope directed at the entire world.
Filed Under: censorship, free speech, immigration, marco rubio, social media, state department, trump administration, vetting
Community And Choice Are Not Bubbles
from the communities-aren't-bubbles dept
Disclosure: I am on the board of Bluesky and am inherently biased. Adjust your skepticism of what I write on this topic accordingly.
It seems a bit odd: when something is supposedly dying or irrelevant, journalists can’t stop writing about it. Consider the curious case of Bluesky, which, according to various pundits, is a failed “liberal echo chamber” that nobody uses anymore. And yet the Washington Post’s Megan McArdle argues that “The Bluesky bubble hurts liberals and their causes,” Josh Barro insists “Bluesky Isn’t a Bubble. It’s a Containment Dome,” and multiple outlets have breathlessly reported on Mark Cuban’s complaints about his personal Bluesky experience as if they were definitive proof of platform failure. Not to be left out, Slate published not one, but two separate articles complaining about Bluesky.
For a supposedly dying bubble that no one wants to use, Bluesky sure generates a lot of traffic-driving hot takes. Which suggests that maybe—just maybe—the entire premise is wrong.
The real story isn’t about Bluesky’s supposed failures—it’s about how these critiques fundamentally misunderstand what people want from social media and who gets to decide what constitutes healthy discourse.
The “echo chamber” myth
Now, you might think that if everyone is complaining about “echo chambers” and “bubbles,” there must be solid research showing that social media creates them. You would be wrong. The “echo chamber” critique of social media has been thoroughly debunked by researchers, who have consistently found the opposite to be true: people not on social media live in more sheltered information environments than those who are. Professor Michael Bang Petersen gave an interview about his research on the topic where he noted the following:
One way to think about social media in this particular regard is to turn all of our notions about social media upside down. And here I’m thinking about the notion of ‘echo chambers.’ So we’ve been talking a lot about echo chambers and how social media creates echo chambers. But, in reality, the biggest echo chamber that we all live in is the one that we live in in our everyday lives.
I’m a university professor. I’m not really exposed to any person who has a radically different world view or radically different life from me in my everyday life. But when I’m online, I can see all sorts of opinions that I may disagree with. And that might trigger me if I’m a hostile person and encourage me to reach out to tell these people that I think they are wrong.
But that’s because social media essentially breaks down the echo chambers. I can see the views of other people — what they are saying behind my back. That’s where a lot of the felt hostility of social media comes from. Not because they make us behave differently, but because they are exposing us to a lot of things that we’re not exposed in our everyday lives.
Power, not purity
So the “bubble” critique is empirically wrong. But even if it were right, it misses the more important point: this isn’t really about ideological diversity. It’s about who controls the microphone. When critics argue that people should have stayed on ExTwitter to “engage across difference,” they’re ignoring a fundamental reality: Elon Musk controls the algorithm and actively throttles content he dislikes. The NY Times documented how Musk minimizes the reach of those expressing views he disagrees with.
So when McArdle suggests that “liberals” made some mistake by leaving ExTwitter, she’s essentially arguing that people should willingly subject themselves to algorithmic suppression by someone who has explicitly welcomed extremist content back onto the platform. This isn’t about “engaging across difference”—it’s about accepting a rigged game where one side controls the megaphone.
Community, not performance
The “bubble” framing also fundamentally misunderstands what most people want from social media. When you go to a knitting circle, are you disappointed that most people there want to talk about knitting? When you join a book club, do you complain that everyone seems interested in books? Pundits and politicians may want to broadcast to the largest possible audience, but most people are looking for community, not maximum reach.
Most people aren’t looking for a debating arena. They want to talk with people they like about topics they care about—whether that’s knitting, local politics, or professional interests.
This becomes impossible when the platform owner has hung out a shingle for Nazis, and your attempts to discuss your hobbies get drowned out by fascist propaganda algorithmically pushed into your timeline. That’s not “diverse discourse”—it’s just a bad user experience.
Communities have social norms, which can evolve over time
Any community—online or off—develops social norms. These cultural expectations show up as “we don’t do that here” or “we encourage this behavior” signals. Critics complaining about Bluesky’s norms are often just upset that those norms don’t align with their preferences. It’s a bit like complaining that different neighborhoods have different vibes.
Yes, some users can be overly aggressive in enforcing norms, and some reactions can be trigger-happy (I’ve certainly been on the receiving end of some angry responses). But this is true of every community, online and off. If you’ve ever accidentally worn the wrong team’s jersey to a sports bar, you understand how community norms work. The difference is that Bluesky users have actual tools to address these issues themselves, rather than begging platform owners to fix things for them.
Many of the tensions critics point to aren’t unique to Bluesky—they reflect how people are processing a world where fascism is rising in America and democratic institutions are under attack. When people are dealing with existential threats, online interactions can get heated. That’s not a platform problem; it’s a human problem.
But, also, part of the benefit of a system like Bluesky is that it puts users in much greater control over their own experience, meaning they can actually take charge themselves and craft better communities around them, rather than demanding that “the company” fix things. I’m thinking of things like Blacksky, which Rudy Fraser is building. He took the initiative to build community features (custom feeds, custom labelers, etc.) catered to an audience of Black users who want tools for greater self-governance within the AT Protocol ecosystem.
User agency changes everything
This is the fundamental point that critics miss: Bluesky isn’t just another Twitter clone. It’s a demonstration of what happens when you give users actual control over their social media experience instead of forcing them to rely on the whims of billionaires.
For the past decade, social media users have been like restaurant diners who can only eat at one restaurant, where they can’t see the menu in advance, the chef changes the recipes based on his mood, and the only thing diners can do if they don’t like the menu is yell loudly and hope the chef makes something different. Bluesky is more like a food court where you can choose from multiple vendors, see what each one offers, and even set up your own stand if you want. Some people still yell loudly, but only out of the learned habit that yelling is the only thing you can do.
Most users don’t actually need to know about this, and they don’t need to buy into the ideology of decentralization and user empowerment. It’s really all about giving users more control over their social media experience, whether directly on a single platform like Bluesky (with things like custom feeds, custom labelers, and self-hosted data servers) or through the rapidly growing set of third-party services and apps, some of which have nothing to do with Bluesky.
This represents a fundamental shift from the past decade of social media, where users had to conform to whatever made billionaires happy (posting to the algorithm, accepting whatever content moderation decisions were made) to a system where users can customize their experience to work for them.
The “Twitter competitor” framing is the Trojan Horse. Bluesky demonstrates just one type of service that can be built on an open social protocol—but the real revolution is in returning agency to users.
That kind of user agency and control is part of what also makes some of the other complaints silly. There are better and better tools for taking control over your own experience on Bluesky and focusing on finding your community. For example, I recently saw that there are labelers people use to block out talk of US politics (often used by people outside the US who don’t want to see it).
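For those unfamiliar with labelers, here’s a conceptual sketch of how label-based filtering works. This is not real AT Protocol SDK code; the “us-politics” label and the data structures are illustrative only:

```python
from dataclasses import dataclass

# Conceptual sketch of labeler-based filtering, not real AT Protocol SDK
# code. The "us-politics" label and these structures are illustrative only.
@dataclass(frozen=True)
class Post:
    text: str
    labels: frozenset[str]  # labels applied by labelers the user subscribes to

def visible_posts(posts: list[Post], hidden_labels: set[str]) -> list[Post]:
    # Filtering happens client-side: the user, not the platform, decides
    # which labels to hide.
    return [p for p in posts if not (p.labels & hidden_labels)]

timeline = [
    Post("New knitting pattern just dropped!", frozenset()),
    Post("Another day of chaos in Washington...", frozenset({"us-politics"})),
]
print([p.text for p in visible_posts(timeline, {"us-politics"})])
# -> ['New knitting pattern just dropped!']
```

The key design point: the labels travel with the posts, but the decision about what to do with them belongs to the user.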
We need to unlearn the lessons many people have internalized over the past decade and a half. You shouldn’t be at the whims of any billionaire. You should chart your own course, having it set up to work for you, not the billionaire’s best interests. Critics demanding that people return to X are essentially arguing that users should give up this agency and go back to being at the mercy of Elon Musk’s mood swings and algorithmic manipulation.
That kind of user agency and control makes Elon Musk’s version of “free speech” look like what it really is: a billionaire’s right to control the conversation.
The premise is wrong
Finally, the entire premise is wrong. Anyone who actually spends time using Bluesky knows that it’s vibrant and active with a wide variety of discussion topics (and plenty of disagreements and debates, contrary to the whole “bubble” concept). It’s also well aware of what’s happening elsewhere, with plenty of discussion of the viewpoints circulating on the wider internet.
The idea that cultural discussions are somehow missing is ridiculous.
The data totally undermines the “dying platform no one uses” narrative: multiple media properties have noted that they get way more traffic from Bluesky than sites like Threads and ExTwitter (both of which throttle posts that include links). And a recent Pew study found that so-called “news influencers” are increasingly on Bluesky.
So we have a platform that publishers say drives more engaged traffic than the “mainstream” alternatives, where news influencers are increasingly active, and which generates enough interest that major media outlets regularly write trend pieces about it. This is not what “failure” looks like.
So basically none of the premises behind those “woe is Bluesky” articles make any sense at all.
About the only context they make sense in is as arguments from people who know they should give up on the sewage drain that ExTwitter has become, but refuse to do so. Rather than deal with their own failings, they are blaming those who have made the leap to a better place and a better system.
So, sure, some people have complaints about Bluesky. But people have complaints about any community they’re in. And Bluesky lets people have way more control over those norms and experiences than any other platform and doesn’t support fascist billionaires at the same time. And, as multiple people have already realized, embracing the Bluesky community already works much better than the billionaire-owned platforms do.
Filed Under: bubbles, community, culture, echo chambers, social media
Companies: bluesky, twitter, x
Why Making Social Media Companies Liable For User Content Doesn’t Do What Many People Think It Will
from the how-stuff-works dept
Brazil’s Supreme Court appears close to ruling that social media companies should be liable for content hosted on their platforms—a move that would represent a significant departure from the country’s pioneering Marco Civil internet law. While this approach has obvious appeal to people frustrated with platform failures, it’s likely to backfire in ways that make the underlying problems worse, not better.
The core issue is that most people fundamentally misunderstand both how content moderation works and what drives platform incentives. There’s a persistent myth that companies could achieve near-perfect moderation if they just “tried harder” or faced sufficient legal consequences. This ignores the mathematical reality of what happens when you attempt to moderate billions of pieces of content daily, and it misunderstands how liability actually changes corporate behavior.
Part of the confusion, I think, stems from people’s failure to understand the impossibility of doing content moderation well at scale. There is a very wrong assumption that social media platforms could do perfect (or very good) content moderation if they just tried harder or had more incentive to do better. Without denying that some entities (*cough* ExTwitter *cough*) have made it clear they don’t care at all, most others do try to get this right, and discover over and over again how impossible that is.
Yes, we can all point to examples of platform failures that are depressing and seem obvious that things should have been done differently, but the failures are not there because “the laws don’t require it.” The failures are because it’s impossible to do this well at scale. Some people will always disagree with how a decision comes out, and other times there are no “right” answers. Also, sometimes, there’s just too much going on at once, and no legal regime in the world can possibly fix that.
Given all of that, what we really want are better overall incentives for the companies to do better. Some people (again, falsely) seem to think the only incentives are regulatory. But that’s not true. Incentives come in all sorts of shapes and sizes—and much more powerful than regulations are things like the users themselves, along with advertisers and other business partners.
Importantly, content moderation is also a constantly moving and evolving issue. People who are trying to game the system are constantly adjusting. New kinds of problems arise out of nowhere. If you’ve never done content moderation, you have no idea how many “edge cases” there are. Most people—incorrectly—assume that most decisions are easy calls and you may occasionally come across a tougher one.
But there are constant edge cases, unique scenarios, and unclear situations. Because of this, every service provider will make many, many mistakes every day. There’s no way around this. It’s partly the law of large numbers. It’s partly the fact that humans are fallible. It’s partly the fact that decisions need to be made quickly without full information. And a lot of it is that those making the decisions just don’t know what the “right” approach is.
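To put rough numbers on that law-of-large-numbers point, here’s a bit of back-of-the-envelope arithmetic (the volume and accuracy figures are assumptions for illustration, not data from any actual platform):

```python
# Illustrative arithmetic only: the volume and accuracy figures are
# assumptions, not numbers from any actual platform.
decisions_per_day = 1_000_000_000  # moderation calls made daily at scale
accuracy = 0.999                   # an optimistic 99.9% correct rate

mistakes_per_day = decisions_per_day * (1 - accuracy)
print(f"{mistakes_per_day:,.0f} wrong calls per day")  # 1,000,000 wrong calls per day
```

Even at an accuracy rate no real pipeline of humans and machines actually hits, that’s a million wrong calls every single day, each one a potential lawsuit under a liability regime.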
The way to get better is constant adjusting and experimenting. Moderation teams need to be adaptable. They need to be able to respond quickly. And they need the freedom to experiment with new approaches to deal with bad actors trying to abuse the system.
Putting legal liability on the platform makes all of that more difficult
Now, here’s where my concerns about the potential ruling in Brazil come in: if there is legal liability, it creates a scenario that is actually less likely to lead to good outcomes. First, it effectively requires companies to replace moderators with lawyers. If your company is now making decisions that come with significant legal liability, that likely requires a much higher level of expertise. Even worse, it’s creating a job that most people with law degrees are unlikely to want.
Every social media company has at least some lawyers who work with their trust & safety teams to review the really challenging cases, but when legal liability could accrue for every decision, it becomes much, much worse.
More importantly, though, it makes it way more difficult for trust & safety teams to experiment and adapt. Once things include the potential of legal liability, then it becomes much more important for the companies to have some sort of plausible deniability—some way to express to a judge “look, we’re doing the same thing we always have, the same thing every company has always done” to cover themselves in court.
But that means that these trust & safety efforts get hardened into place, and teams are less able to adapt or to experiment with better ways to fight evolving threats. It’s a disaster for companies that want to do the right thing.
The next problem with such a regime is that it creates a real heckler’s veto-type regime. If anyone complains about anything, companies are quick to take it down, because the risk of ruinous liability just isn’t worth it. And we now have decades of evidence showing that increasing liability on platforms leads to massive overblocking of information. I recognize that some people feel this is acceptable collateral damage… right up until it impacts them.
This dynamic should sound familiar to anyone who’s studied internet censorship. It’s exactly how China’s Great Firewall originally operated—not through explicit rules about what was forbidden, but by telling service providers that the punishment would be severe if anything “bad” got through. The government created deliberate uncertainty about where the line was, knowing that companies would respond with massive overblocking to avoid potentially ruinous consequences. The result was far more comprehensive censorship than direct government mandates could have achieved.
Brazil’s proposed approach follows this same playbook, just with a different enforcement mechanism. Rather than government officials making vague threats, it would be civil liability creating the same incentive structure: when in doubt, take it down, because the cost of being wrong is too high.
People may be okay with that, but I would think that in a country with a history of dictatorships and censorship, they would like to be a bit more cautious before handing the government a similarly powerful tool of suppression.
It’s especially disappointing in Brazil, which a decade ago put together the Marco Civil, an internet civil rights law that was designed to protect user rights and civil liberties—including around intermediary liability. The Marco Civil remains an example of more thoughtful internet lawmaking (way better than we’ve seen almost anywhere else, including the US). So this latest move feels like backsliding.
Either way, the longer-term fear is that this would actually limit the ability of smaller, more competitive social media players to operate in Brazil, as it will be way too risky. The biggest players (Meta) aren’t likely to leave, but they have buildings full of lawyers who can fight these lawsuits (and often, likely, win). A study we conducted a few years back detailed how as countries ratcheted up their intermediary liability, the end result was, repeatedly, fewer online places to speak.
That doesn’t actually improve the social media experience at all. It just gives more of it to the biggest players with the worst track records. Sure, a few lawsuits may extract some cash from these companies for failing to be perfect, but it’s not like they can wave a magic wand and not let any “criminal” content exist. That’s not how any of this works.
Some responses to issues raised by critics
When I wrote about this on a brief Bluesky thread, I received hundreds of responses—many quite angry—that revealed some common misunderstandings about my position. I’ll take the blame for not expressing myself as clearly as I should have and I’m hoping the points above lay out the argument more clearly regarding how this could backfire in dangerous ways. But, since some of the points were repeated at me over and over again (sometimes with clever insults), I thought it would be good to address some of the arguments directly:
But social media is bad, so if this gets rid of all of it, that’s good. I get that many people hate social media (though, there was some irony in people sending those messages to me on social media). But, really what most people hate is what they see on social media. And as I keep explaining, the way we fix that is with more experimentation and more user agency—not handing everything over to Mark Zuckerberg and Elon Musk or the government.
Brazil doesn’t have a First Amendment, so shut up and stop with your colonialist attitude. I got this one repeatedly and it’s… weird? I never suggested Brazil had a First Amendment, nor that it should implement the equivalent. I simply pointed out the inevitable impact of increasing intermediary liability on speech. You can decide (as per the comment above) that you’re fine with this, but it has nothing to do with my feelings about the First Amendment. I wasn’t suggesting Brazil import American free speech laws either. I was simply pointing out what the consequences of this one change to the law might create.
Existing social media is REALLY BAD, so we need to do this. This is the classic “something must be done, this is something, we will do this” response. I’m not saying nothing must be done. I’m just saying this particular approach will have significant consequences that it would help people to think through.
It only applies to content after it’s been adjudicated as criminal. I got that one a few times from people. But, from my reading, that’s not true at all. That’s what the existing law was. These rulings would expand it greatly from what I can tell. Indeed, the article notes how this would change things from existing law:
The current legislation states social media companies can only be held responsible if they do not remove hazardous content after a court order.
[…]
Platforms need to be pro-active in regulating content, said Alvaro Palma de Jorge, a law professor at the Rio-based Getulio Vargas Foundation, a think tank and university.
“They need to adopt certain precautions that are not compatible with simply waiting for a judge to eventually issue a decision ordering the removal of that content,” Palma de Jorge said.
You’re an anarchocapitalist who believes that there should be no laws at all, so fuck off. This one actually got sent to me a bunch of times in various forms. I even got added to a block list of anarchocapitalists. Really not sure how to respond to that one other than saying “um, no, just look at anything I’ve written for the past two and a half decades.”
America is a fucking mess right now, so clearly what you are pushing for doesn’t work. This one was the weirdest of all. Some people sending variations on this pointed to multiple horrific examples of US officials trampling on Americans’ free speech, saying “see? this is what you support!” as if I support those things, rather than consistently fighting back against them. Part of the reason I’m suggesting this kind of liability can be problematic is because I want to stop other countries from heading down a path that gives governments the power to stifle speech like the US is doing now.
I get that many people are—reasonably!—frustrated about the terrible state of the world right now. And many people are equally frustrated by the state of internet discourse. I am too. But that doesn’t mean any solution will help. Many will make things much worse. And the solution Brazil is moving towards seems quite likely to make the situation worse there.
Filed Under: brazil, content moderation, free speech, impossibility, intermediary liability, marco civil, platform liability, social media
Techdirt Podcast Episode 420: The FTC’s Quixotic Social Media Inquiry
from the a-bad-idea-all-around dept
We’ve got a cross-post episode for you this week, courtesy of the Tech Policy Podcast by TechFreedom, hosted by Corbin Barthold. Both TechFreedom and The Copia Institute submitted comments on the FTC’s inquiry into social media censorship, so Corbin invited Mike and TechFreedom’s Santana Boulton for a discussion all about what’s going on. You can listen to the whole conversation here on this week’s episode.
You can also download this episode directly in MP3 format.
Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts or Spotify, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Filed Under: censorship, corbin barthold, ftc, podcast, santana boulton, social media
Meta Threatens To Pull Facebook And Instagram Out Of Nigeria Over $290 Million Fine Imposed For Violation Of Local Privacy Laws
from the behaving-badly dept
It’s hardly a secret that Meta is an unpleasant company. That’s reflected both in terms of what happens behind closed doors, and its actions in the market. Some of its attempts to bully nations or even large economic blocs are well documented. But its threats outside Western markets are just as reprehensible, though less well known. For example, the Rest of World site reports on a major confrontation between Meta and the authorities that is currently taking place in Nigeria:
Local authorities have fined Meta $290 million for regulatory breaches, prompting the social media giant to threaten pulling Facebook and Instagram from the country.
As with earlier EU fines imposed on the company, the sticking point is Meta’s refusal to comply with local privacy laws:
The [Federal Competition and Consumer Protection Commission (FCCPC)] said Meta committed multiple and repeated infringements of the country’s Nigerian rules, including “denying Nigerians the right to control their data, transferring and sharing Nigerian user data without authorization, discriminating against Nigerian users compared to users in other jurisdictions, and abusing their dominant market position by forcing unfair privacy policies.”
After remediation efforts failed, the FCCPC issued its final order in July 2024, imposing a $220 million fine along with penalties from other agencies that took the total amount to $290 million. Meta appealed the decision, but the appeal was rejected in April, prompting the company’s threat to withdraw its services from Nigeria.
The fine itself is small change for Meta, which had a net income of $62 billion on a turnover of $165 billion in 2024, and a market capitalization of $1.5 trillion. Meta’s current revenues in Nigeria are relatively small, but its market shares are high:
According to social media performance tracker Napoleoncat, Meta has a massive presence in the country, with Facebook alone reaching about 51.2 million users as of May 2024, more than a fifth of the population. Instagram had 12.6 million Nigerian users as of November 2023, while WhatsApp had about 51 million users, making Nigeria the 10th largest market globally for the messaging app.
Since many Nigerians depend on Meta’s platforms, the company might be hoping that there will be public pressure on the government not to impose the fine in order to avoid a shutdown of its services there. But it is hard to see Meta carrying out its threat to walk away from a country expected to be the third most populous nation in the world by 2050. In 2100, the population of Nigeria could reach 541 million according to current projections.
Even though the dispute in Nigeria has received little attention in the Western press, it involves a number of important issues such as privacy, national sovereignty and the future demographics of the online world, all of which have a global dimension. It also provides yet another instance of Meta behaving badly.
Follow me @glynmoody on Mastodon and on Bluesky.
Filed Under: africa, facebook, fccpc, fines, infringement, instagram, meta, nigeria, population, privacy, social media, whatsapp
Companies: meta
Techdirt Podcast Episode 417: The Rise Of The Open Social Web
from the power-to-the-people dept
Though the original promise of the internet has been twisted and distorted, today we’re seeing more and more people working to restore decentralization and user power online. One such person who sees the problem better than most is Flipboard founder and former Twitter board member (among many other things) Mike McCue, whose new application Surf is a kind of browser for the open social web. Mike joins us on this week’s episode to talk all about Surf and the (hopefully more decentralized) future of the internet.
You can also download this episode directly in MP3 format.
Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts or Spotify, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Filed Under: decentralization, mike mccue, podcast, social media
Companies: flipboard, twitter
Only A Few Days Left To Get A Copy Of One Billion Users
from the the-social-media-card-game dept
As we announced last week, our recently-Kickstarted card game One Billion Users is about to enter production, which means this is your last chance to secure a copy for yourself. The Kickstarter campaign is accepting late pledges from now through the end of Wednesday, May 7th.
We recently received our proof copy, and it looks and feels great. We can’t wait to get the game into people’s hands:
Currently we don’t have any plans to produce more copies of the game beyond this run for our Kickstarter backers, so this is very likely your last chance to get your own copy of One Billion Users.
We’re on track to deliver the games by late summer, and we don’t want anyone to miss out. Add your pledge to the Kickstarter today to secure your copy.
(For those of you who already backed the campaign but want to purchase additional copies, you’ll have a chance to do that before your order is fulfilled, as long as you let us know about your intent to buy additional copies by the end of May 7th.)
Filed Under: games, gaming, kickstarter, one billion users, social media
Colorado’s Social Media Moral Panic Bill Dies After Governor’s Thoughtful Veto
from the got-this-one-right dept
Stop me if you’ve heard this one before: a state legislature, caught up in the moral panic about social media, passes yet another clearly unconstitutional bill that will waste taxpayer money on doomed legal battles. This time it’s Colorado, whose legislature passed a ridiculously bad social media regulation bill (SB25-086) that looks suspiciously similar to bills that have already failed in Utah, Arkansas, and other states. But this story has a slightly different ending.
Like many such bills, this one had an age verification component that would require massive privacy violations for all users. It also included draconian and clearly unconstitutional requirements for websites to police certain specified “bad” content online, including suspending the accounts of users Colorado decided don’t deserve social media accounts (which would clearly run afoul of the Supreme Court’s 2017 Packingham ruling).
In this case, though, Governor Jared Polis (who is often, though not always, good on internet issues) chose to veto the bill with a very clear letter explaining his (correct) reasons. He notes that while there are real concerns about problems online, much of the reasoning behind the bill feels like a moral panic, blaming the tech for how it is used:
SB25-086 is intended to address legitimate concerns regarding the safety of children online. My administration takes very seriously our obligation to promote and protect the public safety of everyone across our state, especially minors, both in physical spaces and online ones, and we share the concerns that prompted this bill. Just as when the telephone was invented by Alexander Graham Bell to connect people and ideas, it was later used for criminal activity and government surveillance, it’s also true that as social media platforms have become more popular, they too are used for illegal activity. Notably, e-mail, including group listservs, can and is also used for illicit activity and receives a full exemption from the legislation.
Despite good intentions, this bill fails to guarantee the safety of minors or adults, erodes privacy, freedom, and innovation, hurts vulnerable people, and potentially subjects all Coloradans to stifling and unwarranted scrutiny of our constitutionally protected speech.
Make no mistake, I share the concerns of parents and law enforcement across our state about minors and adults exposed to illegal activity on social media platforms as well as in neighborhoods. This is why my office offered suggestions focused on strengthening tools to help law enforcement successfully apprehend criminals. Sadly, the bill sponsors rejected these ideas and passed legislation that, to my mind, unduly infringes on the speech, privacy, and liberty rights of all users.
But it’s not just that the bill is based on a moral panic falsely targeted at the technology rather than specific abuses, it’s that the nature of the bill is deeply problematic and does away with some basic due process and privacy rights:
This law imposes sweeping requirements that social media platforms, rather than law enforcement, enforce state law. It mandates a private company to investigate and impose the government’s chosen penalty of permanently deplatforming a user even if the underlying complaint is malicious and unwarranted. In our judicial proceedings, people receive due process when they are suspected of breaking the law. This bill, however, conscripts social media platforms to be judge and jury when users may have broken the law or even a company’s own content rules. This proposed law would incentivize platforms, in order to reduce liability risk, to simply deplatform a user in order to comply with this proposed law.
Further, the costly and mandatory data and metadata collection requirements in this bill throw open the door for abuse by guaranteeing the availability of sensitive information such as user age, identities, and content viewed, and these reports could even be made public at the discretion of the Attorney General. This is not a speculative concern: people have been prosecuted for online searches related to reproductive health care access, and people have been detained and deported due to activity on social media platforms.
This kind of data collection threatens user privacy for those who may be searching for reproductive or gender affirming care in Colorado, as well as for our immigrant communities, especially without safeguards in the bill for how this data would be secured or shared. This creates additional legal jeopardy, as well as the potential for blocking Colorado users from accessing or participating in social media to avoid costly compliance with this law. Importantly, recent U.S. Supreme Court cases suggest that content moderation laws that result in the deplatforming of users will not withstand constitutional scrutiny. For a state that prides ourselves on being forward-looking and innovative, this is simply an unacceptable outcome.
He also notes that for all of the screaming about the supposed evils of the internet, the authors of the bill seem to ignore that many, many people are actually helped by the internet. And enabling government-backed censorship would create a huge mess:
Of course, many Coloradans rely on friends they’ve made through online social networks to help them get through hard times and as a personal support structure. But social media platforms do more than provide a platform for free expression and engagement. These platforms are also inextricable from the successes of small businesses and individuals who make a living online. Removing users as this bill demands will have devastating consequences on the livelihoods of many Coloradans that use social media platforms, with the largest economic impact being felt by content creators and small businesses that cannot afford website platforms or professional marketing campaigns. There have been instances across platforms of influencers, entrepreneurs, and even individual users being deplatformed for content related to breastfeeding, for example; this measure would give that action the full force of government. Any sales pitch, be it for wellness products, gunsmithing classes, or mental health supports for marginalized youth, would be subject to a private entity’s interpretation of its legality, with an incentive to err on the side of deplatforming, and the consequence could be permanent removal. Stripping users of cost-effective customer engagement and marketing opportunities is a potential consequence of this law.
He closes by also noting (as almost no other state does) the absolute ridiculousness of thinking that a single state should regulate the internet, which would create a 50-state statutory patchwork for businesses that operate without borders.
It’s a great letter.
Of course, almost immediately, the Colorado legislature sought to override his veto, and the Senate voted to override Polis 29-6 the very next day. The sponsors of the bill didn’t address any of Polis’ stated concerns (including the fact that the Supreme Court had made it clear that a bill like this was unconstitutional). Instead, they trotted out the usual propaganda about how they’re just out there “protecting the children,” and who could possibly be against that?
“I think it’s time that we dig deep and find the courage that is within all of us and the conviction that is within all of us to protect the children within the state of Colorado,” Sen. Lisa Frizell, a Castle Rock Republican and one of the bill’s main sponsors, said before the vote was taken.
[…]
“This bill gives us the tools to help remove predators and traffickers from using social media to harm our kids,” said Democratic Sen. Lindsey Daugherty of Arvada, one of the main sponsors. “This is not about censorship, it’s not about speech. It’s about standing up for the safety and dignity of our youngest and most vulnerable.”
So much unconstitutional, unconscionable garbage is passed by legislatures under the false banner of “protecting the children.” As Polis rightly noted, this bill won’t do that — it will actually make many children significantly less safe by driving them away from supportive online communities and forcing them to hand over sensitive personal data. But these moral panic-driven authoritarians don’t care about the real-world consequences. They just want their name in the headlines with false claims of how they saved kids they actually put at risk.
Thankfully, the override effort was halted earlier this week when the legislature realized it didn’t have the votes in the larger House and punted on the bill.
The override effort failed when the state House laid over the vote to override the veto until May 9, which is after the legislative session ends. That prevented representatives from having to vote against the override after backing the bill.
“The votes are not here,” said Rep. Andy Boesenecker, a Fort Collins Democrat and one of the lead sponsors of the bill. “That’s a fact.”
These bad bills keep popping up over and over again, so I’m sure we haven’t seen the last of this kind of bill. What’s particularly concerning is watching supposedly informed players jump on the moral panic bandwagon. Take current Colorado Attorney General Phil Weiser, a leading candidate to replace Polis. As a former law professor specializing in internet and telecom law, Weiser should understand exactly why these bills are constitutionally problematic. Instead, he’s championing the same failed approaches we’ve seen crater in courtrooms across the country.
It’s a stark reminder that when it comes to internet regulation, even those with the expertise to know better often can’t resist the siren song of “protecting the children” — even when their proposed solutions do anything but.
Filed Under: 1st amendment, age verification, colorado, content moderation, due process, jared polis, moral panic, phil weiser, privacy, protect the children, social media