
TikTok To Be Sold To Trump’s Right Wing Billionaire Buddies And Converted Into A Propaganda Mill

from the Ministry-of-Information dept

Donald Trump has successfully used xenophobia and fake concerns about propaganda and national security to get what he’s long wanted: TikTok (and its fat ad revenues) is poised to be sold off to his right wing billionaire buddies and, inevitably, slowly converted into a right wing propaganda safe space.

After endless delays, Trump insiders claim to be zeroing in on a deal that would sell 80% of TikTok’s U.S. assets to Andreessen Horowitz (owned by Marc Andreessen, an increasingly incoherent right wing billionaire and close Trump ally), Oracle (owned by Larry Ellison, a rabidly right wing billionaire and close Trump ally), and Silver Lake (a private equity firm with a history of… predatory and dodgy behaviors).

Andreessen and Ellison are, to be clear, technofascists who don’t believe in democracy, regulatory oversight, or basic privacy protections for consumers. The remaining 20 percent would stay in the hands of Chinese owners and the Chinese government, which still has to sign off on the deal. This is not, contrary to what you’ll read in the pages of WAPO or CNN, a net improvement.

Oh, and Donald Trump will get to appoint a board member. Remember when Republicans were against government interference in private businesses?

If you recall, selling TikTok to Trump’s buddies was always his goal (remember he originally wanted it split between Walmart and Oracle). It just got temporarily disrupted by his 2020 election loss.

Trump still didn’t truly get what he really wanted: reporting in the Financial Times suggests that China will still technically own and control the algorithm used to power TikTok, something Trump had previously said was essential to any deal. Early reporting by the Wall Street Journal also indicates that existing TikTok users will have to migrate to a new app.

“Come join an app majority owned by Donald Trump’s unhinged right wing billionaire friends, where there are no competent safeguards against hate speech and right wing propaganda” is going to be a tricky selling point, one that could ultimately throw sand in the gears and create the potential for another (hopefully better?) company to disrupt their plans.

Now is the time for Silicon Valley to engage in that boundless innovation we’ve all heard so much about.

It Was Never About Privacy And National Security

I’ve noted more times than I can count that the push to ban TikTok was never really about protecting American privacy. If that were true, we would pass a real privacy law and craft serious penalties for companies and executives that play fast and loose with sensitive American data, be it TikTok or the myriad of super dodgy apps, telecoms, and hardware vendors monetizing your phone usage.

It was never really about propaganda. If that were true, we’d take aim at the extremely well funded authoritarian propaganda machine and engage in content moderation of the race-baiting political propaganda that’s filling the brains of young American men with pudding and hate. We’d push for the kind of media literacy education reforms common in countries like Finland.

Banning TikTok was never really about national security. If that were true, we wouldn’t be dismantling our cybersecurity regulators, accidentally hosting sensitive military chats over Signal with journalists, voting to cement utterly incompetent knobs in unaccountable roles across military intelligence, and letting dodgy data brokers sell sensitive personal info to global governments (including our own).

TikTok’s Chinese ownership did pose some very real, legitimate security, privacy, and NatSec concerns, but the MAGA folks “fixing” the problem were never competent or good faith actors, and the push to ban and then hijack TikTok was always about ego, money, and information control.

Ego: Trump got mad at TikTok videos making fun of his small crowd sizes. Money: Facebook worked tirelessly to spread bogus moral panics about TikTok in order to kill off a competitor it couldn’t out-innovate. Control: the GOP wants to own TikTok so it can ensure the platform stays friendly to an essential cornerstone of party power — their propaganda.

From day one, our shitty technology press helped prop up the myth that this was a good faith effort to manage national security and privacy issues. And the Democrats engaged in one of the most idiotic own goals in tech policy history by helping a billionaire authoritarian shift ownership of the country’s most popular short-form video app to his technofascist buddies.

We’re going to be paying the price for a very long time.

Larry Ellison in particular has been very keen to leverage his massive wealth gains during the Trump era(s) to buy every new and old media venture his family can get its hands on, from CBS and CNN (Warner Bros. Discovery) to TikTok. Trump’s FCC is busy stripping away whatever is left of media consolidation limits to accommodate this massive right wing power grab. And TikTok’s a very central piece of the puzzle.

American authoritarians are following the same playbook we’ve seen in countries like Hungary, where new and old media and journalism are either destroyed or hijacked in service to authoritarian leadership. It’s happening here, now, and the very least ethical people can do is recognize it and put up a fight.

Filed Under: authoritarians, disinformation, donald trump, information warfare, larry ellison, marc andreessen, propaganda, social media, tiktok ban
Companies: andreessen horowitz, bytedance, silverlake partners, tiktok

After 30+ Deaths In Protests Triggered by Nepal’s Social Media Ban, 145,000 People Debate The Country’s Future In Discord Chatroom

from the taming-chaos-with-discord dept

The Himalayan nation of Nepal has featured only rarely on Techdirt. The first time was back in 2003, with a story about an early Internet user there. According to the post, he would spend five hours walking down the mountain to the main road, and then another four hours on a bus to get to the nearest town that had an Internet connection he could use. As a recent Ctrl-Alt-Speech podcast explained, Nepal’s digital society has moved on a long way since then, with massive street protests in the country’s capital, Kathmandu, triggered by a government order banning 26 social media platforms, later rescinded. Those protests turned violent, leaving more than 30 people dead in clashes with the police, key government buildings in flames, and the prime minister ousted. Although the attempt to block the main social media platforms for their failure to submit to governmental registration — and thus control — may have been the final spark that ignited the violence, the underlying causes lie deeper, as NPR explains:

Frustrations have been mounting among young people in Nepal over the country’s unemployment and wealth gap. According to the Nepal Living Standard Survey 2022-23, published by the government, the country’s unemployment rate was 12.6%.

Leading up to the protests, the hashtag #NepoBaby had been trending in the country, largely to criticize the extravagant lifestyles of local politicians’ children and call out corruption, NPR previously reported.

The use of popular digital platforms to criticize the government in this way was probably a key reason for the authorities’ botched clampdown on social media, which in turn led to the large-scale protests and ensuing chaos. And now another popular digital platform is being used in an attempt to find a way to move forward:

After the government’s collapse on Tuesday, the military imposed a curfew across the capital, Kathmandu, and restricted large gatherings. With the country in political limbo and no obvious next leader in place, Nepalis have taken to Discord, a platform popularized by video gamers, to enact the digital version of a national convention.

As one person participating in the discussions told the New York Times: “The Parliament of Nepal right now is Discord.” It is a parliament like no other: in just a few days, more than 145,000 people have joined a Discord server to discuss who should lead the country, at least for the moment:

The channel’s organizers are members of Hami Nepal, a civic organization, and many of those participating in the chat are the so-called Gen-Z activists who led this week’s protests. But since the prime minister’s abrupt resignation on Tuesday, power in Nepal effectively resides with the military. The army’s chiefs, who most likely will decide who next leads the country, have met with the channel’s organizers and asked them to put forth a potential nominee for interim leader.

Whether this unprecedented experiment in large-scale digital politics succeeds in bringing order and stability to Nepal remains to be seen. But it is certainly extraordinary to watch history being made as, once more, the online world rapidly and profoundly reshapes the offline world.

Follow me @glynmoody on Mastodon and on Bluesky.

Filed Under: ban, corruption, gen z, hami nepal, kathmandu, military, nepal, nepobaby, npr, parliament, protests, registration, social media
Companies: discord, new york times

Fake “Free Speech” Champion Clay Higgins Now Wants To Use Gov’t Power To Silence Anyone Who “Belittles” Kirk’s Death

from the bunch-o'-hypocrites dept

Rep. Clay Higgins wants to use his government power to ban Americans from the internet for life if they say unkind things about Charlie Kirk’s death. Yes, the same Clay Higgins who just two years ago co-sponsored the “Protecting Speech from Government Interference Act”—a performative bill that did literally nothing except restate that government actors cannot engage in censorship.

Back then, he sanctimoniously declared:

“The American people have the right to speak their truths, and federal bureaucrats should not be dictating what is or isn’t true. We must continue to uphold the First Amendment as our founding fathers intended.”

The snarky thing to say here is that he’s had a change of heart. The more accurate thing to say is that Clay Higgins is a huge hypocrite. In the wake of the unfortunate killing of Charlie Kirk, Higgins suddenly thinks that the First Amendment no longer applies to him, and he can use his government power to force private companies to ban people for life over First Amendment protected speech:

In case you can’t see that, he says the following:

I’m going to use Congressional authority and every influence with big tech platforms to mandate immediate ban for life of every post or commenter that belittled the assassination of Charlie Kirk. If they ran their mouth with their smartass hatred celebrating the heinous murder of that beautiful young man who dedicated his whole life to delivering respectful conservative truth into the hearts of liberal enclave universities, armed only with a Bible and a microphone and a Constitution… those profiles must come down.

So, I’m going to lean forward in this fight, demanding that big tech have zero tolerance for violent political hate content, the user to be banned from ALL PLATFORMS FOREVER. I’m also going after their business licenses and permitting, their businesses will be blacklisted aggressively, they should be kicked from every school, and their drivers licenses should be revoked. I’m basically going to cancel with extreme prejudice these evil, sick animals who celebrated Charlie Kirk’s assassination.

I’m starting that today.

That is all.

That is a US government official saying that he’s going to use state power to silence voices “for life” for protected speech, such as “belittling” Kirk’s death. He claims he’s going to directly seek to assert state power (removing licenses and permits, something that Congress has no actual authority to do).

That does not appear to be “upholding the First Amendment as our founding fathers intended.” It sure seems to be the opposite of that.

And, of course, you know that Higgins is an even bigger hypocrite than that. He has, somewhat famously, belittled others in similar situations. When Nancy Pelosi’s husband was the victim of a violent, politically motivated attack, Higgins absolutely belittled Pelosi, spreading a blatantly false conspiracy theory about who the attacker was.

By Higgins’ own standard, he should be “banned from ALL PLATFORMS FOREVER” for that tweet. But of course, rules are only for the other team.

Indeed, we know he would freak out if sites actually banned him for something like that, because this is the same Rep. Clay Higgins that once told Twitter execs that he was going to have the FBI arrest them and have them sent to jail because they made the editorial choice to (very briefly) block the sharing of a NY Post article (a story that is widely misunderstood due to misleading conspiracy theories):

“You, ladies and gentlemen, interfered with the United States of America 2020 presidential election, knowingly and willingly,” Higgins said. “That’s the bad news, it’s gonna get worse because this is the investigation part. Later comes the arrest part. Your attorneys are familiar with that.”

Not surprisingly, when the Elon Musk-owned X was way, way, way more aggressive in blocking a story with hacked materials about JD Vance, I don’t recall Higgins threatening him with investigations and arrests.

This perfectly encapsulates the entire MAGA approach to “free speech”: it’s not a principle, it’s a weapon. Free speech is sacred when they want to spread lies about election fraud or attack their enemies. But the moment someone says something they don’t like, suddenly these self-proclaimed First Amendment champions are demanding government censorship that would make Xi Jinping proud.

Higgins’ threat isn’t just hypocritical—it’s genuinely dangerous. A sitting member of Congress is promising to use federal power to punish constitutionally protected speech. The founders he claims to revere would have been appalled by such authoritarian overreach.

But, as I keep asking, where are all those people who falsely claimed that any effort to encourage social media companies to change their moderation practices was the worst attack on free speech in the history of the country? Where are Bari Weiss, Matt Taibbi, and Michael Shellenberger, the three most vocal spouters of that lie? All three are attacking the responses… of Democrats. Not a one appears to have said anything about Higgins’ direct attack on speech.

The silence is deafening and revealing. These supposed free speech warriors are nowhere to be found when actual government censorship is being threatened by their political allies.

Incredibly, Shellenberger, who has testified multiple times before Congress, including to say that “one’s commitment to free speech means nothing if it does not extend to your political enemies,” was whining on his Substack about how some amorphous group of NGOs is trying to censor him. At the same time, he was retweeting Pennsylvania Senator Dave McCormick calling for UPenn to punish Michael Mann for saying that “the white on white violence has gotten out of hand,” a clear satirical reference to racist claims regarding crime.

But also, Mann’s literally saying the violence has gotten out of hand. How is that something worth punishing? And how is Michael Shellenberger, who claims that any effort by any government official to pressure private entities to punish people for their speech is a huge attack on free speech, suddenly now in favor of government official Dave McCormick ordering punishment for Michael Mann’s protected speech? And how is Michael Shellenberger, who testified before Congress about the importance of standing up for the free speech of your political enemies, suddenly silent on Higgins?

It’s hypocrites all the way down.

The performative bullshit about “censorship” from these clowns was always garbage. We’re just able to show it more clearly now.

Filed Under: bari weiss, charlie kirk, clay higgins, free speech, hypocrisy, maga, matt taibbi, michael shellenberger, social media

Experts Universally Pan Jonathan Haidt’s “The Anxious Generation” As Unscientific Garbage, But Politicians Keep Buying It Anyway

from the that's-gotta-burn dept

The verdict is in on Jonathan Haidt’s “The Anxious Generation,” and it’s devastating. A new piece in TES Magazine systematically demolishes Haidt’s claims by doing something revolutionary: actually asking experts who study this stuff what they think.

The result reads like an academic execution:

“When I read the book, I found it really hard to believe it was written by a fellow academic,” admits Tamsin Ford, professor of child and adolescent psychiatry at the University of Cambridge.

“What Jon is selling is fear,” argues Andrew Przybylski, professor of human behaviour and technology at the University of Oxford. “It’s not scientific.”

And this isn’t some fringe criticism. TES is the Times Educational Supplement, which has been around since 1910 and is basically the trade magazine for educators in the UK. At a time when many educators have been swallowing Haidt’s misleading claims, seeing a respected educational trade magazine systematically shred his arguments is remarkable.

But here’s the truly damning part: this expert demolition came out the exact same day that Politico published a breathless piece claiming Haidt’s crusade represents “the only true bipartisan issue left,” gushing about how governors from both parties are embracing policy reform based on his work.

The contrast couldn’t be starker: while actual experts are calling Haidt’s work unscientific garbage, politicians are treating it like gospel.

The TES piece doesn’t just criticize—it comprehensively destroys Haidt’s core arguments with the precision of actual scientists who know what they’re talking about.

First, his claim that there’s a mental health “epidemic” among teenagers caused by social media. Ford points out the fundamental problems with Haidt’s use of data:

Ford argues that using self-report data for prevalence estimates is tricky owing to a “lack of methodological soundness and ‘noisy’ data”.

“A teenager with high scores on a mental health questionnaire at a single time point will include a mixture of those who have not fully understood the question or are mucking around, those who are having a one-off bad day or adjusting to a life stress and those with persistent difficulties that impair their function. The last are those with mental health conditions,” she explains.

When you look at the actual robust data from the UK’s NHS Mental Health survey, the picture is quite different from Haidt’s “tidal wave” narrative:

In terms of the best UK prevalence data, she says the NHS Mental Health of Children and Young People (MHCYP) survey (which includes input from parents, teachers and clinical assessors) found prevalence of mental disorders in those between 5 and 15 years old increased between 1999 and 2004 by 0.4 percentage points and again between 2004 and 2017 by 1.1 percentage points (data for older teens has only been collected once, in 2017, so there is no data over time).

It’s an increase, but Ford says the data does not support Haidt’s description of a “tidal wave” of mental health challenges, nor a “surge of suffering”.

“He is going beyond the data,” she argues.

The experts also systematically debunk Haidt’s claims about causation. Candice Odgers, professor of psychology and informatics at University of California Irvine and a former Techdirt podcast guest, concludes:

“It is perfectly reasonable to take a safety-first approach to kids and social media,” she argues. “But when these decisions are made, it should not be because someone tells you science has discovered social media is the cause of serious mental disorders or will harm our children’s brains. That is the story that is being told, but not what the science says.”

Meanwhile, Przybylski (another former podcast guest) points out a basic logical flaw in Haidt’s argument:

“By the logic of his argument, the correlation between the use of technology and the outcome should be stronger,” he says. “As the algorithms have got more pernicious, more sophisticated, things should be getting progressively worse. [That hasn’t happened.] There is no sense of mechanism here.”

The article also details how experts are particularly concerned about missing the real causes of mental health issues. Ford notes:

Ford says Haidt has missed some other obvious contributing factors in the UK data, including closures of youth clubs and other safe spaces for young people, the world becoming more expensive and difficult to navigate, social changes with looser community bonds and more.

“One of the strongest and most consistent associations for poor mental health is poverty, and we’ve got more children living in poverty, and then we have this huge drop in accessibility to services…[so] there is no early intervention,” she says. “To pin this on phones doesn’t just go way beyond the evidence – it is actually dangerous, as it does not address these other critical factors.”

The experts are also scathing about Haidt’s proposed solutions, which include banning phones in schools and raising age limits for social media. David Ellis has been a leading critic of the addiction narrative:

“We did a satirical paper a couple of years ago and we followed the mathematical formulas that people had used [in this research],” he says. “We managed to create a friendship addiction scale that demonstrated that 80 per cent of our sample were addicted to their friends, which of course is nonsense.”

Even Nora Volkow, director of the National Institute on Drug Abuse in the US and one of the world’s leading experts on dopamine and addiction, disputes Haidt’s claims:

“there’s not a clear-cut definition of what addiction to a phone would be, [so] it is difficult to estimate the prevalence,” she says, adding she knows of no studies that would support Haidt’s 10 per cent addiction figure.

The piece also highlights how Haidt’s claims about educational decline don’t hold up to scrutiny. While he claims there’s been a global decline in learning since smartphones arrived, Christian Bokhove from the University of Southampton points out:

And although “average trajectories” in reading and science were downward, Christian Bokhove, professor in mathematics education at the University of Southampton, argues that beyond the general picture, “many countries were not declining” in that period in any of the three subjects.

Echoing many of the other academics when it comes to their criticisms of Haidt, he argues that even if the data did show a universal decline, “there can be numerous causes for this”.

This aligns perfectly with what I wrote in the Daily Beast piece last year, which was based on many experts as well:

Over the last decade, numerous studies on the impact of phones and social media on children, including a study of studies, conclude that social media is good for some kids, helping them find like-minded individuals. It’s mostly neutral for many kids, and problematic for only a very small group (studies suggest less than 10 percent).

I also highlighted how the country-by-country evidence doesn’t support Haidt’s claims either… unless you cherry-pick your countries, which anyone can do.

Looking at suicide rates (which are more indicative of actual depression rates, rather than self-reported data, given the decreasing stigma associated with admitting to dealing with mental health issues), the numbers show that in many countries it has remained flat or decreased over the past 20 years. Indeed, in countries like France, Ireland, Denmark, Spain, and New Zealand, you see a noticeable decline in youth suicide rates.

If social media were inherently causing an increase in depression, that would be an unlikely result.

But here’s where this gets truly maddening. While experts are thoroughly demolishing Haidt’s claims, politicians are doubling down. That same-day Politico piece reveals the scope of the damage:

39 states now have some sort of phone restrictions in schools, and 18 states and Washington, D.C. have bell-to-bell bans — which ban phones for the entire school day — according to Haidt. After the next legislative sessions, which in many state capitols begin after the new year, more states are sure to enact full bans. The issue has rallied conservatives and liberals, and its potency with parents has largely steamrolled libertarian objections and big tech lobbying.

I first realized a remarkable story was sitting in plain view when I witnessed two governors who are almost comically far apart on the political spectrum both embrace Haidt. Last year, Arkansas Gov. Sarah Huckabee Sanders sent a copy of Haidt’s book to every other governor. She then hosted Haidt in her home state before joining him earlier this year on stage at Davos, not typically a lovefest forum for Arkansas governors and New York academics.

Shortly after that, at the winter meeting at the National Governors Association, I got to talking to New Jersey Gov. Phil Murphy and one of his top aides, and they also were trumpeting Haidt’s work. A liberal, former Goldman Sachs executive turned northeastern governor, Murphy, 68, sounded a lot like his 43-year-old conservative counterpart from Little Rock.

Different regions, different politics and different generations.

But here’s the thing: bipartisan support doesn’t make something right. It just makes it bipartisanly wrong. As I noted in my original piece, every generation has its moral panic, and this appears to be ours.

The TES piece concludes with the most important point of all, from Pete Etchells, professor of psychology and science communication at Bath Spa University:

“It is becoming increasingly difficult to say, ‘hang on, that’s not what the evidence says’, or ‘we don’t have evidence for that yet’, and I really, really worry about this,” he concludes. “There’s a road here where [people say], ‘well, we don’t need science and evidence because we can see it with our own eyes’.”

And that’s exactly what’s happening. Politicians across the spectrum are implementing policies based on Haidt’s work despite an overwhelming expert consensus that his claims are scientifically unfounded. We’re watching evidence-based policy get steamrolled by moral panic in real time.

Incredibly, even with all these quotes, there’s way more in the TES piece, which should leave no doubt in anyone’s mind that the actual experts in the field find Haidt’s book a horrific attack on science and evidence-based policy making.

When professors of child and adolescent psychiatry are saying they can’t believe a fellow academic wrote this book, when experts in human behavior and technology are calling it “fear” rather than science, when leading researchers are pointing out basic logical flaws in the arguments—maybe, just maybe, we should listen to them instead of the guy selling books and giving TED talks.

The verdict from people who actually study this stuff is clear: Haidt’s claims don’t hold up to scrutiny. The fact that politicians find his message appealing doesn’t make it true. It just makes it politically convenient.

And that’s a much scarier prospect than kids having phones.

Filed Under: andrew przybylski, candice odgers, christian bokhove, data, david ellis, evidence, jonathan haidt, moral panic, nora volkow, pete etchells, social media, tamsin ford, tes

Ctrl-Alt-Speech: The Haidt Of Hypocrisy

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

Filed Under: age verification, deepseek, dsa, jim jordan, jonathan haidt, nigel farage, podcast, social media
Companies: anthropic, openai

Techdirt Podcast Episode 428: Blacksky Demonstrates The Promise Of Open Social Media Protocols

from the community-building dept

The goal of Bluesky and the ATProtocol, and of the push for protocols over platforms in general, has always been to see more people building their own communities in a modular fashion. One of the most interesting projects demonstrating this potential is Blacksky, created by Rudy Fraser, which started as a custom feed within Bluesky but has grown into something much bigger. Today, Rudy joins the podcast for a conversation all about Blacksky and what it teaches us about open social media protocols.

You can also download this episode directly in MP3 format.

Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts or Spotify, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.

Filed Under: open protocols, podcast, social media
Companies: blacksky, bluesky

Middle School Cheerleaders Made A TikTok Video Portraying A School Shooting. They Were Charged With A Crime.

from the criminalization-of-kids-being-kids dept

This story was originally published by ProPublica. Republished under a CC BY-NC-ND 3.0 license.

One afternoon in mid-September, a group of middle school girls in rural East Tennessee decided to film a TikTok video while waiting to begin cheerleading practice.

In the 45-second video posted later that day, one girl enters the classroom holding a cellphone. “Put your hands up,” she says, while a classmate flickers the lights on and off. As the camera pans across the classroom, several girls dramatically fall back on a desk or the floor and lie motionless, pretending they were killed.

When another student enters and surveys the bodies on the ground in poorly feigned shock, few manage to suppress their giggles. Throughout the video, which ProPublica obtained, a line of text reads: “To be continued……”

Penny Jackson’s 11-year-old granddaughter was one of the South Greene Middle School cheerleaders who played dead. She said the co-captains told her what to do and she did it, unaware of how it would be used. The next day, she was horrified when the police came to school to question her and her teammates.

By the end of the day, the Greene County Sheriff’s Department charged her and 15 other middle school cheerleaders with disorderly conduct for making and posting the video. Standing outside the school’s brick facade, Lt. Teddy Lawing said in a press conference that the girls had to be “held accountable through the court system” to show that “this type of activity is not warranted.” The sheriff’s office did not respond to ProPublica’s questions about the incident.

Widespread fear of school shootings is colliding with algorithms that amplify the most outrageous messages, causing chaos across the country. Social videos, memes and retweets are becoming fodder for criminal charges in an era of heightened responses to student threats. Authorities say harsh punishment is crucial to deter students from making threatening posts that multiply rapidly and obscure their original source.

In many cases, especially in Tennessee, police are charging students for jokes and misinterpretations, drawing criticism from families and school violence prevention experts who believe a measured approach is more appropriate. Students are learning the hard way that they can’t control where their social media messages travel. In central Tennessee last fall, a 16-year-old privately shared a video he created using artificial intelligence, and a friend forwarded it to others on Snapchat. The 16-year-old was expelled and charged with threatening mass violence, even though his school acknowledged the video was intended as a private joke.

Other students have been charged with felonies for resharing posts they didn’t create. As ProPublica wrote in May, a 12-year-old in Nashville was arrested and expelled this year for sharing a screenshot of threatening texts on Instagram. He told school officials he was attempting to warn others and wanted to “feel heroic.”

In Greene County, the cheerleaders’ video sent waves through the small rural community, especially since it was posted several days after the fatal Apalachee High School shooting one state away. The Georgia incident had spawned thousands of false threats looping through social media feeds across the country. Lawing told ProPublica and WPLN at the time that his officers had fielded about a dozen social media threats within a week and struggled to investigate them. “We couldn’t really track back to any particular person,” he said.

But the cheerleaders’ video, with their faces clearly visible, was easy to trace.

Jackson understands that the video was in “very poor taste,” but she believes the police overreacted and traumatized her granddaughter in the process. “I think they blew it completely out of the water,” she said. “To me, it wasn’t serious enough to do that, to go to court.”

That perspective is shared by Makenzie Perkins, the threat assessment supervisor of Collierville Schools, outside of Memphis. She is helping her school district chart a different path in managing alleged social media threats. Perkins has sought specific training on how to sort out credible threats online from thoughtless reposts, allowing her to focus on students who pose real danger instead of punishing everyone.

The charges in Greene County, she said, did not serve a real purpose and indicate a lack of understanding about how to handle these incidents. “You’re never going to suspend, expel or charge your way out of targeted mass violence,” she said. “Did those charges make that school safer? No.”


When 16-year-old D.C. saw an advertisement for an AI video app last October, he eagerly downloaded it and began roasting his friends. In one video he created, his friend stood in the Lincoln County High School cafeteria, his mouth and eyes moving unnaturally as he threatened to shoot up the school and bring a bomb in his backpack. (We are using D.C.’s initials and his dad’s middle name to protect their privacy, because D.C. is a minor.)

D.C. sent it to a private Snapchat group of about 10 friends, hoping they would find it hilarious. After all, they had all teased this friend about his dark clothes and quiet nature. But the friend did not think it was funny. That evening, D.C. showed the video to his dad, Alan, who immediately made him delete it as well as the app. “I explained how it could be misinterpreted, how inappropriate it was in today’s climate,” Alan recalled to ProPublica.

It was too late. One student in the chat had already copied D.C.’s video and sent it to other students on Snapchat, where it began to spread, severed from its initial context.

That evening, a parent reported the video to school officials, who called in local police to do an investigation. D.C. begged his dad to take him to the police station that night, worried the friend in the video would get in trouble — but Alan thought it could wait until morning.

The next day, D.C. rushed to school administrators to explain and apologize. According to Alan, administrators told D.C. they “understood it was a dumb mistake,” uncharacteristic for the straight-A student with no history of disciplinary issues. In a press release, Lincoln County High School said administrators were “made aware of a prank threat that was intended as a joke between friends.”

But later that day, D.C. was expelled from school for a year and charged with a felony for making a threat of mass violence. As an explanation, the sheriff’s deputy wrote in the affidavit, “Above student did create and distribute a video on social media threatening to shoot the school and bring a bomb.”

During a subsequent hearing where D.C. appealed his school expulsion, Lincoln County Schools administrators described their initial panic when seeing the video. Alan shared an audio recording of the hearing with ProPublica. Officials didn’t know that the video was generated by AI until the school counselor saw a small logo in the corner. “Everybody was on pins and needles,” the counselor said at the hearing. “What are we going to do to protect the kids or keep everybody calm the next day if it gets out?” The school district declined to respond to ProPublica’s questions about how officials handled the incident, even though Alan signed a privacy waiver giving them permission to do so.

Alan watched D.C. wither after his expulsion: His girlfriend broke up with him, and some of his friends began to avoid him. D.C. lay awake at night looking through text messages he sent years ago, terrified someone decades later would find something that could ruin his life. “If they are punishing him for creating the image, when does his liability expire?” Alan wondered. “If it’s shared again a year from now, will he be expelled again?”

Alan, a teacher in the school district, coped by voraciously reading court cases and news articles that could shed light on what was happening to his son. He stumbled on a case hundreds of miles north in Pennsylvania, the facts of which were eerily similar to D.C.’s.

In April 2018, two kids, J.S. and his friend, messaged back and forth mocking another student by suggesting he looked like a school shooter. (The court record uses J.S. instead of his full name to protect the student’s anonymity.) J.S. created two memes and sent them to his friend in a private Snapchat conversation. His friend shared the memes publicly on Snapchat, where they were seen by 20 to 40 other students. School administrators permanently expelled J.S., so he and his parents sued the school.

In 2021, after a series of appeals, Pennsylvania’s highest court ruled in J.S.’s favor. While the memes were “mean-spirited, sophomoric, inartful, misguided, and crude,” the state Supreme Court justices wrote in their opinion, they were “plainly not intended to threaten Student One, Student Two, or any other person.”

The justices also expressed sympathy for the challenges schools face in providing a “safe and quality educational experience” in the modern age. “We recognize that this charge is compounded by technological developments such as social media, which transcend the geographic boundaries of the school. It is a thankless task for which we are all indebted.”

After multiple disciplinary appeals, D.C.’s school upheld the decision to keep him out of school for a year. His parents found a private school that agreed to let him enroll, and he slowly emerged from his depression to continue his straight-A streak there. His charge in court was dismissed in December after he wrote a 500-word essay for the judge on the dangers of social media, according to Alan.

Thinking back on the video months later, D.C. explained that jokes about school violence are common among his classmates. “We try to make fun of it so that it doesn’t seem as serious or like it could really happen,” he said. “It’s just so widespread that we’re all desensitized to it.”

He wonders if letting him back to school would have been more effective in deterring future hoax threats. “I could have gone back to school and said, ‘You know, we can’t make jokes like that because you can get in big trouble for it,’” he said. “I just disappeared for everyone at that school.”


When a school district came across an alarming post on Snapchat in 2023, officials reached out to Safer Schools Together, an organization that helps educators handle school threats. In the post, a pistol flanked by two assault rifles lay on a rumpled white bedsheet. The text overlaid on the photo read, “I’m shooting up central I’m tired of getting picked on everyone is dying tomorrow.”

Steven MacDonald, training manager and development director for Safer Schools Together, recounted this story in a virtual tutorial posted last year on using online tools to trace and manage social media threats. He asked the school officials watching his tutorial what they would do next. “How do we figure out if this is really our student’s bedroom?”

According to MacDonald, it took his organization’s staff only a minute to put the text in quotation marks and run it through Google. A single local news article popped up showing that two kids had been arrested for sharing this exact Snapchat post in Columbia, Tennessee — far from the original district.

“We were able to reach out and respond and say, ‘You know what, this is not targeting your district,’” MacDonald said. Administrators were reassured there was a low likelihood of immediate violence, and they could focus on finding out who was recirculating the old threat and why.
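The one-minute check MacDonald describes, quoting the post’s text and searching for it verbatim, can be sketched in a few lines. This is an illustrative helper, not any tool Safer Schools Together actually uses; the function name is invented, and any search engine that supports quoted phrase matching would work:

```python
from urllib.parse import quote_plus

def exact_phrase_search_url(text: str) -> str:
    """Build a search-engine URL that asks for an exact-phrase match,
    the first step in checking whether a viral threat is a recirculated
    copy of an older post rather than an original, local one."""
    # Wrapping the text in double quotes requests an exact-phrase match
    # instead of a loose keyword match.
    phrase = f'"{text.strip()}"'
    return "https://www.google.com/search?q=" + quote_plus(phrase)

url = exact_phrase_search_url(
    "I'm shooting up central I'm tired of getting picked on"
)
print(url)
```

If the only result is a news story about an arrest in another town, the post is almost certainly a recirculated hoax rather than a local threat.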

In the training video, MacDonald reviewed skills that, until recently, have been more relevant to police investigators than school principals: How to reverse image search photos of guns to determine whether a post contains a stock image. How to use Snapchat to find contact names for unknown phone numbers. How to analyze the language in the social media posts of a high-risk student.

“We know that why you’re here is because of the increase and the sheer volume of these threats that you may have seen circulated, the non-credible threats that might have even ended up in your districts,” he said. Between last April and this April, Safer Schools Together identified drastic increases in “threat related behavior” and graphic or derogatory social media posts.

Back in the Memphis suburbs, Perkins and other Collierville Schools administrators have attended multiple digital threat assessment training sessions hosted by Safer Schools Together. “I’ve had to learn a lot more apps and social media than I ever thought,” Perkins said.

The knowledge, she said, came in handy during one recent incident in her district. Local police called the district to report that a student had called 911 about an Instagram threat targeting a particular school. They sent Perkins a photo of the Instagram profile and username. She began using open source websites to scour the internet for other appearances of the picture and username. To gather more information, she also used a website that allows people to view Instagram stories without alerting the account’s owner.

With the help of police, Perkins and her team identified that the post was created by someone at the same IP address as the student who had reported the threat. The girl, who was in elementary school, confessed to police that she had done it.

The next day, Perkins and her team interviewed the student, her parents and teachers to understand her motive and goal. “It ended up that there had been some recent viral social media threats going around,” Perkins said. “This individual recognized that it drew in a lot of attention.”

Instead of expelling the girl, school administrators worked with her parents to develop a plan to manage her behavior. They came up with ideas for the girl to receive positive attention while stressing to her family that she had exhibited “extreme behavior” that signaled a need for intensive help. By the end of the day, they had tamped down concerns about immediate violence and created a plan of action.

In many other districts, Perkins said, the girl might have been arrested and expelled for a year without any support — which does not help move students away from the path of violence. “A lot of districts across our state haven’t been trained,” she said. “They’re doing this without guidance.”


Watching the cheerleaders’ TikTok video, it would be easy to miss Allison Bolinger, then the 19-year-old assistant coach. The camera quickly flashes across her standing and smiling in the corner of the room watching the pretend-dead girls.

Bolinger said she and the head coach had been next door planning future rehearsals. Bolinger entered the room soon after the students began filming and “didn’t think anything of it.” Cheerleading practice went forward as usual that afternoon. The next day, she got a call from her dad: The cheerleaders were suspended from school, and Bolinger would have to answer questions from the police.

“I didn’t even know the TikTok was posted. I hadn’t seen it,” she said. “By the time I went to go look for it, it was already taken down.” Bolinger said she ended up losing her job as a result of the incident. She heard whispers around the small community that she was responsible for allowing the girls to create the video.

Bolinger said she didn’t realize the video was related to school shootings when she was in the room. She often wishes she had asked them at the time to explain the video they were making. “I have beat myself up about that so many times,” she said. “Then again, they’re also children. If they don’t make it here, they’ll probably make it at home.”

Jackson, the grandmother of the 11-year-old in the video, blames Bolinger for not stopping the middle schoolers and faults the police for overreacting. She said all the students, whether or not their families hired a lawyer, got the same punishment in court: three months of probation for a misdemeanor disorderly conduct charge, which could be extended if their grades dropped or they got in trouble again. Each family had to pay more than $100 in court costs, Jackson said, a significant amount for some.

Jackson’s granddaughter successfully completed probation, which also involved writing and submitting a letter of apology to the judge. She was too scared about getting in trouble again to continue on the cheerleading team for the rest of the school year.

Jackson thinks that officials’ outsize response to the video made everything worse. “They shouldn’t even have done nothing until they investigated it, instead of making them out to be terrorists and traumatizing these girls,” she said.

Filed Under: overreaction, police, school shootings, social media, tennessee
Companies: tiktok

Australia Completely Loses The Plot, Plans To Ban Kids From Watching YouTube

from the down-under-and-upside-down-policymaking dept

Last fall, heavily influenced by Jonathan Haidt’s extremely problematic book, Australia announced that it was banning social media for everyone under the age of 16. This was already a horrifically stupid idea—the kind of policy that sounds reasonable in a tabloid headline but crumbles under any serious scrutiny. Over and over again, studies have found that social media is neither good nor bad for most teens. It’s actively good for some—especially those in need of finding community or like-minded individuals. And it’s not so great for a small group of kids, though the evidence there suggests it’s worst for those dealing with untreated mental health issues, who turn to social media as a substitute for the help they actually need.

There remains little to no actual evidence that an outright ban will be helpful, and plenty to suggest it will be actively harmful to many.

But now Australia has decided to double down on the stupid, announcing that YouTube will be included in the ban. This escalation reveals just how disconnected from reality this entire policy framework has become. We’ve gone from “maybe we should protect kids from social media” to “let’s ban children from accessing one of the world’s largest repositories of educational content.”

Australia said on Wednesday it will add YouTube to sites covered by its world-first ban on social media for teenagers, reversing an earlier decision to exempt the Alphabet-owned video-sharing site and potentially setting up a legal challenge.

The decision came after the internet regulator urged the government last week to overturn the YouTube carve-out, citing a survey that found 37% of minors reported harmful content on the site.

This is painfully stupid and ignorant. The claim that 37% of minors reported seeing harmful content is meaningless without a lot more context and detail. What counts as “harmful”? A swear word? Political content their parents disagree with? A video explaining evolution? What was the impact? Is this entirely self-reported? What controls were there?

This is vibes-based policymaking dressed up in statistics. You could probably get 37% of kids to report “harmful content” on PBS Kids if you asked them vaguely enough. The fact that Australia’s internet regulator is using this kind of methodological garbage to reshape internet policy tells you everything you need to know about how seriously they’ve thought this through.

But also, YouTube is not just effectively the equivalent of television for teens today—it’s often far superior to traditional television because it’s not gatekept by media conglomerates with their own agendas. The idea that you should need to be 16 years old to watch some YouTube programs is beyond laughable, especially given the amount of useful educational content on YouTube. These days there are things like Complexly, Khan Academy, Mark Rober, and plenty of other educational content that kids love and which lives on YouTube. Kids are learning calculus from 3Blue1Brown, exploring history through Crash Course, and getting better science education from YouTube creators than from most traditional textbooks. This isn’t just entertainment—it’s democratized education that bypasses the gatekeeping of traditional media entirely.

This isn’t just unworkable—it’s the construction of a massive censorship infrastructure that will inevitably be used for purposes far beyond “protecting children.” Once you’ve built the system to block kids from YouTube, you’ve built the system to block anyone from anything. And that system will be irresistible to future governments with different ideas about what content people need to be “protected” from.

And the Australian government already knows that age verification tech is a privacy and security nightmare. They admitted as much two years ago.

Of course, kids will figure out ways around it anyway. VPNs exist. Older friends exist. Parents who aren’t idiots exist—and they’ll help their kids break this law. The only thing this accomplishes is teaching an entire generation that their government’s laws are arbitrary, unenforceable, and fundamentally disconnected from reality. It’s teaching kids to have less respect for government.

This isn’t happening in a vacuum, either. Australia is part of a broader global trend of governments using “protect the children” rhetoric as cover for internet control. The UK’s porn age verification disaster, the US Kids Online Safety Act, similar proposals across Europe—they all follow the same playbook. Identify a genuine concern (kids sometimes see stuff online that isn’t great for them), propose a solution that sounds reasonable in a headline (age limits!), then implement it through surveillance and censorship infrastructure that can be repurposed for whatever moral panic comes next.

The end result will be that Australia has basically taught a generation of teenagers not to trust the government, that their internet regulators are completely out of touch, and that laws are stupid. But it goes deeper than that. This kind of blatantly unworkable policy doesn’t just breed contempt for specific laws—it undermines the entire concept of legitimate governance. When laws are this obviously disconnected from technological and social reality, it signals that the people making them either don’t understand what they’re regulating or don’t care about whether their policies actually work. It’s difficult to see how that benefits anyone at all.

Filed Under: age verification, australia, bans, social media, teenagers, youtube ban
Companies: youtube

Take Back Our Digital Infrastructure To Save Democracy

from the a-time-to-build dept

Watch the tech oligarchs who lined up behind Donald Trump at his inauguration, and you’ll see the most important story of our time: the fascists are winning because they’ve built a direct pipeline from concentrated technological power to concentrated political power.

This isn’t about technology being inherently dangerous—it’s about how distorted Wall Street incentives drove us toward digital infrastructure that mirrors authoritarian power structures. Through bullying, threats, and coercion, Trump moved to turn the chokepoints of the centralized internet to his advantage. The MAGA world discovered that when digital platforms become centralized and authoritarian, democratic institutions will follow.

But here’s what the oligarchs don’t want you to understand: the same underlying technologies enabling this power concentration can be architected to resist it. The key isn’t begging for better billionaires or smarter regulations—it’s recognizing that decentralization isn’t a technical preference, it’s a democratic necessity.

The same authoritarian capture that took over centralized social media is already threatening AI systems as well. Just as we’ve watched Musk morph Twitter’s algorithms into X’s non-stop amplification of his personal political preferences, we’re seeing AI systems designed to reflect the biases and political agendas of their corporate owners. But this pattern isn’t inevitable. We need to understand that AI doesn’t have to be another tool of oppression. Designed correctly, it can be a weapon of liberation.

How Concentration Breeds Control

The concentration of digital power wasn’t an accident—it was the inevitable result of Wall Street incentives that rewarded greater centralized control over user empowerment.

Here’s how it happened: investor demands required tool builders to seek ever-greater returns, which meant transitioning from building user-empowering tools to controlling infrastructure. The most successful companies stopped building ever more useful services and started focusing on how to better extract rents from digital chokepoints—the equivalent of privatizing roads, then charging tolls.

These companies colonized the open internet, turning their services into necessary but proprietary infrastructure. They erected barriers to entry, barriers to exit, and tollbooths for everyone else, with your attention as the price of admission.

The result is what Cory Doctorow famously called the enshittification curve: platforms start by empowering users, evolve to capture them, and end by exploiting them. Wall Street’s demand that only investors matter as stakeholders strips away user agency with each step and hands it to corporate overlords.

And corporate overlords, it turns out, are natural allies for authoritarians. When you control the digital infrastructure that shapes how people communicate, learn, and organize, you become an attractive partner for anyone seeking political control. The promise of regulatory capture, government contracts, and protection from competition makes the bargain irresistible.

This convergence wasn’t inevitable—it was a choice made by people who confused convenience with empowerment, scale with value, and engagement with democracy.

The consequences are everywhere: platforms that enabled the Arab Spring and #MeToo are now coordinating genocides and undermining trust in elections. Tools that connected marginalized communities are promoting fascist agendas. And the tech oligarchs who built these systems are now literally standing behind authoritarians at inaugurations.

Digital Infrastructure Is Democratic Infrastructure

Most people still don’t understand the core insight: digital infrastructure and democratic infrastructure are the same thing.

Democracy is the ultimate decentralized technology. It distributes power away from kings and aristocrats to the people—imperfectly, through struggle, but fundamentally. The early internet promised to do the same for information, communication, and commerce. Anyone could publish, reach audiences, and break down barriers between producers and consumers, experts and amateurs, the powerful and powerless.

But Wall Street’s demand for exponential returns required fencing off the digital commons. The billionaires rebuilt the old gatekeeping systems in digital form, turning tools of value creation into mechanisms of value extraction. They offered convenience in exchange for control, scale in exchange for agency, connection in exchange for confinement within their walled gardens.

As Taiwan’s former digital minister, Audrey Tang, explained, democracy and digital freedom aren’t separate concepts—they’re the same thing. When digital platforms become centralized and authoritarian, democratic institutions follow the same pattern. When we surrender control over our digital lives, we surrender control over our political lives.

The concentration of digital infrastructure inevitably leads to the concentration of political power. That’s why the battle for decentralization is fundamentally a battle for democracy itself.

The Path Forward: Protocols, Not Platforms

The solution isn’t building better platforms—it’s making platforms an obsolete concept.

Platforms concentrate power; protocols distribute it. Platforms extract value from users; protocols enable users to create value for themselves. Most importantly: platforms can be captured by bad actors, but protocols resist capture by design.

This resistance isn’t theoretical. We’re seeing it emerge across multiple projects—from the AT Protocol to ActivityPub to nostr. The key insight is architectural: when you separate identity, data storage, and algorithmic curation into different services, no single entity can control the whole system. Users can choose their own moderation services rather than trusting corporate decisions. They can customize their information diet rather than accepting engagement-maximizing feeds. They can control their own data and move between services without losing their social connections.

They have choice. They have transparency. They have their own intentions controlling things, rather than some unseen entity driven by unaligned incentives.
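The separation of concerns described above can be illustrated with a toy sketch. Nothing here reflects the real AT Protocol or ActivityPub APIs; the names and types are invented purely to show how identity, storage, and curation can live in independent, swappable pieces:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str   # a portable identity, e.g. a DID-style handle
    text: str

class DataStore:
    """User-chosen storage: holds posts independently of any one app."""
    def __init__(self) -> None:
        self._posts: List[Post] = []

    def add(self, post: Post) -> None:
        self._posts.append(post)

    def all(self) -> List[Post]:
        return list(self._posts)

# Curation is a pluggable function over the same stored data, so feeds
# can be swapped without touching identity or storage.
Curator = Callable[[List[Post]], List[Post]]

def chronological(posts: List[Post]) -> List[Post]:
    return posts

def only_from(handle: str) -> Curator:
    return lambda posts: [p for p in posts if p.author == handle]

store = DataStore()
store.add(Post("alice.example", "hello"))
store.add(Post("bob.example", "world"))

# Swapping curators changes the feed; the data and identities stay put.
print([p.text for p in chronological(store.all())])
print([p.text for p in only_from("alice.example")(store.all())])
```

In a protocol-based network the same shape holds at scale: if one curation service turns hostile, users can point at another without losing their data or their social graph.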

The same architectural principles apply to AI—perhaps the most critical battleground for digital power today. Centralized AI services don’t just mine your data for corporate benefit; they can shape your thinking, limit your capabilities, and make you dependent on their infrastructure. But it doesn’t need to be that way.

We’re already seeing the emergence of open source models, opportunities to control your own system prompts (as DuckDuckGo recently introduced), and smaller distilled models that work in decentralized environments. Projects are emerging to give people more power over their own data, letting you decide how AI can interact with your information, rather than the AI system slurping up everything it can about you.

This isn’t about technical preferences—it’s about the difference between renting someone else’s vision of how you should think and work versus building your own.

The Technological Poison Pill

The beauty of truly decentralized systems is that they’re extremely resistant to capture.

This is what I call the technological poison pill: systems architected so that growth makes them harder to capture, not easier. Traditional centralized platforms become more valuable targets for authoritarians as they scale. Properly designed protocols become more resilient against capture as adoption increases.

Protocol-based systems demonstrate this principle by distributing different functions across services that no single entity controls. Even if one implementation gets captured by bad actors, users can retain their data, connections, and digital identity while moving to alternative services. The architecture makes takeover attempts self-defeating—the very structure that creates value also prevents consolidation of control.

The same principle applies to AI infrastructure. When you control your own models, data, and computational resources, no corporation can unilaterally change terms of service or start mining your conversations. The more people control their own AI infrastructure, the less valuable centralized AI services become as tools of control.

Breaking the Helplessness Loop

The concentration of digital power has trained us to beg for scraps from our digital overlords—and that learned helplessness may be more dangerous than the concentration itself.

Every week brings demands that tech giants “do better” or that governments “crack down” on platforms. But this approach assumes we need permission from powerful entities to fix the internet. It transforms what should be user empowerment into a performance of powerlessness.

This helplessness isn’t accidental—it serves the interests of concentrated power. The more we believe we need tech giants to solve our problems, the more indispensable they become. The more we focus on regulating existing platforms instead of building alternatives, the more we entrench their dominance. The more we beg politicians to save us, the more attractive these companies become as partners for authoritarians seeking control.

The tech oligarchs standing behind Trump at his inauguration represent the logical endpoint of this dynamic: when digital infrastructure owners become kingmakers, democracy becomes a performance staged on their platforms.

But this endpoint isn’t inevitable—it’s the result of choices we can still change.

The Choice Before Us

The underlying infrastructure that enabled our current digital dystopia can enable something radically different: a genuinely democratic digital ecosystem where users control their own experiences, data, and tools.

But this future requires active choice. It means learning new tools, supporting new protocols, and building new habits. It means moving beyond the comfortable convenience of corporate platforms and taking responsibility for digital sovereignty.

The alternative is continued concentration of power in the hands of billionaires who literally stand behind authoritarians at inaugurations, viewing democracy as an obstacle to their vision of control.

Yes, decentralization creates challenges—technical complexity, potential for abuse, fragmentation. But these aren’t arguments against decentralization; they’re arguments for designing it thoughtfully. Democratic institutions have always grappled with similar tensions between distributed power and effective governance. The solution isn’t to abandon democratic principles but to architect systems that embody them while addressing their practical challenges.

The same principle applies to digital infrastructure. Tradeoffs exist, but they don’t justify accepting concentrated control any more than political tradeoffs justify accepting authoritarianism. We can build decentralized systems that address concerns about complexity and abuse without centralizing power in the hands of corporate oligarchs.

Digital Democracy or Concentrated Control?

The choice before us is stark: do we build democratic digital infrastructure, or do we accept permanent concentrated control?

Digital democracy means building systems that embody democratic values—transparency over opacity, user agency over corporate control, distributed power over centralized authority. It means using AI as a tool of personal liberation rather than corporate surveillance. It means supporting protocols that resist capture rather than platforms that court it.

Most importantly, it means rejecting the learned helplessness that treats concentrated tech power as inevitable rather than recognizing it as a temporary arrangement we can change.

The tools exist. Open protocols are maturing. AI models are being democratized. Decentralized infrastructure is becoming viable. The question isn’t technical capability—it’s political will.

Will we choose the difficult work of building democratic digital infrastructure? Or will we continue asking permission from oligarchs and authoritarians?

The battle for the open internet and the battle for democracy aren’t separate fights—they’re the same fight. The future of our digital lives is the future of democracy itself.

We can accept concentrated control over our digital lives, or we can build democratic infrastructure of our own. The choice is ours, but the window for making it won’t stay open forever.

Filed Under: concentration, control, decentralization, democracy, empowerment, enshittification, extraction, fascism, institutions, protocols not platforms, social media

You Shouldn’t Have To Make Your Social Media Public To Get A Visa

from the privacy-is-a-right dept

The Trump administration is continuing its dangerous push to surveil and suppress foreign students’ social media activity. The State Department recently announced an unprecedented new requirement that applicants for student and exchange visas must set all social media accounts to “public” for government review. The State Department also indicated that if applicants refuse to unlock their accounts or otherwise don’t maintain a social media presence, the government may interpret it as an attempt to evade the requirement or deliberately hide online activity.

The administration is penalizing prospective students and visitors for shielding their social media accounts from the general public or for choosing to not be active on social media. This is an outrageous violation of privacy, one that completely disregards the legitimate and often critical reasons why millions of people choose to lock down their social media profiles, share only limited information about themselves online, or not engage in social media at all. By making students abandon basic privacy hygiene as the price of admission to American universities, the administration is forcing applicants to expose a wealth of personal information to not only the U.S. government, but to anyone with an internet connection.

Why Social Media Privacy Matters

The administration’s new policy is a dangerous expansion of existing social media collection efforts. While the State Department has required since 2019 that visa applicants disclose their social media handles—a policy EFF has consistently opposed—forcing applicants to make their accounts public crosses a new line.

Individuals have significant privacy interests in their social media accounts. Social media profiles contain some of the most intimate details of our lives, such as our political views, religious beliefs, health information, likes and dislikes, and the people with whom we associate. Such personal details can be gleaned from vast volumes of data given the unlimited storage capacity of cloud-based social media platforms. As the Supreme Court has recognized, “[t]he sum of an individual’s private life can be reconstructed through a thousand photographs labeled with dates, locations, and descriptions”—all of which and more are available on social media platforms.

By requiring visa applicants to share these details, the government can obtain information that would otherwise be inaccessible or difficult to piece together across disparate locations. For example, while visa applicants are not required to disclose their political views in their applications, applicants might choose to post their beliefs on their social media profiles.

This information, once disclosed, doesn’t just disappear. Existing policy allows the government to continue surveilling applicants’ social media profiles even once the application process is over. And personal information obtained from applicants’ profiles can be collected and stored in government databases for decades.

What’s more, by requiring visa applicants to make their private social media accounts public, the administration is forcing them to expose troves of personal, sensitive information to the entire internet, not just the U.S. government. This could include various bad actors like identity thieves and fraudsters, foreign governments, current and prospective employers, and other third parties.

Those in applicants’ social media networks—including U.S. citizen family or friends—can also become surveillance targets by association. Visa applicants’ online activity is likely to reveal information about the users with whom they’re connected. For example, a visa applicant could tag another user in a political rant or post photos of themselves and the other user at a political rally. Anyone who sees those posts might reasonably infer that the other user shares the applicant’s political beliefs. The administration’s new requirement will therefore publicly expose the personal information of millions of additional people, beyond just visa applicants.

There Are Very Good Reasons to Keep Social Media Accounts Private

An overwhelming number of social media users maintain private accounts for the same reason we put curtains on our windows: a desire for basic privacy. There are numerous legitimate reasons people choose to share their social media only with trusted family and friends, whether that’s ensuring personal safety, maintaining professional boundaries, or simply not wanting to share personal profiles with the entire world.

Safety from Online Harassment and Physical Violence

Many people keep their accounts private to protect themselves from stalkers, harassers, and those who wish them harm. Domestic violence survivors, for example, use privacy settings to hide from their abusers, and organizations supporting survivors often encourage them to maintain a limited online presence.

Women also face a variety of gender-based online harms made worse by public profiles, including stalking, sexual harassment, and violent threats. A 2021 study reported that at least 38% of women globally had personally experienced online abuse, and at least 85% of women had witnessed it. Women are, in turn, more likely to activate privacy settings than men.

LGBTQ+ individuals similarly have good reasons to lock down their accounts. Individuals from countries where their identity puts them in danger rely on privacy protections to stay safe from state action. People may also reasonably choose to lock their accounts to avoid the barrage of anti-LGBTQ+ hate and harassment that is common on social media platforms, which can lead to real-world violence. Others, including LGBTQ+ youth, may simply not be ready to share their identity outside of their chosen personal network.

Political Dissidents, Activists, and Journalists

Activists working on sensitive human rights issues, political dissidents, and journalists use privacy settings to protect themselves from doxxing, harassment, and potential political persecution by their governments.

Rather than protecting these vulnerable groups, the administration’s policy instead explicitly targets political speech. The State Department has given embassies and consulates a vague directive to vet applicants’ social media for “hostile attitudes towards our citizens, culture, government, institutions, or founding principles,” according to an internal State Department cable obtained by multiple news outlets. This includes looking for “applicants who demonstrate a history of political activism.” The cable did not specify what, exactly, constitutes “hostile attitudes.”

Professional and Personal Boundaries

People use privacy settings to maintain boundaries between their personal and professional lives. They share family photos, sensitive updates, and personal moments with close friends—not with their employers, teachers, professional connections, or the general public.

The Growing Menace of Social Media Surveillance

This new policy is an escalation of the Trump administration’s ongoing immigration-related social media surveillance. EFF has written about the administration’s new “Catch and Revoke” effort, which deploys artificial intelligence and other data analytic tools to review the public social media accounts of student visa holders in an effort to revoke their visas. And EFF recently submitted comments opposing a USCIS proposal to collect social media identifiers from visa and green card holders already living in the U.S., including when they submit applications for permanent residency and naturalization.

The administration has also started screening many non-citizens’ social media accounts for ambiguously defined “antisemitic activity,” and previously announced expanded social media vetting for any visa applicant seeking to travel specifically to Harvard University for any purpose.

The administration claims this mass surveillance will make America safer, but there’s little evidence to support this. By the government’s own previous assessments, social media surveillance has not proven effective at identifying security threats.

At the same time, these policies gravely undermine freedom of speech, as we recently argued in our USCIS comments. The government is using social media monitoring to directly target and punish foreign students and others for their digital speech through visa denials or revocations. And the social media surveillance itself broadly chills free expression online—for citizens and non-citizens alike.

In defending the new requirement, the State Department argued that a U.S. visa is a “privilege, not a right.” But privacy and free expression should not be privileges. These are fundamental human rights, and they are rights we abandon at our peril.

Originally posted to the EFF’s Deeplinks blog.

Filed Under: privacy, social media, state department, travel visa, us