internet – Techdirt

from the because-of-course-this-is-how-it-would-turn-out dept

A lot of laws have been passed in Europe that regulate the content American companies can carry. Most of these laws were passed to tamp down speech that would otherwise be legal in the United States, but not so much in Europe, where free speech rights aren’t given the same sort of protections found in the US.

Since most of the larger tech companies maintained overseas offices, they were subject to these laws. Those laws targeted everything from terrorist-related content to “hate speech” to whatever is currently vexing legislators. Attached to these mandates were hefty fines and the possibility of being asked to exit these countries completely.

Of course, the most important law governing content takedown demands was passed much, much earlier. I’m not talking about the CDA and Section 230 immunity. No, it’s a law that required no input from legislators or lobbyists.

The law of unintended consequences has been in full force since the beginning of time. But it’s never considered to be part of the legislative process, despite hundreds of years of precedent. So, while the consequences are unintended, they should definitely be expected. Somehow, they never are.

And that brings us to this report [PDF] from The Future of Free Speech, a non-partisan think tank operating from the friendly confines of Vanderbilt University in Tennessee. (h/t Reason)

Legislators in three European countries have made many content-related demands of social media services over the past decade-plus. The end result, however, hasn’t been the eradication of “illegal” content, so much as it has been the eradication of speech that does not run afoul of this mesh network of takedown-focused laws.

When you demand communication services respond quickly to vaguely written laws, the expected outcome is exactly what’s been observed here: the proactive removal of content, a vast majority of which doesn’t violate any of the laws these services are attempting to comply with.

This analysis found that legal online speech made up most of the removed content from posts on Facebook and YouTube in France, Germany, and Sweden. Of the deleted comments examined across platforms and countries, between 87.5% and 99.7%, depending on the sample, were legally permissible.

Equally unsurprising is this breakdown of the stats, which notes that Germany’s content removal laws (which have been in place longer and are much stricter, thanks to its zero-tolerance approach to anything Nazi-adjacent) tend to result in the highest percentage of collateral damage.

The highest proportion of legally permissible deleted comments was observed in Germany, where 99.7% and 98.9% of deleted comments were found to be legal on Facebook and YouTube, respectively. This could reflect the impact of the German Network Enforcement Act (NetzDG) on the removal practices of social media platforms, which may over-remove content with the objective of avoiding the legislation’s hefty fine. In comparison, the corresponding figures for Sweden are 94.6% for both Facebook and YouTube. France has the lowest percentage of legally permissible deleted comments, with 92.1% of the deleted comments in the French Facebook sample and 87.5% of the deleted comments in the French YouTube sample.

This isn’t just a very selective sampling of content likely to be of interest to the three countries examined in this report. Nearly 1.3 million YouTube and Facebook comments were utilized for this study. It’s a relatively microscopic in terms of comments generated daily by both platforms but large enough (especially when restrained to three European countries) to determine content removal patterns.

The researchers discovered that more than half the comments removed by these platforms under these countries’ laws were nothing more than the sort of thing that makes the internet world go round, so to speak:

Among the deleted comments, the majority were classified as “general expressions of opinion.” In other words, these were statements that did not contain linguistic attacks, hate speech or illegal content, such as expressing support for a controversial candidate in the abstract. On average, more than 56% of the removed comments fall into this category.

So, the question is: are these policies actually improving anything? More to the point, are they even achieving the stated goals of the laws? The researchers can’t find any evidence to support the theory that the collateral damage might be acceptable if it helps these governments achieve their aims. Instead, the report suggests things will only get worse: the geopolitical environment is in constant flux, which means the goalposts for content moderation are always in motion, while the punishments for non-compliance remain unchanged. And that combination pretty much ensures what’s been observed here will continue.

[M]oderation of social media is understood by several countries as a delicate balance between freedom of expression, security, and protection of minorities. However, recent events and geopolitical developments could disrupt this perceived balance. National security concerns have caused governments to try to counter misinformation and interference from hostile nations with blunt tools. Additionally, but without making any definitive conclusions, there is some indication that legislation, such as the NetzDG, aimed at strengthening citizens and granting them certain rights, has the unintended effect of encouraging social media platforms to delete a larger fraction of legal comments. This is a preview into the potential impact of the EU’s DSA now in force on freedom of expression.

The report is far kinder in its observations than it probably should be. It says multiple EU governments “understand” that content moderation is a “delicate balance.” That rarely seems to be the case. This report makes it clear that content moderation at scale is impossible. But when companies point this out, regulators tend to view these assertions as flimsy excuses and insist this means nothing more than tech companies just aren’t trying hard enough to meet the (impossible) demands of dozens of laws and hundreds of competing interests.

The takeaway from this report should be abundantly clear. But somehow adherence to the law of unintended consequences is still considered to be a constant flouting of the Unicorns Do Exist laws passed by governments that firmly believe that any decree they’ve issued must be possible to comply with. Otherwise they, in their infinite wisdom, wouldn’t have written it in the first place.

Filed Under: censorship, content removal, europe, france, free speech, germany, internet, netzdg, regulation, unintended consequences

Yet Another Study Finds That Internet Usage Is Correlated With GREATER Wellbeing, Not Less

from the that-pesky-data dept

You’ve all heard the reports about how the internet, social media, and phones are apparently destroying everyone’s well-being and mental health. Hell, there’s a best-selling book and its author making the rounds basically everywhere, insisting that the internet and phones are literally “rewiring” kids’ minds to be depressed. We’ve pointed out over and over again that the research does not appear to support this finding.

And, really, if the data supported such a finding, you’d think that a new study looking at nearly 2 and a half million people across 168 countries would… maybe… find such an impact?

Instead, the research seems to suggest much more complex relationships, in which, for many people, the ability to connect with others and with information is largely beneficial. For many others, it’s basically neutral. And for a small percentage of people, there does appear to be a negative relationship, which we should take seriously. However, it often appears that that negative relationship is one where those who are already dealing with mental health or other struggles turn to the internet when they have nowhere else to go, and may do so in less than helpful ways.

The Oxford Internet Institute has just released another new study by Andrew Przybylski and Matti Vuorre, showing that there appears to be a general positive association between internet usage and wellbeing. You can read the full study here, given that it has been published as open access (and under a CC BY 4.0 license). We’ve also embedded it below if you just want to read it there.

As with previous studies done by Vuorre and Przybylski, this one involves looking at pretty massive datasets, rather than very narrow studies of small sample sizes.

We examined whether having (mobile) internet access or actively using the internet predicted eight well-being outcomes from 2006 to 2021 among 2,414,294 individuals across 168 countries. We first queried the extent to which well-being varied as a function of internet connectivity. Then, we examined these associations’ robustness in a multiverse of 33,792 analysis specifications. Of these, 84.9% resulted in positive and statistically significant associations between internet connectivity and well-being. These results indicate that internet access and use predict well-being positively and independently from a set of plausible alternatives.

Now, it’s important to be clear here, as we have been with studies cited for the opposite conclusion: this is a correlational study, and is not suggesting a direct causal relationship between having internet access and wellbeing. But, if (as folks on the other side claim) internet access were truly rewiring brains and making everyone depressed, it’s difficult to see how we would then see these kinds of outcomes.

People like Jonathan Haidt have argued that these kinds of studies obscure the harm done to teens (and especially teenaged girls), using that claim as a way of dismissing them. So it’s nice to see the researchers here try to tease out possible explanations, to make sure such things weren’t hidden in the data:

Because of the large number of predictors, outcomes, subgroups to analyze, and potentially important covariates that might theoretically explain observed associations, we sought out a method of analysis to transparently present all the analytical choices we made and the uncertainty in the resulting analyses. Multiverse analysis (Steegen et al., 2016) was initially proposed to examine and transparently present variability in findings across heterogeneous ways of treating data before modeling them (see also Simonsohn et al., 2020). We therefore conducted a series of multiverse analyses where we repeatedly fitted a similar model to potentially different subgroups of the data using potentially different predictors, outcomes, and covariates.
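The multiverse approach the researchers describe can be sketched in a few lines. This is a toy illustration on synthetic data (every variable name and effect size below is invented for the sketch, not taken from the study): it fits the same regression across each combination of outcome and covariate set, then tallies how many specifications yield a positive, statistically significant association, which is the same logic behind the study's "84.9% of 33,792 specifications" figure.

```python
# Toy multiverse analysis: fit one regression per specification
# (outcome x covariate set) and count positive, significant results.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical stand-ins for the survey measures.
data = {
    "internet_use": rng.normal(size=n),
    "income": rng.normal(size=n),
    "age": rng.normal(size=n),
}
# Two toy well-being outcomes, both mildly related to internet use.
data["life_satisfaction"] = 0.3 * data["internet_use"] + rng.normal(size=n)
data["daily_enjoyment"] = 0.2 * data["internet_use"] + rng.normal(size=n)

def ols_t(y, predictors):
    """OLS via least squares; returns the first predictor's
    coefficient and its t-statistic."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    sigma2 = (resid @ resid) / dof
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1], beta[1] / np.sqrt(cov[1, 1])

outcomes = ["life_satisfaction", "daily_enjoyment"]
covariate_sets = [[], ["income"], ["age"], ["income", "age"]]

positive_significant = 0
total = 0
for outcome, covs in itertools.product(outcomes, covariate_sets):
    predictors = [data["internet_use"]] + [data[c] for c in covs]
    coef, t = ols_t(data[outcome], predictors)
    total += 1
    if coef > 0 and abs(t) > 1.96:  # ~5% two-sided threshold
        positive_significant += 1

print(f"{positive_significant}/{total} specifications positive & significant")
```

The actual study's multiverse also varied predictors and subgroups, multiplying the specification count into the tens of thousands, but the tallying logic is the same: report how robust the association is across all reasonable analytical choices rather than picking one model.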

That allowed them to explore questions regarding different subgroups. And while they did find one “negative association” among young women, it was not in the way you might have heard or would have thought of. There was a “negative association” between “community well-being” and internet access:

We did, however, observe a notable group of negative associations between internet use and community well-being. These negative associations were specific to young (15–24-year-old) women’s reports of community well-being. They occurred across the full spectrum of covariate specifications and were thereby not likely driven by a particular model specification. Although not an identified causal relation, this finding is concordant with previous reports of increased cyberbullying (Przybylski & Bowes, 2017) and more negative associations between social media use and depressive symptoms (Kelly et al., 2018; but see Kreski et al., 2021). Further research should investigate whether low community well-being drives engagement with the internet or vice versa.

This took me a moment to understand, but after reading the details, it’s showing that (1) if you were a 15-to-24-year-old woman and (2) if you said in the survey that you really liked where you live, then (3) you were less likely to have accessed the internet over the past seven days. That was the only significant finding of that nature. That same cohort did not show a negative correlation for other areas of well-being, around fulfillment and such.

To be even more explicit: the “negative association” was only with young women who answered that they strongly agree with the statement “the city or area where you live is a perfect place for you” and then answered the question “have you used the internet in the past seven days.” There were many other questions regarding well-being that didn’t have such a negative association. This included things like rating how their life was from “best” to “worst” on a 10 point scale, and whether or not respondents “like what you do every day.”

So, what this actually appears to support is the idea that if you are happy with where you live (happy in your community), then you may be less focused on the internet. But, for just about every other measure of well-being, internet access is strongly correlated in a positive way. There are a few possible explanations for this, but at the very least it might support the theory that the problems seen in studies of people who both face mental health struggles and use the internet excessively stem from outside the internet, leading them to turn to the internet for lack of anywhere else to turn.

The authors are careful to note the limitations of their findings, and recognize that human beings are complex:

Nevertheless, our conclusions are qualified by a number of factors. First, we compared individuals to each other. There are likely myriad other features of the human condition that are associated with both the uptake of internet technologies and well-being in such a manner that they might cause spurious associations or mask true associations. For example, because a certain level of income is required to access the internet and income itself is associated with well-being, any simple association between internet use and well-being should account for potential differences in income levels. While we attempted to adjust for such features by including various covariates in our models, the data and theory to guide model selection were both limited.

Second, while between-person data such as we studied can inform inferences about average causal effects, longitudinal studies that track individuals and their internet use over time would be more informative in understanding the contexts of how and why an individual might be affected by internet technologies and platforms (Rohrer & Murayama, 2021).

Third, while the constructs that we studied represent the general gamut of well-being outcomes that are typically studied in connection to digital media and technology, they do not capture everything, nor are they standard and methodically validated measures otherwise found in the psychological literature. That is, the GWP data that we used represent a uniquely valuable resource in terms of its scope both over time and space. But the measurement quality of its items and scales might not be sufficient to capture the targeted constructs in the detailed manner that we would hope for. It is therefore possible that there are other features of well-being that are differently affected by internet technologies and that our estimates might be noisier than would be found using psychometrically validated instruments. Future work in this area would do well in adopting a set of common validated measures of well-being (Elson et al., 2023).

On the whole, it’s great to see more research and more data here, suggesting that, yes, there is a very complex relationship between internet access and well-being. But it should be increasingly difficult to claim that internet access is overall negative and harmful, no matter what the popular media and politicians tell you.

Filed Under: andrew przybylski, data, internet, matti vuorre, social media, study, well-being
Companies: oxford university

The Sky Is Rising 2024 Edition: Rather Than Destroying Culture, The Internet Has Saved The Content Industries

from the the-sky-is-rising dept

Read the latest edition of The Sky Is Rising at The Copia Institute »

Twelve years ago, we released our very first research report, The Sky Is Rising. Back then, in 2012, the commonly accepted wisdom was that the internet was killing various creative industries, from the music industry (especially!) to movies, TV, and books, among other things. This didn’t seem to match with the world that we were seeing, so we dug into all the data (and, wherever possible, sought to use the industry’s own numbers) and found that while some industries were struggling to adapt to the internet, the data actually showed that the sky was rising, not falling.

We found that more content than ever before was being created (though not all through traditional channels). We found that people were engaging with more content than ever before. And, contrary to the narrative spun by some legacy industries, we saw that people were more than willing to spend money on content. They were just focused on having it be convenient and accessible where they wanted it to be.

Over the years, with support from CCIA, we released additional editions of The Sky Is Rising report via our think tank, The Copia Institute, but our last one came five years ago, in 2019, before the COVID pandemic. Last year we set out to revisit not just the data, but the structure of the whole report. The process took almost the entire year, but we’re excited to release our latest edition of The Sky Is Rising.

In the original report, a decade ago, we were focused just on countering the misleading narrative that the internet was killing the creative industries. Not only is that myth dead and buried, the latest report suggests quite the opposite: that the internet has saved those industries and basically become the lifeblood of all creative industries.

Throughout the report what we saw time and time again is that the growth in these industries is happening because of the internet. It’s making it easier than ever to create, to share, to distribute, to promote, to sell, and to engage. Creativity is thriving, and much of it is entirely due to the internet.

Indeed, we saw this most directly in industries most heavily impacted by COVID. One of our concerns going into this report was looking at how the pandemic impacted things, and the data certainly confirmed that some industries had huge problems: namely live music and movie theaters. But, in both cases, the amazing thing that the data showed was how the internet rushed in to fill the void, providing new ways to experience content that traditionally had required performance spaces, helping to tide things over during the periods of lockdowns, and then easing the rebound after lockdowns loosened.

The internet helped spare those industries, and helped billions of people around the globe continue to engage with and experience wonderful art, even in the midst of a global pandemic.

Over and over again we saw examples of the internet helping these industries out. The most stark and clear example is the recording industry (which, as a reminder, is just one segment of the music industry). This was always Exhibit A for an industry supposedly being destroyed by the internet. Except, with the ability to create more music, distribute it, and enable more convenient access for everyone, the business models have sorted themselves out, and now the internet is responsible for the industry reaching new highs.

On the video side of things, while COVID took a huge bite out of the box office, when lumped together with digital streaming, the larger market for video basically has continued to grow.

For what it’s worth, that chart highlights a change we made with this year’s report. Ever since the original edition, we had been combining movies and TV into a single “video” section. This turned out to be prescient as the line between movies and TV started to blur quite a bit during the streaming era. As we were putting together this year’s report, we started to lean in on this thinking, and we retitled the sections and expanded a few. In the old reports, we covered Music, Video, Books, and Video Games. This year, we have switched it to the activity involved: Listening, Watching, Reading, and Playing. This allowed us to expand some of these categories, and slot in some newer things like TikTok videos, digital magazines, and podcasts.

Also, we’ve added a “mini-chapter” on AI. We’re way too early into the generative AI world to have that much data on what it means for creativity and the creative industries. However, from what we’re seeing, it feels like “generative AI” is taking on the misleading role that “the internet” had in the early 2000s, of a new technology that some are predicting will destroy certain industries. And, while it’s early, what we’re seeing is (again) quite the opposite. AI has all the makings of an incredible tool to help people be even more creative and to create more wonderful works that people will enjoy.

There’s a lot more in the full report, which weighs in at 80 pages, chock full of details, charts, and graphs. But the key takeaway from it should be that the story from the early 2000s about how the internet was going to kill the creative industries and creators was not only wrong, it had everything backwards. The internet has been a huge boost to the creative industries, opening up new ways for people to create, to distribute, and to engage with content of all kinds.

The sky is truly rising, not falling. And, we should keep that in mind as we live through yet another apparent moral panic about the next “threat” to these industries.

Read the latest edition of The Sky Is Rising at The Copia Institute »

Filed Under: books, copyright, creativity, culture, internet, movies, music, reading, sky is rising, tv, video games

State Governments Can’t Resist The Siren Song Of Censorship

from the that-hasn't-worked-in-over-a-century dept

The states have gone rogue. In the last year alone, at least nine states enacted internet censorship laws. And more legislators are promising to take up the cause. But these laws are directly at odds with the First Amendment’s command that the government shall not abridge the freedom of speech.

Undeterred, states passed laws restricting who can access social media, erecting barriers for how to access social media, and even restricting what speech can be displayed.

The states have defended these unconstitutional laws by claiming the laws don’t regulate speech. Instead, the states say they are regulating conduct. But this claim rests on a narrow interpretation of what speech is that was rejected long ago. The First Amendment broadly protects expression regardless of medium. Movies, video games, and the internet all fall under the First Amendment’s protection. In fact, the First Amendment even protects expressive acts like flag burning. Given the First Amendment’s broad scope, simply recasting speech as conduct will not save an unconstitutional law.

Unfortunately, many states are unmoved by these facts. As though entranced by a siren, they continue on their path toward censorship. When governments perceive a threat, whether real or imagined, they act to combat it. When they believe the threat comes from speech, they seek to suppress it in the name of safety. Yet, the First Amendment forbids them from indulging this impulse.

But that impulse is strong. And when confronted with an obstacle, the government looks for a way around it. To evade the First Amendment, governments characterize their laws as “privacy protections,” “conduct regulations” or as restrictions of access to “places.” Upon closer examination, however, none of these justifications hold water.

For example, California presents its Age-Appropriate Design Code as a privacy regulation. Yet, the law imposes obligations on websites to deploy algorithms, designs, and features in a certain way or face fines. Of course, algorithms, features, and designs are the means by which websites develop, display, and disseminate speech.

Arkansas contends that its Social Media Safety Act regulates social media as a “place.” Arkansas seeks to keep minors out of this place just like it restricts their access to bars and casinos. But there is a fundamental distinction between social media and a bar or casino. Social media sites are speech sites. They are designed to facilitate the creation, consumption, and distribution of speech. Bars and casinos, by contrast, are not.

When challenged, both Arkansas and California refused to concede that their laws are censorial. In fact, California boldly proclaimed that its law has “nothing to do with speech.” In both cases, the government’s entire position depends on courts ignoring reality and the crucial role algorithms and social media play in creating, curating, and disseminating online speech. Fortunately, courts are not so naive.

Indeed, accepting these rationalizations would require a sudden, total departure from a century of First Amendment jurisprudence. Such a departure would be drastic, but we can catch a glimpse of what such a narrow, restricted view of speech would look like by examining the early “moving pictures” industry.

In the early twentieth century, movies were sweeping the nation. But they were so new that it was difficult to square them with the common sense understanding of speech. Believing that movies were not a form of speech, several states erected censorship boards to restrict their dissemination.

Ohio passed its Moving Picture Censorship Act in 1913. The law armed the board of censors with the authority to pass judgment on all movies brought into the state. The censors were tasked with approving only “moral, educational, or amusing and harmless” movies. The Mutual Film Corporation challenged the law and argued that it violated the freedoms of speech and publication under the Ohio Constitution.

The U.S. Supreme Court upheld the censorship law. The Court said that movies were not covered under the right to speak, write, or publish. Movies were “mere representations of events, of ideas and sentiments [already] published and known, vivid, useful and entertaining no doubt, but *** capable of evil.”

Put simply, movies were different. They were impersonal affairs. The audience had no opportunity to interact with the cast members as they might after a theatrical production or speech. Similarly, audience members were not presented with a copy of the script to read, flip through, or take home as they could by purchasing a book or newspaper. Given these differences, movies were readily distinguishable from the sort of speaking, writing, and publishing familiar at the time.

The Court latched on to these differences and adopted a narrow conception of speech. Given the ubiquity of movies today, we understand that the differences between movies and books are superficial. We know that movies are expressive and protected by the First Amendment. But the Court was not armed with its own subsequent, robust case law on the subject. And while the narrow view of speech arose from a decision based on Ohio’s constitution, it held sway over the Court’s subsequent First Amendment decisions for the next few decades.

However, in Burstyn v. Wilson, the Court revisited the issue of movie censorship; it repudiated Mutual Film and its narrow view of speech. The Court said that freedom of expression was the rule. There was no need to make an exception for movies because “the basic principles of freedom of speech and the press, like the First Amendment’s command, do not vary.” In the years since Burstyn, the Court’s decisions confirmed that Mutual Film was an aberration and incompatible with free speech. Its subsequent decisions applying the First Amendment to video games and the internet firmly established that free speech applies across mediums. Leaving no room for doubt, the Court reaffirmed its commitment to a broad view of speech this term when it said that online and offline speech protections are coextensive.

Mutual Film and its narrow view of speech are obsolete. Any medium for communication—new or ancient—is entitled to First Amendment protection. Half-hearted attempts to pass off speech regulations as something else will not survive a legal challenge. When legislators choose to regulate speech in spite of the First Amendment, they do so at their own risk.

Unfortunately, this is exactly how many states are approaching internet regulations today. As though still enamored by Mutual Film, some legislators seem determined to treat the internet like a movie from 1913 and restrict content (or access to it) if it is not “moral, educational, or amusing and harmless.” But government oversight of content is censorship. And attempts to impose censorship under an obsolete framework will find no purchase in court. These measures must fail. And they will.

Paul Taske is Litigation Center Policy Counsel for NetChoice, who has been active in challenging many of these laws.

Filed Under: 1st amendment, arkansas, california, censorship, free speech, internet, social media, states

In Internet Speech Cases, SCOTUS Should Stick Up For Reno v. ACLU

from the scotus-should-remember-it-protected-free-speech-online dept

It was by no means certain that the internet would enjoy full First Amendment protection. The radio is not shielded from the government in that way. Nor is broadcast television. Both Congress and the President supported placing online speech under some degree of state control. In Reno v. ACLU (1997), however, the Supreme Court could find “no basis for qualifying the level of First Amendment scrutiny that should be applied to this [new] medium.” Liberty won out.

A quarter-century later, the free internet faces an array of new threats. Sometimes the danger is announced openly and without regret. Discussing his intention to sign a law restricting minors’ access to social media, the governor of Utah recently declared Reno “wrongly decided.” There are “new facts,” he tells us. He earns points for candor. Most opponents of internet freedom attempt to hide what they’re doing. Some of these aspiring regulators even try to snatch the banner of free speech for themselves. But they all want, by hook or by crook, to curtail or evade Reno.

Many states chafe at the restraints Reno places on the government. A few have already arrived at the Supreme Court. These states endorse legal theories that would drastically shrink _Reno_’s scope. But they do not want Reno narrowed in a neutral, even-handed fashion. For the states in question stand on opposite sides of our nation’s culture war. Each side’s message is this: Limit Reno for thee, but not for me. Each side wants the Justices to revoke _Reno_’s protection for the other side.

Yet both sides appeal to the same legal principles. Each side makes arguments in its own litigation that, if accepted in the other side’s litigation, would blow up in its face. Each side makes arguments that, if given full play, could lead to _Reno_’s being destroyed for everyone. The two sides risk pulling the temple down on our heads.

The cases in question are 303 Creative v. Elenis, Moody v. NetChoice, and NetChoice v. Paxton. In 303 Creative, Colorado seeks to compel a Christian website designer to express a message, in the form of a website for a gay wedding, to which she objects. The U.S. Court of Appeals for the Tenth Circuit ruled for the state. The Supreme Court granted review and heard oral argument last December. In Moody and Paxton, states seek to force large social media platforms to spread messages that those platforms believe are dangerous, harmful, or abhorrent. In Moody, the Eleventh Circuit ruled for the platforms, blocking a Florida law called SB7072. In Paxton, the Fifth Circuit ruled against them, upholding a Texas law, HB20, that requires “viewpoint neutral” content moderation (i.e., if you carry Holocaust documentaries, you must carry Holocaust deniers). Petitions for certiorari have been filed in both cases, and the Court is almost certain to grant at least one of them.

The driving forces here are Colorado (supported by other blue states and the federal government) and Florida and Texas (supported by other red states). Still, each side has found able champions on the bench. Judges figure prominently in these legal debates, as we will see. Yet the Supreme Court now has the full picture. With both 303 Creative and Moody/Paxton before them, a majority of the Justices might take a different view. They might see that the best course is to defend the rule and spirit of Reno against all comers.

How is Reno being challenged? How do the attacks on it match up in 303 Creative, Moody, and Paxton? Let’s dig in.

Common Carrier / Place of Public Accommodation

Two years back, Justice Thomas, writing for himself, suggested that “some digital platforms” are “akin to common carriers or places of public accommodation.” If that’s right, he surmised, then “laws that restrict” those platforms’ “right to exclude” might satisfy the First Amendment. The state might lawfully force such entities to disseminate speech against their will.

Upholding HB20 in Paxton, Judge Oldham took the next step. Texas claimed that large social media platforms can be treated like common carriers. Oldham agreed. He concluded—in dicta; no other judge joined this part of his opinion—that HB20’s viewpoint neutrality rule “falls comfortably within the historical ambit of permissible common carrier regulation.”

The idea of common carriage has, Oldham wrote, “been part of Anglo-American law for more than half a millennium.” He explored the concept’s history at length, following it on a “long technological march” from “ferries and bakeries,” to “steamboats and stagecoaches,” to “telegraph and telephone lines,” and finally—in his mind—to “social media platforms.” He stressed “the centrality of the Platforms to public discourse.” He grappled with “modern precedents.” He engaged with the “counterarguments” of “the Platforms and their amici.” No one can dispute his rigor.

The Eleventh Circuit, speaking through Judge Newsom, ruled in Moody that the platforms are not like common carriers. Newsom, too, was careful and thorough. But in any event, how much of this debate is genuinely relevant? Judge Southwick’s answer, in his dissent in Paxton, was short and to the point. “Few of the cases cited” by Judge Oldham, Southwick wrote, “concern the intersection of common carrier obligations and First Amendment rights,” and the ones that do “reinforce the idea [that] common carriers retain their First Amendment protections of their own speech.” To show that a legal principle can trump a constitutional right, in other words, it does not suffice to show that the principle has an impressive pedigree. One must establish that the principle has in fact been used to trump the constitutional right.

Here is where things get interesting. This is precisely the approach that Lorie Smith, the Christian website designer, urges the Supreme Court to deploy in 303 Creative. Colorado says that Smith must make websites for gay weddings because her business is a place of public accommodation. What must Colorado do to connect its premise and its conclusion? It must prove, Smith contends, that “public-accommodation laws historically compelled speech, not that they merely existed.” At oral argument, Justice Thomas picked up this line of thought. Is there a “long tradition,” he asked (appearing to depart from the stance he teased with two years ago), “of public accommodations laws applying to speech . . . or expressive conduct?”

Where are the cases showing that, by declaring an entity a common carrier, the state can strip that entity of its right to decide what speech it will (or will not) disseminate to the public at large? Judge Oldham cited none. Where are the cases showing that, by declaring an entity a place of public accommodation, the state can force that entity to create expressive products against its will? In response to Justice Thomas’s question, Colorado’s counsel conceded that “the historical record is sparse.”

Would conservatives be glad to see Smith forced to design websites that go against her religious convictions? Would liberals rejoice at seeing social media platforms forced to host and amplify hate speech? If the answer to these questions is no, perhaps neither side should start down this path. Perhaps neither should be trying to use common carrier or public accommodation rules to evade Reno and control the internet.

Market Power

As support for the common carrier argument, Judge Oldham asserted the major social media platforms’ market power. “Each Platform has an effective monopoly,” he insisted, “over its particular niche of online discourse.” In his view, “sports ‘influencers’ need access to Instagram,” “political pundits need access to Twitter,” and so on.

There are a number of problems with this claim. To begin with, an entity that wins itself market power does not lose its right to free speech. In Miami Herald v. Tornillo (1974), it was argued that “debate on public issues” was at that time “open only to a monopoly in control of the press.” The Court did not disagree. Nonetheless, it unanimously struck down a state law requiring newspapers to let political candidates reply to negative coverage. “Press responsibility is not mandated by the Constitution,” the Justices explained, “and like many other virtues it cannot be legislated.”

Even if market power mattered, it is far from obvious that platforms have “effective monopolies,” whether over “niches” or otherwise. A month after the Fifth Circuit issued Paxton, Elon Musk purchased Twitter, causing more than a few commentators to ditch the service for Mastodon. Influencers—and, for that matter, political pundits—can gain a large following on Snapchat, TikTok (for now), YouTube, or Rumble. More broadly, the overlap among social media products is greater than might appear at first blush. Suing to break up Facebook and Instagram, for instance, the Federal Trade Commission has asserted that the products’ common parent, Meta, dominates a market for “personal social networking services.” The only large competitor in this market, the agency alleges, is Snapchat. Yet the agency has struggled to explain what makes this market distinct. These days, in fact, Meta is scrambling to make its products more like TikTok.

So the worst thing about the “effective monopol[ies]” claim is that it never gets beneath the surface. The typical antitrust case is a complex dispute about costs and outputs, profit margins and elasticities, and much else besides. Judge Oldham offered a bare assertion. A just-so story. A useful belief, if one’s goal is to let states commandeer the biggest social media platforms.

No one would cry for those platforms if the judiciary were to overestimate the size and stability of their market “niches.” Indeed, many will smile at the prospect. But be careful what you wish for.

Recall that the Tenth Circuit ruled against Lorie Smith in 303 Creative. Smith’s “custom and unique services,” the court wrote, “are inherently not fungible.” They are, “by definition, unavailable elsewhere.” Smith is therefore a market of one, the court thought, and that is grounds for forcing her to speak. Outlandish? Probably so. Then again, Colorado warns that if Smith wins, belief-based restrictions on service might proliferate, leading to market foreclosure in the aggregate. And that argument is not ridiculous; it is merely speculative and weak—not unlike the “effective monopol[ies]” argument in Paxton.

Anyone tempted to use loose pronouncements of market power as a weapon of (culture) war should first picture how the tactic might be misused in a variety of other cases. One careless claim of market power begets another.

Speech vs. Conduct

On the way to upholding HB20, the Fifth Circuit relied heavily on Rumsfeld v. FAIR (2006). A federal statute required law schools to host military recruiters on pain of losing government funding. FAIR upheld this mandate. “A law school’s decision to allow recruiters on campus,” the Court reasoned, “is not inherently expressive.” The statute regulated “conduct, not speech.” It affected “what law schools must _do_—afford equal access to military recruiters—not what they may or may not say.”

The Fifth Circuit used FAIR as a guide. The “targeted denial of access to only military recruiters,” the court said, could not be distinguished from the “viewpoint-based” content moderation “regulated by HB 20.” In both cases, the court concluded, the regulated activity is “conduct” that lacks “inherent expressiveness.” Therefore social media platforms have no First Amendment right to control what speech they host.

This, it turns out, is a popular way to justify letting the state regulate speech. In 303 Creative, the Biden administration filed a brief in support of Colorado. Colorado’s public accommodations law “target[s] conduct,” the brief says, invoking FAIR, and it “impose[s]” only “‘incidental’ burdens on expression.” The brief cites FAIR more than two dozen times.

FAIR was authored by Chief Justice Roberts. At the oral argument in 303 Creative, he did not seem thrilled about how the decision was thrown back at him. That case involved “providing rooms,” he protested, and the Court held merely that “empty rooms don’t speak.”

The Chief Justice is on to something. Here again, the best move is not to play. Conservatives and liberals can come up with creative ways selectively to apply FAIR to this or that (but no other!) form of online speech. They can try to exploit the decision with callous craft, expecting, for some reason, that the gambit will work always in favor of their interests, and never against them. Or they can put FAIR down and affirm Reno for all.

Editorial Discretion

Which brings us to the most aggressive, and the most dangerous, of the attacks on Reno. Included within the First Amendment is a right to editorial discretion. This is why the government generally cannot tell a newspaper which articles or letters to publish, or a parade which marchers to allow, or a television channel which movies to carry. As the Eleventh Circuit said in Moody, it is why social media services are “constitutionally protected” when “they moderate and curate the content that they disseminate on their platforms.”

In Paxton, the Fifth Circuit swept this right aside. “Editorial discretion,” the court proclaimed, is not “a freestanding category of constitutionally protected speech.”

In their petition for certiorari, the platforms’ representatives cast serious doubt on this claim. They quote the Supreme Court’s discussion, across various decisions, of the “exercise [of] editorial discretion over . . . speech and speakers,” of the “editorial function” as being “itself” an “aspect of ‘speech,’” and of the right of “editorial discretion in the selection and presentation” of content. As they observe, the Fifth Circuit “essentially limited th[e] Court’s editorial discretion cases to their facts.”

That’s true—but hold on. Let us return, one last time, to 303 Creative. At argument, Justice Sotomayor sounded remarkably like Judge Oldham. “Show me where,” on the website, “it’s your message,” she asked Smith’s counsel. “How is this your story? It’s [the couple’s] story.” Counsel responded with—the right to editorial discretion. “Every page” on the website is Smith’s “message,” counsel said, “just as in a newspaper that posts an op-ed written by someone else.” Sotomayor did not seem impressed.

We must again ask whether the states would welcome consistent application of their legal principles. If Colorado successfully compels Smith to speak in 303 Creative, will it accept that it has strengthened Florida’s and Texas’s hand in Moody and Paxton? Would Florida and Texas be willing to remove the platforms’ right to editorial discretion at the price of nixing many Christian artists’ right to such discretion as well? A state could duck the question by dreaming up new and clever ways to distinguish the cases. Yes, of course. Other, very different states could do the same. That is the problem.

The Court has called for the views of the Solicitor General in Moody and Paxton. The Biden administration will be tempted to try to thread the needle. To get cute. To argue that the red-state social media laws before the Court are toxic and scary and unconstitutional, but that the blue-state social media laws in the works are beneficial and enlightened and in perfect harmony with the First Amendment.

The Solicitor General should resist the urge to make everything come out right (from a liberal perspective). Here is what she should do instead. Agree that review is warranted. Denounce SB7072 and HB20. Celebrate the right to editorial discretion. Heap praise on Reno v. ACLU. Stop.

Filed Under: 1st amendment, 303 creative v. elenis, colorado, common carrier, compelled speech, florida, free speech, internet, moody v. netchoice, netchoice v. paxton, public accommodation, reno v. aclu, supreme court, texas

Indian Government Cuts Off Internet Access To 27 Million Punjab Residents As It Continues Its Targeting Of Sikhs

from the oppressors-have-all-the-best-toys dept

The Indian government under Narendra Modi has become an even worse version of itself. It has expanded its power unilaterally to silence critics and oppress citizens Modi doesn’t care for. It has continued to do this despite courts finding these actions illegal.

The government has, on more than one occasion, cut millions of people’s access to the internet. It claims these extreme measures are justified to ensure the safety of residents during periods of upheaval, but it’s clear these blackouts are being used to help the government control the narrative when faced with mass criticism.

More of the same is on tap as the Modi government continues its oppression of Sikh residents in the Punjab region — the only region in the country where Sikhs are not a minority. Following a violent protest earlier this year, the government clamped down on the region’s 27 million residents while searching for a single person: Sikh political activist Amritpal Singh, who the government claimed had instigated the violence against Punjab police officers. Pallavi Pundir, reporting for Vice, details the internet clampdown the government deployed during its search for Singh.

This week, state police and paramilitary forces put Punjab on edge as they swept through the whole state searching for Singh to arrest him. They said Singh is a “national security threat” and named the February incident as the reason for the crackdown.

Authorities blocked internet access, placed restrictions on movement, stopped protests, suspended Twitter accounts and arrested over a hundred people, all in the span of four days.

Up to 27 million people deprived of internet access just so the government could try (and fail) to track down someone who’d make them look cruel and inept. Singh became one of several public enemies number one following the protest against the (apparently wrongful) arrest of a Sikh man for kidnapping. Those charges were dropped and the man was released, prompting yet another protest against police abuse in Punjab. Having embarrassed itself, the government decided to punish an entire region for its own inability to prevent itself from engaging in oppressive tactics targeting the Sikh population.

But this extreme form of damage control isn’t working. This persecution — and its accompanying internet blackout — has attracted attention elsewhere in the world.

_In Canada, which has the world’s largest Sikh population after India, Member of Parliament Jagmeet Singh called the ongoing measures “draconian.”_

“These measures are unsettling for many [Sikhs] given [the state’s] historical use to execute extrajudicial killings and enforced disappearances,” he tweeted.

And the oppression continues, enabled by an oft-abused law that (no surprise) declares national security concerns justify massive government overreach.

This week, the Punjab police released dramatic details of Singh’s alleged escape: from a high-speed car chase, to claiming that Singh is hiding in disguise. So far, the Punjab police have arrested 154 of his associates. Singh and four others are charged under the draconian National Security Act (NSA), which gives overarching powers to the state to detain anyone.

Just like every other country in the world (including ours!), saying the words “national security” tilts all the power towards the executive branch, allowing the Indian government to do what it wants, when it wants, all without having to seek approval from courts or legislators. And it can continue to do these things without ever having to offer explicit justifications or explanations because any discussion would supposedly threaten the security of the nation.

So, the oppression continues with no end in sight. And it has been extended to those who merely observe and report on the government’s actions.

As of this week, Punjab police continue to crack down on protesters. Amandeep, an independent journalist from the state, whose Twitter account is also suspended, told VICE World News that the phones of some journalists have been seized. He requested anonymity due to fears for his security.

Despite all of this, the government has still failed to control the narrative, at least not abroad. With intermittent internet access, residents are limited to seeing what the government wants them to see. But they’re not stupid. They know what’s happening isn’t what’s being portrayed by the outlets the government controls. Unfortunately, knowing you’re being lied to isn’t the same thing as being able to do anything about it. And without stable access to the internet, it’s extremely difficult to counter the government’s narrative, which is exactly why these extreme measures are being deployed.

The government doesn’t need to cut off internet access to track down alleged criminals. But it does need to do that if its efforts are motivated almost solely by the ruling party’s animus towards certain residents of the country.

Filed Under: amritpal singh, india, internet, narendra modi, punjab, sikhs

Arkansas: No Need To Age Verify Kids Working In Meat Processing Plants, But We Must Age Verify Kids Online

from the priorities,-people dept

As we’ve been covering, there are a slew of laws across the country (and around the globe!) looking to require websites to “age verify” their visitors. And, it seems to be something that has support from all around the political spectrum, as “protect the children” moral panics know no political boundaries.

Just recently Utah passed its age verification (and more) anti-social media bills (which the governor is expected to sign shortly). Ohio has a plan in the works as well. And, of course, here in California, such a bill was signed into law last year, though is now being challenged in court. There are many states working on similar bills as well. Indeed, at this point, it’s more likely than not that a state is exploring such a bill, even as it seems likely to be unconstitutional.

Arkansas is one such state. SB66 is a bill “to create the protection of minors from distribution of harmful material” and “to establish liability for the publication or distribution of material harmful to minors on the internet” and “to require reasonable age verification.” In other words, the same old unconstitutional garbage that (1) has already been rejected by the Supreme Court and (2) is pre-empted under Section 230.

While the whole law is garbage, let’s just focus in on the age verification part. It would require that any commercial entity “shall use a reasonable age verification method before allowing access to a website that contains a substantial portion of material that is harmful to minors.” The bill has a longer definition of what “material harmful to minors” would be, and it includes “nipple of the female breast.” Also the “touching, caressing, or fondling of nipples, breasts, buttocks, the anus, or genitals.” Hmm.

Anyway.

In other news… in the very same state of Arkansas, governor Sarah Huckabee Sanders has signed into law a different bill, HB1410, which revised Arkansas labor laws to remove age verification for those under 16. The governor claimed in a statement that the law “was an arbitrary burden on parents to get permission from the government for their child to get a job.”

This comes less than a month after meat packing company Packers Sanitation Services, which has operations in Arkansas, was fined $1.5 million for illegally employing “at least 102 children to clean 13 meatpacking plants on overnight shifts,” some of whom were… in Arkansas. That company was found to employ kids as young as 13, who had their skin “burned and blistered.”

So, you know, seems like a good time to roll back the laws that try to make sure companies aren’t doing that sort of thing in Arkansas.

But, equally, seems like an odd time to focus on making sure those very same kids, who will no longer have to verify their ages to work such jobs… will have to verify their age to check out any website where they might encounter a female nipple.

Too young to see a nipple, but never too young to be put to labor cleaning a meatpacking plant where you can have your own skin burned and blistered.

The Arkansas way.

Somewhat incredibly, the bills share two cosponsors. Representative Wayne Long and Senator Joshua Bryant want to make sure it’s more difficult for kids to use the internet, but easier to have those kids work dangerous jobs in meatpacking plants.

Seems healthy.

Filed Under: age verification, arkansas, child labor, for the children, harmful content, internet, joshua bryant, nipples, protect the children, sarah huckabee sanders, wayne long

Getting Kicked Off Social Media For Breaking Its Rules Is Nothing Like Being Sent To A Prison Camp For Retweeting Criticism Of A Dictator

from the push-back,-don't-emulate dept

It’s become frustrating how often people insist that losing this or that social media account is “censorship” and an “attack on free speech.” Not only is it not that, it makes a mockery of those who face real censorship and real attacks on free speech. The Washington Post recently put out an amazing feature about people who have been jailed or sent away to re-education camps for simply reposting something on social media. It’s titled “They clicked once. Then came the dark prisons.”

The authoritarian rulers were not idle. They planned to take back the public square, and now they are doing it. According to Freedom on the Net 2022, published by Freedom House, between June 2021 and May 2022, authorities in 40 countries blocked social, political or religious content online, an all-time high. Social media has made people feel as though they can speak openly, but technological tools also allow autocrats to target individuals. Social media users leave traces: words, locations, contacts, network links. Protesters are betrayed by the phones in their pockets. Regimes criminalized free speech and expression on social media, prohibiting “insulting the president” (Belarus), “picking quarrels and provoking trouble” (China), “discrediting the military” (Russia) or “public disorder” (Cuba).

Ms. Perednya’s case is chilling. She was an honors student at Belarus’s Mogilev State University. Three days after Russia’s invasion of Ukraine, she reposted, in a chat on Telegram, another person’s harsh criticism of Mr. Putin and Mr. Lukashenko, calling for street protests and saying Belarus’s army should not enter the conflict.

She was arrested the next day while getting off a bus to attend classes. Judges have twice upheld her 6½-year sentence on charges of “causing damage to the national interests of Belarus” and “insulting the president.”

That is chilling free speech. That is censorship. You losing your account for harassing someone is not.

There are a bunch of stories in the piece, each more harrowing than the last.

After a wave of protest against covid-19 restrictions in late November, Doa, a 28-year-old tech worker in Beijing, told The Post that she and a friend were at a night demonstration briefly, keeping away from police and people filming with their phones. “I worked before in the social media industry. … I know how those things can be used by police,” she said. “They still found me. I’m still wondering how that is possible.” She added: “All I can think of is that they knew my phone’s location.” Two days later, police called her mother, claiming Doa had participated in “illegal riots” and would soon be detained. “I don’t know why they did it that way. I think it creates fear,” Doa said. A few hours later, the police called her directly, and she was summoned to a police station in northern Beijing, where her phone was confiscated and she underwent a series of interrogations over roughly nine hours. The group Chinese Human Rights Defenders estimates that more than 100 people have been detained for the November protests.

The piece calls on democratic nations to do something about all of this.

But as authoritarian regimes evolve and adapt to such measures, protesters will require new methods and tools to help them keep their causes alive — before the prison door clangs shut. It is a job not only for democratic governments, but for citizens, universities, nongovernmental organizations, civic groups and, especially, technology companies to figure out how to help in places such as Belarus and Hong Kong, where a powerful state has thrown hundreds of demonstrators into prison without a second thought, or to find new ways to keep protest alive in surveillance-heavy dystopias such as China.

Free nations should also use whatever diplomatic leverage they have. When the United States and other democracies have contact with these regimes, they should raise political prisoners’ cases, making the autocrats squirm by giving them lists and names — and imposing penalties. The Global Magnitsky Act offers a mechanism for singling out the perpetrators, going beyond broad sanctions on countries and aiming visa bans and asset freezes at individuals who control the systems that seize so many innocent prisoners. The dictators should hear, loud and clear, that brutish behavior will not be excused or ignored.

Except, what the piece leaves out is that, rather than do any of that, it seems that the political class in many of these “free nations” are looking on in envy. We’ve pointed out how various nations, such as the UK with its Online Safety Bill, and the US with a wide variety of bills, are actually taking pages directly from these authoritarian regimes, claiming that there can be new laws that require censorship in the name of “public health” or “to protect the children.” From pretty much all political parties, we’re seeing an embrace of using the power of regulations to make citizens less free to use the internet.

The many, many stories in the WaPo feature are worth thinking about, but the suggestion that the US government or other governments in so-called “free” nations aren’t moving in the same direction is naïve. We keep hearing talk about the need to “verify” everyone online, or to end anonymity. But that’s exactly what these authoritarian countries are doing to track and identify those saying what they don’t like.

And then we see the UK trying to require sites take down “legal, but harmful” content, or US Senators proposing bills that would make social media companies liable for anything the government declares to be “medical misinfo” and you realize how we’re putting in place the identical infrastructure, enabling a future leader to treat the citizens of these supposedly “free” nations identically to what’s happening in the places called out in the WaPo piece.

If anything, reading that piece should make it clear that these supposedly free nations should be pushing back against those types of laws, highlighting how similar laws are being abused to silence dissent. Fight for those locked up in other countries, but don’t hand those dictators and authoritarians the ammunition to point right back at our own laws, allowing them to claim they’re just doing the same things we are.

Filed Under: authoritarian, censorship, dictators, free expression, free speech, internet

Following Massive Protests Against COVID Policies, Chinese Government Again Ramping Up Its Censorship Efforts

from the pushing-back-against-the-pushback dept

A deadly fire in an Urumqi apartment complex has led to something rarely seen in China: massive protests across the nation against the Chinese government’s actually draconian COVID restrictions. Most of the city of Urumqi is on lockdown, with residents banned from leaving their homes. These restrictions may have contributed to the death toll. Witnesses (and one video) claimed lockdown barriers prevented fire trucks from getting to the scene of the fire.

Elsewhere in the country, people have been forced to sleep at work due to quarantine conditions. Others have been bused from their homes to quarantine facilities. Meanwhile, COVID numbers continue to climb, suggesting the recently instituted “zero COVID” policies aren’t actually addressing the problem.

Starting in Urumqi, protests soon spread across the country. Faced with open expressions of anger, the Chinese government is reacting the way it always reacts when it is faced with dissent: by increasing the footprint of its jackboot.

Internet and phone use is heavily regulated (and heavily surveilled) in China. Whatever was already working is being intensified. And whatever hasn’t been applied yet is being put into motion. No longer will it take creating or sharing content the government doesn’t like to earn police visits, criminal charges, or both. Now, as CNN reports, it will only take a nearly passive sign of approval directed at content the Chinese government dislikes to attract the government’s negative attention.

Internet users in China will soon be held liable for liking posts deemed illegal or harmful, sparking fears that the world’s second largest economy plans to control social media like never before.

China’s internet watchdog is stepping up its regulation of cyberspace as authorities intensify their crackdown on online dissent amid growing public anger against the country’s stringent Covid restrictions.

The new rules come into force from Dec. 15, as part of a new set of guidelines published by the Cyberspace Administration of China (CAC) earlier this month.

The Chinese government would prefer an airtight stranglehold, and this is just some expected tightening of its grip. As the government has certainly noticed, the more it tries to censor, the more creative citizens are when circumventing the efforts. Rotated videos, screenshots of content, coded language, unexpected communication platforms… all of these help keep citizens one step ahead of the censors.

So, the rules continue to roll out. And they get more extreme with every iteration.

The regulation is an updated version of one previously published in 2017. For the first time, it states that “likes” of public posts must be regulated, along with other types of comments. Public accounts must also actively vet every comment under their posts.

However, the rules didn’t elaborate on what kind of content would be deemed illegal or harmful.

This vagueness is a feature, not a bug. You’ll know you’ve violated the new rules when uniformed officers swing by the house to inform you that you’ve violated them. The solution is to stop liking other people’s posts: winning by not playing.

But there’s an upside to China’s ever-expanding censorship programs, especially when they’re trailing ever-expanding dissent. Even China’s massive surveillance apparatus can’t possibly hope to catch them all.

However, analysts also questioned how practical it would be to carry out the newest rules, given that public anger is widespread and strict enforcement of these censorship requirements would consume significant resources.

“It is almost impossible to stop the spread of protest activities as the dissatisfaction continues to spread. The angry people can come up with all sorts of ways to communicate and express their feelings,” Cheng said.

The Chinese government has the power. But it also has more than a billion people to keep an eye on. Dissent will never be completely silenced. And as long as that’s true, there’s still hope for the nation.

Filed Under: censorship, china, covid, free speech, intermediary liability, internet, protests

Public Records Expose Indian Government’s Full Access To Nation’s Internet Traffic

from the mass-surveillance,-minimal-expense dept

The government of India continues to increase its monitoring of residents’ day-to-day lives. Like pretty much every other country in the world, India relies on the internet to handle communications, data, and multiple services used by residents.

The government, under Prime Minister Narendra Modi, has become less democratic and more authoritarian. To keep dissent to a minimum, the government has repeatedly expanded its power to surveil internet traffic and communications, under the theory that doing so will somehow make the nation more secure.

Expansions of government power are codified with alarming regularity — much of it focused on controlling narratives, snooping on residents, and bending foreign social media platforms to its will.

Under Modi’s government, platforms and service providers have been stripped of safe harbor protections in order to be held directly responsible for user-generated content. In addition, the government has added compelled assistance mandates, which force service providers to log tons of user data continuously and provide the government with on-demand access to this information. The end result has been proactive removal of questionable content by service providers in order to avoid being punished by the Modi government for allowing “illegal” content to be spread by India’s internet users.

The government’s cyber law continues to morph, stripping protections and expanding government power with each iteration. Expansive mandated access to internet data and communications has resulted in ever more proactive activity from service providers — something inadvertently exposed in public records obtained by Entrackr.

“All ILD [international long distance] and ISP [internet service provider] licensees are mandated to connect their systems to the CMS [Centralized Monitoring System] facility,” and “law enforcement agencies are provided facility for on-line and real-time monitoring of traffic,” the Internet Service Providers Association of India said in a filing with the Department of Telecommunications obtained by Entrackr under the RTI [Right to Information] Act.

“This facility makes the obligation of providing physical space (10 work stations with access control) a redundant real estate facility at the Licensee gateway locations,” ISPAI said.

Whoops. There are a lot of quiet parts being said out loud here. First off, the service providers group is admitting it gives the government on-demand and real-time access to traffic flowing through their servers and services. But the second part is even worse: the data the government already has access to (thanks to these mandates) makes in-house access by government surveillance entities redundant. The overall point seems to be that ISPs are already at max compliance, so there’s no need to mandate on-location access points.

Not only is this remote access so easy to use that ISPs and long distance providers feel on-site access would be redundant, it’s also so seamless that the government doesn’t even need to inform or otherwise involve the entities housing this data. As early as 2015, the Indian government was already accessing this data without service providers’ knowledge. All that appears to have changed is that this access is now codified.

“Interception through LIS happened by interception requests which were made by law enforcement agencies to the Nodal Officers of the TSPs,” the Internet Freedom Foundation said in 2020. The CMS, however, undercuts these requests by letting the government access data in real time without individual requests.

Entrackr says the scale of the government’s access is “unclear.” Well, when service providers aren’t even given notification that their data and traffic are being accessed, it’s left up to the government to be honest about how often it’s using these powers. Cutting ISPs out of the equation allows the government to keep its own set of books, so to speak. And when government surveillance is almost completely discretionary, it’s highly unlikely the data collected by the government on its internet surveillance activities will be accurate.

All of this makes Indian residents little more than data generators the government can surveil at will. They’re not really citizens. They’re just open wallets and open books to a government that, under its current leadership, considers citizens something to be governed, rather than something it serves.

Filed Under: india, internet, internet surveillance, surveillance