
Elon Had SpaceX Defy Brazilian Supreme Court Order To Block ExTwitter, But Then Backed Down

from the will-he-won't-he dept

In the ongoing battle between Elon Musk and the Brazilian Supreme Court, it appears that Elon was the first to blink. At least a little bit.

What had started to shape up as a new front in the battle, with Elon’s SpaceX defying the order to block X on its Starlink satellite internet service, crumbled on Tuesday as SpaceX announced it would comply, though under protest (some reports claim SpaceX had missed the deadline to appeal the ruling). Of course, that announcement came from the adults at SpaceX, and it’s always possible that Elon will look to overrule them. At the time of this posting, Elon hasn’t directly commented on SpaceX’s announcement.

Last week, we wrote about the still-ongoing battle between the Brazilian Supreme Court and Elon Musk over his refusal to remove certain content (and share some information on users) from ExTwitter. It then morphed into a fight about having a “local representative” when Elon pulled ExTwitter out of Brazil entirely, after the court threatened to jail local employees. On Friday, we wrote about the ban order issued by Supreme Court Justice Alexandre de Moraes.

The initial order required basically every level of the telecom/internet infrastructure stack to block access to ExTwitter. That included, among other things, that Apple and Google had to block it from their app stores in Brazil, that ISPs in Brazil had to block access to ExTwitter and its app, and that internet backbone and telecom providers also had to block access to ExTwitter.

There were also two more controversial parts of the ban. The first part told Apple and Google that they had to also block access to VPNs that might allow users to get around the bans. A later part threatened massive fines on Brazilians caught getting around the ban by any means, including using a VPN. A few hours after the initial order was released, Moraes backed down on the first part, temporarily suspending the order that Apple and Google block VPNs, though the fine for users still stood. Many people incorrectly thought that part was rescinded as well.

On Monday, the Supreme Court upheld the overall ban. Moraes said that the fines for personal VPN use would only be enforced against users who sought to “engage in conduct that defrauds the court decision,” which seems somewhat broad and open to interpretation. One other judge wanted to limit the individual fines to users who got around the ban and used it to post racist or fascist-supporting content, but that request did not receive the necessary support from the other judges.

Still, there is an interesting element in all of this. Another of Elon’s offerings, Starlink from SpaceX, is one of those ISPs that would need to block access to ExTwitter in order to comply with the order. Given that Moraes had already started freezing SpaceX assets, it’s no surprise that Musk basically told Brazilian regulators he wasn’t going to abide by the blocking order either, according to the NY Times:

On Sunday, Starlink informed Brazil’s telecom agency, Anatel, that it would not block X until Brazilian officials released Starlink’s frozen assets, Anatel’s president, Carlos Baigorri, said in an interview broadcast by the Brazilian outlet Globo News.

Mr. Baigorri said he had received that response from Starlink’s lawyers. “Let’s wait and see if they formalize this in the records,” he said.

Mr. Baigorri said he had informed Justice Moraes “so that he can take the measures he deems appropriate.” Mr. Baigorri said his agency could revoke Starlink’s license to operate in Brazil, which would “hypothetically” prevent the company from offering connections to its Brazilian customers.

However, just a little while ago, Starlink announced that it was going to comply with the order, though it is doing so under protest. It posted to ExTwitter:

To our customers in Brazil (who may not be able to read this as a result of X being blocked by @alexandre):

The Starlink team is doing everything possible to keep you connected.

Following last week’s order from @alexandre that froze Starlink’s finances and prevents Starlink from conducting financial transactions in Brazil, we immediately initiated legal proceedings in the Brazilian Supreme Court explaining the gross illegality of this order and asking the Court to unfreeze our assets. Regardless of the illegal treatment of Starlink in freezing of our assets, we are complying with the order to block access to X in Brazil.

We continue to pursue all legal avenues, as are others who agree that @alexandre’s recent orders violate the Brazilian constitution.

There’s at least a bit of irony here, given that Elon’s famous “sorry to be a free speech absolutist” line came in a post saying he would not block news sources “unless at gunpoint.”

[Screenshot of Musk’s “free speech absolutist” post]

I guess he sees Brazil as holding a gun.

This came just a day after Elon posted wildly about why the US should seize Brazilian government assets in response to Brazil seizing Starlink assets. This was after Elon saw reports of the US seizing Venezuelan President Nicolas Maduro’s airplane.

[Screenshot of Musk’s post threatening reciprocal seizure of Brazilian government assets]

Of course, it is meaningless to declare that “Unless the Brazilian government returns the illegally seized property of 𝕏 and SpaceX, we will seek reciprocal seizure of government assets too. Hope Lula enjoys flying commercial…” given that Elon is not in the government. However, if his preferred candidate, Donald Trump, wins, I wouldn’t be surprised to see an attempt to help his financial backer on this one.

Starlink represents an interesting leverage point in all of this. It has been used in Brazil for a few years now, including by some in the government. But, as NY Times’ Jack Nicas (who covers tech in Brazil) noted earlier this week, when it launched in Brazil, Elon pledged to hook up 19,000 Brazilian schools with Starlink.

Apparently that never actually happened. But it didn’t stop Elon from just retweeting someone claiming that it had happened.

Either way, this situation and Elon Musk’s vast empire make some of this stuff far more complicated than almost any comparable scenario.

It also seems unlikely to end here. Remember that the current fight is a follow-on to the fight back in April when ExTwitter at first refused to remove some content that Moraes demanded, then quietly backed down… only to then change its mind again later.

Indeed, as I finish this article, reports are coming out of Brazil saying that Moraes is ordering further Starlink seizures, which could potentially impact users’ ability to use the service at all. The timing of those seizures relative to Starlink’s announcement that it would comply with the block is unclear. But since the seizures are more about punishing Musk for ExTwitter’s noncompliance than about Starlink itself, it seems likely that they will move forward.

Filed Under: alexandre de moraes, blocking, brazil, elon musk, seizing assets
Companies: spacex, twitter, x

Supreme Court Does Not Go Far Enough In Determining When Government Officials Are Barred From Censoring Critics On Social Media

from the the-test-is-too-broad dept

After several years of litigation across the federal appellate courts, the U.S. Supreme Court in a unanimous opinion has finally crafted a test that lower courts can use to determine whether a government official engaged in “state action” such that censoring individuals on the official’s social media page—even if also used for personal purposes—would violate the First Amendment.

The case, Lindke v. Freed, came out of the Sixth Circuit and involves a city manager, while a companion case called O’Connor-Ratcliff v. Garnier came out of the Ninth Circuit and involves public school board members.

A Two-Part Test

The First Amendment prohibits the government from censoring individuals’ speech in public forums based on the viewpoints those individuals express. In the age of social media, where people in government positions use public-facing social media for personal, campaign, and official government purposes, it can be unclear whether the interactive parts (e.g., comments section) of a social media page operated by someone who works in government amount to a government-controlled public forum subject to the First Amendment’s prohibition on viewpoint discrimination. Another way of stating the issue is whether a government official who uses a social media account for personal purposes is engaging in state action when they also use the account to speak about government business.

As the Supreme Court states in the Lindke opinion, “Sometimes … the line between private conduct and state action is difficult to draw,” and the question is especially difficult “in a case involving a state or local official who routinely interacts with the public.”

The Supreme Court announced a fact-intensive test to determine if a government official’s speech on social media counts as state action under the First Amendment. The test includes two required elements: the official (1) must have possessed actual authority to speak on the government’s behalf, and (2) must have purported to exercise that authority when speaking on social media.

Although the court’s opinion isn’t as generous to internet users as we had asked for in our amicus brief, it does provide guidance to individuals seeking to vindicate their free speech rights against government officials who delete their comments or block them outright.

This issue has been percolating in the courts since at least 2016. Perhaps most famously, the Knight First Amendment Institute at Columbia University and others sued then-president Donald Trump for blocking many of the plaintiffs on Twitter. In that case, the U.S. Court of Appeals for the Second Circuit affirmed a district court’s holding that President Trump’s practice of blocking critics from his Twitter account violated the First Amendment. EFF has also represented PETA in two cases against Texas A&M University.

Element One: Does the official possess actual authority to speak on the government’s behalf?

There is some ambiguity as to what specific authority the Supreme Court believes the government official must have. The opinion is unclear whether the authority is simply the general authority to speak officially on behalf of the public entity, or instead the specific authority to speak officially on social media. On the latter framing, the opinion, for example, discusses the authority “to post city updates and register citizen concerns,” and the authority “to speak for the [government]” that includes “the authority to do so on social media….” The broader authority to speak generally on behalf of the government would be easier for plaintiffs to prove and should always include any authority to speak on social media.

Element One Should Be Interpreted Broadly

We will urge the lower courts to interpret the first element broadly. As we emphasized in our amicus brief, social media is so widely used by government agencies and officials at all levels that a government official’s authority generally to speak on behalf of the public entity they work for must include the right to use social media to do so. Any other result does not reflect the reality we live in.

Moreover, plaintiffs who are being censored on social media are not typically commenting on the social media pages of low-level government employees, say, the clerk at the county tax assessor’s office, whose authority to speak publicly on behalf of their agency may be questionable. Plaintiffs are instead commenting on the social media pages of people in leadership positions, who are often agency heads or in elected positions and who surely should have the general authority to speak for the government.

“At the same time,” the Supreme Court cautions, “courts must not rely on ‘excessively broad job descriptions’ to conclude that a government employee is authorized to speak” on behalf of the government. But under what circumstances would a court conclude that a government official in a leadership position does not have such authority? We hope these circumstances are few and far between for the sake of plaintiffs seeking to vindicate their First Amendment rights.

When Does the Use of a New Communications Technology Become So “Well Settled” That It May Fairly Be Considered Part of a Government Official’s Public Duties?

If, on the other hand, the lower courts interpret the first element narrowly and require plaintiffs to provide evidence that the government official who censored them had authority to speak on behalf of the agency on social media specifically, this will be more difficult to prove.

One helpful aspect of the court’s opinion is that the government official’s authority to speak (however that’s defined) need not be written explicitly in their job description. This is in contrast to what the Sixth Circuit had, essentially, held. The authority to speak on behalf of the government, instead, may be based on “persistent,” “permanent,” and “well settled” “custom or usage.”

We remain concerned, however, that if there is a narrower requirement that the authority must be to speak on behalf of the government via a particular communications technology – in this case, social media – then at what point does the use of a new technology become so “well settled” for government officials that it is fair to conclude that it is within their public duties?

Fortunately, the case law on which the Supreme Court relies does not require an extended period of time for a government practice to be deemed a legally sufficient “custom or usage.” It would not make sense to require an ages-old custom and usage of social media when the widespread use of social media within the general populace is only a decade and a half old. Ultimately, we will urge lower courts to avoid this problem and broadly interpret element one.

Government Officials May Be Free to Censor If They Speak About Government Business Outside Their Immediate Purview

Another problematic aspect of the Supreme Court’s opinion within element one is the additional requirement that “[t]he alleged censorship must be connected to speech on a matter within [the government official’s] bailiwick.”

The court explains:

For example, imagine that [the city manager] posted a list of local restaurants with health-code violations and deleted snarky comments made by other users. If public health is not within the portfolio of the city manager, then neither the post nor the deletions would be traceable to [his] state authority—because he had none.

But the average constituent may not make such a distinction—nor should they. They would simply see a government official talking about an issue generally within the government’s area of responsibility. Yet under this interpretation, the city manager would be within his right to delete the comments, as the constituent could not prove that the issue was within that particular government official’s purview, and they would thus fail to meet element one.

Element Two: Did the official purport to exercise government authority when speaking on social media?

Plaintiffs Are Limited in How a Social Media Account’s “Appearance and Function” Inform the State Action Analysis

In our brief, we argued for a functional test, where state action would be found if a government official were using their social media account in furtherance of their public duties, even if they also used that account for personal purposes. This was essentially the standard that the Ninth Circuit adopted, which included looking at, in the words of the Supreme Court, “whether the account’s appearance and content look official.” The Supreme Court’s two-element test is more cumbersome for plaintiffs. But the upside is that the court agrees that a social media account’s “appearance and function” is relevant, even if only with respect to element two.

Reality of Government Officials Using Both Personal and Official Accounts in Furtherance of Their Public Duties Is Ignored

Another problematic aspect of the Supreme Court’s discussion of element two is that a government official’s social media page would amount to state action if the page is the “only” place where content related to government business is located. The court provides an example: “a mayor would engage in state action if he hosted a city council meeting online by streaming it only on his personal Facebook page” and it wasn’t also available on the city’s official website. The court further discusses a new city ordinance that “is not available elsewhere,” except on the official’s personal social media page. By contrast, if “the mayor merely repeats or shares otherwise available information … it is far less likely that he is purporting to exercise the power of his office.”

This limitation is divorced from reality and will hamstring plaintiffs seeking to vindicate their First Amendment rights. As we showed extensively in our brief (see Section I.B.), government officials regularly use both official office accounts and “personal” accounts for the same official purposes, by posting the same content and soliciting constituent feedback—and constituents often do not understand the difference.

Constituent confusion is particularly salient when government officials continue to use “personal” campaign accounts after they enter office. The court’s conclusion that a government official “might post job-related information for any number of personal reasons, from a desire to raise public awareness to promoting his prospects for reelection” is thus highly problematic. The court is correct that government officials have their own First Amendment right to speak as private citizens online. However, their constituents should not be subject to censorship when a campaign account functions the same as a clearly official government account.

An Upside: Supreme Court Denounces the Blocking of Users Even on Mixed-Use Social Media Accounts

One very good aspect of the Supreme Court’s opinion is that if the censorship amounted to the blocking of a plaintiff from engaging with the government official’s social media page as a whole, then the plaintiff must merely show that the government official “had engaged in state action with respect to any post on which [the plaintiff] wished to comment.”

The court further explains:

The bluntness of Facebook’s blocking tool highlights the cost of a “mixed use” social-media account: If page-wide blocking is the only option, a public official might be unable to prevent someone from commenting on his personal posts without risking liability for also preventing comments on his official posts. A public official who fails to keep personal posts in a clearly designated personal account therefore exposes himself to greater potential liability.

We are pleased with this language and hope it discourages government officials from engaging in the most egregious of censorship practices.

The Supreme Court also makes the point that if the censorship was the deletion of a plaintiff’s individual comments under a government official’s posts, then those posts must each be analyzed under the court’s new test to determine whether a particular post was official action and whether the interactive spaces that accompany it are government forums. As the court states, “it is crucial for the plaintiff to show that the official is purporting to exercise state authority in specific posts.” This is in contrast to the Sixth Circuit, which held, “When analyzing social-media activity, we look to a page or account as a whole, not each individual post.”

The Supreme Court’s new test for state action unfortunately puts a thumb on the scale in favor of government officials who wish to censor constituents who engage with them on social media. However, the test does chart a path forward on this issue and should be workable if lower courts apply the test with an eye toward maximizing constituents’ First Amendment rights online.

Originally posted to the EFF Deeplinks site.

Filed Under: 1st amendment, blocking, criticism, free speech, lindke v. freed, o'connor-ratcliffe v. garnier, politicians, social media, supreme court

from the doesn't-due-process-matter dept

The copyright industry’s war on the Internet and its users has gone through various stages (full details and links to numerous references in Walled Culture the book, free digital versions available).

The first was to sue Internet users directly for sharing files. By 2007, the Recording Industry Association of America (RIAA) had sued at least 30,000 individuals. Perhaps the most famous victim of this approach was Jammie Thomas, a single mother of two. She was found liable for $222,000 in damages for sharing twenty-four songs online. Even the judge was appalled by the extreme nature of the punishment: he called the damages “unprecedented and oppressive.” He “implored” US Congress to amend the Copyright Act to address the issue of disproportionate liability. He also ordered a new trial for Thomas. Unfortunately, on re-trial, she was found liable for even more – $1.92 million. The RIAA may have been successful in these court cases, but it eventually realized that suing grandmothers and 12-year-old girls, as it had done, made it look like a cruel and heartless bully – which it was.

So it shifted strategy, and started lobbying for a “graduated” approach, also known as “three strikes and you’re out”. The idea was that instead of taking users suspected of sharing copyright material to court, which had terrible optics, they would be sent progressively more threatening warnings by an appropriate government body, thus shielding the copyright industry from public anger. After three warnings, the person would (ideally) be thrown off the Internet, or at least fined.

France was the most enthusiastic proponent of the three-strikes approach, with its Hadopi law. Even though the government body sent out millions of warnings to French users, only one disconnection order was issued, and that was never carried out. In total, some €87,000 in fines were imposed, but the cost of running Hadopi was €82 million, paid by French taxpayers. In other words, a system that failed to scare people off from downloading unauthorized copies of copyright material cost nearly a thousand times more to run than it generated in fines.
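To spell out the arithmetic behind that claim: €82 million in running costs divided by roughly €87,000 in fines collected comes to about 940 – hence “nearly a thousand times.”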

Since attacking Internet users had proved to be such a failure, the copyright industry changed tack. Instead it sought to block access to unauthorized material using court orders against Internet Service Providers (ISPs). The idea was that if people couldn’t access the site offering the material, they couldn’t download it.

Italy has been at the forefront of this approach. In 2014, the country’s national telecoms regulator, Autorità per le Garanzie nelle Comunicazioni (Authority for Communications Guarantees, AGCOM) allowed sites to be blocked without the need for a court order. More recently, it has set up an automated blocking system called Piracy Shield. Rather surprisingly, according to a post on TorrentFreak, AGCOM will not check blocking requests before it validates them – it will simply assume they are justified and set the system in motion:

Once validated, AGCOM will instruct all kinds of online service providers to implement blocking. Consumer ISPs, DNS providers, cloud providers and hosting companies must take blocking action within 30 minutes, while companies such as Google must block or remove content from their search indexes.
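To picture what that 30-minute “blocking action” typically looks like at the consumer-ISP or DNS-provider layer, here is a minimal sketch of DNS-level blocking. Everything in it – the domain names, the blocklist entries, and the sinkhole address – is a hypothetical illustration, not Piracy Shield’s actual feed format:

```python
# Minimal sketch of DNS-level blocking as an ISP resolver might apply it.
# The blocklist entries and sinkhole address are hypothetical examples,
# not Piracy Shield's actual feed format.

import socket

# Domains received from a regulator's blocking order (hypothetical).
BLOCKLIST = {"pirate-streams.example", "illegal-iptv.example"}

SINKHOLE_IP = "0.0.0.0"  # Blocked names resolve to a non-routable address.

def resolve(hostname: str) -> str:
    """Resolve a hostname, honoring the blocklist first."""
    if hostname.lower().rstrip(".") in BLOCKLIST:
        # The user gets a dead address instead of the real server.
        return SINKHOLE_IP
    return socket.gethostbyname(hostname)

print(resolve("pirate-streams.example"))  # -> 0.0.0.0
print(resolve("example.com"))             # -> the site's real IP
```

Note that a block applied this way only affects users of that particular resolver, which is presumably why the Italian rules reach DNS providers, cloud providers, and hosting companies as well as consumer ISPs.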

It’s a very unfair, one-sided copyright law, which assumes that people are guilty until proven innocent. That tilting of the playing field may prove Piracy Shield’s undoing. As another post on the TorrentFreak site explains:

An ISP organization has launched a legal challenge against new Italian legislation that authorizes large-scale, preemptive piracy blocking. Fulvio Sarzana, a lawyer representing the Association of Independent Providers, informs TorrentFreak that the measures appear to violate EU provisions on the protection of service providers and the right to mount a defense.

It would be nicely ironic if the very extremism of the copyright industry – which always wants legal and technical systems biased in its favor, with as few rights as possible for anyone else – were to see the latest incarnation of its assault on the digital world thrown out.

Follow me @glynmoody on Mastodon. Originally published to Walled Culture.

Filed Under: blocking, copyright, due process, italy, piracy, pirate shield

Pornhub Says No More Porn For Folks In Utah (Unless They Know How To Use A VPN)

from the cox-blocked dept

On May 3, a new law restricting porn access in Utah will go into effect. The response is going to be epically controversial, as angry porn consumers in the state will soon lose access to much more than a few household adult entertainment brand names like Pornhub.

In a move that was no surprise whatsoever, Pornhub has officially blocked all IP addresses registered in the state of Utah. Fellow adult industry journalist Gustavo Turner of Xbiz first reported the block for the adult industry business news media. Having tested it out myself through a handy little tool that Utah seems to have forgotten exists (a VPN), I was able to confirm this.

Though I’m based in Colorado, my socially conservative neighbors to the west are now victims of the incongruent beliefs of zealous politicians who have no understanding of the internet or online free speech.

Porn superstar and The Daily Beast contributor Cherie Deville appears in a safe-for-work explanation video that Utah-based users will land on when visiting the site. In the video, Deville delivers a stern message to the fine people of Utah, telling them that one of the world’s most popular websites has blocked the entire state due to a controversial anti-porn law.

https://vimeo.com/822125080/5b9f5cb30e?embedded=true&source=vimeo_logo&owner=11963827

What’s more, Deville’s video doesn’t mention Utah by name; it is clearly a broad-form video that Pornhub produced in anticipation of other U.S. states about to block legal adult entertainment websites for one reason or another. The foundation of the Holy and Great Firewall of Utah (I mean Zion) was established by Senate Bill (SB) 287. State Sen. Todd Weiler and Rep. Susan Pulsipher introduced SB 287 as a means to require age verification for users to view porn sites.

Similar to the controversial Louisiana age verification mandate that entered into force on Jan. 1, SB 287 creates a new tort against any publisher or distributor of “material harmful to minors,” allowing private citizens to bring civil actions against commercial entities dealing in this content if they harm minors by failing to age-gate the content and require digitized ID verification of users. While Louisiana’s law integrated an existing digital wallet service developed by the state, Utah’s approach goes even further, and it brings to the forefront the debate over how best to deploy identity verification while minimizing data bloat and the risk of catastrophic data breaches. Deville explains that there are more reasonable approaches, and that the Weiler-Pulsipher law is not the answer. Rather, Deville adds, laws and regulations on age verification for adult-only content should allow for device-based verification measures instead of government-issued identification or credit card info.

“We believe that the best and most effective solution for protecting children and adults alike is to identify users by their device and allow access to age-restricted materials and websites based on that identification,” a statement posted under Deville’s warning video reads.

The Free Speech Coalition, a trade group representing the adult entertainment industry, also issued a warning to its members about SB 287 entering force on May 3.

“Unfortunately, the Utah legislation does not provide a straightforward way to comply,” the coalition declared, pointing to how “the other compliance methods required by the legislation don’t align with the current offerings from most, if not all, AVS providers.”

The state legislatures in Virginia, Mississippi, and Arkansas have passed similar bills. Louisiana is trying to amend its law to enable the state attorney general to take even more action against adult sites that don’t have government-mandated age verification in place and “do harm” to minors who visit such sites.

One of the key weaknesses of these types of laws is virtual private networks, or VPNs. A VPN shields a user’s IP address by redirecting traffic through a remote server managed by the VPN provider. This “changes” the apparent location of the user, and the VPN server becomes the de facto source of the traffic. VPNs can help users in heavily censored jurisdictions get around geoblocks and other forms of censorship on the internet. VPNs aren’t banned in Utah, or really anywhere in the United States. Most people can download relatively effective, efficient, and affordable VPNs on their mobile devices, tablets, personal computers, smart TVs, and other devices.
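As a rough illustration of why VPNs are such a weakness for these laws: a site can only see the IP address a connection arrives from, and it maps that address to a location using a GeoIP database. Here is a minimal sketch of that server-side check; the IP ranges and lookup logic are hypothetical stand-ins for a real GeoIP database, not how Pornhub actually implements its block:

```python
# Sketch of server-side geoblocking. The site sees only the connecting
# IP address and maps it to a region; the ranges below are hypothetical
# stand-ins (RFC 5737 documentation addresses), not real Utah blocks.

import ipaddress

UTAH_RANGES = [ipaddress.ip_network("203.0.113.0/24")]  # hypothetical
BLOCKED_REGIONS = {"UT"}

def region_for(ip: str) -> str:
    addr = ipaddress.ip_address(ip)
    return "UT" if any(addr in net for net in UTAH_RANGES) else "OTHER"

def handle_request(client_ip: str) -> str:
    if region_for(client_ip) in BLOCKED_REGIONS:
        return "403: blocked under state age-verification law"
    return "200: content served"

# A Utah user connecting directly is blocked...
print(handle_request("203.0.113.7"))   # -> 403
# ...but through a VPN, the site sees the VPN server's address instead,
# so the same user appears to be somewhere else entirely.
print(handle_request("198.51.100.9"))  # -> 200
```

The block never identifies the person, only the network address, which is exactly why routing traffic through an out-of-state VPN server defeats it.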

That is beside the point. The regulatory environment in Utah has forced Pornhub’s corporate parent firm MindGeek, now owned by Ethical Capital Partners in Montréal, to go so far as to cut ties with users who did nothing but want to watch consensual adult content.

Sites owned by MindGeek include Brazzers, YouPorn, PornMD, ModelHub, Nutaku, Men.com, Mofos, My Dirty Hobby, and several others. All of these sites are blocked and feature a message from Cherie Deville.

Disclosure: The author is a member of the Free Speech Coalition. He wrote this column without compensation from the coalition, its officers, or its member firms.

Michael McGrady is a journalist and commentator focusing on the tech side of the online porn business, among other things.

Filed Under: blocking, geofencing, ip blocking, spencer cox, splinternet, utah
Companies: mindgeek, pornhub

Court Reminds St. Louis City Council That Blocking Taxpayers On Social Media Violates 1st Amendment

from the blocking-stuff-you-don't-like-is-unconstitutional dept

No matter what you may have heard on certain social media outlets, this is how the First Amendment actually works.

Free speech “heroes” can freely curb your speech. The government, however, may not. So, if you’re a government account operating on social media services, when you fuck around, you find out. This decision [PDF] — targeting St. Louis lawmakers — reminds everyone of these uncomfortable facts. (h/t Courthouse News Service)

Social media platforms are public squares… at least as far as public servants are concerned. You may not like what your constituents have to say, but you’re not allowed to silence them. That’s what a Missouri federal court has declared, following an absurd amount of precedent that should have made it clear to the city of St. Louis (as personified by Lewis Reed, the president of the city’s Board of Aldermen) that blocking a resident’s Twitter account from interacting with the city’s official account was unconstitutional.

As the order notes, the jury trial over the constitutional issues got off to a somewhat strange start… at least in terms of a civil lawsuit.

Reed appeared at trial with counsel and, when called to testify, invoked the Fifth Amendment.

To be sure, invoking the Fifth isn’t an admission of guilt. But considering the only thing at stake was a court-ordered unblocking of St. Louis resident Sarah Felts’ Twitter account, this move does seem a little strange. Given this turn of events, the court reached a compromise: Felts could submit a list of questions for the (now-former — he retired two years after this lawsuit was filed) Board of Aldermen president to be answered after the trial was concluded.

Everything at issue here went down fairly innocuously. And by that I mean it was rookie night on Doomscroll.com, where people said things and other people reacted terribly by not understanding how swiftly antagonistic flotsam is swept away by the tyranny of auto-refresh. Read on and be amused by the give-and-take that the Board of Aldermen president ultimately treated as the equivalent of a palace coup.

In March 2009, Reed created a public Twitter account (the “Account”) to “put out information for people to … let them know what I’m up to.” At times, Reed changed the Account’s handle to indicate his candidacy for office, but between March 2009 and June of 2020, the most frequently used handle was @PresReed.

On his Twitter page, Reed described himself as “Father of 4 great kids, husband, public servant, life long democrat, proud St. Louis City resident, President of the Board of Aldermen.”

Any member of the public could view Reed’s posts and either “like,” reply, or “retweet” his posts.

On January 26, 2019, a Twitter account with the handle @ActionSTL tweeted: “Reeds asked to clarify his position on @CLOSEWorkhouse. He says we need to rework out [sic] court system. Eventually says yes, he does support the demand to close the workhouse but we need to change the messaging around it.” Action St. Louis, a local, black-led advocacy organization, operates the @ActionSTL Account.

Plaintiff responded to Action St. Louis’ tweet stating: “What do you mean by ‘change the messaging around #CloseTheWorkhouse,’ @PresReed? #STLBOA #aldergeddon2019 #WokeVoterSTL.” The issue of closing the St. Louis Workhouse, a medium security institution and one of two jails in the City, was a subject of political debate in January 2019. Plaintiff was among those advocating for the Board of Aldermen to take action to close the Workhouse, as was Action St. Louis.

Plaintiff believed Reed’s statement, as reported by Action St. Louis, that “we need to change the messaging around closing the Workhouse” was an attempt to avoid dealing with the underlying issue. Plaintiff sent her tweet to ask Reed what he meant by “change the messaging” and signal to other Twitter users that they could reach Reed via Twitter.

Later in the evening of January 26, 2019, Plaintiff attempted to access Reed’s Twitter profile page and learned she had been blocked by Reed, meaning she could no longer view his tweets, or otherwise interact with his Account.

According to Reed, the board president blocked the plaintiff because he believed Felts’ question (and her instructions to contact Reed via Twitter) somehow “implied violence” against him and the Board of Aldermen. No evidence was presented that any threats — violent or otherwise — followed this interaction.

On top of that, the court notes that Reed intertwined his Twitter account with official business in 2019. The city’s website was altered to include a link to Reed’s Twitter account. This was followed by an embed of his Twitter feed. This feed remained live on the city’s website until Reed was sued by Sarah Felts, at which point it was removed, presumably by a city IT employee. Felts’ Twitter account remained blocked until after she filed the lawsuit in early 2021.

So, Reed made it clear his Twitter account was also the Board president’s account. And the victim of his careless blocking wasn’t freed from this incursion on her First Amendment rights until after she engaged in litigation. Given this series of events, it’s not surprising (former) Board president Reed would invoke the Fifth when testifying in front of a jury of the people he was supposed to be serving.

The opinion recounts several times Reed’s Twitter account was used to engage in city business, citing several statements related to legislation, city policy changes, and Reed’s meetings with other local and federal politicians.

All of this indicates the account run by Reed was engaged in government business and used by Reed in his position as the president of the city’s Board of Aldermen. So, there’s really no question his blocking of Sarah Felts violated her rights.

At all relevant times, Reed was the final decisionmaker for communications, including the use of social media, for the Office of the President of the Board of Aldermen. At or near the time Plaintiff was initially blocked, Reed’s public Twitter account had evolved into a tool of governance. In any event, by the time the Account was embedded into the City’s website in April 2019, while Plaintiff remained blocked, the Account was being operated by Reed under color of law as an official governmental account. The continued blocking of Plaintiff based on the content of her tweet is impermissible viewpoint discrimination in violation of the First Amendment. Thus, Plaintiff is entitled to judgment in her favor on her remaining claim for declaratory relief.

That is how the First Amendment actually works. The government can’t block your Twitter account simply because it doesn’t like what you’re saying. That happened here. And, while the lawsuit concludes with only a $1.00 award in nominal damages, it does make things better for St. Louis residents, as well as those experiencing the same sort of government bullshit elsewhere in this federal circuit. It’s another ruling that clearly states government officials can’t engage in unwarranted blocking of people they would rather not hear from. Elected officials represent and serve everyone in their jurisdictions. They can’t constitutionally pick and choose who they want to engage with.

Filed Under: 1st amendment, blocking, free speech, government, lewis reed, missouri, politicians, sarah felts, social media

Elon Musk’s Twitter Moderation Flags Article About Elon Musk’s Twitter As Dangerous

from the not-so-easy-being-the-boss-of-free-speech dept

Spaceboy Elon Musk promised the Twitter he was pretty much sued into purchasing would bring an end to all the free speech violations he claimed were happening every day under its liberal overlords. But being an edgelord troll rarely converts into competent management. Musk is speedrunning the moderation learning curve. But he is also discovering there are no God codes available to make the moderation game easier.

Comedy is legal, tweeted the Man Who Would Be Twitter King, shortly before deciding true parody was no longer legal. Pay-to-win verification will make Twitter more trustworthy, he shouted, before yanking $8 verification after it ushered in a historic wave of account impersonation.

A CEO who answers to shareholders will make decisions that may seem poorly thought out, but ultimately benefit shareholders. A guy who thought running Twitter would allow him to better serve a base of “censored” conservatives and rabid fanboys tends to make decisions that valorize him as a free speech warrior while giving this questionable customer base something to rally around.

Who knows why this happened, but one can credibly imagine Twitter’s new cleaning crew has been ordered to give any Musk-related tweets more scrutiny. Musk doesn’t seem to enjoy criticism, which aligns him with pretty much everyone everywhere. But he also has the power to mute criticism, which aligns him with authoritarian regimes/billionaires who own social media platforms.

So, this is the sort of thing that happens. Michael Luciano of Mediaite wrote an opinion piece discussing Musk’s actions since he took ownership of Twitter. Despite it being a very even-handed discussion of Musk’s business decisions, Twitter/Musk moderation decided it was a bit too spicy for public consumption.

Twitter deemed a Mediaite article that is critical of the company’s new owner Elon Musk as “potentially spammy” on Friday night, and diverted users to a warning page when they click the post.

The warning had been removed as of Saturday morning after multiple media outlets – including this one – reported on it.

[…]

The post, which is marked “Opinion” and was initially only accessible to users who read through a daunting message and clicked, “Ignore this warning and continue,” is titled “What Elon Musk Is Doing Right at Twitter.”

The entirety of the post’s text reads, “Nothing.”

Succinct. Accurate. And, apparently, potentially dangerous to Twitter users. According to the notice sent to Mediaite, the one-word post violated a bunch of Twitter rules, despite none of the violations listed applying to the post’s content.

The message listed the following categories, none of which the blocked post appears to violate:

– malicious links that could steal personal information or harm electronic devices

– spammy links that mislead people or disrupt their experience

– violent or misleading content that could lead to real-world harm

– certain categories of content that, if posted directly on Twitter, are a violation of the Twitter Rules.

All the post said was that Musk was wrong. The most nefarious explanation is that Musk has elevated moderation of Musk/Twitter-related content and encourages remaining moderators to pull the trigger when the content appears to criticize Musk. The less nefarious explanation is that a bunch of Musk fans brigaded the link, reporting it en masse as “dangerous,” forcing moderators and/or moderation AI to succumb to the hecklers’ veto. Of course, the other possible explanation is simple incompetence, supported by reports of others, such as NBC News, also having their links blocked.
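If the brigading explanation is the right one, the mechanics are easy to picture: any automated system that flags a link once unique reports cross a threshold is trivially gameable by a coordinated group. This is a hypothetical sketch of that failure mode – the threshold and data structures are illustrative, and nothing here reflects Twitter’s actual internal systems:

```python
# Illustrative sketch of why naive report-count moderation enables a
# heckler's veto. The threshold and logic are hypothetical; nothing
# here reflects Twitter's actual moderation pipeline.

from collections import defaultdict

REPORT_THRESHOLD = 100  # hypothetical auto-flag trigger

reports: dict[str, set[str]] = defaultdict(set)  # url -> reporting users
flagged: set[str] = set()

def report_link(url: str, user_id: str) -> None:
    reports[url].add(user_id)
    # No review of the content itself: volume alone triggers the flag,
    # so ~100 coordinated accounts can bury any link they dislike.
    if len(reports[url]) >= REPORT_THRESHOLD:
        flagged.add(url)

# A brigade of 150 accounts reports one critical article.
for i in range(150):
    report_link("https://example.com/critical-article", f"user{i}")

print("https://example.com/critical-article" in flagged)  # True
```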

Either way, it’s a terrible look for the new boss, who has claimed Twitter is more about free speech than ever. And he’s also learning about the flipside of moderation fuck-ups: the Streisand effect. Any time something negative about Musk is buried (inadvertently or deliberately), it will attract more attention from Twitter users. Musk can’t win.

And that’s the hard truth of content moderation: you’ll never make everyone happy. An even harder truth is that you can’t even make your preferred customer base happy. At best, you can only hope to make users less miserable by removing illegal content and putting procedures in place that give users the power to create the experience they want, rather than being subjected to the outer limits of whatever the platform allows.

To quote the far-more-idealistic Google from several years back, “Don’t be evil.” But evil is subjective. So, maybe the best you can do is not be worse than your predecessors. And if that’s the low bar Musk needs to reach, he still appears to be unwilling to attempt clearing it.

Filed Under: blocking, content moderation, elon musk, links
Companies: mediaite, twitter

With All The Other Nonsense Going On, Senate Democrats’ Priority Is To Spy On Kids Online?

from the this-is-a-really-bad-bill dept

I do not understand the Democratic Party in the US for a wide variety of reasons. But one of the most confusing things about them is their priorities. With everything else going on in the world that needs serious attention from Congress right now, Senate Dems have decided to host a markup on one of the worst, most ridiculous bills they’ve come up with in a long, long time: the “Kids Online Safety Act.”

I wrote about just how terrible a bill this is back when it was introduced in February by Senators Richard Blumenthal and Marsha Blackburn — two Senators who have a long and well-documented history of hating on the internet and wishing to basically destroy it.

The bill is very much the typical “but think of the children!” kind of legislation that (I’m sure, coincidentally) always becomes popular right around election season. As we explained at the time, the bill assumes a lot of completely unproven things about how much “harm” the internet does to kids, and decides that companies need to magically stop the bad stuff. It ignores that companies put tons of effort into stopping bad stuff on their platforms, but (1) it’s a much more difficult problem than people realize, and part of the reason it’s so difficult is that (2) trying to silence kids talking about certain topics has a long history of absolutely backfiring and making problems worse. We’ve seen that with actual data regarding things like eating disorders and suicide.

Forcing websites to take down such content, without addressing the underlying reasons why kids are seeking out such information, does not help. Even worse, the mechanisms in this bill would basically just mean constant, unending, intrusive surveillance of every kid online. That’s awful.

This is not to say that people should let kids roam the internet willy-nilly, but it is important to actually teach them how to handle the internet in an age-appropriate way. Spying on kids at all times teaches kids exactly the wrong lesson. It normalizes constant surveillance and privacy invasions, and it never lets kids really learn for themselves and take responsibility.

Especially given the recent concerns over abortion data and access to abortion information, Democrats should be running away from this bill rather than supporting it. As policy expert Josh Lamel notes, if the Kids Online Safety Act becomes law, a Republican FTC or state Attorney General could force websites to block teen access to any abortion information.

KOSA will allow a Republican FTC or Republican State AG to force Internet platforms to block teen access to any abortion access or information that is not “pro-life” community approved. It has two provisions that can be used for this. First, “harmful content” and second, content and services “unlawful for minors”. A republican FTC will almost definitely use this as a cudgel on abortion issues. So will Republican State AGs.

KOSA will allow a Republican AG to file a federal court action to prevent access to information at all about sex online under the “harmful content” provision. This includes contraception info, STD info & so much more. Do you trust this fed. judiciary w/that?

He goes on to note that the bill could be used to enable parents to spy on their kids, and even to find out if their kids are purchasing contraception online. It’s a privacy nightmare at a time when we need more privacy, not less.

So, why the hell is Senator Maria Cantwell moving this bill forward? It’s not just preposterous, but it’s downright dangerous.

Bizarrely, by the way, the very same markup will review the Children and Teens Online Privacy Protection Act, which is supposed to extend privacy protections aimed at protecting children 12 and under to those 16 and under. But if the Kids Online Safety Act becomes law, kids will have no privacy at all.

Filed Under: blocking, for the children, ftc, kids online safety bill, kosa, maria cantwell, markup, marsha blackburn, richard blumenthal, spying, state ags

A Grope In Meta's Space

from the the-value-of-user-tools-in-moderation dept

Horizon Worlds is a VR (virtual reality) social space and world-building game created by Facebook. In early December, a beta tester wrote about being virtually groped by another Horizon Worlds user. A few weeks later, The Verge and other outlets published stories about the incident. However, their coverage omits key details from the victim’s account. As a result, it presents the assault as a failure of user-operated moderation tools rather than the limits of top-down moderation. Nevertheless, this VR groping illustrates the difficulty of moderating VR, and the enduring value of tools that let users solve problems for themselves.

The user explains that they reported and blocked the groper, and a Facebook “guide,” an experienced user trained and certified by Facebook, failed to intervene. They write, “I think what made it worse, was even after I reported, and eventually blocked the assaulter, the guide in the plaza did and said nothing.” In the interest of transparency, I have republished the beta user’s post in full, sans identifying information, here:

**Trigger Warning** Sexual Harassment. My apologies for the long post: Feel free to move on.

Good morning,

I rarely wake up with a heavy heart and a feeling of anger to start a fresh new day, but that is how I feel this morning. I want to be seen and heard. I reach out to my fellow community members in hopes of understanding and reassurance that they will be proactive in supporting victims and eliminating certain types of behavior in horizon worlds. My expectations as a creator in horizon worlds aren’t unreasonable and I’m sure many will agree.

You see this isn’t the first time, I’m sure it won’t be the last time that someone has sexually harassed me in virtual reality. Sexual harassment is no joke on the regular Internet but being in VR adds another layer that makes the event more intense. Not only was I groped last night, but there were other people there who supported this behavior which made me feel isolated in the Plaza. I think what made it worse, was even after I reported, and eventually blocked the assaulter, the guide in the plaza did and said nothing. He moved himself far across the map as if to say, you’re on your now.

Even though my physical body was far removed from the event, my brain is tricked into thinking it’s real, because…..you know……Virtual REALITY. We can’t tout VR’s realness and then lay claim that it is not a real assault. Mind you, this all happened within one minute of arriving in the plaza, I hadn’t spoken a word yet and could have possibly been a 12-year-old girl.

MY ASK:

I would like a personal bubble that will force people away from my avatar and I would like to be able to upload my own recording with my harassment ticket. I would also like that all guides are given sensitivity training on this specific subject, so they will understand what is expected. If META won’t give guides tools that will allow them to remove a player immediately from a situation, at least train them to deal with it and not run away.

Rant over, I’m still mad, but I will sort through and process. I love this community and the thought of leaving it makes me deeply sad. So I am hopeful we can evolve as a community and foster behaviors that support collaboration, understand, and a willingness to speak out against gross behaviors.

Initial coverage in The Verge did not mention the victim’s use of the block feature, even as the user describes using it in the post above. Instead, reporter Alex Heath relayed Facebook’s account of the incident, saying “the company determined that the beta tester didn’t utilize the safety features built into Horizon Worlds, including the ability to block someone from interacting with you.”

These details are important because subsequent writing about the incident builds on the purported, but false, non-use of the blocking feature to make the case that offering users tools to control their virtual experience is “unfair and doesn’t work.” In Technology Review, Tanya Basu makes hay of the user’s failure to use the “safe zone” feature, which temporarily removes users from their surroundings. Yet this is a red herring. The user might not have immediately disappeared into her safe zone, but she used the block feature to do away with her assailant.

In reality, contra Basu or Facebook’s description of events, it seems that user-directed blocking put a stop to the harm while the platform-provided community guide failed to intervene. VR groping is a serious issue, but it is not one that will be solved from the top down. Inaccurate reporting that casts user-operated moderation tools as ineffective may spur platforms to pursue less effective solutions to sexual harassment in VR.

Implications of the incident’s misreporting aside, it provides a useful case study in the difficulties of moderating VR. One suggestion put forward by the user and backed by University of Washington Professor Katherine Cross warrants discussion. Closer inspection of their proposals illustrates the careful tradeoffs that inform the current safe zone and blocking tools offered to Horizon users.

They request a “personal bubble that will force people away from my avatar” or “automatic personal distance unless two people mutually agreed to be closer.” This might make some groping harder, but it creates other opportunities for abuse.

If players’ avatars can take up physical space and block movement, keeping others at bay, then they can block doorways and trap other players in corners or against other parts of the world. Player collision could render abuse inescapable or allow players to hold others’ avatars prisoner.

MMOs (Massively Multiplayer Online games) have long struggled with this problem – “holding the door” is only a contextually heroic action. Player collision makes gameplay more realistic, but allows some players to limit everyone else’s access to important buildings by loitering in the doorway.

Currently, players’ avatars in Horizon may pass through one another. They can retreat into a safe zone, disappearing from the world. They can also “block” other users – preventing both the blocked and blocking users from seeing one another. Even through a block, they can still see each other’s nametags – total invisibility created problems I covered here. As such, the current suite of user moderation tools strikes a good balance between empowering users and minimizing new opportunities for misuse.
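The mutual-block behavior described above boils down to a simple symmetric visibility rule. Roughly, and purely as a hypothetical sketch (this is not Horizon Worlds’ actual code):

```python
# Sketch of the mutual-block visibility rule described above: blocked
# and blocking users cannot see each other's avatars, but nametags stay
# visible. A hypothetical illustration, not Horizon Worlds' code.

blocks: set[frozenset[str]] = set()  # unordered pairs of user ids

def block(a: str, b: str) -> None:
    blocks.add(frozenset((a, b)))

def avatar_visible(viewer: str, other: str) -> bool:
    # Symmetric: a block hides both parties from each other.
    return frozenset((viewer, other)) not in blocks

def nametag_visible(viewer: str, other: str) -> bool:
    # Nametags remain visible even through a block; total invisibility
    # created its own abuse problems, as noted above.
    return True

block("reporter", "groper")
print(avatar_visible("reporter", "groper"))   # False
print(avatar_visible("groper", "reporter"))   # False (symmetric)
print(nametag_visible("reporter", "groper"))  # True
```

The symmetry is the point: blocking removes the abuser from the victim’s world without handing the abuser a way to keep tracking their target invisibly.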

Finally, given the similarity of the transgression, it is worth recalling Julian Dibbell’s “A Rape in Cyberspace”, one of the first serious accounts of community governance online. In this Village Voice article, Dibbell relates how users of role-playing chatroom LambdaMOO (the best virtual reality to be had in 1993) responded to a string of virtual sexual assaults. After fruitless deliberation, a LambdaMOO coder banned the offending user. After the incident, LambdaMOO established democratic procedures for banning abusive users, and created a “boot” command allowing users to temporarily remove troublemakers.

As the internet has developed, content moderation has centralized. Today, users are usually expected to let platforms moderate for them. However, just as in the web’s early days, empowering users remains the best solution to interpersonal abuse. The tools they need to keep themselves safe may be different, but in virtual reality as in role-playing chat, those closest to abuse are best positioned to address it. Users being harassed should not be expected to wait for the mods.

Will Duffield is a Policy Analyst at the Cato Institute

Filed Under: blocking, content moderation, horizon worlds, sexual harassment, user tools, vr
Companies: facebook, meta

Rep. Thomas Massie Seems To Have Skipped Over The 1st Amendment In His Rush To 'Defend' The 2nd

from the unblock-people dept

This weekend, Representative Thomas Massie got an awful lot of attention for tweeting a picture of what I guess is his family holding a bunch of guns. It generated a bunch of outrage, which is exactly why Massie did it. When the culture war and “owning” your ideological opponents is more important than actually doing your job, you get things like that. Some might find it vaguely inappropriate to show off your arsenal of weaponry just days after yet another school shooting, in which the teenager who shot up a school similarly paraded his weapon on social media before killing multiple classmates, but if that’s the kind of message that Massie wants to send, the 1st Amendment and the 2nd Amendment allow him to reveal himself as just that kind of person.

However, as someone who continually presents himself as “a strict constitutionalist,” it’s odd that Massie seems to skip over the 1st Amendment in his rush to fetishize the 2nd. That’s why the Knight First Amendment Institute at Columbia University has now sent a letter on my behalf to Rep. Massie letting him know that he is violating the 1st Amendment in blocking me and many others on Twitter.

To be honest, I had avoided tweeting about Massie’s armory family portrait, because the whole thing was just such a blatant cry for attention. But then I saw that some other users on Twitter were highlighting that Massie was blocking them — in some cases because they had tweeted at Massie a remixed version of the portrait, replacing the guns with penises. I made no comment on his photo, or his desperately pathetic desire to “own the libs” or whatever he thought he was doing. But I did tweet at him to inform him that, under Knight v. Trump, he appeared to be violating the 1st Amendment rights of those he was blocking.

He appears to meet the conditions laid out in that and other rulings on this issue. Massie is a government official, who uses his Twitter account for conducting official government business, and who is then blocking users based on their viewpoints.

In response to me pointing out that it violates the 1st Amendment for him to block people in this way… Rep. Massie blocked me.

Seems a bit ironic for a “strict Constitutionalist” to block someone for merely pointing out that public officials blocking someone via their official government accounts violates the 1st Amendment. But, I guess that’s the kind of “strict Constitutionalist” that Rep. Thomas Massie is. One who will support just the rights he wants to support, and will quickly give up the other ones, so long as he can be seen to be winning whatever culture war he thinks he’s waging.

This is pretty unfortunate. For all of Massie’s other nonsense, he has actually been quite good in defending the 4th Amendment rights of the American public against surveillance. Perhaps he only believes in the even-numbered Amendments?

Either way, our letter points out that his actions appear to violate the 1st Amendment, and asks that he unblock me and everyone else he has chosen to block.

Multiple courts have held that public officials’ social media accounts constitute public forums when they are used in the way that you use the @RepThomasMassie account, and they have made clear that public officials violate the First Amendment when they block users from these fora on the basis of viewpoint. For example, the U.S. Court of Appeals for the Second Circuit reached this conclusion in Knight Institute v. Trump, and the U.S. Court of Appeals for the Fourth Circuit reached this conclusion in Davison v. Randall. The principles articulated in these cases apply here. In both of these cases, and in many others, courts have held that the First Amendment binds public officials who use their social media accounts in furtherance of official duties, and that public officials act unconstitutionally when they block individuals from these accounts on the basis of viewpoint.

Again, we ask that you unblock the Twitter account @mmasnick and any other Twitter accounts that have been blocked by you or your staff from the @RepThomasMassie account based on viewpoint.

Filed Under: 1st amendment, 2nd amendment, blocking, social media, thomas massie
Companies: knight 1st amendment institute

Texas Attorney General Unblocks Twitter Users Who Sued Him; Still Blocking Others

from the that's-not-how-this-works dept

It seems by now that public officials should know that they cannot block critics on social media if they are using their social media accounts for official business. This was thoroughly established in the Knight v. Trump case, where the court made it clear that if (1) a public official is (2) using social media (3) for official purposes (4) to create a space of open dialogue (and all four of those factors are met), then they cannot block people from following them based on the views those users express, as doing so violates the 1st Amendment. Yet over and over again, elected officials seem to ignore this.

Alexandria Ocasio-Cortez was sued over this, as was Marjorie Taylor Greene (both of them eventually settled and agreed to unblock people).

Last month, controversy-prone Texas Attorney General Ken Paxton was sued over the same thing (again by the Knight First Amendment Institute). As the lawsuit notes, many of the people Paxton blocked found themselves in that situation after they replied to Paxton by reminding him of the still-ongoing criminal charges he’s been facing his entire time in office. Basically, if you remind Paxton of the fact that he’s facing criminal charges, you have a decent shot at getting blocked.

However, last week, Paxton unblocked the 9 users who sued him, perhaps realizing he was clearly going to lose this case. Of course, it looks like he only removed the blocks on those 9 individuals and kept up the blocks on others. Law professor Steve Vladeck (who is at the University of Texas Law School) noted that he’s still blocked, even if the plaintiffs in the lawsuit are not:

I’m still blocked by @kenpaxtontx.

Clearly, this is only about mooting this specific lawsuit – rather than a concession that he shouldn’t be blocking from his official account *any* constituents who publicly criticize him. https://t.co/3glxsIe8nW

— Steve Vladeck (@steve_vladeck) May 6, 2021

Vladeck is (of course) correct. The whole point of this is that public officials cannot block anyone from their official accounts like this. If he’s just unblocked the people who sued them, that means anyone blocked will have to go through the costly and time consuming process of suing to get unblocked, and that’s not how it’s supposed to work either.

It seems pretty clear that the lawyers in the case recognize that Paxton isn’t really doing what he is required to do under the 1st Amendment:

“We’re pleased that Attorney General Paxton has agreed to unblock our plaintiffs in this lawsuit and are hopeful that he will do the same for anyone else he has blocked from his Twitter account simply because he doesn’t like what they have to say,” said Katie Fallow, a senior staff attorney at the Knight First Amendment Institute.

Anyone taking bets on how many of those other people are going to need to sue first?

Filed Under: 1st amendment, blocking, ken paxton, texas
Companies: twitter