transparency reports – Techdirt
Twitter Abruptly Stops Reporting On Gov’t Requests As Data Reveals Elon Obeys Gov’t Demands Way More Often Than Old Twitter
from the whatever-elon-has-said,-he's-done-the-opposite dept
To hear Elon and his biggest fans tell the story, pre-Elon Twitter was a hellhole of censorship often driven by government demands, and he had to take over the company to “bring free speech back.” As astute observers not easily misled by nonsense peddlers knew, however, old Twitter was actually one of the most welcoming platforms for speech that other platforms refused to host, and was among the most aggressive at pushing back on government demands.
Twitter’s transparency reports made this clear. In the last transparency report released before Elon took over, the company complied with only around 40% of government requests for information from around the globe.
And when it came to government demands to remove content, whether by court order or other legal process, its compliance rate was around 51% globally.
If you dig into the details, not surprisingly, you find that the variation from country to country is significant. The report breaks out the countries that sent the most legal demands over the period it covers (the second half of 2021).
From that, you can see that Twitter complied with just 20% of requests from India, and 44% of the requests from Japan.
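For what it’s worth, the math behind those percentages is just requests complied with divided by requests received. A quick sketch with made-up numbers (not figures from the actual report) shows how compliance rates like the ones above are derived:

```python
# Illustrative only: hypothetical request counts, not figures from Twitter's report.
# Compliance rate is simply (requests complied with) / (requests received).

requests = {
    # country: (requests received, requests complied with)
    "Global": (12000, 4800),   # works out to ~40%
    "India":  (1000, 200),     # ~20%
    "Japan":  (5000, 2200),    # ~44%
}

for country, (received, complied) in requests.items():
    rate = complied / received * 100
    print(f"{country}: {received} received, {complied} complied ({rate:.0f}%)")
```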
Clearly, the company made a real effort to evaluate the requests and their legal merit before making a decision.
Now, as we just reported, Twitter under Elon has stopped doing transparency reports, though it did release a ridiculous blog post that wasn’t “transparent” at all and literally said: “Twitter’s compliance rate for these requests varied by requester country.” It provided no actual data on the compliance rate at all.
Thankfully, Russell Brandom over at Rest of World realized that Twitter has still been automatically reporting government demand info to the good folks at the Lumen Database… and from that found that Elon’s Twitter has been way more compliant in giving in to exactly what governments are demanding, both for removing content and for handing over information. And, from the look of things, governments are fucking thrilled with this, seeing as the number of demands went way, way up.
The last column in that data is “post-Elon.” So, again, Elon’s Twitter is now doing the very nonsense that Elon and his fans falsely believed old Twitter was doing. We already knew that Elon had caved and agreed to take down accounts from journalists and activists in India, a line that old Twitter had refused to cross.
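For anyone who wants to poke at this themselves, Lumen exposes a public API that can be used to tally notices by recipient, which is roughly the kind of query Brandom’s analysis relies on. The sketch below is a rough approximation; the endpoint, parameter names, auth header, and response fields are assumptions based on Lumen’s published API documentation, so check them against the docs (and you’ll need to request an API token from Lumen):

```python
# Rough sketch of tallying Twitter-related notices in the Lumen database.
# The endpoint, parameter names, auth header, and response fields below are
# assumptions based on Lumen's public API documentation; verify them there.
import requests

LUMEN_SEARCH = "https://lumendatabase.org/notices/search.json"
API_TOKEN = "YOUR_LUMEN_API_TOKEN"  # hypothetical placeholder; Lumen issues researcher tokens

def count_notices(recipient: str) -> int:
    """Return the total number of notices Lumen reports for a given recipient."""
    resp = requests.get(
        LUMEN_SEARCH,
        params={"recipient_name": recipient},
        headers={"X-Authentication-Token": API_TOKEN},
        timeout=30,
    )
    resp.raise_for_status()
    # Assumes the JSON response includes a "meta" block with a total count;
    # date-range filtering and pagination are also available per the docs.
    return resp.json().get("meta", {}).get("total_entries", 0)

print(count_notices("Twitter"))
```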
Notably, and importantly, just as Brandom was doing this research, Twitter abruptly stopped sharing this data:
“Historically, it seems Twitter has sent a copy of everything they’ve received to us,” says Adam Holland, who manages the project for the Berkman Klein Center. “My understanding is that they have a small team of people that work on this and it’s a largely automatic process.”
The biggest irregularity came earlier this month when Twitter’s self-reports abruptly stopped. After averaging over a hundred copyright claims a day, the flow of new reports halted on April 15th, and Twitter has not made a submission to the database in 12 days.
After the article went up, Lumen put out a statement on Mastodon noting that Twitter had deliberately stopped sharing as their “data sharing policies are under review” and that “they will update Lumen once there is more information.”
So… even as Elon has promised to be more about free speech and transparency, and less about giving in to government censorship and surveillance requests, the reality is that, once again, he’s done literally the opposite.
Filed Under: elon musk, government requests, lumen database, surveillance, takedowns, transparency, transparency reports
Companies: twitter
US And EU Nations Request The Most User Data From Tech Companies, Obtain It More Than Two-Thirds Of The Time
from the may-as-well-just-be-government-contractors dept
Most tech companies handling data requests from governments now publish transparency reports. As everything moves towards always-online status (including, you know, your fridge), social media platforms and other online services have become the favored targets of government data requests. It just makes sense to look there first rather than out there in the real world, where people (and their communications) are that much more difficult to locate.
Consequently, what started out as a cottage industry has quickly become a front-of-the-house operation for many governments. Year-after-year data request increases are the new normal. Twitter reported a record high in government requests last year, along with a doubling of the number targeting journalists. The only countries submitting fewer requests for user data were those (like Russia) which had blocked citizens’ access to the platform.
These trends continue. A VPN provider has compiled data from multiple companies’ transparency reports into one handy report — one that shows requests continue to skyrocket… and that these requests are complied with more often than not.
As detailed in SurfShark’s new report, which analysed user data requests that Apple, Google, Meta, and Microsoft received from government agencies of 177 countries between 2013 and 2021, tech giants get a lot of requests for user data, and the majority of the time, they comply.
Of the four Big Tech companies studied, Apple was the most forthcoming, complying with 82% of requests for user data, compared to Meta (72%), Google (71%), and Microsoft (68%). Interestingly, Big Tech was more compliant in the UK than it was globally, disclosing user data there 81.6% of the time.
In the nine years since these companies began producing transparency reports (all following the 2013 Snowden leaks), government requests have more than tripled. And the spike isn’t due to increased (shall we say) participation from governments with lots of human rights abuses on their permanent records. No, it’s the US and EU leading the way, with the United States taking the top spot, followed by Germany. The top 10 also includes the UK, France, Ireland, Portugal, and Belgium.
The outliers are Singapore and Taiwan — nations more often linked to pervasive surveillance and government oppression than the ones doing most of the asking.
Also of note is the compliance rate. Apple may have cultivated a reputation for protecting its users’ privacy and security but it’s also the tech company with the highest compliance rate. It has a ten percentage point lead on the second place company, Meta — a company rarely (if ever) associated with terms like security or privacy.
The year-over-year increases are unsurprising. You go where the data is and, increasingly, it’s housed by these four companies. But the increase in compliance is somewhat disheartening. It could signal that government agencies are crafting better, more targeted requests. Or it may signal that the steady increase in the number of requests means requests aren’t necessarily receiving all the scrutiny they deserve. Whatever the case, it’s something that bears watching. Fortunately, this VPN provider is making it that much easier to do.
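To put the report’s “more than tripled” figure in rough perspective, here is a quick back-of-the-envelope calculation (illustrative only; the report publishes totals, not a growth-rate breakdown):

```python
# If total requests "more than tripled" between 2013 and 2021, the implied
# average annual growth rate over those eight year-over-year steps is ~15%.
growth_factor = 3.0          # conservative: "more than tripled"
years = 2021 - 2013          # eight annual steps
annual_rate = growth_factor ** (1 / years) - 1
print(f"implied average annual growth: {annual_rate:.1%}")  # ~14.7%
```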
Filed Under: government requests, law enforcement, subpoenas, surveillance, transparency, transparency reports, warrants
What Transparency? Twitter Seems To Have Forgotten About Transparency Reporting
from the that-ain't-transparent dept
One of the key things that Elon Musk promised in taking over Twitter was that he would be way more transparent. He’s mentioned it many times, specifically noting that transparency is how he would build “trust” in the company.
So, anyway, about that… over a decade ago, the big internet companies set the standard of publishing regular transparency reports. Twitter had released one every six months for years. And since Musk’s takeover, I’ve wondered whether that would continue.
Twitter’s last transparency report — published in July 2022 and covering the last six months of 2021 — found that the U.S. government made more requests for account data than any other government, accounting for over 24 percent of Twitter’s global requests. The FBI, Department of Justice, and Secret Service “consistently submitted the greatest percentage of requests for the six previous reporting periods.” Requests from the U.S. government were down seven percent from the last reporting period but Twitter’s compliance rate went up 13 percent in the latter half of 2021.
Normally, Twitter would have published the transparency data for the first half of 2022 in January of 2023. Yet, here we are.
“Elon talked a lot about the power of transparency. But the way Elon and his enablers interpret transparency is a rather creative use of the word. It’s not meaningful transparency in the way the industry defines it,” one former Twitter employee familiar with the reports tells Rolling Stone.
[….]
“We were working on the transparency reports, then all the program leads were immediately fired, and the remaining people that could’ve worked on the reports all left subsequently,” one former staffer says. “I’m not aware of any people left [at Twitter] who could produce these transparency reports.”
The former Twitter staffer adds, “It’s really a problem that there’s no transparency data from 2022 anywhere.”
I spoke with former Twitter employees, two of whom confirmed that Twitter actually had the transparency report more or less ready to go before Musk took over (remember, the January release would cover the first half of 2022, so they had time to work on it). But apparently, it’s either been lost or forgotten.
And, of course, this is a real shame, as Twitter had been seen as one of the companies that used transparency reports in more powerful ways than its peers. It was widely recognized as setting the bar quite high.
“Twitter had some of the best transparency reporting of any platform,” says Jan Rydzak, company and investor engagement manager at Ranking Digital Rights, a program hosted by the Washington, D.C., think tank New America that grades tech and telecom firms on the human-rights goals they set.
“Transparency reporting has been an important tool for companies to demonstrate to their users how they protect their privacy and how they push back against improper government requests for their data,” adds Isedua Oribhabor, business and human rights lead at Access Now, whose 2021 Transparency Reporting Index commended Twitter for nine straight years of reporting.
As we’ve discussed before, while all the other larger internet companies caved to DOJ demands regarding limits on how they report US law enforcement demands for information, Twitter actually fought back and sued the US government for the right to post that information. And while it unfortunately lost in the end (years later), that’s the kind of thing that shows a commitment to transparency which helps build trust.
In place of that, Musk’s “transparency” seems to be to cherry pick information, hand it to people who don’t understand it, but who will push misleading nonsense for clicks. That doesn’t build trust. It builds up a cult of ignorant fools.
Filed Under: elon musk, transparency, transparency reports, trust
Companies: twitter
How California’s ‘Transparency’ Bills Will Only Make It Impossible To Deal With Bad Actors: Propagandists, Disinfo Peddlers, Rejoice
from the why-california,-why? dept
It is bizarre that the California legislature, in a state that has produced most of the biggest internet companies out there, has apparently decided it wants to destroy them all in a flood of purely vexatious litigation. There is a whole series of bills that the legislature is reviewing, and so many of them are terrible — yet seem very likely to be passed by the legislature and signed into law by Governor Gavin Newsom. While bills like AB 2408 (with its ridiculously impossible “don’t addict kids” language) have received more attention, I want to talk about a pair of bills (slightly conflicting bills!) that seem likely to pass and have received somewhat less scrutiny, in part because of the myth that these are “merely” about “transparency.” The bills in question are AB 587 and SB 1018.
Most of this post will focus on 587, and I’ll fill in some details on 1018 at the end, but I will note that, as far as I can tell, the California legislature currently seems completely oblivious to the fact that these two bills, both rushing forward at breakneck speed, claim to do the same thing… in ways that would conflict with one another. This would be hilarious if it weren’t so stupid.
So, AB 587. We already had a long and detailed breakdown of the many, many, many technical problems with the bill by Professor Eric Goldman. I highly recommend reading that post, though I warn you that if you believe in supporting an open internet, and you have any hair, you may tear it out by the end.
Instead of picking through the many, many problems of the bill, I wanted to explain why the bill is totally unworkable from the perspective of someone who lives in reality and understands how the internet works. Because it’s clear that the authors of this bill have no clue.
The bill is framed as being about “transparency.” And, transparency is good. Promoting and encouraging more transparency, especially from internet services, is a theme that we’ve pushed here at Techdirt for over two decades.
But there’s a big difference between encouraging more transparency, and mandating transparency in a manner where it can (and will) be weaponized to attack companies for anything they do that you dislike. And AB 587 is very much a version of the latter.
In short (and, again, I encourage you to carefully read through Eric Goldman’s careful dismantling of every part of the drafting of the law), the law requires decently large internet companies to “publish” their terms of service (loosely defined), and send them to the California Attorney General every quarter. It also requires that those terms include a bunch of things about how they deal with certain types of content (including so-called “lawful, but awful” content, as judged by the California government). It also requires descriptions of processes and remedies for dealing with user complaints.
And some of that sounds good… if you have basically zero experience with running a website, but have the chutzpah to think you know how it all works. Running a website that allows third party content of any kind is a constant battle against those with malicious intent, all in the effort to create a workable, useful, and safe environment for the users you’re actually trying to serve. The malicious entities you’re battling vary and change at all times. It can include spammers. It can include hackers. It can include garden variety trolls. It can include political operatives seeking to spread propaganda. It can include nation states. It can include scammers and extortionists. It can include grifters. And that’s just a sampling. The list is ridiculously long.
Almost all of these transparency proposals assume that all users (and all consumers of the transparency reports and readers of the terms of service) are there in good faith. But they’re not. As we’ve discussed recently (in a different context) regarding Twitter and its bot/spam situation, there are a lot of malicious users.
And they don’t stand still.
They don’t use the same techniques. It’s a dynamic situation, in which they are constantly probing and evaluating.
And that means that social media platforms have to constantly be adapting as well. And AB 587 makes that effectively impossible. Because sites need to publish very specific terms, and because they face legal challenges if they fail to live up to those terms, you’ve now created two problems for online services, and two lovely openings for malicious actors.
First, thanks to these publicly revealed policies, malicious actors now have more ability to figure out how to game the rules. We already see bad faith actors (usually in the political realm) whine and complain about how they are treated unfairly and try to “litigate” publicly how whatever sketchy thing they did did not technically violate the rules (or to claim that they were treated differently than someone else, usually ignoring important contextual differences). And now, companies will be required to publish a much more clear blueprint for how to tiptoe around the rules, and still be a bad actor, but without tripping the officially declared rules.
Second, while the California assembly removed the private right of action part of the bill that would allow anyone to sue (which would have been even more ridiculous), it still allows a wide range of government officials to sue any company that they deem somehow did not live up to their terms of service.
Actions for relief pursuant to this chapter shall be prosecuted exclusively in a court of competent jurisdiction by the Attorney General or a district attorney or by a county counsel authorized by agreement with the district attorney in actions involving violation of a county ordinance, or by a city attorney of a city having a population in excess of 750,000, or by a city attorney in a city and county or, with the consent of the district attorney, by a city prosecutor in a city having a full-time city prosecutor in the name of the people of the State of California upon their own complaint or upon the complaint of a board, officer, person, corporation, or association.
That’s… a decently big list. And local prosecutors are kinda known for loving the limelight. And what better limelight is there than suing Twitter or Facebook or Google because someone in your town claims that they were unfairly banned from social media?
Even worse, the bill incentivizes local government officials to file these kinds of suits by giving them a cut of the proceeds.
If the action is brought by a district attorney or county counsel, the penalty collected shall be paid to the treasurer of the county in which the judgment was entered. If the action is brought by a city attorney or city prosecutor, one-half of the penalty collected shall be paid to the treasurer of the city in which the judgment was entered, and one-half to the treasurer of the county in which the judgment was entered.
The idea that this won’t be abused is laughable.
So, now you’ve both given bad actors a roadmap, and there are political and financial incentives for local prosecutors to go after these companies for any attempt to stop bad actors that was not clearly laid out in the terms of service. That’s a terrible combination, and one that simply enables more bad actors. And that’s somewhat hilarious, because many of the politicians pushing AB 587 claim they’re doing it to encourage websites to do a better job stopping bad actors. It will do the opposite.
There’s one other aspect that is important to call out here. Requiring companies to file reports on their terms of service and enforcement efforts almost certainly guarantees less activity on that front. Because now, every change in the terms and every enforcement action is a regulatory matter. That means it often (always?) may need to be reviewed by legal. And that greatly limits the freedom of these companies to adapt in real time to very serious and dynamic threats.
At a more fundamental level, this entire thing would seem somewhat crazy in almost any other context. Imagine the same kind of bill written for cable news, telling them they need to publicly reveal and file with the state their editorial policies, including what kinds of stories they’ll publish, and what they won’t, and if they violate that policy they could face massive fines in cases brought by government officials at basically any level.
Most people would immediately recognize the obvious 1st Amendment concerns.
But for whatever reason, the California legislature seems oblivious to it.
Speaking of the legislature’s obliviousness… that brings us to SB 1018. As mentioned earlier, this bill seems to be doing the same thing. Perhaps because an earlier version of this bill did a hell of a lot more, no one in the California legislature has realized that these are two conflicting bills that both seem to be targeting the same issue in different ways.
SB 1018 was originally a dangerously unconstitutional bill trying to force websites to pull down COVID misinformation. Perhaps because actual lawyers who actually understand this stuff explained to the legislature that the 1st Amendment doesn’t allow that kind of thing, the bill was amended and revised until… it’s now just another transparency bill. Just slightly different from AB 587.
SB 1018 requires a social media platform to disclose the following information on a regular basis:
statistics regarding the extent to which, in the preceding 12-month period, items of content that the platform determined violated its policies were recommended or otherwise amplified by platform algorithms before and after those items were identified as in violation of the platform’s policies, disaggregated by category of policy violated.
This is a different level of transparency reporting from AB 587, and because the bills come from different places, SB 1018 involves a different kind of reporting process covering different kinds of content. But it’s also demanding information that is confusing and difficult to keep track of. What does it mean to “recommend” or “otherwise amplify” content here? Neither term is defined in the law, and it seems like both could be subject to extensive (ridiculous) litigation.
It does appear that one of the more recent amendments to this bill was an attempt to align it with AB 587… but they did so by importing many of the problems of AB 587 into this bill — enabling the same list of local prosecutors to sue over failure to abide by this law, and the hugely problematic definition (and exemptions) for a “public or semipublic internet-based service or application.”
Once again: transparency is a good thing. We should all strive for more transparency in general. But when the government mandates a pointless type of transparency, one that only serves to enable more bad acts from more bad actors, while also letting local prosecutors make a name for themselves (and help fill the local coffers) by filing frivolous lawsuits under this bill, you’re not helping solve the problems of modern internet services.
You’re making them much, much worse.
Filed Under: ab 587, bad actors, california, california legislature, sb 1018, spam, terms of service, transparency, transparency reports
PACT Act Is Back: Bipartisan Section 230 'Reform' Bill Remains Mistargeted And Destructive
from the second-verse,-same-as-the-first dept
Last summer we wrote about the PACT Act from Senators Brian Schatz and John Thune — one of the rare bipartisan attempts to reform Section 230. As I noted then, unlike most other 230 reform bills, this one seemed to at least come with good intentions, though it was horribly confused about almost everything in actual execution. If you want to read a truly comprehensive takedown of the many, many problems with the PACT Act, Prof. Eric Goldman’s analysis is pretty devastating and basically explains how the drafters of the bill tried to cram in a bunch of totally unrelated things, and did so in an incredibly sloppy fashion. As Goldman concludes:
This bill contains a lot of different policy ideas. It adds multiple disclosure obligations, regulates several aspects of sites’ editorial processes, makes three different changes to Section 230, and asks for two different studies. Any one of these policy ideas, standing alone, might be a significant policy change. But rather than proposing a narrow and targeted solution to a well-identified problem, the drafters packaged this jumble of ideas together to create a broad and wide-ranging omnibus reform proposal. The spray-and-pray approach to policymaking betrays the drafters’ lack of confidence that they know how to achieve their goals.
Daphne Keller also has a pretty thorough explanation of problems in the original — noting that the bill contains some ideas that seem reasonable, but often seems sorely lacking in important details or recognition of the complexity involved.
And, to their credit, staffers working on the bill did seem to take these and other criticisms at least somewhat seriously. They reached out to many of the critics of the PACT Act (including me) to have fairly detailed conversations about the bill, its problems, and other potential approaches. Unfortunately, the new version released today does not suggest that they took many of those criticisms to heart. Instead, they took the same basic structure of the bill and just played around at the margins, leaving the new bill a problematic mess, though a slightly less problematic mess than last year’s version.
The bill still suffers from the same point that Goldman made originally. It throws a bunch of big (somewhat random) ideas into one bill, with no clear explanation of what problem it’s actually trying to solve. So it solves for things that are not problems, and calls other things problems that are not clearly problems, while creating new problems where none previously existed. That’s disappointing to say the least.
Like the original, the bill requires that service providers publish an “Acceptable Use Policy,” and then puts in place a convoluted complaint and review process, along with transparency reporting on this. This entire section demonstrates the fundamental problem with those writing the PACT Act — and it’s a problem that I know people explained to them: it treats this issue as if it’s the same across basically every website. But, it’s not. This bill will create a mess for a shit ton of websites — including Techdirt. Forcing every website that accepts content from users to post an “acceptable use policy” leads us down the same stupid road as requiring every website to have a privacy policy. It’s a nonsensical approach — because the only reasonable way to write up such a policy is to keep it incredibly broad and vague, to avoid violating it. And that’s why no one reads them or finds them useful — they only serve as a potential way to avoid liability.
And writing an “acceptable use” policy that “reasonably informs users about the types of content that are allowed on the interactive computer service” is a fool’s errand. Because what is and what is not acceptable depends on many, many variables, including context. Just by way of example, many websites famously felt differently about having Donald Trump on their platform before and after the January 6th insurrection at the Capitol. Do we all need to write into our AUPs that such-and-such only applies if you don’t encourage insurrection? As we’ve pointed out a million times, content policy involves constant changes to your policies as new edge cases arise.
People who have never done any content moderation seem to assume that most cases are obvious and maybe you have a small percentage of edge cases. But the reality is often the opposite. Nearly every case is an edge case, and every case involves different context or different facts, and no “Acceptable Use Policy” can possibly cover that — which is why big companies are changing their policies all the time. And for smaller sites? How the fuck am I supposed to create an Acceptable Use Policy for Techdirt? We’re quite open with our comments, but we block spam, and we have our comment voting system — so part of our Acceptable Use Policy is “don’t write stuff that makes our users think you’re an asshole.” Is that what Schatz and Thune want?
The bill then also requires this convoluted notice-takedown-appeal process for content that violates our AUP. But how the hell are we supposed to do that when most of the moderation takes place by user voting? Honestly, we’re not even set up to “put back” content if it has been voted trollish by our community. We’d have to re-architect our comments. And, the only people who are likely to complain… are the trolls. This would enable trolls to keep us super busy having to respond to their nonsense complaints. The bill, like its original version, requires “live” phone-in support for these complaints unless you’re a “small business” or an “individual provider.” But, the terms say that you’re a small business if you “received fewer than 1,000,000 unique monthly visitors” and that’s “during the most recent 12-month period.” How do they define “unique visitors”? The bill does not say, and that’s just ridiculous, as there is no widely accepted definition of a unique monthly visitor, and every tracking system I’ve seen counts it differently. Also, does this mean that if you receive over 1 million visitors once in a 12-month period you no longer qualify?
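To make that ambiguity concrete, here is a small sketch with entirely hypothetical traffic numbers, showing how two plausible readings of the undefined threshold give opposite answers:

```python
# Hypothetical traffic for one site over 12 months, to show how the bill's
# undefined "unique monthly visitors ... during the most recent 12-month period"
# can flip the small-business determination depending on how you count.
monthly_uniques = [700_000] * 11 + [1_200_000]  # eleven normal months, one viral month

THRESHOLD = 1_000_000

avg_reading = sum(monthly_uniques) / len(monthly_uniques) < THRESHOLD  # True: qualifies
never_exceeded_reading = all(m < THRESHOLD for m in monthly_uniques)   # False: one spike disqualifies

print(f"average monthly uniques reading: small business? {avg_reading}")
print(f"'never exceeded in any month' reading: small business? {never_exceeded_reading}")
```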
Either way, under this definition, it might mean that Techdirt no longer qualifies as a small business, and there’s no fucking way we can afford to staff up a live call center to deal with trolls whining that the community voted down their trollish comments.
This bill basically empowers trolls to harass companies, including ours. Why the hell would Senator Schatz want to do that?!?
The bill also requires transparency reports from companies regarding the moderation they do, though it says they only have to come out twice a year instead of four times. As we’ve explained, transparency is good, and transparency reports are good — but mandated transparency reports are a huge problem.
For both of these, it’s unclear what exactly is the problem that Schatz and Thune think they’re solving. The larger platforms — the ones that everyone talks about — basically do all of this already. So it won’t change anything for them. All it will do is harm smaller companies, like ours, by putting a massive compliance burden on us, accomplishing nothing but… helping trolls annoy us.
The next big part of the bill involves “illegal content.” Again, it’s not at all clear what problem this is solving. The issue that the drafters of the bill would likely highlight is that some argue that there’s a “loophole” in Section 230: if something is judged to be violating a law, Section 230 still allows a website to keep that content up. That seems like a problem… but only if you ignore the fact that nearly every website will take down such content. The “fix” here seems only designed to deal with the absolute worst actors — almost all of which have already been shut down on other grounds. So what problem is this actually solving? How many websites are there that won’t take down content upon receiving a court ruling on its illegality?
Also, as we’ve noted, we’ve already seen many, many examples of people faking court orders or filing fake defamation lawsuits against “John Does” who magically show up the next day to “settle” in order to get a court ruling that the content violated the law. Enabling more such activity is not a good idea. The PACT Act tries to handwave this away by giving the companies 4 days (in the original version it was 24 hours) to investigate and determine if they have “concerns about the legitimacy of the notice.” But, again, that fails to take reality into account. Courts have no realistic time limit on adjudicating legality, but websites will have to review every such complaint in 4 days?!
The bill also expands the exemptions for Section 230. Currently, federal criminal law is exempt, but the bill will expand that to federal civil law as well. This is to deal with complaints from government agencies like the FTC and HUD and others who worried that they couldn’t take civil action against websites due to Section 230 (though, for the most part, the courts have held that 230 is not a barrier in those cases). But, much more problematic is that it extends the exemption for federal law to state Attorneys General to allow them to seek to enforce those laws if their states have comparable laws. That is a potentially massive change.
State AGs have long whined about how Section 230 blocks them from suing sites — but there are really good reasons for this. First of all, state AGs have an unfortunate history of abusing their position to basically shake down companies that haven’t broken any actual law, but where they can frame them as doing something nefarious… just to get headlines that help them seek higher office. Giving them more power to do this is immensely problematic — especially when you have industry lobbyists who have capitalized on the willingness of state AGs to act this way, and used it as a method for hobbling competitors. It’s not at all clear why we should give state AGs more power over random internet companies, when their existing track record on these issues is so bad.
Anyway, there is still much more in the bill that is problematic, but on the whole this bill repeats all of the mistakes of the first — even though I know that the drafters know that these demands are unrealistic. The first time may have been due to ignorance, but this time? It’s hard to take Schatz and Thune seriously on this bill when it appears that they simply don’t care how destructive it is.
Filed Under: acceptable use policy, brian schatz, civil law, intermediary liability, john thune, liability, section 230, transparency reports
A Paean To Transparency Reports
from the encouraging-nudges-are-better-than-beatings dept
One of the ideas that comes up a lot in proposals to change Section 230 is that Internet platforms should be required to produce transparency reports. The PACT Act, for instance, includes the requirement that they “[implement] a quarterly reporting requirement for online platforms that includes disaggregated statistics on content that has been removed, demonetized, or deprioritized.” And the execrable NTIA FCC petition includes the demand that the FCC “[m]andate disclosure for internet transparency similar to that required of other internet companies, such as broadband service providers.”
Any person providing an interactive computer service in a manner through a mass-market retail offering to the public shall publicly disclose accurate information regarding its content-management mechanisms as well as any other content moderation, promotion, and other curation practices of its interactive computer service sufficient to enable (i) consumers to make informed choices regarding the purchase and use of such service and (ii) entrepreneurs and other small businesses to develop, market, and maintain offerings by means of such service. Such disclosure shall be made via a publicly available, easily accessible website or through transmittal to the Commission.
Make no mistake: mandating transparency reports is a terrible, chilling, and likely unconstitutional regulatory demand. Platforms have the First Amendment right to be completely arbitrary in their content moderation practices, and requiring them to explain their thinking both chills their ability to exercise that discretion and presents issues of compelled speech, which is itself of dubious constitutionality. Furthermore, such a requirement itself threatens the moderation process on a practical level. As we keep reminding everyone, content moderation at scale is really, really hard, if not outright impossible, to get right. If we want platforms to nevertheless do the best they can, then we should leave them focused on that task and not encumber them with additional, and questionable, regulatory obligations.
All that said, while it is not good to require transparency reports, they are nevertheless a good thing to encourage. With Twitter recently announcing several innovations to their transparency reporting (including now having an entire “Transparency Center” to gather all released data in one place), it’s a good time to talk about why.
Transparency reports have been around for a while. The basic idea has remained constant: shed light on the forces affecting how platforms host user expression. What’s new is these reports providing more insight on the internal decisions bearing on how platforms do this hosting. For instance, Twitter will now be sharing data about how it has enforced its own rules:
For the first time, we are expanding the scope of this section [on rules enforcement] to better align with the Twitter Rules, and sharing more granular data on violated policies. This is in line with best practices under the Santa Clara Principles on Transparency and Accountability in Content Moderation.
This data joins other data Twitter releases about manipulative bot behavior and also the state-backed information operations it has discovered.
Which bears on one of the most important reasons to have transparency reports: they tell the public how *external* pressures have shaped how platforms can do their job intermediating their users’ expression. Historically these reports have been crucial tools in fighting attacks against speech because they highlight where the attacks have come from.
In some instances these censorial pressures have been outright demands for content removal. For instance, the Twitter report calls out DMCA takedown notices, and takedown demands predicated on trademark infringement claims. It also includes other legal requests for content removal. For instance, in its latest report covering 2019, it found that
[i]n this reporting period, Twitter received 27,538 legal demands to remove content specifying 98,595 accounts. This is the largest number of requests and specified accounts that we’ve received since releasing our first Transparency Report in 2012.
But removal demands are not the only way that governments can mess with the business of intermediating user speech. One of the original purposes of these reports was to track the attempts to seek identifying information about platform users. These demands can themselves be silencing, scaring users into pulling down their own speech already made or biting their tongues going forward, even when their speech may be perfectly lawful and the public would benefit from what they have to say.
We’ve written many times before, quite critically, about how vulnerable speakers are to these sorts of abusive discovery demands. The First Amendment protects the right to speak anonymously, and discovery demands, that platforms find themselves having to yield to, can jeopardize that right.
As we’ve discussed previously, there are lots of different discovery instruments that can be propounded on a platform (ex: civil subpoenas, grand jury subpoenas, search warrants, NSLs, etc.) to demand user data. They all have different rules governing them, which affects both their propensity for abuse and the ability of the user or platform to fight off unmeritorious ones.
Transparency reports can be helpful in fighting discovery abuse because they can provide data showing how often these different instruments are used to demand user data from platforms. The problem, however, is that all too often the data in the reports is generalized, with multiple types of discovery instruments all lumped together.
I don’t mean they are lumped together the way the volume of NSL letters can only be reported in wide bands. (But do note the irony that all of these Section 230 “reform” proposals mandating transparency reports do nothing about aspects of current law that actively *prevent* platforms from being transparent. If any of these proposals actually cared about the ability to speak freely online as much as they profess, their first step should be to remove any legal obstacle currently on the books that compromises speakers’ First Amendment rights or platforms’ ability to be protective of those rights, and the law regarding NSLs would be a great place to start.)
I mean that, for instance, multiple forms of data requests tend to get combined into one datapoint. In this aggregated form the reports have some informational value, but it obscures certain trends that are shaped by differences in each sort of instruments’ rules. If certain instruments are more problematic than others, it would be helpful if we could more easily spot their impact, and then have data to cite in our advocacy against the more troubling ones.
In the case of Twitter, these “information requests” are reported as either government requests or non-government requests. For the government requests they are further broken into “emergency” and “routine,” but not obviously broken out any further. On the other hand, Twitter has flagged CLOUD Act requests as something to keep an eye on when it goes into effect, as it will create a new sort of discovery instrument that may not adequately take into account the user and platform speech rights they implicate. But whether these existing government data requests were federal grand jury subpoenas, search warrants from any particular jurisdiction, NSLs, or something else is not readily apparent. Nor are the non-governmental requests broken out either, even though it might be helpful to know when the subpoena stemmed from federal civil litigation, state civil litigation, or was a DMCA 512(h) subpoena (where there may not be any litigation at all). Again, because the rules surrounding when each of these discovery instruments can be issued, and whether/how/by whom they can be resisted, differ, it would be helpful to know how frequently each is being used. Censorial efforts tend to take the path of least resistance, and this data can help identify which instruments may be most prone to abuse and need more procedural friction to be able to stem it.
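Just to make the argument concrete, here is a purely hypothetical sketch of what a per-instrument breakdown could look like. The categories and counts are invented for illustration; they are not Twitter’s actual data or reporting format:

```python
# Hypothetical illustration of disaggregated reporting: requests broken out by
# the specific discovery instrument rather than lumped into broad buckets.
# All category names and numbers are invented, not Twitter's actual figures.
information_requests = {
    "government": {
        "federal_grand_jury_subpoena": 412,
        "state_search_warrant": 178,
        "emergency_disclosure_request": 96,
        "national_security_letters": "0-499",  # still only reportable in bands
    },
    "non_government": {
        "federal_civil_subpoena": 57,
        "state_civil_subpoena": 133,
        "dmca_512h_subpoena": 21,
    },
}

for bucket, instruments in information_requests.items():
    print(bucket)
    for instrument, count in instruments.items():
        print(f"  {instrument}: {count}")
```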
It may of course not be feasible to report with more granularity, whether for reasons such as the amount of labor required or any rules barring more detailed disclosure (see, again, NSLs). And platforms may have other reasons for wanting to keep that information closer to the chest. Which, again, is a reason why mandating transparency reports, or any particular informational element that might go into a transparency report, is a bad idea. But platforms are not alone; if one is being bombarded with certain kinds of information requests then they all likely are. Transparency on these details can help us see how no platform is alone and help us all advocate for whatever better rules are needed to keep everyone’s First Amendment rights from being too easily trampled by any of these sorts of “requests.”
Filed Under: fcc, ntia, section 230, transparency, transparency reports
Federal Court Dismisses Twitter's Long-Running Lawsuit Over NSL Reporting
from the tiny-win-in-the-margins dept
All the way back in 2014, Twitter sued the DOJ over its National Security Letter reporting restrictions. NSLs are the FBI’s weapon of choice in all sorts of investigations. And they almost exclusively come packaged with lifetime bans on discussing them publicly or disclosing the government’s request for info to NSL targets.
Things changed a little with the passage of the USA Freedom Act and a couple of related court decisions. The DOJ is now required to periodically review NSLs to see if the ongoing silence is justified. The Act also finally provided a way for companies to challenge gag orders, which has resulted in a somewhat steady stream of published NSLs.
What’s still forbidden is publishing an actual count of NSLs a company has received. Supposedly the security of the nation would be threatened if Twitter said it had received 118 NSLs last year, rather than “0-499.” The reforms in the USA Freedom Act didn’t change that aspect of NSL reporting and the government still argues any accurate reporting would allow the terrorists to win… or somehow avoid being targeted by an NSL.
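The restriction at issue amounts to rounding an exact count down into a government-approved band before publishing it. Here is a tiny sketch, assuming bands of 500 as in the “0-499” example above (the actual statutory reporting options vary):

```python
# Sketch of the reporting restriction described above: a provider that received
# 118 NSLs may only publish the band containing that count. A band width of 500
# is assumed from the "0-499" example; the real statutory options differ.
def nsl_band(count: int, width: int = 500) -> str:
    low = (count // width) * width
    return f"{low}-{low + width - 1}"

print(nsl_band(118))  # "0-499"
```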
Twitter argued the publication of an accurate number was protected speech. The government, of course, argued the opposite. The federal judge handling the case ruled that accurate reporting wasn’t protected speech back in 2016, but did say Twitter could move forward with its challenge of the classification of this data.
Roughly a year later, the court changed its mind. The government’s motion to dismiss was denied by the court, which said the government needed to come up with better arguments if it wanted to escape Twitter’s lawsuit. The court pointed out that denying Twitter the right to accurately report NSLs was a content-based restriction that couldn’t be justified by the government’s bare bones assertions about national security.
Nearly three years later, we’re back to where we were four years ago. The court has dismissed Twitter’s lawsuit, denying its attempt to escape the “banding” restrictions that limit the transparency it can provide to its users. (via Politico)
The decision [PDF] — which ends nearly six years of litigation — says the court believes the things the government says about detailed NSL reporting. Since these declarations tend to be delivered in ex parte hearings and/or under seal, we have to believe them, too. Actual numbers are more dangerous than vague numbers.
The declarations explain the gravity of the risks inherent in disclosure of the information that the Government has prohibited Twitter from stating in its Draft Transparency Report, including a sufficiently specific explanation of the reasons disclosure of mere aggregate numbers, even years after the relevant time period in the Draft Transparency Report, could be expected to give rise to grave or imminent harm to the national security. The Court finds that the declarations contain sufficient factual detail to justify the Government’s classification of the aggregate information in Twitter’s 2014 Draft Transparency Report on the grounds that the information would be likely to lead to grave or imminent harm to the national security, and that no more narrow tailoring of the restrictions can be made.
And that’s it. The restrictions stay in place and recipients of NSLs will only be able to deliver government-approved information about them. The good news is there’s a bit of a loophole — one the court discusses in a footnote. The DOJ may want to restrict almost all NSL reporting, but the court isn’t convinced companies can be limited to using the DOJ-approved “bands” only.
The [complaint] alleges a facial constitutional challenge to FISA’s secrecy provisions to the extent they categorically prohibit the reporting of aggregate data. The Court does not find that they do so restrict the aggregate data at issue here. The Government has, in part, argued that FISA’s statutory nondisclosure provisions, applicable to the existence and contents of individual orders, logically prohibit reporting of aggregate data about the number of such orders. The Court has never found the Government’s logic persuasive on this point. The requirement not to disclose a particular order is completely distinct from disclosing the aggregate number of orders.
This seems to say companies can accurately report the total number of NSLs they’ve received, rather than using the far more vague 0-499, etc. reporting they’ve been limited to. It’s not a lot but it’s an improvement.
Filed Under: fisa warrants, national security, nsl reporting, transparency, transparency reports
Companies: twitter
Apple's Latest Transparency Report Shows Gov't Still Not All That Interested In Seeking Warrants
from the an-NSL-a-day-keeps-the-oversight-away dept
Apple has released its latest transparency report. It shows the United States, by far, has the most interest in obtaining user content and data from the company.
New figures in the company’s second biannual transparency report for 2017 show that Apple received 29,718 demands to access 309,362 devices in the second-half of the year.
Data was turned over in 79 percent of cases.
The number of demands are down slightly on the first half of the year, but the number of devices that the government wanted access to rocketed.
What it doesn’t show, however, is how much is being obtained using only subpoenas. Warrants are needed for content. That Apple’s latest report [PDF] makes clear.
Any government agency seeking customer content from Apple must obtain a search warrant issued upon a showing of probable cause.
But in 90% of cases listed in the report, only a subpoena was delivered to Apple. What isn’t made explicitly clear is whether or not content was sought using something other than a warrant. Apple says the government requested content 608 times using 270 warrants, which isn’t necessarily a problem, considering more than one device/account may have been targeted. That still leaves more than 4,000 subpoenas Apple classifies as “Device Requests.” Unfortunately, sussing this out more granularly is pretty much impossible because Apple’s definition of “device requests” leaves a lot to be desired.
[D]evice-based requests received from a government agency [seek] customer data related to specific device identifiers, such as serial number or IMEI number. Device-based requests can be in various formats such as subpoenas, court orders or warrants.
Given Apple’s public battle with the DOJ over encryption, it’s very likely the company is demanding warrants when customer content is sought. But it could do better breaking down these requests into content and non-content demands.
That being said, there’s a lot of detail in the report that isn’t found in transparency reports by other tech companies. The whole thing is worth reading, if only to marvel at the massive amount of data demands being made by US law enforcement. And it appears the FBI (and other federal agencies) still prefer writing their own paperwork, rather than subject themselves to the minimal judicial scrutiny subpoenas require. National Security Letters are, by far, the most popular way for the government to seek subscriber/customer data. Apple received more than 16,000 NSLs targeting ~8,000 accounts in the last six months of 2017 alone.
While Apple has refused to publish the NSL behind a successfully challenged gag order, it appears ready to add yet another layer of transparency to future reports.
The company said beginning in the next transparency report — expected later this year — Apple will disclose the number of apps removed from its app stores.
This should make the next report an even more interesting read. It would be nice if Apple would set up a clearinghouse for government demands — a la Lumen’s database of takedown/removal requests — but for now, any transparency is better than the opacity we dealt with prior to Ed Snowden outing multiple pervasive surveillance programs.
Filed Under: law enforcement, subpoenas, transparency, transparency reports, us government, warrants
Companies: apple
How Government Pressure Has Turned Transparency Reports From Free Speech Celebrations To Censorship Celebrations
from the this-is-not-good dept
For many years now, various internet companies have released Transparency Reports. The practice was started by Google years back (oddly, Google itself fails me in finding its original transparency report). Soon many other internet companies followed suit, and, while it took them a while, the telcos eventually joined in as well. Google’s own Transparency Report site lists out a bunch of other companies that now issue such reports.
We’ve celebrated many of these transparency reports over the years, as they often demonstrate the excesses of attempts to stifle and censor speech or violate users’ privacy, and how these reports often create incentives for these organizations to push back against those demands. Yet, in an interesting article over at Politico, a former Google policy manager warns that the purpose of these reports is being flipped on its head, and that they’re now being used to show how much these platforms are willing to censor:
Fast forward a decade and democracies are now agonizing over fake news and terrorist propaganda. Earlier this month, the European Commission published a new recommendation demanding that internet companies remove extremist and other objectionable content flagged to them in less than an hour, or face legislation forcing them to do so. The Commission also endorsed transparency reports as a way to demonstrate how they are complying with the law.
Indeed, Google and other big tech companies still publish transparency reports, but they now seem to serve a different purpose: to convince authorities in Europe and elsewhere that the internet giant is serious about cracking down on illegal content. The more takedowns it can show, the better.
If true, this is a pretty horrific result of something that should be a good thing: more transparency, more information sharing and more incentives to make sure that bogus attempts to stifle speech and invade people’s privacy are not enabled.
Part of the issue, of course, is the fact that governments have been increasingly putting pressure on internet platforms to take down speech, and blaming internet platforms for election results or policies they dislike. And the companies then feel the need to show the governments that they do take these “issues” seriously, by pointing to the content they do take down. So, rather than alerting the public to all the stuff they don’t take down, the platforms are signalling to governments (and some in the public too, frankly) that they frequently take down content. And, unfortunately, that’s backfiring, as it’s making politicians (and some individuals) claim that this just proves the platforms aren’t censoring enough.
The pace of private sector censorship is astounding, and it’s growing exponentially.
The article talks about how this is leading to censorship of important and useful content, such as the case where an exploration of the dangers of Holocaust revisionism got taken down because YouTube feared that a look into it might actually violate European laws against Holocaust revisionism. And, of course, such censorship machines are regularly abused by authoritarian governments:
Turkey demands that internet companies hire locals whose main task is to take calls from the government and then take down content. Russia reportedly is threatening to ban YouTube unless it takes down opposition videos. China’s Great Firewall already blocks almost all Western sites, and much domestic content.
Similarly, a recent report on how Facebook is censoring reports of ethnic cleansing in Burma is incredibly disturbing:
Rohingya activists, in Burma and in Western countries, tell The Daily Beast that Facebook has been removing their posts documenting the ethnic cleansing of Rohingya people in Burma (also known as Myanmar). They said their accounts are frequently suspended or taken down.
That article has many examples of the kind of content that Facebook is pulling down and notes that in Burma, people rely on Facebook much more than in some other countries:
Facebook is an essential platform in Burma; since the country’s infrastructure is underdeveloped, people rely on it the way Westerners rely on email. Experts often say that in Burma, Facebook is the internet, so having your account disabled can be devastating.
You can argue that there should be other systems for them to use, but the reality of the situation right now is they use Facebook, and Facebook is deleting reports of ethnic cleansing.
Having democratic governments turn around and enable more and more of this in the name of stopping “bad” speech is acting to support these kinds of crackdowns.
Indeed, as Europe is pushing for more and more use of platforms to censor, it’s important that someone gets them to understand how these plans almost inevitably backfire. Daphne Keller at Stanford recently submitted a comment to the EU about its plan, noting just how badly demands for censorship of “illegal content” can turn around and do serious harm.
Errors in platforms’ CVE content removal and police reporting will foreseeably, systematically, and unfairly burden a particular group of Internet users: those speaking Arabic, discussing Middle Eastern politics, or talking about Islam. State-mandated monitoring will, in this way, exacerbate existing inequities in notice and takedown operations. Stories of discriminatory removal impact are already all too common. In 2017, over 70 social justice organizations wrote to Facebook identifying a pattern of disparate enforcement, saying that the platform applies its rules unfairly to remove more posts from minority speakers. This pattern will likely grow worse in the face of pressures such as those proposed in the Recommendation.
There are longer term implications of all of this, and plenty of reasons why we should be thinking about structuring the internet in better ways to protect against this form of censorship. But the short term reality remains, and people should be wary of calling for more platform-based censorship over “bad” content without recognizing the inevitable ways in which such policies are abused or misused to target the most vulnerable.
Filed Under: censorship, filtering, free speech, transparency reports
Cloud Communications Service Twilio Releases Two NSLs Sprung From Their Gag Order Cages
from the all-purpose-paperwork dept
Another communications platform has published National Security Letters it has received from the FBI. Twilio — a San Francisco-based cloud communications platform — has published two NSLs freed from the confines of their accompanying gag orders.
When Twilio receives requests that are issued without the review of a court, such as National Security Letters, Twilio will ask the agent to instead produce a court order or withdraw the nondisclosure component of the request.
Twilio requested judicial review of the nondisclosure requirement, and as a result, received permission from the U.S. Department of Justice to publish two National Security Letters, in addition to the letters authorizing Twilio to do so.
Twilio was also permitted to count the two National Security Letters in our semi-annual transparency report for the second half of 2017. Therefore, Twilio indicates receiving between 2 and 999 National Security Letters in the time range of July 1, 2017 through December 31, 2017.
Twilio says it will continue to challenge the gag orders attached by default to FBI NSLs, which should result in more published NSLs in the future. The two posted by Twilio are fairly recent. Both were received in May of last year. Both also contain the FBI’s response letter letting Twilio know the gag orders had been lifted.
The first [PDF] of the two published lets Twilio know the FBI has agreed to lift the gag order. It also states the FBI is withdrawing its request for subscriber info. The second [PDF] is a little more interesting. The FBI agreed to lift the gag order, but requested Twilio give it a ring before notifying the affected customer.
Please be advised that the FBI has reviewed the nondisclosure requirement imposed in connection with the NSL at issue and determined that the facts and circumstances supporting nondislosure under 18 USC 2709(c) no longer continue to exist. Consequently, the government is lifting the nondisclosure requirement imposed in connection with the NSL at issue… [T]he FBI also asks that Twilio notify Special Agent [redacted] of the FBI Cincinnati Field Office, in the event Twilio chooses to inform the subscriber of the account at issue regarding the NSL request or any of the information set forth in that request…
This sounds like “assessment” stuff — where the FBI rounds up everything it can obtain without a warrant to start building towards a preliminary investigation and possibly even the probable cause needed to continue pursuing a suspect. But the FBI office is seemingly willing to spook a subject in exchange for whatever minimal account info Twilio has on hand. That’s a little strange, considering the gag order was lifted within a few months of the NSL being sent. The two published by Twilio are unlike the NSLs published elsewhere, some of which are closer to a decade old at this point.
Whatever the case, it’s more transparency from another service provider, adding to the body of public knowledge on the FBI’s use of NSLs.
Filed Under: gag orders, nsls, surveillance, transparency reports
Companies: twilio