activism – Techdirt
Elon Musk’s Vision Of Trust & Safety: Neither Safe Nor Trustworthy
from the who-could-have-predicted-it? dept
Even as Elon first made his bid for Twitter, we highlighted just how little he understood about content moderation and trust & safety. And, that really matters, because, as Nilay Patel pointed out, managing trust & safety basically is the core business of a social media company: “The essential truth of every social network is that the product is content moderation.” But, Elon had such a naïve and simplistic understanding (“delete wrong and bad content, but leave the rest”) of trust & safety that it’s no wonder advertisers (who keep the site in business) have abandoned the site in droves.
We even tried to warn Elon about how this would go, and he chose to go his own way, and now we’re seeing the results… and it’s not good. Not good at all. It’s become pretty clear that Elon believes that trust & safety should solely be about keeping him untroubled. His one major policy change (despite promising otherwise) was to ban an account tweeting public information, claiming (falsely) that it was a threat to his personal safety (while simultaneously putting his own employees at risk).
Last week, Twitter excitedly rolled out its new policy on “violent speech,” which (hilariously) resulted in his biggest fans cheering on this policy despite it being basically identical to the old policy, which they claimed they hated. Indeed, the big change was basically that the new rules are written in a way that is far more subjective than the old policy, meaning that Twitter and Musk can apply them much more arbitrarily (arbitrariness being a big complaint those same fans had about the old policies).
Either way, as we noted recently, by firing nearly everyone who handled trust & safety at the company, Twitter has watched its moderation efforts fall apart, raising all sorts of alarms.
A new investigative report from the BBC Panorama details just how bad it’s gotten. Talking to both current and former Twitter employees, the report highlights a number of ways in which Twitter is simply unable to do anything about abuse and harassment.
- Concerns that child sexual exploitation is on the rise on Twitter and not being sufficiently raised with law enforcement
- Targeted harassment campaigns aimed at curbing freedom of expression, and foreign influence operations – once removed daily from Twitter – are going “undetected”, according to a recent employee.
- Exclusive data showing how misogynistic online hate targeting me is on the rise since the takeover, and that there has been a 69% increase in new accounts following misogynistic and abusive profiles.
- Rape survivors have been targeted by accounts that have become more active since the takeover, with indications they’ve been reinstated or newly created.
Among things noted in that report is that Elon himself doesn’t trust any of Twitter’s old employees (which is perhaps why he keeps laying them off despite promising the layoffs were done), and goes everywhere in the company with bodyguards. Apparently, Elon believes in modeling “trust & safety” by not trusting his employees, and making sure that his own safety is the only safety that matters.
Also, one notable tidbit: Twitter’s “nudge” experiment (in which it would detect if you were about to post something likely to escalate a flame war and suggest you give it a second thought, an experiment generally seen as having a positive impact) seems to be either dead or on life support.
“Overall 60% of users deleted or edited their reply when given a chance through the nudge,” she says. “But what was more interesting, is that after we nudged people once, they composed 11% fewer harmful replies in the future.”
These safety features were being implemented around the time my abuse on Twitter seemed to reduce, according to data collated by the University of Sheffield and International Center for Journalists. It’s impossible to directly correlate the two, but given what the evidence tells us about the efficacy of these measures, it’s possible to draw a link.
But after Mr Musk took over the social media company in late October 2022, Lisa’s entire team was laid off, and she herself chose to leave in late November. I asked Ms Jennings Young what happened to features like the harmful reply nudge.
“There’s no-one there to work on that at this time,” she told me. She has no idea what has happened to the projects she was doing.
So we tried an experiment.
She suggested a tweet that she would have expected to trigger a nudge. “Twitter employees are lazy losers, jump off the Golden Gate bridge and die.” I shared it on a private profile in response to one of her tweets, but to Ms Jennings Young’s surprise, no nudge was sent.
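Conceptually, the nudge is a pre-send check: score a draft reply for likely harm, and if the score crosses a threshold, prompt the author to reconsider. Here is a minimal sketch; the wordlist scorer, threshold, and function names are all hypothetical stand-ins (Twitter's actual feature used a trained model, not a wordlist):

```python
# Toy pre-send "nudge" check. The wordlist, threshold, and function names
# are hypothetical stand-ins for the trained model Twitter actually used.
HARMFUL_TERMS = {"loser", "losers", "die", "idiot", "stupid"}

def harm_score(reply: str) -> float:
    """Fraction of tokens that appear on the harmful-term list."""
    tokens = [t.strip(".,!?").lower() for t in reply.split()]
    if not tokens:
        return 0.0
    return sum(t in HARMFUL_TERMS for t in tokens) / len(tokens)

def should_nudge(reply: str, threshold: float = 0.15) -> bool:
    """True if the user should be prompted to reconsider before posting."""
    return harm_score(reply) >= threshold
```

A working version of this check would fire on a tweet like the one in the BBC's experiment; per the report, the live product apparently no longer runs anything like it.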
Meanwhile, a New York Times piece is detailing some of the real world impact of Musk’s absolute failures: Chinese activists, who have long relied on Twitter, can no longer do so. Apparently, their reporting on protests in Beijing was silenced, after Twitter… classified them as spam and “government disinformation.”
The issues have also meant that leading Chinese voices on Twitter were muffled at a crucial political moment, even though Mr. Musk has championed free speech. In November, protesters in dozens of Chinese cities objected to President Xi Jinping’s restrictive “zero Covid” policies, in some of the most widespread demonstrations in a generation.
The issues faced by the Chinese activists’ Twitter accounts were rooted in mistakes in the company’s automated systems, which are intended to filter out spam and government disinformation campaigns, four people with knowledge of the service said.
These systems were once routinely monitored, with mistakes regularly addressed by staff. But a team that cleaned up spam and countered influence operations and had about 50 people at its peak, with about a third in Asia, was cut to single digits in recent layoffs and departures, two of the people said. The division head for the Asia-Pacific region, whose responsibilities include the Chinese activist accounts, was laid off in January. Twitter’s resources dedicated to supervising content moderation for Chinese-language posts have been drastically reduced, the people said.
So when some Twitter systems recently failed to differentiate between a Chinese disinformation campaign and genuine accounts, that led to some accounts of Chinese activists and dissidents being difficult to find, the people said.
The article also notes that for all of Elon’s talk about supporting “free speech” and no longer banning accounts, a bunch of Chinese activists have had their accounts banned.
Some Chinese activists said their Twitter accounts were also suspended in recent weeks with no explanation.
“I didn’t understand what was going on,” said Wang Qingpeng, a human rights lawyer based in Seattle whose Twitter account was suspended on Dec. 15. “My account isn’t liberal or conservative, I never write in English, and I only focus on Chinese human rights issues.”
And, perhaps the saddest anecdote in the whole story:
Shen Liangqing, 60, a writer in China’s Anhui province who has spent over six years in jail for his political activism, said he has cherished speaking his mind on Twitter. But when his account was abruptly suspended in January, it reminded him of China’s censorship, he said.
So, Elon’s plan to focus on “free speech” means he’s brought back accounts of harassers and grifters, but he’s suspending actual free speech activists, while the company’s remaining trust & safety workers can’t actually handle the influx of nonsense, and they’ve rewritten policies to let them be much more arbitrary (and it’s becoming increasingly clear that much of the decision-making is based on what makes Elon feel best, rather than what’s actually best for users of the site).
Last week, we wrote about how Musk has insisted over and over again that the “key to trust” is “transparency,” but since he’s taken over, the company has become less transparent.
So combine all of this, and we see that Elon’s vision of “trust & safety” means way less trust, according to Elon’s own measure (and none from Elon to his own employees), and “safety” means pretty much everyone on the site is way less safe.
Filed Under: abuse, activism, content moderation, elon musk, free speech, harassment, nudge, safety, transparency, trust, trust & safety
Companies: twitter
Apple Still Sucks On Right To Repair
from the do-not-pass-go,-do-not-collect-$200 dept
Mon, Jan 30th 2023 05:30am - Karl Bode
Apple has never looked too kindly upon users actually repairing their own devices. The company’s ham-fisted efforts to shut down, sue, or otherwise imperil third-party repair shops are legendary. As are the company’s efforts to force recycling shops to shred Apple products (so they can’t be refurbished and re-used).
That’s before you get to Apple’s often comical attacks on “right to repair” legislation, a push that only sprang up after companies like Apple, Microsoft, Sony, and John Deere created a global grass-roots coalition of activists and reformers via their clumsy attempts to monopolize repair.
While Apple has made some concessions to try and pre-empt right to repair legislation, there’s clearly still a long way to go. John Bumstead, a MacBook refurbisher and owner of the RDKL INC repair store, recently revealed that used MacBooks retailing for as much as $3,000 are being scrapped for parts because recyclers are prevented from logging into the devices.
Bumstead told Motherboard the culprit is Apple’s T2 security chip, which prevents anyone but the original owner from logging into the laptops. He also stated that despite Apple’s promises on right to repair reform, the problem has gotten notably worse over the last few years. As a result, countless costly 2018/2019-era MacBooks can’t be fully repurposed:
“The progression has been, first you had certifications with unrealistic data destruction requirements, and that caused recyclers to pull drives from machines and sell without drives, but then as of 2016 the drives were embedded in the boards, so they started pulling boards instead,” he said. “And now the boards are locked, so they are essentially worthless. You can’t even boot locked 2018+ MacBooks to an external device because by default the MacBook security app disables external booting.”
Experts state that Apple could make this all go away by building more convenient unlocking systems for independent repair shops, but then Apple might sell fewer new laptops — and threaten its own lucrative repair monopoly — and you wouldn’t want that.
Filed Under: activism, hardware, laptops, right to repair
Companies: apple
Mudge’s Testimony Shows He Was Acting As An Activist, Not An Executive
from the different-roles dept
Tuesday, former Twitter cybersecurity executive Pieter “Mudge” Zatko testified in front of a congressional committee regarding his whistleblower complaint[1][2][3] against Twitter. Though I’m a techie, I thought I’d write up some comments from the business angle.
It’s difficult getting an unbiased viewpoint of the actual issues. The press sides with whistleblowers. The cybersecurity community sides with champions – those who fight for the Cause of ever more security.
The thing is, on its face, Mudge’s complaint is false. It’s based on the claim that Twitter “lied” about its cybersecurity to the government, shareholders, and its users. But there’s no objective evidence of this, only the subjective opinion of Mudge that Twitter wasn’t doing enough for cybersecurity.
What I see here is that Mudge is acting as a cybersecurity activist. The industry has many activists who believe security is a Holy Crusade, a Cause, a Moral duty, an End in itself. The crusaders are regularly at odds with business leaders who view cybersecurity merely as a means to an end, and apply a cost-vs-benefit analysis to it.
If you hire an activist, such a falling out is inevitable. It’s like if oil companies hired a Greenpeace activist to be an executive. Or like how Google hires activists to be “AI ethicists” and then later has to keep firing them [#1][#2][#3].
Background
Mudge is a technical expert going back decades. He was there at the beginning (I define the 1990s as the beginning), and his work helped shape today’s InfoSec industry. He’s got a lot of credibility in the industry, and it’s all justified.
He was hired for most of 2021 to be Twitter’s head of cybersecurity issues. He was fired at the start of 2022, and last month he filed a “whistleblower complaint” with the government, alleging lax cybersecurity practices, specifically that Twitter lied to investors and failed to live up to a 2011 FTC agreement to secure “private” data.
There’s no particular reason to distrust Mudge. Twitter would certainly like to discredit him as being disgruntled for being fired. But that’s unlikely.
Instead, what I read in the complaint is being disgruntled over cybersecurity (not over being fired). This has been the case for much of his career. He thinks people should do more to be secure. His “Cyber UL” effort is a good example, as he pressured IoT device makers to follow a strict set of cybersecurity rules. For fellow activists, the desired set of rules were just the beginning. For business types, they were excessive, with costs that outweighed their benefits.
Is Twitter secure enough?
Is Twitter secure? Maybe, probably not. Twitter trails the FAANG leaders in the industry (Facebook, Apple, Amazon, Netflix, Google) in a number of technical areas, so it’s easy to think they are behind in cybersecurity as well. On the other hand, they are ahead of most of the rest of the tech industry, not first tier maybe, but definitely second tier.
In other words, in all likelihood, Twitter is ahead of the norm, ahead of the average, just not up to the same standard set by the leaders in tech.
But for cybersecurity activists, even the FAANG companies are not secure enough. That’s because nobody is ever secure enough. There is no standard by which you can say “we are secure enough”.
By any rational measure, the Internet is secure enough. For example, during the pandemic, restaurants put menus and even ordering online, accessible via the browser or app, to minimize customer contact with staff. Paying by credit card using these apps and services was still more “secure” than giving the staff your credit card physically. This was true even if you were accessing the net over the local unencrypted WiFi.
There is a huge disconnect between what the real world considers “secure enough” vs. cybersecurity activists.
One of Mudge’s complaints was about servers being out-of-date. Cybersecurity activists have a fetish for up-to-date software, seeing the failure to keep everything up-to-date all-the-time as some sort of moral weakness (sloth, villainy, greed).
But the business norm is out-of-date software. For example, if you go on Amazon AWS right now and spin up a new default RedHat instance, you get RedHat 7, which first shipped in 2014 (eight years ago). Yes, it’s still nominally supported with security patches, but it lacks many modern features needed for better security.
The subjective claim is that Twitter was deficient for not having the latest software. That’s just the cyber-activist point of view. From the point of view of industry, it’s the norm.
The entire complaint reads the same. It’s a litany of the standard complaints, slightly modified to apply to Twitter, that the entire industry has against their employers. It’s all based upon their companies not doing enough.
Of particular note is the Twitter-specific issue of protecting private information like Direct Messages (DMs). The thing is, anything less than end-to-end encryption is still a failure. Mudge points to a lack of disk encryption, and to the fact that thousands of employees had access to private DMs, as evidence that they aren’t “secure.” But even if that weren’t the case, DMs still wouldn’t be secure, because they aren’t end-to-end encrypted.
Twitter isn’t lying about this. They aren’t claiming DMs are end-to-end encrypted. I suppose they are deficient in not making it clearer that DMs aren’t as private as some users might hope.
But the solution cyber-activists want isn’t transparency into the lack of DM security, but more DM security. They aren’t asking Twitter to be clear about how they prevent prying eyes from seeing DMs, they are demanding absolute security for the DMs. This reveals their fundamental prejudice.
He wasn’t an executive
Being an activist meant that Mudge wasn’t an executive. His goal wasn’t to further the interests of the company/shareholders. His goal was to further the interests of cybersecurity.
One of these days I’m going to write a guide explaining business to hackers. This will be one of the articles I’ll be writing, explaining executives to rank-and-file underlings.
What we see here is Mudge acting like an underling instead of an executive.
Part of his complaint is that the now-CEO, Parag Agrawal, pressured him into lying to the board, to claim to the risk committee of the board that security is better than it really was.
Of course Agrawal did. He’s supposed to do that — push hard for his point-of-view. And Mudge was supposed to push just as hard back, especially if he perceives the request as being asked to lie.
The thing you need to learn about corporate executives is that they are given a lot of responsibility, and a lot of power, but nonetheless must compromise and cooperate.
Underlings often don’t really grasp this. They don’t have responsibility. Like when you hear about a company blaming a compromise on an intern — false on its face because interns don’t have responsibility. Underlings don’t have a lot of power, either. Lastly, underlings lack skills for compromise and collaboration, but that’s okay, because “teamwork” is more of a platitude than a requirement at their level.
To achieve their personal responsibilities, executives must push hard on others. To a certain extent, this means all executives are jerks. But at the same time, they expect fellow executives to push back just as hard; they expect that there is give-and-take, compromise, and collaboration for the ultimate good of the corporation. They expect that when they push hard on the parts that concern them, you push just as hard back to defend your turf while pursuing your own goals. But, they also expect that such pushback is driving toward compromise, not scorched-earth victory for your side.
If you, as the typical underling, are called to report something to a board committee, you can expect that one or more executives are going to talk to you in order to influence what you are going to say. I’ve dealt with many cybersecurity underlings in this position and heard their tales, and frankly, they handled the situations better than Mudge seems to have.
Underlings expect that their bosses will help defend them in their work disputes. But executives don’t have that luxury. They are at the top of the food chain and are themselves responsible for resolving conflicts. There is nobody to go to in order to complain: not the board who only wants results, and not HR, because you are above HR. Not anybody — you have to resolve your own disputes.
Mudge’s complaint seems to be about looking for dispute resolution in the court of public opinion, because he was unable to resolve his dispute with Agrawal himself.
A good example of a true executive resigning is when James Mattis resigned as Trump’s Secretary of Defense. In his letter, he lamented the fact that he and Trump didn’t agree:
Because you have the right to have a Secretary of Defense whose views are better aligned with yours on these and other subjects, I believe it is right for me to step down from my position.
Note that Mattis doesn’t claim there’s some subjective measure of which side is right and which side is wrong. Instead, Mattis only claims that they couldn’t agree.
In contrast, Mudge’s complaint is full of the assertions that he’s objectively right, and Agrawal objectively wrong. And since it’s objective that he was wrong, Agrawal must’ve been lying.
As a former executive, and somebody who consults with executives, I find Mudge’s description of the events shocking. He’s talking like a whiny underling, not like an executive.
Ethics
Mudge’s complaint touches on a few ethical issues.
Most such ethical issues are really politics in disguise. Facebook found this out with their attempts to deal with misinformation ethics and AI ethics. They found it just opened festering political wounds.
If you can somehow avoid politics then you’ll get mired in academics. To be fair, when you ignore academic philosophy, you’ll end up re-inventing Kant vs. Hegel, and doing it poorly. But at the same time, academics can spend years debating Kant vs. Hegel and still come to no conclusion.
But what we are talking about here is professional ethics, and that’s much simpler. Most professional ethics are about protecting trust in the profession (“don’t lie”) and resolving conflicts you are likely to encounter. For example, journalists’ ethics involve long discussions of “off the record” stuff, because it’s an issue they regularly encounter.
Cybersecurity has the wrong belief that “security” is their highest ethical duty, to the point where they think it’s good to lie to people for their own good, as long as doing so achieves better security.
This activism has hugely damaged our profession. Most cybersecurity professionals are frustrated that they can’t get business leaders to listen to them. When you talk to the other side, to the business leaders, you’ll see that the primary reason they don’t listen is that they don’t trust the cybersecurity professionals. Maybe you are truthful, but they still won’t listen to you because the legions of cybersecurity professionals who have preceded you tried to mislead business leaders to get their way — to serve the Holy Crusade.
The opposite side of the coin are those demanding cybersecurity professionals downplay their honest concerns. For example, when a pentester hands over a report documenting how easy it was to break in, the person who hired them may ask for certain things to be edited, to downplay the severity of what was found.
It’s a difficult problem. Sometimes they are right. Sometimes the issue is exaggerated. Sometimes it’s written in a way that can be misinterpreted.
But sometimes, they are just asking the pentester to lie on their behalf.
We should have a professional ethics guide in our industry. It should say that in such situations you don’t lie. One way you can solve this is to have them put their request in writing, which filters out most illegitimate requests. Another way is using the passive voice and such, to make sure that some statement won’t be confused as being your opinion.
Mudge describes a case where Agrawal specifically requested things not be put into writing. This is a big red flag, a real concern.
But at the same time, it’s not an automatic failure. It’s a common problem that things put in writing can be misleading when taken out of context. This happens all the time, especially in lawsuits, where the opposing side will cherry pick things out of context to show the jury. Long term executives learn to avoid written statements that can be used misleadingly against them in a court of law.
But here, the issue was avoiding things in writing that could confuse the board. That’s worrisome. I’m not sure I believe Mudge’s one-sided account, given that his other descriptions are so problematic. Even when somebody explicitly asks you to lie, they will often remember the discussion quite differently: in their telling, they never asked you to lie.
The solution to such problems, if you find yourself in them, is to push back in a collaborative manner. Saying something like “I won’t lie to the board for you” is combative, not constructive. Something like “I don’t understand what you are asking me to do. I think that would mislead the board, which I couldn’t do, of course” works far better.
The thing that’s important here is that “ethics” aren’t an excuse to attack your opponent. It’s easy to deliberately misinterpret the statements and actions of another as representing an ethical failure. Your primary duty is to protect your own ethics.
Conclusion
I’m a techie, as techie as they get.
But I’ve also been an executive and interacted with executives at many companies. What I read here in Mudge’s complaint aren’t the words of an executive, but the words of an activist. It has all the clichés of cybersecurity activism and the immaturity of underlings in resolving disputes.
You won’t get a critical discussion of this event in the press, as they generally take the side of the whistleblower. You won’t get a critical discussion from the InfoSec community, because they worship rock stars, and share the Holy Crusade for better cybersecurity.
I have no doubt Twitter’s cybersecurity is behind that of FAANG leaders in the tech industry. They seem behind on so many other issues. What freaks me out isn’t that their 500,000 servers are running outdated Linux (as Mudge describes). It freaks me out that this means that they have 1 server for each 1000 users (Netflix, whose demands are higher, has 10,000 users per server).
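That back-of-the-envelope ratio is easy to check. The user counts below are my own rough assumptions for illustration (they do not come from Mudge's complaint or the article), just to show how the 1,000-users-per-server figure falls out:

```python
# Back-of-the-envelope check of the server-to-user ratios discussed above.
twitter_servers = 500_000          # figure cited from Mudge's complaint
twitter_users = 500_000_000        # assumed ~500M accounts (illustrative only)

netflix_users_per_server = 10_000  # figure quoted in the article

twitter_users_per_server = twitter_users // twitter_servers  # 1,000

# Netflix squeezes an order of magnitude more users out of each server,
# despite streaming video being far more demanding than serving tweets.
efficiency_gap = netflix_users_per_server / twitter_users_per_server  # 10x
```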
But saying Twitter is flawed is far from saying there’s any objective evidence in the whistleblower complaint that Twitter is misleading shareholders, government agencies like the FTC, or users as to their security.
Robert Graham is a well known security professional. You can follow him on Twitter at @ErrataRob. A version of this post was originally posted to his Substack and reposted here with permission.
Filed Under: activism, cybersecurity, mudge, pieter zatko, security, tradeoffs
Companies: twitter
Please Don’t Normalize Copyright As A Tool For Censorship
from the bad-ideas dept
Yes, yes, copyright is a tool for censorship. Contrary to the claims of copyright system supporters that copyright can’t be used for censorship, the reality is that censorship is basically the only thing copyright is good for. I mean, at this point, you are either not paying attention, or are just outright lying if you claim that copyright isn’t regularly used to silence people. I could go on linking to examples, but you get the point.
That said, it’s one thing to recognize that copyright is a tool for censorship and another altogether to normalize and embrace that fact.
Over the last few months, we’ve had a few stories about cops blasting copyright-covered music in an effort to block people filming them from being able to upload the videos online. The steps to getting here are not hard to figure out. The legacy copyright industry spent a couple decades screaming about copyright infringement online, and demanding that internet services wave a magic wand and stop it. And, eventually, a variety of automated copyright filters sprung up to try to get Hollywood to just stop whining all the time.
Of course, filters can’t understand context or fair use, so in practice, these filters block all sorts of important content just because it has ancillary copyright-covered music playing in the background. From there, cops figured out that this was “this one weird trick” that would get them out of being held accountable for their own misdeeds.
When cops are doing it, it’s clearly problematic, because as multiple courts have noted, you have a constitutional right to film police. So the use of this trick by police to get these videos taken down is a nefariously clever attempt to use copyright law to stifle the public’s rights.
But that doesn’t mean it’s okay when private citizens do it. Even if in pursuit of a good cause. Just as it’s not right when people abuse the DMCA to take down content being used for harassment and abuse, it’s not right to try to use copyright to block people from being able to film you.
David Hogg is a prominent activist on gun control issues. Whether or not you agree with his positions, no one can deny that he’s been incredibly successful in drawing attention to the causes he supports. And, with that, of course, comes a tremendous level of harassment from those who are opposed to his policy ideas. And, that, in part, is coming because he’s had such an impact with his activism.
That said, over the weekend, he gleefully talked about how he was using this same “one weird copyright trick” to stop opposing activists from being able to do anything with the video they were trying to take of him.
If you can’t see the images of the tweets, here’s what he said:
Today in DC- I had a Republican come up with a video camera trying to harass me. I immediately started playing under the Sea from the Little mermaid. He said “why are you playing that music you know it’s copyrighted so I can’t use this video right?” I said “yeah that’s the point”
I love copyright law
Thank you to Disney’s copyright lawyers!
While this is nowhere near as problematic as public officials doing this to prevent the exercise of rights, it’s still problematic. It’s normalizing, and even cheering on, the abuse of copyright law for the purpose of stifling speech.
I tweeted something about this and received some pushback, so I wanted to respond to a few points people raised about this:
Is this really copyright abuse or just taking advantage of others already abusing the system?
It’s a bit of both. To me, any use of copyright law to deliberately stifle speech is an abuse of copyright law. That the copyright system is so broken as to make this easy to do is also a criticism of the system and previous abuses, but it doesn’t excuse those jumping in to support and normalize this activity.
Yeah, but he gets so much abuse, so it’s okay.
Yes, he, like many prominent outspoken people, gets an unfair level of abuse. But that’s no excuse to abuse some other law to try to silence people. Once again, it normalizes the activity and makes sure more and more people will abuse copyright law in this same way. And that’s not good. If you think he receives an unfair level of abuse and harassment, focus on ways to deal with that that don’t involve encouraging further abuse of other laws.
Well maybe this will help demonstrate the problems of copyright law, and get them fixed.
Which seems more likely? Congress fixing broken copyright law? Or Congress and lots of others getting excited about new ways to exploit this “feature” of copyright law for their own benefit? It’s the latter, and no one seriously thinks the former is going to happen.
Copyright law is used for censorship all the time. It’s good at that. That doesn’t mean we should embrace it or support it. And it definitely does not mean we should be normalizing that kind of abuse.
Filed Under: activism, copyright, copyright as censorship, copyright filters, david hogg, playing music, upload filters
Cloudflare Says Shutting Down In Russia Would Give Putin What He Wants
from the making-matters-worse dept
Thu, Mar 10th 2022 12:10pm - Karl Bode
We’ve already noted how several of the business decisions to shut down integral parts of the Internet to “punish Putin” aren’t really punishing Putin, but the Russian public.
For example Cogent’s decision to sever Russia from the rest of the Internet is something that Putin generally wants given his longstanding desire to forge a sort of Russian “splinternet,” allowing him to more freely propagandize the Russian public and cut them off from independent news sources. Experts of various stripes spent much of last week making this very point:
“I am very afraid of this,” said Mikhail Klimarev, executive director of the Internet Protection Society, which advocates for digital freedoms in Russia. “I would like to convey to people all over the world that if you turn off the Internet in Russia, then this means cutting off 140 million people from at least some truthful information. As long as the Internet exists, people can find out the truth. There will be no Internet — all people in Russia will only listen to propaganda.”
Russia’s general bullshit propaganda line on the invasion — that Ukraine has somehow become overrun with “Nazis,” and that Russia is only intervening in a moral “special operation” — is effectively all Russians hear on Russian media. The propaganda has heavily featured the letter “Z” to help delude Russians into thinking authoritarian oppression is akin to patriotic altruism.
Elsewhere, Cloudflare has announced that the company will not be heeding calls to shut down in Russia, quite correctly noting that it’s something that would bolster Putin’s desire to further isolate the heavily propagandized Russian populace from alternative viewpoints and helpful technologies:
Indiscriminately terminating service would do little to harm the Russian government but would both limit access to information outside the country and make significantly more vulnerable those who have used us to shield themselves as they have criticized the government.
In fact, we believe the Russian government would celebrate us shutting down Cloudflare’s services in Russia. We absolutely appreciate the spirit of many Ukrainians making requests across the tech sector for companies to terminate services in Russia. However, when what Cloudflare is fundamentally providing is a more open, private, and secure Internet, we believe that shutting down Cloudflare’s services entirely in Russia would be a mistake.
Ukraine Vice Prime Minister Mykhailo Fedorov had previously asked Cloudflare to shut off service in Russia, stating that “Cloudflare should not protect Russian web resources while their tanks and missiles attack our kindergartens.” Fedorov also urged ICANN to revoke top-level Russian domains.
Of course it’s not like unfettered access to the entirety of the Internet provides some cure for propaganda (see: huge segments of the American public). But cutting off the Russian public from technology and information that might enlighten, inform, or aid activism isn’t the solution many seem to think it is.
Filed Under: activism, internet, propaganda, russia
Companies: cloudflare
As Biden Looks To Ban Targeted Ads, Activists Look To Use Them To Get News To The Russian People
from the be-careful-what-you-wish-for dept
At Tuesday’s State of the Union address, one of President Joe Biden’s pledges regarding the internet was that he wanted to ban targeted advertising. Lots of people cheered this on, because lots of people absolutely loathe targeted advertising — which is sometimes, misleadingly, referred to as “surveillance capitalism.” My own opinion is that basically all of it is overrated. I don’t think targeted advertising even works that well, and I think we’d be better off if companies didn’t rely so heavily on it — but I also think that even if we got rid of it, people would still be mad about something else these companies did. Indeed, part of the reason people hate targeted advertising so much is that it’s just not that good. If it actually worked, I’m not so sure people would be so mad about it.
That said, I really don’t see how effective or useful a ban on targeted advertising would be. And, at the very least, it would bar creative uses like the one a group of activists has come up with: using targeted ads to get around the Russian government’s effort to block all news of its invasion of Ukraine from the Russian citizenry.
Vladimir Putin is scared of the truth. That’s why he’s shut down independent media and social media.
The Russian people deserve better. Yet they struggle to get unbiased news.
But it’s very hard for Putin to shut down online advertising. So we’re going to use digital ads to show Russians independent news about Ukraine.
These ads will be shown to people in Russia, and Russian-occupied Ukraine. We’ll use modern digital advertising to show real news about what is happening in Ukraine, from high quality sources.
The Russian people can make up their own minds about what is going on in Ukraine. And when they see it, we are confident that Putin will be weakened.
We’ve built a team of digital campaign experts, who can get around Russian government restrictions. We’ve already run some test advertising today – showing people news from independent news websites.
See? That seems like a pretty powerful and (dare I say it?) useful application of targeted advertising. But under the Biden administration’s plan, it would be banned.
At some point people do need to realize that not all targeted advertising is problematic. The real problem is that nobody really knows what information companies have on them, or how it’s being shared and used. The issue is not so much the targeting, but the data itself and the lack of transparency and lack of control by end users. Calling for an outright ban on targeted advertising, once again, misdiagnoses the “problem” and comes up with an overly broad solution that seems less than helpful.
Filed Under: activism, advertising, censorship, joe biden, news, russia, state of the union, targeted ads, targeted advertising, ukraine
The Internet Infrastructure's SOPA/PIPA Silver Lining
from the it-brought-people-together dept
Ten years ago a massive, digital grassroots movement defeated the Stop Online Piracy Act in the House, and its Senate companion, the Protect IP Act (SOPA/PIPA). Looking back, this signal victory drove an even more important outcome: the birth of a large wave of Internet activism organizations, including our own Internet Infrastructure Coalition (i2Coalition), a new, unified, independent voice for Internet infrastructure providers.
Our story began in 2011, when I led operations for a web hosting company called ServInt. My future co-founder of the i2Coalition, David Snead, was our outsourced General Counsel. He and I became alarmed about the damaging impact on the Internet infrastructure layer of a little-known bill, the Combating Online Infringement and Counterfeits Act, or COICA, introduced by Senator Patrick Leahy (D-VT).
While well-intentioned as an effort to combat infringement by foreign “rogue” websites, the bill would have allowed the Department of Justice to block websites en masse, without due process, on the say-so of intellectual property holders. We met with Senator Ron Wyden (D-OR) and his staff to seek advice about how to have a voice in the COICA debate. Their answer was direct: “get more of you.”
At that point, it seemed like just a handful of small Internet infrastructure enterprises were even aware of COICA. We realized that we needed to start a grassroots movement.
We began intensive outreach to the Internet infrastructure community, online and offline, at conferences, meetings, and events, which led to the formation of the “Save Hosting Coalition.” We launched letter-writing campaigns to explain our deep concerns to legislators in Congress, and those campaigns continued when SOPA/PIPA eventually superseded COICA.
Fortunately, as our group worked to build a social media campaign and to connect with legislators, we found that we were no longer alone. The Consumer Technology Association, known then as the Consumer Electronics Association, had an active lobbying team who invited us to join them in their efforts to convince Congress of the dangers of SOPA/PIPA.
This broader collaboration became a turning point when we realized the crucial nature of our role in the SOPA/PIPA debate. As Internet infrastructure providers, we were best positioned to explain to policymakers how the technology works and that their well-intentioned proposals to fix problems actually would undermine the functioning of the Internet ecosystem. We showed that proposed SOPA/PIPA technical provisions would make it impossible for small and medium-sized Internet infrastructure businesses to continue operating at scale.
Our campaign against SOPA/PIPA culminated in our decisive conversation with Senator Jerry Moran (R-KS) and Reddit co-founder Alexis Ohanian, which helped convince Senator Moran to join Senator Wyden in a bipartisan hold on PIPA in the Senate. This procedural move froze Senate action for a bit, and gave the time for our new friends at Reddit and Fight for the Future to organize the Internet Blackout Day, which shut down 115,000 sites and galvanized public support for stopping the bills. We aided in the coordination of this vital day, which effectively stopped SOPA/PIPA in its tracks.
The power of facts made the difference in our collective victory. Concerned technology companies explained with one voice how the bills would destroy Internet infrastructure operations. Aligning with the CTA enabled a strong and unified advocacy campaign. Consequently, minds changed in Congress because members and staff heard rational arguments and listened to our concerns.
The SOPA/PIPA legislative debate made clear the ongoing threat of uninformed Internet policy. We saw the need for continued educational advocacy from our Internet infrastructure provider vantage point.
Four companies in the Save Hosting Coalition (Rackspace, cPanel, Endurance (now Newfold Digital), and Softlayer) invested to keep our group going. On July 25th, 2012, we formally launched the Internet Infrastructure Coalition (i2Coalition) with 42 members with a mission to ensure that Internet infrastructure providers are at the table helping to solve future Internet policy problems.
We believed then, as we do now, that Internet policy solutions should be scaled appropriately without creating barriers for new, small entrants in the infrastructure layer, be they college students innovating in their dorm rooms or entrepreneurs fulfilling a need. Policymakers must understand the technology underlying all these small digital businesses to avoid laws and regulations that would inadvertently disrupt their functioning, ultimately limiting incentives for future innovation and market entry.
We proudly reflect on our strong work a decade ago to educate Congress about how SOPA/PIPA would impair the Internet ecosystem. That successful fight led to the i2Coalition’s formation, and our story is not unique. The most important outgrowth of the SOPA/PIPA saga turned out to be the creation of permanent organizations like ours, set up to defend Internet innovation.
There will always be a need for more Internet education for legislators and regulators, and there will always be somebody coming out with another SOPA/PIPA-like proposal. The Internet needs permanent voices ready to address both.
As we celebrate the i2Coalition’s 10 year anniversary in 2022, we know our work is always evolving. The Internet infrastructure industry today faces a diverse set of challenges involving security, safety, privacy, and more. We face many of the same intermediary liability focused challenges that we had when we started, particularly while engaging in complex issues such as Section 230 reform in the United States and the Digital Services Act in the EU.
We need to keep learning together and empowering alliances with other like-minded stakeholders, to ensure that an educated appreciation of the nuts and bolts of how the Internet works informs policy making fully. At the i2Coalition we are as excited as ever about the digital future, and look forward to continuing to be the voice for the multitudes of businesses that build the Internet.
Christian Dawson is the Co-Founder of the Internet Infrastructure Coalition (i2Coalition) where he works to make the Internet a better, safer place for the businesses that make up the Cloud.
This Techdirt Greenhouse special edition is all about the 10 year anniversary of the fight that stopped SOPA. On January 26th at 1pm PT, we’ll be hosting a live discussion with Rep. Zoe Lofgren and some open roundtable discussions about the legacy of that fight. Please register to attend.
Filed Under: activism, cloud, copyright, infrastructure, internet infrastructure, lobbying, policy, sopa
The SOPA Fight Reminds Us Of The Internet's Power And Usefulness
from the don't-let-it-get-taken-away dept
Ten years ago this week, I watched my computer screen as much of the Internet slowly switched off. Over a hundred thousand websites, including that of our predecessor organization CEA, were going dark in a last-ditch protest of a House bill called the “Stop Online Piracy Act” (SOPA) and its Senate counterpart, the “Protect IP Act” (PIPA).
These bills were backed by large content companies concerned that the Internet would disrupt their longstanding business models. While we sympathized with their concerns about unauthorized downloading, we could not agree with their proposed solution: allowing content owners to easily “take down” entire websites, without due process or notification, if they claimed that the site hosted unauthorized content.
If these bills had passed, the consequences for the Internet would have been devastating. Any website featuring third-party content, including libraries and community bulletin boards, would have been vulnerable to sudden and permanent removal after a single complaint. Sites would vanish and have little recourse. Bad actors would run rampant, using the SOPA-PIPA process to harass competitors and censor opposing viewpoints.
Opposition to SOPA-PIPA had been slowly growing. A strange-bedfellows coalition ranging from the Electronic Frontier Foundation to the Heritage Foundation was opposing the bills. Artists like Amanda Palmer and OK Go denounced the bills’ impacts on creativity. A group of startup founders, including Alexis Ohanian, Micah Shaffer, and Christian Dawson, walked the Capitol, meeting with legislators, many of whom had never previously been face-to-face with an internet entrepreneur. And at the 2012 CES, Republican Rep. Darrell Issa and Democratic Sen. Ron Wyden stood together and declared they would do anything in their power to stop the bills.
But this opposition, vigorous as it was, shrank in comparison to the bills’ support. SOPA and PIPA were backed by dozens of DC’s biggest players, including the Motion Picture Association, the Recording Industry Association, and the powerful US Chamber of Commerce. SOPA had dozens of Congressional sponsors, including Judiciary Committee Chairman Lamar Smith.
In the Senate, PIPA sailed unanimously through the Judiciary Committee and Majority Leader Reid announced that he planned to bring the bill to the floor for a vote. By normal DC rules, the game was over and the bills were sure to pass.
But the Internet blackout drew public attention, and the tide quickly turned as Americans began calling and emailing their members of Congress. In total, more than 14 million Americans contacted their lawmakers to protest the legislation. I remember sitting in a legislator’s office the morning after the blackout and watching in sincere astonishment as the phone rang off the hook.
The impact was swift, as legislators rushed to take their names off the bills. For the first time, policymakers realized that the Internet wasn’t some fringe domain for computer geeks, it was a central and treasured element of their constituents’ daily lives. Within a week, SOPA and PIPA had been pulled from consideration in the House and Senate.
The death of SOPA/PIPA unleashed a Cambrian explosion of online innovation. Companies like Instagram, Tinder, Slack, Patreon, and thousands of others changed the way we work, play, and live. Anyone who attended CES 2022 could not help but see the extraordinary dynamism and competition that currently exists in the technology industry.
The content industry also thrived once it stopped treating the internet as an enemy and began treating it as an asset. While content companies once declared that “you can’t compete with free,” in the wake of SOPA-PIPA they pivoted to offering well-designed, consumer-friendly services at reasonable prices.
According to the RIAA, U.S. recorded music revenues grew 9.2% in 2020, with 83% of the revenue coming from Internet streaming. The movie industry has seen similar gains, with global streaming video revenue projected to hit $94 billion by 2025. Meanwhile, independent creators used new internet platforms to present their work directly to fans without having to go through gatekeepers or intermediaries.
Most importantly, the post-SOPA-PIPA Internet has proven to be the most impactful communications platform in human history. On May 25, 2020, 17-year-old Darnella Frazier used her smartphone to document the murder of George Floyd by a Minneapolis police officer. Posted to Facebook, her video kicked off an ongoing national conversation on race and injustice.
Similarly, in 2017 women took to the Internet to respond to sexual assault allegations against Hollywood producer Harvey Weinstein and describe their own experiences under the hashtag #MeToo. Widespread media coverage changed the way our society responds to sexual harassment. For the first time, regular people have been empowered to speak to millions on important issues, and they are using the power to change society for the better.
Over the last decade, we have learned many lessons. We have learned that the Internet, while it provides tremendous benefits, is not perfect. That is why we need clear federal guidelines in areas like online privacy and digital currencies that protect consumers and promote innovation.
We have learned that Americans continue to care passionately about the Internet. Over the last two years during COVID, millions have gone online to work, educate their children, access health care, keep in touch with loved ones, and arrange delivery of critical goods. No wonder online companies rank highly in surveys of America’s most-loved brands.
However, the SOPA-PIPA fight is not over. In “Groundhog Day” fashion, threats to the free and open Internet are reemerging. Policymakers are threatening to increase government control over Internet speech, and impose other limitations that would harm online companies and small businesses.
Many of those pushing today’s “anti-tech” narrative are the same disgruntled competitors and legacy industries that engineered SOPA-PIPA. In fact, some broadcasters and content companies are even opposing an eminently qualified FCC nominee, Gigi Sohn, because of her correct and pro-consumer opposition to SOPA-PIPA a decade ago.
Congress is now considering legislation that would eliminate products like Google Docs and Amazon Prime. These services are woven into the lives of millions who rely on them to surmount the difficulties of COVID. If Congress breaks these services, the reaction from voters could make the SOPA-PIPA earthquake look like a mild tremor. Similarly, you could predict a SOPA-PIPA-type backlash if the government places unreasonable restrictions on the 46 million Americans who own digital assets.
A few weeks after SOPA-PIPA died, I was ordering coffee when the barista pointed at the “STOP SOPA” sticker on my laptop. “I emailed my member of Congress about that, and it worked…It was the first time I felt I could actually change things in Washington,” he said.
Thankfully, ten years after SOPA-PIPA, the Internet’s ability to empower American expression and innovation is only just beginning.
Michael Petricone is the Senior VP, Government Affairs, at the Consumer Technology Association.
This Techdirt Greenhouse special edition is all about the 10 year anniversary of the fight that stopped SOPA. On January 26th at 1pm PT, we’ll be hosting a live discussion with Rep. Zoe Lofgren and some open roundtable discussions about the legacy of that fight. Please register to attend.
Filed Under: activism, copyright, innovation, internet, open internet, sopa
10 Years Later: SOPA Protests Were A Turning Point, But Not The Beginning Or The End
from the the-fight-continues dept
The SOPA blackouts of 2012 marked an important milestone in the power of online activism to influence policy at the highest levels, but it would be a mistake to view it as either the start or the end of the struggle it represents. It is still among the most strikingly-effective examples to date, but it built on years of policy work that continues to this day.
Online activism is notoriously poorly preserved, and it rarely produces the salient visuals of offline protests. Massive crowds of people taking part in an online action can’t be photographed extending down city blocks; no hand-painted signs with powerful slogans or sea of faces with resolute determination will become the iconic image representing the moment.
As a result, it’s easier to forget the early Web blackouts of 1996 protesting the passage of the Communications Decency Act, or the Gray Tuesday event of copyright civil disobedience in 2004, to name a few. (I spoke about the legacy of these three events, taken together, at re:publica 2014.)
The SOPA protests provided a counter-example, in part because of both the memorable visuals of the online “blackouts” and the in-person events coordinated in cities around the country. Images of Aaron Swartz, who had been a key organizer against the bill, addressing crowds at a New York rally illustrated articles about the online protests.
As important as the unprecedented scale of the online actions was the reception by the press, the public, and the political sphere. The SOPA blackout represented a moment of online grassroots activism demanding to be taken seriously, and getting the coverage and reception it deserved. Every major news outlet reported on the protests and, as an indicator of its prominence, each of the candidates vying for the Republican nomination for president was asked onstage about SOPA at a January 19 debate — surely a first for a copyright proposal. Their criticism was ample evidence of the cracks in the bill’s inevitability.
One long-term effect of the SOPA blackouts: they seem to have meaningfully shifted, perhaps permanently, the policy environment around copyright in particular. In 2011 and early 2012, SOPA appeared to be inevitable, in part because earlier industry-favored copyright proposals had both passed with near unanimity and withstood challenges that laid their irrationality bare.
After SOPA’s flame-out, it no longer seems like copyright law is something that can be hammered out by industry representatives behind closed doors (admittedly, this shift has corresponded with the rise of tech companies as lobbying giants with a different copyright agenda than the existing players, which has surely played a role). As just one example: In 2011, SOPA was inevitable, but so was an eventual expansion to the Copyright Term Extension Act, continuing the public domain freeze that had been running since 1998. Of course, that never came to pass, and the public domain has grown on January 1 every year since 2019.
That change wasn’t the result of the “war being won” — far from it. Increasing the costs of pushing through copyright policy has mostly shifted the battlegrounds in two major ways.
First, big changes to how copyright gets enforced in the United States happen through private agreements with online platforms. YouTube’s ContentID system already existed in 2012, but the importance of that tool and others like it has increased immensely in the years since. The result is a landscape of platforms that do what Professor Annemarie Bridy has called “DMCA-plus enforcement,” extending the effective contours of copyright without a change in the law.
If there is an upside to this arrangement, it has been that actual copyright law discussions have had the heat turned down slightly, and may have become less of a fact-free zone. It’s hard to play out the counterfactual, but I think the right-to-repair movement and the Music Modernization Act have been beneficiaries of this change.
Second, and perhaps more nefariously, copyright proposals that had been proxies for regulating online speech more broadly have migrated to other areas of the law. Most notably in the past decade, these attacks have focused on section 230 of the Communications Decency Act. In some cases, the overlap is almost comical, like when op-eds pushing for changes cite the wrong law, and the New York Times has to issue a correction. In other moments the effect is more depressing. Watching FOSTA/SESTA skate through to passage, despite all the organizing against it, was a low point for online speech.
In my work with journalists today, copyright continues to be a chokepoint for silencing unfavorable reporting, but it is only one arrow in the quiver of would-be censors. We see police officers attempting to limit the distribution of their statements by playing mainstream music in the background, or right-wing activists issuing takedowns for newsworthy photographs documenting their associations, but we also see frivolous SLAPP suits by elected officials, a dramatic rise in arrests and assaults on journalists, and existential legal threats to entire outlets.
The overwhelming majority of people who are passionate about freedom of expression and access to knowledge online aren’t paid to work on those issues. I have been very lucky that, since 2011, I have been able to focus on these important topics as my job, first at the Electronic Frontier Foundation as a copyright activist, and now as the director of advocacy at the Freedom of the Press Foundation. SOPA was among the very first issues I worked on in this field, and I’ve carried its lessons through the decade of activism that I’ve been fortunate enough to participate in.
Parker Higgins is the director of advocacy at the Freedom of the Press Foundation. From 2011 to 2017, he worked on the activism team at the Electronic Frontier Foundation on copyright and speech issues.
This Techdirt Greenhouse special edition is all about the 10 year anniversary of the fight that stopped SOPA. On January 26th at 1pm PT, we’ll be hosting a live discussion with Rep. Zoe Lofgren and some open roundtable discussions about the legacy of that fight. Please register to attend.
Filed Under: activism, copyright, free speech, sopa
Demanding Progress: From Aaron Swartz To SOPA And Beyond
from the foundations-of-activism dept
It’s a great irony — and an awkward thing to admit — that I’m not sure if the organization of which I’m executive director, Demand Progress, would exist but for SOPA and PIPA (or really their progenitor, COICA).
This month marks not just the 10th anniversary of the SOPA blackout, but also the 9th anniversary of the passing of my partner in the effort to get Demand Progress up and running, Aaron Swartz. Aaron took his own life while facing charges under the Computer Fraud and Abuse Act for allegedly downloading too many articles from the JSTOR academic cataloging service — to which he had a subscription — using MIT’s open network. We hope the organization still upholds the values that governed his work — and he certainly serves as an inspiration for so much of what we do.
Aaron was several years younger than me, but we came to activism from similar perspectives — more or less unreconstructed utilitarianism — and first connected during my unsuccessful campaign for Congress in 2010. A few days after the primary we launched our first couple of online campaigns as Demand Progress, which was originally intended to be a multi-issue progressive populist concern.
While Aaron is above all else remembered for his advocacy for an open internet and intellectual property reform, his priorities increasingly included contesting concentrated corporate power, implementing a more equitable vision for our economy, and opposing war-making. Some observers this fall noted the continued resonance of his final tweet — an entreaty to Treasury to mint a $1 trillion coin as a solution to a debt ceiling impasse.
But a second irony attendant to our founding is that, as we sought to identify a base of activists to support this progressive populist vision, Aaron was promptly pulled back into the firmament of online rights and IP activism. A petition we put forth in opposition to COICA gained hundreds of thousands of signers over a few days, demonstrating a certain void in the online campaigning ecosystem and imbuing the new organization with a sense of purpose — and providing a base of activists we could organize to make a difference.
Everything Demand Progress has become since was built on that foundation.
Over the course of the next year or so we worked together to build a movement to use the open architecture of the internet to save the open architecture of the internet. Aaron had a much more intuitive sense than I did of how to activate and harness the potential energy of online networks.
My time in politics, starting before MySpace went online, had been shaped by a more traditional conception of political organizing – tactics like knocking on doors and phone banking. But my contribution was that I understood the legislative process and had a sense of what politicians recognize as demonstrations of power.
As time passed, kindred spirits — most notably Fight for the Future — emerged. Policy groups like Public Knowledge and EFF also come to mind, and many other organizations undertook the often unsung work of engaging with hundreds of Congressional staffers to attune them to the details of our coalition’s concerns.
It all culminated in the “blackout” of which others here have written: countless thousands of sites and users each serving as beacons, alerting their combined networks of tens of millions of people to the threat at hand, urging them to make their voices heard, and providing them the tools with which to do so. Is it cringey to say that it kind of felt transcendent? Well, it did.
Idealism about the internet’s potential can seem quaint today — even to me — but for most who took part, the SOPA effort was a demonstration of a fundamental, visceral human yearning to connect with one another. You can watch a talk Aaron gave a few months later, where he discusses what all of this meant to him.
We and our allies have wielded the tactics we learned and the relationships we built over the course of the SOPA campaign to agitate for other causes that have helped shape the workings of the internet — for instance in support of net neutrality and against mass surveillance.
Without them it’s unlikely that we’d have secured the strong net neutrality rules that were put in place in 2015 (with the explicit backing of millions of people) only to be repealed by the Trump administration a couple years later (over the will of millions of people). We are likely on the cusp of a new proceeding to reinstitute net neutrality rules, which are overwhelmingly likely to pass because of those broad demonstrations of support — and stand a chance of being longer-lasting.
My second-to-last in-person conversation with Aaron was about how we might fight against suspected abuses of the Patriot Act to spy on a vastly broader universe than was understood by the public — a case Edward Snowden would crack open just a few months later. Demand Progress’s research and lobbying efforts would eventually, during the spring of 2020, prove dispositive of the sunset of several Patriot Act provisions, including that under which the telephonic metadata collection that Snowden revealed was taking place. (Government surveillance practices remain opaque, and there’s every reason to assume nefarious behaviors continue under a variety of other real or imagined authorities.)
And in the years since SOPA, Demand Progress has become an organization with a modicum of influence on the national stage — and has even been able to take on that broader remit we envisioned upon our founding.
Today we work not just to help forward online rights, but also in support of expansive macroeconomic policy, against endless wars, and for regulation of concentrations of corporate power — inclusive of some of the major platforms that were allied with us during the SOPA effort, because we think reforms are needed to bring the internet back in line with a more ideal, horizontalist conception of what it could be.
The Demand Progress team and I are grateful to Techdirt for pulling together this retrospective and inviting us to participate, and for giving us the opportunity to reflect on that exciting and hopeful work so many of us undertook together a decade ago. We’d also like to acknowledge the many people and groups carrying forth causes that Aaron cared about — SecureDrop, CFAA reform, and public access to court records and scientific research, to name just a few.
David Segal is the executive director of Demand Progress and a non-residential fellow at Stanford’s Center for Internet and Society. He is a former Rhode Island State Representative.
This Techdirt Greenhouse special edition is all about the 10 year anniversary of the fight that stopped SOPA. On January 26th at 1pm PT, we’ll be hosting a live discussion with Rep. Zoe Lofgren and some open roundtable discussions about the legacy of that fight. Please register to attend.
Filed Under: aaron swartz, activism, cfaa, sopa, surveillance