john thune – Techdirt

Senators: Gosh, Maybe We Should Accurately Map Broadband Before Deploying $50 Billion In Telecom Subsidies?

from the can't-fix-what-you-can't-measure dept

We’ve noted for years how, despite a lot of political lip service to “bridging the digital divide,” the U.S. still doesn’t truly know where broadband is or isn’t available. Despite spending $400 billion and counting, the FCC has done an abysmal job accurately mapping broadband speeds and availability, or holding monopolies responsible for false coverage claims (or much of anything else).

That means we’ve already spent billions upon billions of dollars in telecom subsidies without truly understanding whether it’s fixing the problem. And courtesy of the COVID relief and infrastructure bills, we’re about to spend another $50 billion to fix a problem we’ve yet to competently measure.

Tasked by Congress, the FCC recently introduced new maps they say provide a more granular, crowdsourced look at broadband access. But critics say the maps remain an inaccurate mess (you can test it yourself here). While the new maps thankfully include the ability to challenge inaccurate data, municipalities and state leaders tell me the entire process is a bit of a hot mess that tends to (surprise) prioritize the interests of the country’s biggest providers.

Enter U.S. Senators Jacky Rosen and John Thune, who last week introduced new legislation that would pause subsidizing broadband deployments until the FCC has broadband mapping sorted out:

“The FCC’s failure to fix their deeply flawed broadband map and the Department of Commerce’s refusal to wait to allocate broadband funding until the map is fixed puts hundreds of millions of dollars in funding for high-speed internet in Nevada at risk,” said Senator Rosen. “My bipartisan bill would ensure the FCC can fix this map before money goes out the door, so that all states receive their fair share of federal dollars to provide communities desperately needed access to high-speed internet.”

Both the American Rescue Plan Act (ARPA) and the Infrastructure Investment and Jobs Act (IIJA) included more than $60 billion in broadband subsidies. Without accurate maps, a lot of that money could be wasted on duplicative projects, or doled out to regional monopolies lying about their coverage to ensure they gobble up subsidies they don’t actually deserve.

The NTIA says it won’t begin allocating the $42.5 billion in IIJA funding until June 30, giving the government a few extra months to get this right. Having covered this sector for twenty years, I’m highly doubtful that they will. While this historic funding will lead to many great investments, the stage is set for some potentially gobsmacking fraud due to unreliable data.

Unfortunately, in a Congress whose top legislative priorities involve hyperventilating about TikTok, it seems unlikely that the bill makes it over the finish line. Though it’s nice to see Senators notice that the problem exists, given DC’s myopic focus on “Big Tech” policy to the exclusion of all other internet policy considerations (like, say, having a functional telecom and media regulator).

Filed Under: bead funding, broadband, broadband mapping, broadband maps, covid relief, digital divide, fcc, gigabit, high speed internet, infrastructure bill, jacky rosen, john thune

The Great TikTok Moral Panic Continues As Senators Thune, Warner Attempt A More Elaborate Ban

from the performative-freak-out dept

Thu, Mar 9th 2023 05:27am - Karl Bode

We’ve noted for a while now how most of the outrage surrounding TikTok isn’t exactly based in factual reality.

There’s no real evidence of the Chinese using TikTok to befuddle American toddlers at scale, and the concerns about TikTok’s privacy issues are bizarrely narrow, with many of the folks proposing a ban seemingly oblivious to the broader problem: namely a lack of data broker oversight and our comical, corruption-fueled failure to pass even a basic U.S. privacy law for the internet era.

Undaunted, Senators Mark Warner and John Thune this week introduced the Restricting the Emergence of Security Threats that Risk Information and Communications Technology (RESTRICT) Act (summary and full bill text), legislation the duo claims will make Americans far more safe and secure by, among other things, eventually, maybe banning TikTok in the United States.

Unlike other proposals that weirdly hyperventilate exclusively about TikTok, Thune and Warner’s proposal claims it will empower the Department of Commerce to more broadly review, prevent, and mitigate “technology transactions” that “pose undue risk to our national security”:

“Today, the threat that everyone is talking about is TikTok, and how it could enable surveillance by the Chinese Communist Party, or facilitate the spread of malign influence campaigns in the U.S. Before TikTok, however, it was Huawei and ZTE, which threatened our nation’s telecommunications networks. And before that, it was Russia’s Kaspersky Lab, which threatened the security of government and corporate devices,” said Sen. Warner. “We need a comprehensive, risk-based approach that proactively tackles sources of potentially dangerous technology before they gain a foothold in America, so we aren’t playing Whac-A-Mole and scrambling to catch up once they’re already ubiquitous.”

Thune and Warner deserve some credit for at least proposing broader solutions instead of freaking out about TikTok exclusively. Still, the bill’s a bit murky, and generally structured to avoid being vulnerable to a legal challenge as a bill of attainder, something likely to plague a recent House GOP legislative proposal focused singularly on banning TikTok.

That said, these efforts are all largely based on a lot of silly fearmongering that doesn’t have much basis in reality. Before he released the bill, Warner stated that one of his key motivations for it was to thwart TikTok from becoming a tool for Chinese propaganda. But again, there’s no evidence that’s actually happening, and Warner’s proposed theoreticals are just kind of silly:

“What worries me more with TikTok is that this could be a propaganda tool,” Warner said. “The kind of videos you see would promote ideological issues.”

Warner said the app feeds Chinese kids more videos about science and engineering than American children, suggesting the app’s content recommendation system is tuned for China’s geopolitical ambitions.

That is to say, Warner couldn’t actually come up with any examples of TikTok being used for Chinese propaganda at scale (because there aren’t any yet), so he effectively made up a claim that the Chinese are intentionally showing Americans fewer science videos to make us stupid, which is just… silly.

Congress’ fixation on TikTok as a theoretical propaganda weapon is amusing coming from a country so increasingly buried in right-wing and corporate propaganda that Americans not only routinely cheer against their own best interests while parroting conspiracy theories, they’re increasingly likely to become radicalized and commit mass murder. Congress doesn’t seem in much of a rush, there.

The other concern about TikTok, that the Chinese government will use TikTok data to spy on Americans, is obviously more valid. Yet proposals to ban TikTok — even elaborate ones like the legislation proposed by Thune and Warner — still aren’t getting at the real heart of the problem.

For decades, we’ve effectively let telecoms, app makers, OEMs, and every other company that touches the internet hoover up every last shred of consumer data. Those companies then consistently not only fail to secure this data, they sell access to it to a rotating crop of global data brokers, which in turn sell access to everything from your daily movement habits to your mental health issues.

It’s trivial for the Chinese, Russian, or American governments to purchase and abuse this data, even if you banned TikTok (and every single other Chinese app in existence) tomorrow.

But you’ll notice that the lion’s share of the Congressfolk who’ve dropped absolutely everything to hyperventilate about TikTok don’t much care about that; an attempt to regulate data brokers or implement meaningful penalties for corporations (and executives) that over-collect data and then fail to secure it might impact the revenues of U.S. companies, and you simply can’t have that.

Freaking out about TikTok is far more politically safe than addressing the bigger problem. It lets you pretend you’re being “tough on China” and genuinely care about national security and consumer privacy, even if your stubborn refusal to hold data brokers accountable or pass a privacy law undermines all the national security goals you claim to be keen on addressing.

Filed Under: chinese surveillance, commerce department, john thune, mark warner, national security, privacy, propaganda, restrict act, security, social media, tiktok ban
Companies: tiktok

Why The GOP Wants To Break Your Spam Filter: GOP Candidate Tricked Gullible Voters Into Funding Him With Misleading Spam Emails

from the the-gop-takes-you-all-for-suckers dept

Over the last few months we’ve been covering this bizarre story of how Republican politicians, pushed by their preferred spamming provider (which misrepresented a study on how email providers treat political spam), have been falsely claiming that Google is “censoring” their political emails. They’ve also been pushing a law that would require email providers not to label politician emails as spam. In response, Google caved a bit and proposed a new offering that would whitelist political campaigns, keeping them out of the spam filter (but including a button at the top asking the recipient if they want to unsubscribe from that mailing). Google asked the Federal Election Commission (FEC) to bless this idea (and specifically to note it wouldn’t be deemed to violate any campaign finance laws).

As we just noted, the public absolutely hates this idea, and their comments to the FEC reflect a visceral hatred towards (1) spam, and (2) politicians who seek to write laws that exempt themselves from laws against spamming.

In other words, this is a deeply unpopular idea that Republicans are pushing because their own digital marketing agency is so bad at crafting emails that don’t look like spam that they have to resort to special laws to keep spamming you.

Indeed, as we noted in some of our coverage, Republican politicians have a long history of especially spam-like emails, which seem designed specifically to dupe gullible people — often older people — out of money.

And now there’s a new story suggesting this is getting even worse. A Republican candidate for Congress in Florida, Erick Aguilar, has raised a tremendous amount of money by sending spam emails pretending to be campaign emails from more popular politicians: Donald Trump and Ron DeSantis.

Seriously.

In his pursuit of Florida’s 4th Congressional District, Aguilar has used WinRed, a popular platform Republicans employ to process campaign contributions, to send a flurry of fundraising emails. But the solicitations did not mention Aguilar’s campaign or his leading competitor in the Aug. 23 primary, state Sen. Aaron Bean, who has the support of much of the state’s GOP establishment.

Instead, the messages were written in a way that suggested donations would actually go toward more prominent GOP politicians, including the former president, the governor or Ohio Rep. Jim Jordan.

“Governor DeSantis is always fighting back against Corrupt Left,” read one email that came under a logo using DeSantis’ name. “No matter how bad this country is the Fake News media and Biden Admin are OBSESSED with that [sic] Florida is doing.”

It added: “It is time to help America’s #1 Governor. Can we count on you to support DeSantis?”

It appears that these tactics have worked, with gullible Trump/DeSantis supporters filling Aguilar’s campaign coffers… without even realizing it.

The move appeared to have worked — particularly among retired older donors from across the country. Some of Aguilar’s WinRed emails, such as the one about DeSantis, went out in November, just before the Jacksonville-based candidate’s campaign saw nearly 16 times as much cash come in in December, campaign finance records show. Yet some of the people who sent contributions had no idea they were giving to Aguilar.

“I don’t know that name,” Pat Medford, an 88-year-old from Minnesota, said in an interview when asked about her donations to Aguilar. “I, of course, give to President Trump and DeSantis, but that’s really it. I don’t give to many others, and that name [Aguilar] is not familiar to me.”

Despite not knowing him, records show Medford gave 30 separate contributions to Aguilar’s campaign through WinRed, totaling more than $1,000.

So, let’s be clear here. Under the GOP bill proposed by John Thune and Google’s proposed pilot program, it appears these emails could not be filtered as spam. They are coming from a legitimate candidate for federal office. That they are misleading and extraordinarily spammy doesn’t much matter to these Republicans, it seems. That these Republicans have to resort to such scammy techniques to dupe gullible voters out of so much money doesn’t matter.

All that seems to matter is that they want more cash from their base, and they consider this the best way to keep Google from actually protecting people.

Filed Under: donald trump, dupes, erick aguilar, gop, john thune, ron desantis, scams, spam, spam filters
Companies: google, targeted victory

Ridiculous Republican Senators Introduce Law To Say Political Emails Can’t Be Filtered As Spam

from the the-party-of-more-spam-for-everyone dept

The latest in stupid, unconstitutional, performative, nonsense legislation from Republicans comes from Senator John Thune, and it would break your email spam filters. It’s called the “Political Bias in Algorithm Sorting Emails Act of 2022” and it’s possibly even dumber than it sounds.

First, this is all based on a bogus, cooked up, deliberately misinterpreted-by-people-who-know-better controversy. We wrote about this a couple months ago. Researchers at North Carolina State University released a preprint of a study about email spam filtering during the 2020 election. They set up a variety of email accounts, and signed up for political mailings. The study did find that Gmail’s spam filter was more likely to flag Republican political mailings as spam, but found the opposite was true of Yahoo Mail and Microsoft Outlook, which flagged more Democratic politicians’ emails as spam than Republicans.

Of course, Democrats didn’t freak out about this. Only Republicans did, egged on by a disingenuous political trickster, who tried to make this into a big deal, and was aided by Fox News and other disingenuous entities, who turned it into a thing — even to the point of some Republicans filing a laughable complaint with the Federal Election Commission trying to argue that Google was giving an unfair advantage to Democrats.

The authors of the original study, for what it’s worth, appear to be horrified about how their study is being abused by political hacks.

“Gmail isn’t biased like the way it’s being portrayed,” [study author Muhammad Shahzad] said. “I’m not advocating for Gmail or anything. I’m just stating that when we take the observation out of a study, you should take all of the observations, not just cherry-pick a few and then try to use them.”

Furthermore, Shahzad noted that the part of the study being pointed out only applied to Gmail accounts where users did not express their own preferences. Once users added in their own preferences, the impact for Gmail effectively disappeared:

Shahzad said while the spam filters demonstrated political biases in their “default behavior” with newly created accounts, the trend shifted dramatically once they simulated having users put in their preferences by marking some messages as spam and others as not.

“What we saw was after they were being used, the biases in Gmail almost disappeared, but in Outlook and Yahoo they did not,” he said.

In other words, there’s pretty strong evidence here that there’s nothing nefarious going on. Because, seriously, who would actually program a spam filter to try to hide one party’s political spam? The reason so much goes to spam is because many users treat the non-stop bombardment by political campaigns as spam. Because it’s often hellishly spammy.
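To make the study’s finding concrete, here’s a minimal, purely hypothetical sketch (not any real provider’s code) of why user feedback swamps any “default” bias in a spam filter: a per-user preference map, learned from the user marking messages, overrides whatever the provider-wide model decides. The function names and the 0.5 threshold are illustrative assumptions.

```python
def classify(sender: str, global_spam_score: float,
             user_marks: dict[str, bool]) -> str:
    """Route a message to 'spam' or 'inbox'.

    user_marks maps sender -> True if this user marked them as spam,
    False if the user marked them as not-spam.
    """
    if sender in user_marks:
        # The user's own markings always win over the default behavior,
        # which is roughly what the study observed once accounts had
        # expressed preferences.
        return "spam" if user_marks[sender] else "inbox"
    # No preference recorded: fall back to the provider-wide model.
    return "spam" if global_spam_score > 0.5 else "inbox"
```

Once a user has marked a campaign’s mail either way, the “default behavior” the study measured simply stops applying to that sender for that user.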

Anyway, this bill would now effectively require email providers to whitelist all political campaigns from spam filters, unless each user directly calls the emails spam:

In General–It shall be unlawful for an operator of an email service to use a filtering algorithm to apply a label to an email sent to an email account from a political campaign unless the owner or user of the account took action to apply such a label.

The bill would also create a privacy nightmare, in that it requires email providers to release transparency reports detailing how many political campaign emails were flagged as spam. But that would require the email services to snoop on your emails. The transparency report would also require the email providers to designate how many Democratic campaign emails were filtered as spam, and how many Republican campaign emails were filtered as spam. So, apparently third parties are shit out of luck.

Even worse, the bill would require any email provider to respond to frequent demands from political campaigns about how often their emails were flagged as spam.

This is performative, unconstitutional nonsense on multiple levels. Even more hilarious: in announcing the bill, Senator John Thune gave a talk about how the Republicans’ “vision” for governing contrasted with the Democrats’, because the GOP doesn’t want “more big government” but rather “allowing free markets to work” and “a light regulatory touch”… and then he used that as the backdrop for introducing this intrusive, big government bill that would allow the government to block the free market of spam filters, in order to give politicians special rights to avoid your spam filter, and to force businesses to file tons of busywork reports documenting their spam filters.

In short, what the Republicans are actually standing for here is “more spam for everyone” and not allowing spam filters to work properly.

Senator, get your dirty corrupt hands off my spam filter.

Filed Under: algorithms, email, john thune, performative nonsense, spam, spam filtering
Companies: google, microsoft, yahoo

The Latest Version Of Congress's Anti-Algorithm Bill Is Based On Two Separate Debunked Myths & A Misunderstanding Of How Things Work

from the regulating-on-myths dept

It’s kind of crazy how many regulatory proposals we see appear to be based on myths and moral panics. The latest, just introduced is the House version of the Filter Bubble Transparency Act, which is the companion bill to the Senate bill of the same name. Both bills are “bipartisan,” which makes it worse, not better. The Senate version was introduced by Senator John Thune, and co-sponsored by a bevy of anti-tech grandstanding Senators: Richard Blumenthal, Jerry Moran, Marsha Blackburn, Brian Schatz, and Mark Warner. The House version was introduced by Ken Buck, and co-sponsored by David Cicilline, Lori Trahan, and Burgess Owens.

While some of the reporting on this suggests that the bill “targets” algorithms, it only does so in the stupidest, most ridiculous ways. The bill is poorly drafted, poorly thought out, and exposes an incredible amount of ignorance about how any of this works. It doesn’t target all algorithms — and explicitly exempts search based on direct keywords, or algorithms that try to “protect the children.” Instead, it has a weird attack on what it calls “opaque algorithms.” The definition itself is a bit opaque:

The term “opaque algorithm” means an algorithmic ranking system that determines the order or manner that information is furnished to a user on a covered internet platform based, in whole or part, on user-specific data that was not expressly provided by the user to the platform for such purpose.

The fact that it then immediately includes an exemption for “age-appropriate content filters” only hints at some of the problems with this bill — which starts with the fact that there are all sorts of reasons why algorithms recommending things to you based on more information than you provide directly might be kinda useful. For example, a straightforward reading of this bill would mean that no site can automatically determine you’re visiting with a mobile device and format the page accordingly. After all, that’s an algorithmic system that uses information not expressly provided by the user in order to present information to you ranked in a different way (for example, moving ads to a different spot). What’s more, “inferences about the user’s connected device” are explicitly excluded from being used even if they are based on data expressly provided by the user — so even allowing a user to set a preference for their device type, and serve optimized pages based on that preference, would appear to still count as an “opaque algorithm” under the bill’s definitions. You could argue that a mobile-optimized page is not necessarily a “ranking” system, except the bill defines “algorithmic ranking system” as “a computational process … used to determine the order or manner that a set of information is provided to a user.” At the very least, there are enough arguments either way that someone will sue over it.
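To see how ordinary that kind of logic is, here’s a hypothetical sketch (not any real site’s code; the section names and User-Agent markers are illustrative assumptions) of the mobile-detection pattern described above, which the bill’s definition would arguably sweep in: the page’s sections are reordered based on the User-Agent header, information the user never “expressly provided” to the site for that purpose.

```python
# Substrings commonly found in mobile browsers' User-Agent strings.
MOBILE_MARKERS = ("Mobile", "Android", "iPhone", "iPad")

def order_page_sections(user_agent: str) -> list[str]:
    """Choose a page-section order from device info inferred off the request.

    This is an "algorithmic ranking system" under the bill's definition:
    it determines the order information is furnished based on
    user-specific data (the User-Agent) the user never expressly provided.
    """
    if any(marker in user_agent for marker in MOBILE_MARKERS):
        # Mobile layout: push ads below the article content.
        return ["nav", "article", "ads", "footer"]
    # Desktop layout: ads ahead of the article, e.g. in a sidebar slot.
    return ["nav", "ads", "article", "footer"]
```

Nothing here is nefarious; it’s the kind of responsive-design plumbing nearly every site does, which is exactly why the bill’s “opaque algorithm” definition is so overbroad.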

Similarly, lots of media websites offer you a certain number of free articles before you hit their register or paywall — and again, that’s based on information not expressly provided by the user — meaning that such a practice might be in trouble (which will be fun to watch when media orgs who use those kinds of paywall tricks but are cheering this on as an “anti-big-tech” measure discover what they’re really supporting).

The point here is that lots of algorithm/ranking systems that work based on information not expressly provided by the user are actually doing important things that would be missed if they suddenly couldn’t be done any more.

And, even if the bill were clarified in a bill-of-attainder fashion to make it clear it only applies to social media news feeds, it still won’t do much good. Both Facebook and Twitter already let you set up a chronological feed if you want it. But, more to the point, the very rationale behind this bill makes no sense and is not based in reality.

Cicilline’s quote about the bill demonstrates just how ignorant he is of how all of this stuff actually works:

“Facebook and other dominant platforms manipulate their users through opaque algorithms that prioritize growth and profit over everything else. And due to these platforms’ monopoly power and dominance, users are stuck with few alternatives to this exploitative business model, whether it is in their social media feed, on paid advertisements, or in their search results.”

Except… as already noted, you can already turn off the algorithmic feed in Facebook, and as the Facebook Papers just showed, when Facebook experimented with turning off the algorithmic rankings in its newsfeed it actually made the company more money, not less.

Also, the name of the bill is based on the idea of “filter bubbles” and many of the co-sponsors of the bill claim that these websites are purposefully driving people deeper into these “filter bubbles.” However, as we again just recently discussed, new research shows that social media tends to expose people to a wider set of ideas and viewpoints, rather than more narrowly constraining them. In fact, they’re much more likely to face a “filter bubble” in their local community than by being exposed to the wider world through the internet and social media.

So, in the end, we have a well-hyped bill based on the (false) idea of filter bubbles and the (false) idea of algorithms only serving corporate profit, which would require websites to give users a chance to turn off an algorithm — which they already allow, and which would effectively kill off other useful tools like mobile optimization. It seems like the only purpose this legislation actually serves to accomplish is to let these politicians stand up in front of the news media and claim they’re “taking on big tech!” and smile disingenuously.

Filed Under: algorithms, antitrust, big tech, david cicilline, filter bubble transparency act, filter bubbles, john thune, ken buck, opaque algorithms, ranking, richard blumenthal

PACT Act Is Back: Bipartisan Section 230 'Reform' Bill Remains Mistargeted And Destructive

from the second-verse,-same-as-the-first dept

Last summer we wrote about the PACT Act from Senators Brian Schatz and John Thune — one of the rare bipartisan attempts to reform Section 230. As I noted then, unlike most other 230 reform bills, this one seemed to at least come with good intentions, though it was horribly confused about almost everything in actual execution. If you want to read a truly comprehensive takedown of the many, many problems with the PACT Act, Prof. Eric Goldman’s analysis is pretty devastating and basically explains how the drafters of the bill tried to cram in a bunch of totally unrelated things, and did so in an incredibly sloppy fashion. As Goldman concludes:

This bill contains a lot of different policy ideas. It adds multiple disclosure obligations, regulates several aspects of sites’ editorial processes, makes three different changes to Section 230, and asks for two different studies. Any one of these policy ideas, standing alone, might be a significant policy change. But rather than proposing a narrow and targeted solution to a well-identified problem, the drafters packaged this jumble of ideas together to create a broad and wide-ranging omnibus reform proposal. The spray-and-pray approach to policymaking betrays the drafters’ lack of confidence that they know how to achieve their goals.

Daphne Keller also has a pretty thorough explanation of problems in the original — noting that the bill contains some ideas that seem reasonable, but often seems sorely lacking in important details or recognition of the complexity involved.

And, to their credit, staffers working on the bill did seem to take these and other criticisms at least somewhat seriously. They reached out to many of the critics of the PACT Act (including me) to have fairly detailed conversations about the bill, its problems, and other potential approaches. Unfortunately, in releasing the new version today, it does not appear that they took many of those criticisms to heart. Instead, they took the same basic structure of the bill and just played around at the margins, leaving the new bill a problematic mess, though a slightly less problematic mess than last year’s version.

The bill still suffers from the same point that Goldman made originally. It throws a bunch of big (somewhat random) ideas into one bill, with no clear explanation of what problem it’s actually trying to solve. So it solves for things that are not problems, and calls other things problems that are not clearly problems, while creating new problems where none previously existed. That’s disappointing to say the least.

Like the original, the bill requires that service providers publish an “Acceptable Use Policy,” and then puts in place a convoluted complaint and review process, along with transparency reporting on this. This entire section demonstrates the fundamental problem with those writing the PACT Act — and it’s a problem that I know people explained to them: it treats this issue as if it’s the same across basically every website. But, it’s not. This bill will create a mess for a shit ton of websites — including Techdirt. Forcing every website that accepts content from users to post an “acceptable use policy” leads us down the same stupid road as requiring every website to have a privacy policy. It’s a nonsensical approach — because the only reasonable way to write up such a policy is to keep it incredibly broad and vague, to avoid violating it. And that’s why no one reads them or finds them useful — they only serve as a potential way to avoid liability.

And writing an “acceptable use” policy that “reasonably informs users about the types of content that are allowed on the interactive computer service” is a fool’s errand. Because what is and what is not acceptable depends on many, many variables, including context. Just by way of example, many websites famously felt differently about having Donald Trump on their platform before and after the January 6th insurrection at the Capitol. Do we all need to write into our AUPs that such-and-such only applies if you don’t encourage insurrection? As we’ve pointed out a million times, content policy involves constant changes to your policies as new edge cases arise.

People who have never done any content moderation seem to assume that most cases are obvious and maybe you have a small percentage of edge cases. But the reality is often the opposite. Nearly every case is an edge case, and every case involves different context or different facts, and no “Acceptable Use Policy” can possibly cover that — which is why big companies are changing their policies all the time. And for smaller sites? How the fuck am I supposed to create an Acceptable Use Policy for Techdirt? We’re quite open with our comments, but we block spam, and we have our comment voting system — so part of our Acceptable Use Policy is “don’t write stuff that makes our users think you’re an asshole.” Is that what Schatz and Thune want?

The bill then also requires this convoluted notice-takedown-appeal process for content that violates our AUP. But how the hell are we supposed to do that when most of the moderation takes place by user voting? Honestly, we’re not even set up to “put back” content if it has been voted trollish by our community. We’d have to re-architect our comments. And, the only people who are likely to complain… are the trolls. This would enable trolls to keep us super busy having to respond to their nonsense complaints. The bill, like its original version, requires “live” phone-in support for these complaints unless you’re a “small business” or an “individual provider.” But, the terms say that you’re a small business if you “received fewer than 1,000,000 unique monthly visitors” and that’s “during the most recent 12-month period.” How do they define “unique visitors”? The bill does not say, and that’s just ridiculous, as there is no widely accepted definition of a unique monthly visitor, and every tracking system I’ve seen counts it differently. Also, does this mean that if you receive over 1 million visitors once in a 12-month period you no longer qualify?

Either way, under this definition, it might mean that Techdirt no longer qualifies as a small business, and there’s no fucking way we can afford to staff up a live call center to deal with trolls whining that the community voted down their trollish comments.

This bill basically empowers trolls to harass companies, including ours. Why the hell would Senator Schatz want to do that?!?

The bill also requires transparency reports from companies regarding the moderation they do, though it says they only have to come out twice a year instead of four times. As we’ve explained, transparency is good, and transparency reports are good — but mandated transparency reports are a huge problem.

For both of these, it’s unclear what exactly is the problem that Schatz and Thune think they’re solving. The larger platforms — the ones that everyone talks about — basically do all of this already. So it won’t change anything for them. All it will do is harm smaller companies, like ours, by putting a massive compliance burden on us, accomplishing nothing but… helping trolls annoy us.

The next big part of the bill involves “illegal content.” Again, it’s not at all clear what problem this is solving. The issue that the drafters of the bill would likely highlight is that some argue that there’s a “loophole” in Section 230: if something is judged to be violating a law, Section 230 still allows a website to keep that content up. That seems like a problem… but only if you ignore the fact that nearly every website will take down such content. The “fix” here seems only designed to deal with the absolute worst actors — almost all of which have already been shut down on other grounds. So what problem is this actually solving? How many websites are there that won’t take down content upon receiving a court ruling on its illegality?

Also, as we’ve noted, we’ve already seen many, many examples of people faking court orders or filing fake defamation lawsuits against “John Does” who magically show up the next day to “settle” in order to get a court ruling that the content violated the law. Enabling more such activity is not a good idea. The PACT Act tries to handwave this away by giving the companies 4 days (in the original version it was 24 hours) to investigate and determine if they have “concerns about the legitimacy of the notice.” But, again, that fails to take reality into account. Courts have no realistic time limit on adjudicating legality, but websites will have to review every such complaint in 4 days?!

The bill also expands the exemptions for Section 230. Currently, federal criminal law is exempt, but the bill would expand that to federal civil law as well. This is to deal with complaints from government agencies like the FTC and HUD and others who worried that they couldn’t take civil action against websites due to Section 230 (though, for the most part, the courts have held that 230 is not a barrier in those cases). But, much more problematic is that it extends the exemption for federal law to state Attorneys General to allow them to seek to enforce those laws if their states have comparable laws. That is a potentially massive change.

State AGs have long whined about how Section 230 blocks them from suing sites — but there are really good reasons for this. First of all, state AGs have an unfortunate history of abusing their position to basically shake down companies that haven’t broken any actual law, but where they can frame them as doing something nefarious… just to get headlines that help them seek higher office. Giving them more power to do this is immensely problematic — especially when you have industry lobbyists who have capitalized on the willingness of state AGs to act this way, and used it as a method for hobbling competitors. It’s not at all clear why we should give state AGs more power over random internet companies, when their existing track record on these issues is so bad.

Anyway, there is still much more in the bill that is problematic, but on the whole this bill repeats all of the mistakes of the first — even though I know that the drafters know that these demands are unrealistic. The first time may have been due to ignorance, but this time? It’s hard to take Schatz and Thune seriously on this bill when it appears that they simply don’t care how destructive it is.

Filed Under: acceptable use policy, brian schatz, civil law, intermediary liability, john thune, liability, section 230, transparency reports

FTC Commissioners Are Upset About Section 230; Though It's Not At All Clear Why

from the guys,-really? dept

Another day, another bunch of nonsense about Section 230 of the Communications Decency Act. The Senate Commerce Committee held an FTC oversight hearing yesterday, with all five commissioners attending via video conference (kudos to Commissioner Rebecca Slaughter who attended with her baby strapped to her — setting a great example for so many working parents who are struggling with working from home while also having to manage childcare duties!). Section 230 came up a few times, though I’m perplexed as to why.

Senator Thune, who sponsored the problematic PACT Act that would remove Section 230 immunity for civil actions brought by the federal government, asked a leading question to FTC Chair, Joe Simons, that was basically “wouldn’t the PACT Act be great?” and Simons responded oddly about how 230 was somehow blocking their enforcement actions (which is just not true).

Senator Thune: Chairman Simons, as you know, reforming Section 230 of the Communications Decency Act has been hotly debated here in Congress. Section 230 is the law that prevents social media platforms, like Facebook and Twitter, from being sued for content that users post on their platforms. I’ve introduced a bi-partisan bill with Senator Schatz on this issue, known as the Platform Accountability and Consumer Transparency Act (the PACT Act), which among other things would stipulate that the immunity provided by Section 230 does not apply to civil enforcement actions brought by the federal government. The DOJ recommended this particular provision in its recently published list of recommendations for reforming Section 230. My question is how would consumers benefit from reforming Section 230 to ensure that the immunity provided by Section 230 does not apply to civil enforcement actions brought by the federal government, such as the FTC.

Simons: Thank you Senator. So we have a number of instances… it’s actually fairly common for us to go into court and have a defense put on us relating to Section 230. So, it would be very helpful for us to avoid having to deal with that and allow us the ability to go not only after the platform participants, but, in the right circumstances, the platform itself.

There are so many issues with this. First, he doesn’t actually answer the question. Thune asked him how it would benefit consumers, but Simons answered how it would benefit the FTC. While the FTC might like to argue otherwise, those two things are not the same. Second, what a nonsense question and answer. The point of Section 230 is to protect platforms from being held liable for actions of their users — so why would it make sense for the FTC to ever go after the platform in those cases? Third, it’s difficult to think of any case where (contrary to what Simons claims…) Section 230 ever got in the way of an FTC enforcement action. Indeed, back in 2016 we had a story showing the exact opposite. The 2nd Circuit appeals court more or less said that the FTC gets to ignore Section 230. We found that problematic at the time, but Simons (and Thune) seem to think they just need more of that.

Meanwhile, it’s not clear there’s a real split among Commissioners. Simons, the chair, is a Republican. Commissioner Rohit Chopra was also asked about Section 230 and also gave a bizarre answer. This was in response to some questions asked by Senator Wicker (who went on a bizarrely uninformed anti-Section 230 floor rant earlier this week). He first asked Simons about whether or not the FTC had a role in enforcing Section 230, and also about doing anything in regards to the President’s executive order on 230. Unlike FCC chair Ajit Pai, who caved in to the President’s unconstitutional order and started an inquiry, Simons at least pointed out that (despite what the executive order says about the FTC) he sees no role for it:

Wicker: Let’s talk about the FTC’s role in overseeing the enforcement of Section 230 of the Communications Decency Act, and in particular, President Trump’s Executive Order in May, on preventing online censorship. Specifically, section four of this order calls on the FTC to take action against online platforms that restrict speech in a manner inconsistent with their terms of service. What is your view, Mr. Chairman, on the FTC’s responsibilities under the executive order? And have you seen any examples of the behavior described in the order and taken any action under your authority so far?

Simons: Thank you, Mr. Chairman. We haven’t taken any action according to the executive order. We get complaints from a wide variety of sources. From the public, from Congress, from competitors, from people in industry, from consumer watchdogs. And it’s very important that we get those complaints and we pay attention to them. Lots of complaints have come from members of this Committee, and we’re very thankful to them that you provide us with such thoughtful complaints.

We’re an independent agency so we review all of them independently. We have jurisdiction over commercial speech — particularly on deceptive and unfair and then some other statutes. So we look to see whether the complaints are subject to unfairness… or whether they’re within our authority as I described. Our authority focuses on commercial speech, not political content curation.

If we see complaints that are not within our jurisdiction, then we don’t do anything. If we see complaints that are, we take a closer look, and figure out whether there’s a violation. And then we determine whether it’s appropriate for us to act.

Wicker: So you don’t view political speech as within your jurisdiction?

Simons: Correct.

Wicker: So if the public and members of the Senate are concerned about online platforms like Twitter and Facebook being inconsistent in the way they restrict political speech, you do not view that as within the purview of your statutory responsibilities. And therefore, the executive order does not instruct you in that specific area? Is that correct?

Simons: Yes. For political content curation. Yes.

This line of questioning was already silly enough, but at least, unlike Pai, Simons was willing to say “hey, that’s outside of our jurisdiction.” But Wicker’s line of questioning is silly in its own way. There’s no legal requirement that platforms treat different political speech equally. And it would violate the 1st Amendment if the law did.

From there, Wicker goes on to one of the Democratic Commissioners, Chopra, who doesn’t seem to like 230 either.

Chopra: Putting aside the executive order, the issue of Section 230 is one where… of great concern, and I think there’s growing bipartisan consensus that it has been abused. We see, whether it comes to counterfeit and defective goods, and the unlevel playing field between online platforms and brick and mortar stores. And in general, I think the scrutiny is warranted when it comes to technology platforms abusing any liabilities and public privileges, and using that as regulatory arbitrage.

I think many of these platforms do have too much power to dictate certain policies and regulations, and I don’t want to see them continue, in my view, to overuse and abuse the legal immunities that Congress has provided, and I think we need to take a hard look at that. Particularly when it comes to the use of surveillance-based behavioral advertising. I think that business model is inconsistent with the origins of Section 230. Section 230 is supposed to safeguard and promote speech. It’s not supposed to prioritize certain types of things over others based on what makes those companies more money.

This is also wrong and misguided on many points. First of all, you don’t “abuse” an immunity granted by Congress when you use it as intended — which is to avoid liability for 3rd party content and to make content moderation decisions. Second: regarding counterfeit or defective goods, counterfeit goods are generally a trademark issue which is entirely exempt from Section 230. You’d think that Chopra would know this? Indeed, there was a huge lawsuit regarding eBay and counterfeit goods that I’m sure he does know about — which shows that the issue is not a Section 230 one.

Also, every major platform already has a massive operation trying to fight counterfeit and defective goods — totally unrelated to Section 230. They do so because they want their consumers to be happy.

Third, there is no “unlevel playing field.” Section 230 protects all websites — including those of “brick and mortar stores.” So it’s a weird comparison to make.

Finally, as we were just discussing, it’s unclear what behavioral advertising has to do with 230. Section 230 is unrelated to business models — and having an advertising-based business model is not “inconsistent with the origins of Section 230.” Section 230 has allowed a wide variety of platforms to exist, many of them precisely because they have relied on Section 230 protections to enable much broader consumer speech.

Once again, it would be nice if someone in our government actually understood the law before commenting on it. Unfortunately, it appears we’re not getting that from the FTC.

Filed Under: civil enforcement, counterfeits, ftc, intermediary liability, joe simons, john thune, oversight, pact act, roger wicker, rohit chopra, section 230

Another Day, Another Bad Bill To Reform Section 230 That Will Do More Harm Than Good

from the no-bad dept

Last fall, when it first came out that Senator Brian Schatz was working on a bill to reform Section 230 of the Communications Decency Act, I raised questions publicly about the rumors concerning the bill. Schatz insisted to me that his staff was good, and when I highlighted that it was easy to mess this up, he said I should wait until the bill is written before trashing it:

Feel free to trash my bill. But maybe we should draft it, and then you should read it?

— Brian Schatz (@brianschatz) September 13, 2019

Well, now he’s released the bill and I am going to trash it. I will say that unlike most other bills we’ve seen attacking Section 230, I think that Schatz actually does mean well with this bill (entitled the “Platform Accountability and Consumer Transparency Act” or the “PACT Act” and co-authored with Senator John Thune). Most of the others are foolish Senators swinging wildly. Schatz’s bill is just confused. It has multiple parts, but let’s start with the dumbest part first: if you’re an internet service provider you not only need to publish an “acceptable use policy,” you have to set up a call center with live human beings to respond to anyone who is upset about user moderation choices. Seriously.

subject to subsection (e), making available a live company representative to take user complaints through a toll-free telephone number during regular business hours for not fewer than 8 hours per day and 5 days per week;

While there is a small site exemption, at Techdirt we’re right on the cusp of the definition of a small business (one million monthly unique visitors – and we have had many months over that, though sometimes we’re just under it as well). There’s no fucking way we can afford or staff a live call center to handle every troll who gets upset that users voted down his comment as trollish.

Again, I do think Schatz’s intentions here are good — they’re just not based in the real world of anyone who’s ever done any content moderation ever. They’re based in a fantasy world, which is not a good place from which to make policy. Yes, many people do get upset about the lack of transparency in content moderation decisions, but there are often reasons for that lack of transparency. If you detail out exactly why a piece of content was blocked or taken down, then you get people trying to (1) litigate the issue and (2) skirt the rules. As an example, if someone gets kicked off a site for using a racist slur, and you have to explain to them why, you’ll see them argue “that isn’t racist” even though it’s a judgment call. Or they’ll try to say the same thing using a euphemism. Merely assuming that explaining exactly why you’ve been removed will fix problems is silly.

And, of course, for most sites the call volume would be overwhelming. I guess Schatz could rebrand this as a “jobs” bill, but I don’t think that’s his intention. During a livestream discussion put on by Yale where this bill was first discussed, Dave Willner (who was the original content policy person at Facebook) said that this requirement for a live call center to answer complaints was (a) not possible and (b) it would be better to just hand out cash to people to burn for heating, because that’s how nonsensical this plan is. Large websites make millions of content moderation decisions every day. To have to answer phone calls with live humans about that is simply not possible.

And that’s not all that’s problematic. The bill also creates a 24 hour notice-and-takedown system for “illegal content.” It seems to be more or less modeled on copyright’s frequently abused notice-and-takedown provisions, but with a 24-hour ticking time bomb. This has some similarities to the French hate speech law that was just tossed out as unconstitutional, with a key difference: one element of a notification of “illegal content” is a court ruling on the illegality.

Subject to subsection (e), if a provider of an interactive computer service receives notice of illegal content or illegal activity on the interactive computer service that substantially complies with the requirements under paragraph (3)(B)(ii) of section 230(c) of the Communications Act of 1934 (47 U.S.C. 230(c)), as added by section 6(a), the provider shall remove the content or stop the activity within 24 hours of receiving that notice, subject to reasonable exceptions based on concerns about the legitimacy of the notice.

The “notice requirements” then do include the following:

(I) A copy of the order of a Federal or State court under which the content or activity was determined to violate Federal law or State defamation law, and to the extent available, any references substantiating the validity of the order, such as the web addresses of public court docket information.

This is yet another one of those ideas that sounds good in theory, but runs into trouble in reality. After all, this was more or less the position that most large companies — including both Google and Facebook — took in the past. If you sent them a court ruling regarding defamation, they would take the content down. And it didn’t take long for people to start to game that system. Indeed, we wrote a whole series of posts about “reputation management” firms that would file sketchy lawsuits.

The scam worked as follows: file a real lawsuit against a “John or Jane Doe” claiming defamation. Days later, have some random (possibly made-up) person “admit” to being the Doe in question, admit to the “defamation” and agree to a “settlement.” Then get the court to issue an order on the “settled” case with the person admitting to defamation. Then, send that court order to Google and Facebook to take down that content. And this happened a lot! There were also cases of people forging fake court documents.

In other words, these all sound like good ideas in theory, until they reach the real world, where people game the system mercilessly. And putting a 24 hour ticking time clock on that seems… dangerous.

Again, I understand the thinking behind this bill, but contrary to Schatz’s promise of having his “good” staffers talk to lots of people who understand this stuff, this reads like someone who just came across the challenges of content moderation and has no understanding of the tradeoffs involved. This is, unfortunately, not a serious proposal. But seeing as it’s bipartisan and an attack on Section 230 at a time when everyone wants to attack Section 230, it means that we need to take this silly proposal seriously.

Filed Under: appeals, brian schatz, call centers, censorship, john thune, notice and takedown, section 230, transparency

Comcast's Push For A Shitty New Net Neutrality Law Begins In Earnest

from the regulatory-trojan-horse dept

Mon, Dec 18th 2017 03:33pm - Karl Bode

As we’ve been noting for a while, the FCC’s 3-2 vote to kill net neutrality is really only the beginning of a new chapter in the fight for a healthy, competitive internet. The rules won’t truly be repealed until 60 days after they hit the federal register in January. And even then, the repeal will have to survive a multi-pronged legal assault against the FCC, accusing it of ignoring the public interest, ignoring feedback from countless experts, and turning a blind eye to all of the procedural oddities that occurred during its proceeding (like, oh, the fact that only dead and artificial people appear to support what the FCC is up to).

ISPs know that this legal fight faces a steep uphill battle with all of the procedural missteps at the FCC. That’s why we’ve been warning for a while that ISPs (and their army of think tankers, sock puppets, consultants, and other allies) will soon begin pushing hard for a new net neutrality law. One that professes to “put this whole debate to bed,” but contains so many loopholes as to be useless. The real purpose of such a law? To codify federal net neutrality apathy into law, and to prevent the FCC from simply passing tougher rules down the road.

Just like clockwork, Comcast responded to last week’s net neutrality killing vote with a blog post by top Comcast lobbyist David Cohen (the company, for the record, hates it when you call Cohen a lobbyist) calling for a new, Comcast-approved law. Cohen declares that it’s “time for Congress to act and permanently preserve the internet,” while repeatedly and comically trying to downplay Comcast’s own role in the chaos we’re currently witnessing:

“Unfortunately, there are others who want to continue engaging in a never ending game of back and forth, creating unnecessary anxiety and contributing to an unneeded level of hysteria. Some will undoubtedly continue threatening litigation that does nothing to protect consumers or freedom of the Internet.”

Funny, since the one doing the litigating is Comcast, which sued to overturn both the FCC’s 2010 and 2015 net neutrality protections. Regardless, Cohen would have you believe that the only path forward at this point is the creation of a new net neutrality law. One, Cohen knows very well would be quite literally written by Comcast thanks to our campaign-cash-slathered Congress. Such a law would, Comcast argues, end the “regulatory ping pong” that Comcast itself is perpetuating:

“It’s now time for all of us to take advantage of this moment in time and end the cycle of regulatory ping pong we’ve been trapped in for over a decade and put this issue to rest once and for all. And there’s a simple way to do this — we really must have bipartisan congressional legislation to permanently preserve and solidify net neutrality protections for consumers and to provide ongoing certainty to ISPs and edge providers alike.”

So what would a Comcast-approved net neutrality law look like? Comcast has repeatedly made it clear that it supports a ban on the blatant throttling or blocking of websites and services by ISPs, since that’s not something ISPs were interested in doing anyway. ISPs long ago realized there’s an ocean of more subtle ways to abuse a lack of competition in the broadband market. For example, why block Netflix outright (and risk a massive PR backlash) when you can impose arbitrary and unnecessary usage caps and overage fees that only apply to Netflix, not Comcast’s own content?

So, expect any Comcast-approved law to outlaw all of the things large ISPs never intended to do, while ignoring all of the more subtle areas that the net neutrality fight has evolved to cover. For example, a Comcast-approved law won’t even mention caps or zero rating. Nor will it address the shenanigans we’ve seen on the interconnection front. But any Comcast-approved law will include ample loopholes allowing Comcast to do pretty much whatever it likes provided it ambiguously suggests it’s for the health of the network (a major problem in the FCC’s flimsy 2010 rules).

Since he played a starring role the last time ISPs tried this, expect Senator John Thune to be front and center in this effort. You should also expect an ocean of editorials from ISP-funded policy folk (where financial conflicts of interest aren’t disclosed) to start popping up on websites and newspapers nationwide, insisting a net neutrality law is the only path forward and that anybody who opposes this push simply isn’t being reasonable.

And while many lawmakers and media folk will be tempted to support this push, arguing it’s better than no rules at all, that’s not really true. If flimsy and poorly written, this new Comcast-approved legislation could simply codify federal net neutrality apathy into law, while barring any future FCC or Congress (say, a theoretical one not quite so beholden to ISP cash) from passing real protections down the line. The best bet at stopping this net neutrality repeal currently rests with the courts. Should that fail, we can revisit this conversation, but only if voters are able to drive ISP-loyal marionettes out of office.

Filed Under: ajit pai, congress, fcc, john thune, laws, net neutrality
Companies: comcast

Don't Get Fooled: The Plan Is To Kill Net Neutrality While Pretending It's Being Protected

from the pay-attention dept

Back in February, we had former top FCC staffer Gigi Sohn on our podcast and she laid out the likely strategy of Ajit Pai and Congress to kill net neutrality while pretending that they were protecting net neutrality. And so far, it’s played out exactly according to plan. Each move, though, seems to be getting reported by most of the tech press as if it’s some sort of surprise or unexpected move. It’s not. There’s a script and it’s being followed almost exactly. So, as a reminder, let’s go through the exact script:

Step 1: Set fire to old net neutrality rules

New FCC boss Ajit Pai announces that he’s releasing a plan to roll back the Open Internet rules that his predecessor, Tom Wheeler, put in place two years ago. This has been done, and Pai has released what’s called an NPRM (a Notice of Proposed Rulemaking), which opens up a comment period. Once the comment period is over, the FCC can release its new rules and vote on them. The problem — as basically everyone in telco knows (but which almost never gets mentioned in the press coverage) — is that the FCC almost certainly will lose in court if it rolls back the rules that Wheeler put in place. This is important. Contrary to what you may have heard, the FCC isn’t allowed to just willy nilly flip flop the rules.

Indeed, the FCC is barred by statute from putting in place “arbitrary and capricious” rule changes. Basically, every lawsuit challenging any FCC rulemaking includes claims that they were “arbitrary and capricious.” And, to get over that burden, the FCC can’t just change the rules willy nilly, but has to lay out clear evidence for why a change in policy is necessary. That’s why the Wheeler Open Internet rules have been upheld by the DC Circuit (which shot down previous rules). Wheeler effectively laid out the clear reasons why the market had changed drastically in the decade plus since the FCC had declared broadband to be an “information service” rather than a “telecommunications service” (under Title II).

For Pai to successfully roll back those rules, he’d need to show that there was some major change in the market since the rules were put in place less than two years ago. That’s… almost certainly going to fail in court. Again, this is important: Pai can change the rules, but that rule change will almost definitely be shot down in court. While many are assuming that Pai’s new rules are a done deal, they are not. I mean, he’s almost certainly going to ignore the public outcry about how rolling back these rules will harm the internet. And he’s almost certainly going to continue to blatantly misrepresent reality and (falsely) claim that investment in broadband has dropped because of these rules (despite tons of clear evidence that he’s wrong). And, then he will pass new rules. But those rules will be challenged and he will almost certainly lose in court, and the old rules would remain in place.

Again: basically everyone in the FCC (including Pai) and in Congress know this. The press not reporting on this is a shame.

Step 2: Congress to the “rescue”

Congressional net neutrality haters (e.g. those receiving massive campaign contributions from big broadband players…) are well aware that Pai’s plans have no chance in court. Yet, they want there to be this kind of uproar over the plans. They want the public to freak out and to say that this is bad for the internet and all that. Because this will allow them to do two things. First, they will fundraise off of this. They will go to the big broadband providers and act wishy washy on their own stance about changing net neutrality rules, and will smile happily as the campaign contributions roll in. It’s how the game is played.

The second thing they will do… is come to “the rescue” of net neutrality. That is, they will put forth a bill — written with the help of broadband lobbyists — that on its face pretends to protect net neutrality, but in reality absolutely guts net neutrality as well as the FCC’s authority to enforce any kind of meaningful consumer protection. We’ve already seen this with a plan from Senator Thune and this new bill from Senator Mike Lee.

Unfortunately, some reporters will buy this argument and pretend that these bills will “save net neutrality.” The article at that link is correct that a change in administrations can lead an FCC to try to flip flop again on net neutrality, but totally ignores that any such attempt would totally flop in court as arbitrary and capricious, without actual evidence of a changed market. The article is also correct that Congress should fix this permanently, but misses two key factors: (1) Congress is way too beholden to broadband lobbyists to come up with anything that actually protects neutrality and (2) the plans presented so far are designed to kill net neutrality while pretending to “protect” it.

This latter point is why Verizon’s General Counsel can say with a straight face that no one wants to kill net neutrality. Because he’s going to be supporting Congress’ plan that pretends to save it. That’s because the Congressional plans do put in place a few bright line rules that seem important to net neutrality — saying that it bars “paid prioritization,” throttling and the like. The problem is that those are last decade’s net neutrality issues. The big broadband providers have already said they’re fine with those kinds of rules because they’ve found ways around them.

Specifically, the big broadband providers are doing things like deliberately overloading interconnect points to force large companies like Netflix to pay not to be throttled. Or they’re putting in place totally arbitrary and low data caps, and then offering to “zero rate” certain services, pretending that this is a “consumer friendly” move. Again, as we’ve said dozens of times, you’re not a hero if you save people from a fire that you set yourself. And that’s exactly what zero rating is. Access providers set low data caps themselves and then “save” their customers from having to pay for going over those caps… but, only if you use approved services (often ones owned by the access provider themselves).

And this is the problem. Under the existing Wheeler rules, the FCC was able to adjust and respond to efforts by the telcos to continue to abuse net neutrality and block the open internet, while pretending they were doing something else. The Congressional proposals for “net neutrality” actually take away that authority from the FCC. In other words, they are opening the floodgates for the big broadband access providers to screw over customers, by saying (1) you can’t do the obviously bad stuff, but you can do the hidden bad stuff that effectively creates the same problems, and (2) the FCC can no longer stop you from doing this.

That’s not a plan to save net neutrality or an open internet. It’s a plan to bless the access providers’ plans to start walling off the internet and getting to double and triple charge companies for offering services. This is a plan to put tollbooths on the internet, but in ways that are less obvious than people were first worried about.

Step 3: Leverage the Controversy

Meanwhile, everyone who wants to kill net neutrality knows what’s going to happen here. They will use the fact that Pai’s rules absolutely can’t withstand scrutiny in the courts to step up and push for the Congressional “rescue.” Even more likely: they’ll say that we need Congress to step in to “prevent uncertainty” from the inevitable lawsuits. Believe it or not: they’re happy that this will get tied up in courts for years, because that gives Congress extra cover to push through this pretend “compromise.” You’ll hear lots of tut-tutting about “uncertainty” that has to be stopped. But, like zero rating and the fact that it’s not heroic if you rescue people from your own fire, the fire here is being set by Ajit Pai and big broadband’s key supporters. They’re setting this fire of rolling back Wheeler’s rules solely to whine about the uncertainty that will be caused by their own unnecessary rule change… and then will say that “only Congress can settle this.”

So, what does all this mean? It means people who are mad about this (as you should be) need to be direct in what they’re talking about here. Don’t pretend that Pai’s rule change is the real problem. It’s not. It’s just a mechanism to get to new regulations from Congress that will cause real problems. Don’t let anyone say that the Wheeler rules have harmed the internet or investment. They have not. Don’t let anyone (especially supporters of killing net neutrality) launch into self-pitying cries about “uncertainty.” Remind them that the uncertainty is coming from them and their supporters. And, most importantly, don’t pretend that a bill from Congress pretending to “save” net neutrality will actually do so, when it’s quite obvious that the bills being offered will undermine our internet, help big broadband screw over users, and diminish competition.

Filed Under: ajit pai, broadband, competition, controversy, fcc, john thune, mike lee, net neutrality, open internet, tom wheeler