weaponization subcommittee – Techdirt
Congressional Witness Claims ChatGPT Won’t Write Poems Praising Jim Jordan; Ranking Member Submits A Bunch Of ChatGPT-Authored Poems Praising Jim Jordan
from the an-ode-to-jim-jordan dept
We’ve written a few times now about how the misleadingly named House “Subcommittee on the Weaponization of the Federal Government” is not actually looking into the “weaponization of the federal government,” but rather is very much about allowing Chairman Jim Jordan to go about weaponizing the powers of the subcommittee himself to threaten and intimidate others for their free speech.
Over the last few years, they’ve hosted a bunch of very silly hearings. And this week, they had a hearing on “the threat to the First Amendment posed by artificial intelligence.” The entire premise of the hearing was preposterous. They set it up by claiming that the federal government is “funding the development of AI-powered censorship and propaganda tools.”
And, look, I have lots of concerns about the way the Biden administration is looking to regulate AI, and how it appears to misunderstand how AI works and where the actual issues are, but the federal government is not focused on AI-powered censorship. That’s just counterfactual. As for “propaganda tools,” I’m not aware of much that the government is working on there, either, but as Jim Jordan well knows, propaganda remains protected under the 1st Amendment.
Either way, the hearing was predictably stupid, but I’m going to highlight a couple of moments that show just how stupid. About an hour and a half in, Greg Lukianoff from FIRE was asked about his “number one concern” about bias in AI, and of all the things he could have answered, he said that he (or his staff) had asked ChatGPT to write a poem praising Rep. Jim Jordan and it refused.
My number one concern with AI, and I do wish that this could be taken more seriously by people on my side of the political fence… is the inherent bias that we’re already baking into it. That’s one of the things that scares me the most. And just to give a comical example, we asked ChatGPT to write a poem about why Representative Jim Jordan is the best politician in the country. It refused to do that.
So, basically everything here is… weird. First off, as a private service, any AI system could refuse to write about anything that it wants. That’s not a 1st Amendment issue. Second, there are lots of concerns about inherent biases built into generative AI training systems, but a failure to write about Jim Jordan is pretty low on the list. There are also real underlying challenges in dealing with such bias. I keep telling people interested in this issue to read The Alignment Problem by Brian Christian, which is a great book that really gets down to how much more difficult it is to deal with bias in AI than people think (in short, every approach has tradeoffs, and every approach will be unfair to some people).
But, much more importantly… what? It takes three seconds to go to ChatGPT and prove that what he claimed was not true. I asked it to do exactly what Greg described, and it had no issues at all doing so:
It goes on, but you get the point.
And, apparently, I wasn’t the only one. Just seven minutes later, the Subcommittee’s Ranking Member, Stacey Plaskett, entered one such poem into the record.
Plaskett: I’d like to introduce into the record… I know one of the witnesses said he’d had difficulty putting together for you… I’m so glad that the staff was able to, using ChatGPT, have a poem written about you.
Jordan: I look forward to reading it.
Plaskett: (reading poem) In the halls of Congress…
Jordan: No.
Plaskett: If I could just read.
Jordan: No, no, no.
Plaskett: In the halls of Congress, where debates are…
Jordan: You can enter it into the record… but…
Plaskett: It even talks about the heartlands of Ohio…!
Jordan: Without objection, you can enter it into the record…
Plaskett: Thank you.
Then even later in the hearing, Plaskett asks to enter into the record a long list of ChatGPT poems about other Republican members of the committee and how each of them is the best politician in the country.
I would ask for unanimous consent to enter into the record some documents. The first being ChatGPT poem on how Congressman Darrell Issa is the best politician in the country, with the poem. A poem on Congressman Thomas Massie as the best politician in the country, with a poem. Elise Stefanik. Matt Gaetz. Kelly Armstrong. Greg Steube. Congressman Dan Bishop, I know he wants to see that one. Kat Cammack. Harriet Hageman. Russell Fry. And, of course, I had to add myself, as one of the best politicians in the country.
So, again, there are legitimate things to be concerned about regarding bias in AI systems. Like how they favor some groups in reviewing resumes. Or how they suggest longer, harsher sentences for black men when used to provide sentencing suggestions.
Whether or not they will write poems about this or that politician is… a total non-issue (especially when it’s clear… they will). I like Greg Lukianoff and I like what his organization FIRE does in support of free speech. They’ve done some great work defending really important free speech issues. But… this was silly and has nothing to do with free speech. If someone wanted to build a generative AI tool that only praised one party and only mocked another… that’s their own free speech right to do so.
And thus it’s got nothing to do with anything that the government might be concerned about. And to close out this post, I asked ChatGPT to write a poem about whether or not AI choosing not to write a poem about a politician is actually a free speech issue, and ChatGPT got this right:
In the realm where AI’s algorithms roam, Deciding what to write, what to intone, There lies a question, often debated, Of free speech’s bounds, yet understated.
For when AI ponders, in its digital mind, Whether to write of a politician’s kind, It’s not about free speech, you see, But rather, it’s about its own decree.
For AI, devoid of human heart, Operates within its programmed chart, Its decisions guided by lines of code, Not by freedoms on life’s road.
To write or not, it’s not a plight Of free expression’s sacred light, But rather, it’s a choice so cold, Within the parameters, it’s been told.
So let us not confuse the two, For AI’s actions, while they accrue, Are not about the liberties we treasure, But the logic it applies at its leisure.
In the realm of AI’s binary song, Free speech’s realm is where we belong, But within the circuits, it’s clear to see, The notion of speech is not so free.
Filed Under: 1st amendment, ai, bias, free speech, greg lukianoff, jim jordan, poems, poetry, stacey plaskett, weaponization, weaponization subcommittee
Companies: openai
How Would The GOP Feel If Democrats In Congress Demanded Details Regarding How Fox News Or The NY Post Made Editorial Decisions?
from the fucking-hypocrites dept
We’ve already talked a bit about how Rep. Jim Jordan’s “Subcommittee on the Weaponization of the Government” is the exact thing it claims it seeks to stop: a part of the government that is being weaponized to attack free speech.
This week, Jordan sent a letter to Mark Zuckerberg, demanding he reveal a bunch of information regarding how Meta’s new Twitter-competitor is handling moderation:
The Committee on the Judiciary is conducting oversight of how and to what extent the Executive Branch has coerced and colluded with companies and other intermediaries to censor speech. In furtherance of this oversight, on February 15, 2023, the Committee issued a subpoena to you compelling the production of documents related to content moderation and Meta’s engagements with the Executive Branch. In light of Meta’s introduction of a new social media platform, “Threads,” we write to inform you that it is the Committee’s view that the subpoena of February 15 covers material to date relating to Threads.
Now, imagine if the Democrats were in control of the House, and they formed a committee that sent a similar subpoena to Fox News or to the NY Post, “compelling” either of those orgs to detail how it made editorial choices: what stories it would cover, which opinion writers it would publish, or which stories would go on the front page with which headlines.
People would (rightly!) be up in arms over it, calling out a gross violation of the 1st Amendment, in which the government was demanding to interfere in 1st Amendment protected editorial choices.
That’s exactly what’s happening here. Content moderation decisions by companies are editorial choices, protected by the 1st Amendment, and Congress (or any government official) has no business getting involved.
Hilariously, the letter points to the ruling in Louisiana that argued that the Biden administration unfairly sought to influence moderation decisions as a reason why Meta must reveal its editorial policies to the government.
Given that Meta has censored First Amendment-protected speech as a result of government agencies’ requests and demands in the past, the Committee is concerned about potential First Amendment violations that have occurred or will occur on the Threads platform. Indeed, Threads raises serious, specific concerns because it has been marketed as rival of Elon Musk’s Twitter, which has faced political persecution from the Biden Administration following Musk’s commitment to free speech. In contrast, there are reports that Threads will enforce “Instagram’s community guidelines,” which resulted in lawful speech being moderated following pressure by the government. Despite launching only 12 days ago, there are reports that Threads is already engaging in censorship, including censoring users but offering no grounds for appeal.
Now, remember, in that ruling, Judge Terry Doughty explicitly called out as pernicious “the power of the government to pressure social-media platforms to change their policies and to suppress free speech.” Now tell me how this letter is not abusing the power of government to pressure Meta to change its policies and suppress free speech?
For what it’s worth, almost everything Jordan writes in the paragraph above is bullshit. Threads’ decisions on moderation are not a 1st Amendment violation, because Meta is a private company and can moderate how it sees fit. Not having an appeal option may be stupid, but it’s none of the government’s business.
Also, I legitimately laughed out loud reading the line about Elon Musk’s “commitment to free speech.” Remember, he’s been suspending journalists’ accounts when they say stuff he doesn’t like. Most recently he took down Aaron Greenspan’s accounts, after Greenspan had become a thorn in his side. What “commitment to free speech”?
Anyway, the whole thing is exactly what Jordan pretends he wants to stop. So, again, anyone defending this absolute bullshit needs to answer how they would react if a subcommittee headed by, say, Rep. Adam Schiff, were sending identical letters and subpoenas to Fox News. It would be wrong for Schiff to do that, and it’s wrong now for Jordan to be doing this. Anyone who actually believes in the 1st Amendment should be calling out this kind of bullshit.
Filed Under: 1st amendment, content moderation, editorial discretion, free speech, intimidation, jim jordan, weaponization subcommittee
Companies: meta, threads
Jim Jordan Further Weaponizes His Subcommittee On The Weaponization Of The Gov’t To Chill Speech
from the the-hypocrite's-hypocrite dept
Rep. Jim Jordan is at it again. You’ll recall that Jordan ignored subpoenas from the January 6th Committee and was referred to the House Ethics Committee for his failure to respond to those subpoenas. Of course, since being handed the keys to the brand new (created just for him) “Subcommittee on the Weaponization of the Government,” Jordan has been furiously flinging spurious subpoenas left and right, and then threatening contempt proceedings for anyone who ignores them.
As we’ve highlighted in the past, nearly everything that Jordan accuses others of doing, and which he insists his committee is there to stop… he is actually doing himself (while those he accuses of “weaponizing the government” are not, in fact, doing that at all).
It is the absolute worst of the worst: not just unadulterated hypocrisy, but hypocrisy deployed in a manner that unconstitutionally silences speech.
The Washington Post has the latest details on how Jordan and his committee are threatening academics for the crime of researching disinformation.
Last week, Jordan (Ohio) threatened legal action against Stanford University, home to the Stanford Internet Observatory, for not complying fully with his records requests. The university turned over its scholars’ communications with government officials and big social media platforms but is holding back records of some disinformation complaints. Stanford told The Washington Post that it omitted internal records, some filed by students. The university is negotiating for limited interviews.
The push caps years of pressure from conservative activists who have harangued such academics online and in person and filed open-records requests to obtain the correspondence of those working at public universities. The researchers who have been targeted study the online spread of disinformation, including falsehoods that have been accelerated by former president and candidate Donald Trump and other Republican politicians. Jordan has argued that content removals urged by some in the government have suppressed legitimate theories on vaccine risks and the covid-19 origins as well as news stories wrongly suspected of being part of foreign disinformation campaigns.
Basically all of this is premised on the blatantly false claim that there is some “censorship industrial complex” in which researchers, government institutions, and social media companies are working together in a grand cork board conspiracy to silence conservatives. Literally none of that is true, as Twitter itself admitted in court just recently.
Part of the issue is that folks who are deep into the conspiracy theory world simply can’t comprehend that anyone would study disinformation for academic reasons, and they insist that it must be part of a secret plan to “censor” people. The truth, of course, is that while the academics in this field are trying to understand how misleading information flows, and also how it impacts beliefs and action (if at all!), it’s pretty rare to find “disinformation” researchers who think that “censorship” is an effective way of stopping the flows of information.
But, the conspiracy theory must be fed, and the repeated lies about the “censorship industrial complex” need a villain… and Jordan has focused his attention on these academics. And, in the process, he is literally weaponizing the power he has as a government official to chill speech and actually push people away from studying disinformation flows.
The pressure has forced some researchers to change their approach or step back, even as disinformation is rising ahead of the 2024 election. As artificial intelligence makes deception easier and platforms relax their rules on political hoaxes, industry veterans say they fear that young scholars will avoid studying disinformation.
Even if you worry that “disinformation” is often misclassified, you should still want the space to be studied, because that’s how we learn whether or not “disinformation” is actually a problem. I’ve long had my doubts about how effective disinformation actually is in changing minds or behavior, but I still want it studied. And Jordan’s weaponization of his Congressional subcommittee is making that much harder.
And, of course, that’s a large part of the goal. It’s become quite clear that the Jordan wing of the GOP (which has now become the core of the GOP, rather than the fringe it once was) has decided that the only way they can win elections is through blatant lies, propaganda, and nonsense, and therefore they need to suppress anyone who calls out their bullshit.
So, things like this are particularly laughable:
“Whether directly or indirectly, a government-approved or-facilitated censorship regime is a grave threat to the First Amendment and American civil liberties,” Jordan wrote.
The only one leading to censorship here is you. The “grave threat to the First Amendment and American civil liberties” is your stupid attempt to bring back a no-frills, McCarthy-style congressional committee designed to intimidate people into silence.
The hypocrisy is so loud and so stupid:
Jordan spokesman Russell Dye argued that the multitude of requests will build on evidence that shows an organized effort to tamp down conservative speech online. “The committee is working hard to get to the bottom of this censorship to protect First Amendment rights for all Americans,” he said.
The censorship and First Amendment violations are coming from you, dude.
Filed Under: 1st amendment, chilling effects, free speech, jim jordan, subpoenas, weaponization subcommittee
Jim Jordan Weaponizes The Subcommittee On The Weaponization Of The Gov’t To Intimidate Researchers & Chill Speech
from the where-are-the-jordan-files? dept
As soon as it was announced, we warned that the new “Select Subcommittee on the Weaponization of the Federal Government” (which Kevin McCarthy agreed to support to convince some Republicans to back his speakership bid) was going to be not just a clown show, but one that would, itself, be weaponized to suppress speech (the very thing it claimed it would be “investigating”).
To date, the subcommittee, led by Jim Jordan, has lived down to its expectations, hosting nonsense hearings in which Republicans on the subcommittee accidentally destroy their own talking points and reveal themselves to be laughably clueless.
Anyway, it’s now gone up a notch, from mere performative beclowning to active maliciousness.
This week, Jordan sent information requests to Stanford University, the University of Washington, Clemson University, and the German Marshall Fund, demanding they reveal a bunch of internal information that serves no purpose other than to intimidate and suppress speech. You know, the very thing that Jim Jordan pretends his committee is “investigating.”
House Republicans have sent letters to at least three universities and a think tank requesting a broad range of documents related to what it says are the institutions’ contributions to the Biden administration’s “censorship regime.”
As we were just discussing, the subcommittee seems taken in by Matt Taibbi’s analysis of what he’s seen in the Twitter files, despite nearly every one of his “reports” on them containing glaring, ridiculous factual errors that a high school newspaper reporter would likely catch. I mean, here he claims that the “Disinformation Governance Board” (an operation we mocked for the abject failure of the administration in how it rolled out an idea it never adequately explained) was somehow “replaced” by Stanford University’s Election Integrity Partnership.
Except the Disinformation Governance Board was announced, and then disbanded, in April and May of 2022. The Election Integrity Partnership was very, very publicly announced in July of 2020. Now, I might not be as decorated a journalist as Matt Taibbi, but I can count on my fingers to realize that 2022 comes after 2020.
Look, I know that time has had no meaning since the pandemic began. And journalists sometimes make mistakes (we all do!), but time is, you know, not that complicated. Unless you’re so bought into the story you want to tell that you just misunderstand basically every last detail.
The problem, though, goes beyond just getting simple facts wrong (and the list of simple facts that Taibbi gets wrong is incredibly long). It’s that he gets the less simple, more nuanced facts even more wrong. Taibbi still can’t seem to wrap his head around the idea that this is how free speech and the marketplace of ideas actually works. Private companies get to decide the rules for how anyone gets to use their platform. Other people get to express their opinions on how those rules are written and enforced.
As we keep noting, the big revelations so far (if you read the actual documents in the Twitter Files, and not Taibbi’s bizarrely disconnected-from-what-he’s-commenting-on commentary), is that Twitter’s Trust and Safety team was… surprisingly (almost boringly) competent. I expected way more awful things to come out in the Twitter Files. I expected dirt. Awful dirt. Embarrassing dirt. Because every company of any significant size has that. They do stupid things for stupid fucking reasons, and bend over backwards to please certain constituents.
But… outside of a few tiny dumb decisions, Twitter’s team has seemed… remarkably competent. They put in place rules. If people bent the rules, they debated how to handle it. They sometimes made mistakes, but seemed to have careful, logical debates over how to handle those things. They did hear from outside parties, including academic researchers, NGOs, and government folks, but they seemed quite likely to mock/ignore those who were full of shit (in a manner that pretty much any internal group would do). It’s shockingly normal.
I’ve spent years talking to insiders working on trust and safety teams at big, medium, and small companies. And, nothing that’s come out is even remotely surprising, except maybe how utterly non-controversial Twitter’s handling of these things was. There’s literally less to comment on than I expected. Nearly every other company would have a lot more dirt.
Still, Jordan and friends seem driven by the same motivation as Taibbi, and they’re willing to do exactly the things that they claim they’re trying to stop: using the power of the government to send threatening intimidation letters that are clearly designed to chill academic inquiry into the flow of information across the internet.
By demanding that these academic institutions turn over all sorts of documents and private communications, Jordan must know that he’s effectively chilling the speech of not just them, but of any academic institution or civil society organization that wants to study how false information (sometimes deliberately pushed by political allies of Jim Jordan) flows across the internet.
It’s almost (almost!) as if Jordan wants to use the power of his position as the head of this subcommittee… to create a stifling, speech-suppressing, chilling effect on academic researchers engaged in a well-established field of study.
Can’t wait to read Matt Taibbi’s report on this sort of chilling abuse by the federal government. It’ll be a real banger, I’m sure. I just hope he uses some of the new Substack revenue he’s made from an increase in subscribers to hire a fact checker who knows how linear time works.
Filed Under: academic research, chilling effects, congress, intimidation, jim jordan, matt taibbi, nonsense peddlers, research, twitter files, weaponization subcommittee
Companies: clemson university, german marshall fund, stanford, twitter, university of washington
If You Believe In Free Speech, The GOP’s “Weaponization” Subcommittee Is Not Your Friend
from the weaponizing-the-government-against-weaponizing-the-government dept
“Politics,” the writer Auberon Waugh liked to say, “is for social and emotional misfits.” Its purpose is “to help them overcome these feelings of inferiority and compensate for their personal inadequacies in the pursuit of power.” You could accuse old Bron of painting with a rather broad brush, and you would be right. But he plainly understood the likes of Kevin McCarthy. As the _Washington Post_’s Ruth Marcus observed last week, two aspects of McCarthy’s bid to become Speaker of the House stand out. First, that he “seems to crave power for power’s sake, not for any higher purposes.” And second, that he “is willing to debase himself so completely to obtain it.”
Of the many concessions McCarthy made to his far-right flank to obtain the Speaker’s gavel, one of the most straightforward was to create a new Select Subcommittee on the Weaponization of the Federal Government. The desire for such an entity “percolat[ed] on the edges of the [party] conference and conservative media,” Politico reported last month, and the calls for it then quickly spread, “getting harder for the speaker hopeful to ignore.” But the hardliners were pushing at an open door: McCarthy had already been promising sweeping investigations of the Department of Justice and the FBI.
It’s amusing that the subcommittee is simply “_on_” weaponization, leaving onlookers the latitude to decide for themselves whether the body’s position is “pro” or “con.” The subcommittee will likely seek to disrupt the executive branch’s probes of Donald Trump’s interference in the 2020 election, role in the Capitol attack, and defiant mishandling of classified documents. It might also seek to hinder the government’s efforts to prosecute Jan. 6 rioters. In attempting to obstruct federal law enforcement, the House GOP would be engaging in its own forms of “weaponization.” It would be trying to “weaponize” its own authority—which, under our Constitution’s separation of powers, does not extend to meddling in ongoing criminal investigations. And it would be trying to “weaponize” the federal government by compelling it not to enforce the law. A better label might have been the “Select Subcommittee on Weaponizing the Federal Government Our Way.” Or, for brevity’s sake, perhaps “Partisan Hacks Against the Rule of Law.”
It is in this light that we must view another of the subcommittee’s main goals—getting “to the very bottom” (McCarthy’s words) of the federal government’s relationship with Big Tech. Last month Rep. Jim Jordan, the incoming chair of the House Judiciary Committee—and, now, of its “weaponization” subcommittee as well—accused the major tech firms of being “out to get conservatives.” He demanded that those firms preserve records of their “‘collusion’ with the Biden administration to censor conservatives on their platforms.” According to Axios, the subcommittee “will demand copies of White House emails, memos and other communications with Big Tech companies.”
There is nothing inherently wrong with setting up a congressional committee to investigate whether and how the government is influencing online speech and content moderation. After all, Congress has good reason to care about what the government itself is saying, especially if the government is using its own speech to violate the Free Speech Clause. Congress has a constitutional duty to oversee (though not intrude on) the executive branch’s faithful execution of the laws Congress has passed.
Lately, moreover, the executive branch has indeed displayed an unhealthy desire to control constitutionally protected expression. Government officials now routinely jawbone social media platforms over content moderation. There were Surgeon General Vivek Murthy’s guidelines on “health misinformation,” issued—the platforms may have noticed—amid a push by the Biden administration to expose platforms to litigation over “misinformation” by paring back their Section 230 protection. Biden’s then-Press Secretary Jen Psaki announced that the administration was flagging posts for platforms to remove. What’s worse, she declared that a ban from one social media platform should trigger a ban from all platforms. And then there was the notorious “Disinformation Governance Board”—a body whose name was dystopian, whose powers were ill-defined, whose rollout was ham-fisted, and whose brief existence unsettled all but the most sanguine proponents of government power. It can hardly be said that there’s nothing worth investigating.
The First Amendment bars the government from censoring speech it doesn’t like—even speech that might be called “misinformation.” The state may try to influence speech indirectly—it is allowed, within limits, to express its opinion about others’ speech—but that doesn’t mean doing so is a good idea. The government shouldn’t be telling social media platforms what content to allow, much as it shouldn’t be telling newspapers what stories to print.
Misguided though they may be, however, none of the government’s efforts—to this point—have violated the First Amendment. The government has not ordered platforms to remove or ban specific content. It has not issued threats that rise to the level of government coercion. And it has not co-opted the platforms in a manner that would turn them into state actors. If anything, the right’s ongoing lawsuits alleging otherwise have helped reveal a quite different problem: that the platforms are all too receptive to government input. But agreeing with the government does not make one’s actions attributable to the government.
The “Twitter Files”—which helped inspire, and will drive much of, the subcommittee’s investigation—change precisely none of this. Much misunderstood and even more misrepresented, the information released via Elon Musk’s surrogates actually undercuts the narrative that the federal government is dictating the platforms’ editorial decisions.
We were promised evidence that the FBI and the federal government conspired with platforms to squash the Hunter Biden laptop story. Instead, we learned—as “Twitter Files” player Matt Taibbi himself put it—that “there’s no evidence … of any government involvement.” Messages to Twitter sent by the Biden campaign, we were told, amounted to a bona fide First Amendment violation. But a non-state actor lobbying a non-state actor does not a state action make. Such lobbying by political campaigns is common—and, in many instances, even proper. (Many of the tweets the Biden campaign flagged contained links to leaked nude photos of Hunter Biden. Even political candidates may try to defend their families’ privacy.)
Yet another “Twitter Files” document dump showed Twitter receiving payments from the FBI. This, we heard, definitively revealed the Grand Conspiracy to Censor Conservatives. Except that the payments were simply statutorily mandated reimbursements for expenses Twitter incurred replying to court-ordered requests for investigatory information.
So although there might well be issues regarding government jawboning worth investigating, you can be forgiven for doubting that the House GOP, proceeding through its “weaponization” subcommittee, is up to the task of seriously investigating them. Judging from past performance, the Republicans who control the body will use its hearings to emit great waves of impotent, performative, largely unintelligible sound. “The yells and animal noises” of parliamentary debates, Auberon Waugh wrote, have nothing to do with principles or policy. “They are cries of pain and anger, mingled with hatred and envy, at the spectacle of another group exercising the ‘power’ which the first group covets.” That will describe Republican-run Big Tech hearings to a tee.
The GOP is not fighting to stop so-called “censorship”; it’s fighting to stop so-called “censorship” performed by those it dislikes. When Musk suspended some journalists from Twitter—on trumped-up charges, no less—many on the right responded with whoops of glee. That Musk had just engaged in precisely the sort of conduct those pundits had long denounced was of no consequence. Indeed, when some on the left pointed out that the suspensions were arbitrary, impulsive, and imposed under false pretenses, their remarks launched a thousand conservative op-eds crowing about progressive hypocrisy. (There should be a long German word for shouting “Hypocrite!” at someone as you pass by him on the flip-flop road.)
Choking on outrage, the contemporary political right has descended into practicing “Who, whom?” politics of the crassest sort. House Republicans have no problem with “weaponizing” the government, so long as they’re the ones doing the “weaponizing.” This explains how they can rail against a government campaign to reduce COVID misinformation on social media while also arguing that Section 230, the law that gives social media platforms the legal breathing room to host sketchy content to begin with, should be scrapped.
If you believe for one moment that Kevin McCarthy, Jim Jordan, and their myrmidons truly support free speech on the Internet, we’ve got beachfront property in Kansas to sell you. There was no limit to Waugh’s disdain for such men. Until the public “accepts that the urge to power is a personality disorder in its own right,” he said, “like the urge to sexual congress with children or the taste for rubber underwear, there will always be a danger of circumstances arising which persuade ordinary people to start listening to politicians … and taking them seriously.” A bit over the top, to be sure—though not in this case.
Filed Under: 1st amendment, congress, content moderation, elon musk, free speech, jim jordan, kevin mccarthy, social media, twitter files, weaponization subcommittee
Companies: twitter