youtube – Techdirt

More Of RFK Jr.’s ‘Don’t Moderate Me, Bro’ Cases Are Laughed Out Of Court

from the that's-not-how-any-of-this-works dept

In the last month, I wrote about two of Robert F. Kennedy Jr.’s batshit crazy lawsuits over him being very, very mad that social media companies keep moderating or limiting the spread of his dangerous bullshit anti-vax nonsense. In one, the Ninth Circuit had to explain (not for the first time) to RFK and his disgraced Yale Law professor lawyer, Jed Rubenfeld, that Meta fact checking RFK Jr. does not violate the First Amendment, and that Section 230 does not turn every internet company into a state actor.

In the other case, one of the MAGA world’s favorite judges ignored both the facts and the scolding he just got from the Supreme Court to insist that the Biden administration has been trying to censor RFK Jr., a thing that has not actually happened.

But Professor Eric Goldman reminds me that there were two other cases involving RFK Jr. and his anger at being moderated that had developments I hadn’t covered. And both of them, thankfully, were not in the courtrooms of partisan judges who live in fantasylands.

First, we had a case in which RFK Jr. sued Meta again. I had mentioned this case when it was filed. The Ninth Circuit case mentioned above was also against Meta, but RFK Jr. decided to try yet again, this time claiming that Meta’s efforts to restrict a documentary about him violated his First Amendment rights.

If you don’t recall, Meta very temporarily blocked the ability to share the documentary, which they chalked up to a glitch. They fixed it very quickly. But RFK Jr. insisted it was a deliberate attempt to silence him, citing Meta’s AI chatbot as the smoking gun (yes, they really did this, even though the chatbot is just a stochastic parrot spewing whatever it thinks will answer a question).

What I had missed was that district court Judge William Orrick, who is not known for suffering fools gladly, rejected RFK Jr.’s demand for a preliminary injunction. Judge Orrick is, shall we say, less than impressed by RFK Jr. returning to the well for another attempt at this specious argument, citing the very Ninth Circuit case that RFK Jr. just lost in his other suit against Meta.

The plaintiffs assert that they are likely to succeed on the merits of their First Amendment claim, which is that Meta violated their rights to free speech by censoring their posts and accounts on Meta’s platforms. But the First Amendment “‘prohibits only governmental abridgment of speech’ and ‘does not prohibit private abridgment of speech.’” Children’s Health Def. v. Meta Platforms, Inc., —F. 4th—, No. 21-16210, 2024 WL 3734422, at *4 (9th Cir. Aug. 9, 2024) (first quoting Manhattan Cmty. Access Corp. v. Halleck, 587 U.S. 802, 808 (2019); and then citing Prager Univ. v. Google LLC, 951 F.3d 991, 996 (9th Cir. 2020)). Because there is no apparent state action, this claim is unlikely to succeed.

RFK Jr. twists himself into a pretzel to try to claim that Meta is magically a state actor, but the court has to remind him that these arguments are quite stupid.

The Ninth Circuit recently has twice affirmed dismissal of claims filed by plaintiffs alleging that social media platforms violated the plaintiffs’ First Amendment rights by flagging, removing, or otherwise “censoring” the plaintiffs’ content shared on those platforms. See Children’s Health, 2024 WL 3734422 at *2–4; O’Handley, 62 F.4th at 1153–55. In both cases, the Ninth Circuit held that the plaintiffs’ claims failed at the first step of the state action framework because of “the simple fact” that the defendants “acted in accordance with [their] own content-moderation policy,” not with any government policy….

The only difference between those cases and this one is that here, the plaintiffs seem to allege that the “specific” harmful conduct is Meta’s censorship itself, rather than its policy of censoring. Based on the documents submitted and allegations made, that is a distinction without a difference.

RFK Jr. tried to argue that the ruling by Judge Doughty in Louisiana supports his position, but Judge Orrick wasn’t born yesterday and can actually read what the Supreme Court wrote in the Murthy decision rejecting these kinds of arguments.

The Murthy opinion makes my decision here straightforward. Murthy rejected Missouri’s factual findings and specifically explained that the Missouri evidence did not show that the federal government caused the content moderation decisions. Yet here, the plaintiffs rely on Missouri as their evidence that a state rule caused the defendants’ alleged censorship actions. Even if I accepted the vacated district court order as evidence here—which I do not—the Supreme Court has plainly explained why it does not support the plaintiffs’ argument.

Even though he notes that he doesn’t even need to go down this road, Judge Orrick also explains why the whole “state actor” argument is nonsense as well:

The plaintiffs’ theory is that Meta and the government colluded or acted jointly, or the government coerced Meta, to remove content related to Kennedy’s 2024 presidential campaign from Meta’s platforms. The problem with that theory is again the lack of evidence. The Missouri and Kennedy findings were rejected by the Supreme Court, as explained above. And they—and the interim report—suggest at most a relationship or communications between Meta and the government about removal of COVID-19 misinformation in 2020 and 2021. Even if the plaintiffs proved that Meta and the government acted jointly, or colluded, or that Meta was coerced by the government to remove and flag COVID-19 misinformation three years ago, that says nothing about Meta’s relationship and communications with the government in 2024. Nor does it suggest that Meta and the government worked together to remove pro-Kennedy content from Meta’s platforms.

Because of this, the plaintiffs fail to show likelihood of success on the merits—or serious questions going to the merits—for any of the three possible state action prongs. They do not provide evidence or allegations of a “specific[]” agreement between Meta and the government to specifically accomplish the goal of removing Kennedy content from Meta platforms. See Children’s Health, 2024 WL 3734422, at *5 (describing joint action test and collecting cases). Nor do they show that the government exercised coercive power or “significant encouragement” for Meta to remove Kennedy-related content in 2024. Id. at *9–10 (describing coercion test and finding that allegations about Congressmembers’ public criticism of COVID-19 misinformation on social media sites was insufficient to show government coerced platforms to remove it). And for similar reasons, the plaintiffs do not establish a “sufficiently close nexus” between the government and the removal of Kennedy-related content from Meta’s platforms. Id. at *5. Their First Amendment claim accordingly fails at step two of the state action inquiry. It is far from likely to succeed on the merits.

RFK Jr. also made a Voting Rights Act claim, arguing that removing the documentary about him somehow interfered with people’s right to vote for him. But the court notes that this argument is doomed by the fact that Meta showed the blocking of the links was an accident, of a kind that happens all the time:

The defendants point to compelling evidence that the video links were incorrectly automatically flagged as a phishing attack, a “not uncommon” response by its automated software to newly created links with high traffic flow. Oppo. 5–6 (citing Mehta Decl. Ex. A ¶ 7). The defendants’ evidence shows that once the defendants were alerted to the problem, through channels set up specifically for that purpose, the links were restored, and the video was made (and is currently still) available on its platform. Mehta Decl. Ex. A. ¶¶ 4–8, Exs. M–Q. Though the plaintiffs say the removal of the video was an effort to coerce them to not urge people to vote for Kennedy, the defendants’ competing evidence shows that it was a technological glitch and that the plaintiffs were aware of this glitch because they reported the problem in the first place. And if the plaintiffs were aware that a tech issue caused the removal of the videos, with that “context” it would probably not be reasonable for them to believe the video links were removed in an effort to coerce or intimidate them.

The court is also not impressed by the argument that other people (not parties to the case) had accounts removed or limited for sharing support for RFK Jr. As the judge makes clear, RFK Jr. doesn’t get to sue someone over a claim that they intimidated someone else (for which there isn’t any actual evidence anyway).

Third, the plaintiffs submit evidence that other peoples’ accounts were censored, removed, or threatened with removal when they posted any sort of support for Kennedy and his candidacy. See, e.g., Repl. 1:13–24; [Dkt No. 29-1] Exs. A, B. The defendants fail to respond to these allegations in their opposition, but the reason for this failure seems obvious. Section 11(b) provides a private right of action for Person A where Person B has intimidated, threatened, or coerced Person A “for urging or aiding any person to vote.” 52 U.S.C.A. § 10307(b). It does not on its face, or in any case law I found or the parties cite, provide a private right of action for Person C to sue Person B for intimidating, threatening, or coercing Person A “for urging or aiding any person to vote.” Id. Using that example, the three plaintiffs would be “Person C.” Their evidence very well might suggest that Meta is censoring other users’ pro-Kennedy content. But those users are not plaintiffs in this case and are not before me now.

Importantly, the plaintiffs had plenty of time and opportunity to add any of those affected users as new plaintiffs in this case, as they added Reed Kraus between filing the initial complaint and filing the AC and current motion. But they did not do so. Nor do they allege or argue that AV24 has some sort of organizational or third-party standing to assert the claims of those affected users. And while they seem to say that Kennedy himself is affected because that evidence shows Meta users are being coerced or threatened for urging people to vote for him, the effect on the candidate is not what § 11(b) protects. Accordingly, this evidence does not support the plaintiffs’ assertions. The plaintiffs, therefore, fail to counter the compelling evidence and reasons that the defendants identify in explanation for the alleged censorship.

More critically, the plaintiffs do not deny the defendants’ portrayal of and reasons for the defendants’ actions. The plaintiffs fail to incorporate those reasons into their assessment of how a “reasonable” recipient of Meta’s communications would interpret the communications in “context.” See Wohl III, 661 F. Supp. 3d at 113. Based on the evidence provided so far, a reasonable recipient of Meta’s communications would be unlikely to view them as even related to voting, let alone as coercing, threatening, or intimidating the recipient with respect to urging others to vote.

Towards the end of the ruling, the court finally gets to Section 230, noting that even apart from everything above, the case is probably going nowhere because Section 230 makes Meta immune from liability for its moderation actions. The ruling doesn’t hinge on that point, though, because neither side really went deep on the 230 arguments.

As for the other RFK Jr. case, I had forgotten that he had also sued Google/YouTube over its moderation efforts. At the end of last month, the Ninth Circuit also upheld a lower court ruling in that case, in an unpublished four-page opinion in which the three-judge panel made quick work of the nonsense lawsuit:

Google asserts that it is a private entity with its own First Amendment rights and that it removed Kennedy’s videos on its own volition pursuant to its own misinformation policy and not at the behest of the federal government. Kennedy has not rebutted Google’s claim that it exercised its independent editorial choice in removing his videos. Nor has Kennedy identified any specific communications from a federal official to Google concerning the removed Kennedy videos, or identified any threatening or coercive communication, veiled or otherwise, from a federal official to Google concerning Kennedy. As Kennedy has not shown that Google acted as a state actor in removing his videos, his invocation of First Amendment rights is misplaced. The district court’s denial of a preliminary injunction is AFFIRMED.

If RFK Jr. intends to appeal the latest Meta ruling (and given the history of his frivolous litigation, the chances seem quite high that he will), the Ninth Circuit might want to just repurpose this paragraph and swap out “Google” for “Meta” each time.

Now, if only the Fifth Circuit would learn a lesson or two from the Ninth Circuit (or the Supreme Court), we could finally dispense with the one case that ridiculously went in RFK Jr.’s favor.

Filed Under: 1st amendment, 9th circuit, content moderation, free speech, jed rubenfeld, rfk jr., state actor, voting rights act, william orrick
Companies: google, meta, youtube

Ctrl-Alt-Speech: ChatGPT Told Us Not To Say This, But YOLO

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Daphne Keller, the Director of the Program on Platform Regulation at Stanford’s Cyber Policy Center. They cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Filed Under: age appropriate design code, chatgpt, content moderation, dsa, kosa, ninth circuit
Companies: patreon, tiktok, twitter, x, yolo, youtube

YouTube War Crime: Site Takes Down Rifftrax Briefly In A Sort Of Collective Punishment Example

from the collective-punishment dept

Because of the wonderful world in which we live, we get to learn about certain unhappy terms and practices, one of which is the concept of “collective punishment.” Under the laws of war, collective punishment is a war crime: a belligerent force cannot punish an entire population or group merely for actions committed by a member or associate of that group. For a real world example, see: Gaza.

Now, spicy headlines aside, YouTube has not committed an actual war crime. However, it appeared to briefly engage in something similar in the digital realm when it went about shutting down the channel for Rifftrax, a spinoff and independently run channel from some veterans of the classic Mystery Science Theater 3000 cast. It appears that the channel was shut down not because of any activity it engaged in on the channel, but rather because of some questionable copyright strikes levied against the channel of its former parent company.

So, to summarize, Rifftrax’s channel was taken down because of some copyright strikes that may or may not be legitimate, and which targeted a channel Rifftrax was previously affiliated with but no longer is. The sins of one channel resulted in the takedown of another due to an indirect association. Collective punishment.

As you can see from Rifftrax’s statement, it had resigned itself to simply being off YouTube entirely. I’m sure there had been some back and forth with YouTube’s support, but that didn’t seem to result in any progress. It was only when Rifftrax made its public statement and rallied its very passionate fanbase that YouTube managed to realize its error and reinstate the channel.

It’s great that public backlash resulted in justice being done in this instance, but what if we were talking about a channel with much less of a following than Rifftrax? What would have happened then? Would that channel ever have been reinstated? Without the pressure from the public, would YouTube ever have lifted a finger?

While we can’t answer that question definitively, it’s certainly true that this kind of collateral damage occurs far too often on many internet platforms, YouTube especially. And nobody seems to be willing to do anything about it.

Filed Under: copyright, dmca, rifftrax, takedown
Companies: legend, rifftrax, youtube

from the treating-the-symptoms dept

It’s been a decade or so since one of the silliest ways to combat the symptoms of a broken copyright system came to be: safe streaming settings in video games. Because of the way licensing works for the musical compositions in video games, and because some games include mainstream music à la Grand Theft Auto, special settings have to be put in the menus of these games to prevent copyrighted music from being played while game-streamers do their thing. The whole thing is quite silly, since there can’t possibly be anyone who believes that listening to a let’s play stream that happens to include copyrighted music replaces anyone’s impetus to buy that music elsewhere. It’s just part of the game that comes along for the ride.

But like I said, it’s been a decade of this, and apparently there are enough of us who are cool with the status quo, since EA just updated The Sims 4 with this same safe-to-stream setting. Notably, this is a game that came out ten years ago and is only now getting this setting.

If you go into the Music section of Game Options, you’ll now see a “Safe for Streaming” toggle that allows you to play only the music that’s safe for streaming.

TikToker @jeremy_gonewild explains the update thusly: “This blocks copyrighted music from playing on your Sims radio when they listen to the radio.”

But, Jeremy cautions, “This isn’t about build mode music, or the Create-A-Sim music, that stuff is fine. This is preventing a Simlish version of Last Friday Night by Katy Perry from coming on your Sims radio while you’re livestreaming, earning you a copyright strike.”

The game allows this kind of music to be played in game, but the moment it’s played as part of a let’s play stream it suddenly becomes a problem. And while that’s all technically true as a matter of current copyright law, what is most useful here for our purposes is to highlight how absurd this all is. If a streamer of this game happens to have some copyrighted music playing in the background… who gets harmed? Is a musician really losing a sale due to a video game stream? Are viewers of the stream going to refuse to listen to the song on some other streaming service?

Or is it actually more likely that some music will be discovered by a new generation via video game streams like this? Especially because these “simlish” versions of mainstream songs were introduced into the game with the express purpose of promoting the artists and giving them more visibility.

you’ll notice quite a difference between songs that are copyright approved. You’ll notice all of the in-game original Sims music—including those composed by Mark Mothersbaugh—are free to use in streaming mode.

While your sims won’t be having a Brat Summer during a live stream anytime soon, you’ll have no need to worry about your Sims live stream being rudely interrupted because of the man!

Cute, but ultimately not all that funny. It’s just too bad we have to navigate this sort of thing with workarounds rather than creating a more sane copyright system.

Filed Under: copyright, safe for streaming, the sims
Companies: ea, youtube

from the what-are-you-hiding? dept

Terms of Service are a reality we deal with all the time with digital goods and services. And by “deal with,” I mostly mean we don’t read them and simply agree to whatever they say, assuming there is nothing crazy in them. But that causes a lot of problems, with customers of these products suddenly having changes foisted upon them, or realizing that they can’t do with their purchase what they thought they could, all of it covered by the ToS they never read.

As a result, there are some folks out there who like to dive into the ToS for targeted industries. YouTuber Miss Krystle’s channel, Top Music Attorney, is one example of this. She is an attorney and musical artist who dives into Terms of Service within the music industry, analyzing them and pulling out anything that would be of interest or concern to a customer of the product or service. For instance, she covered the ToS for Splice, which provides a catalogue of royalty-free music samples for musicians to use.

“I have a series where I go through the terms of service for these music businesses, and I tell you guys what these contracts say that you’re being forced to sign in order to use these platforms,” she explains.

Krystle claims she was handed a cease and desist order from Splice’s legal department, to which she suggested jumping on a phone call to clarify some of the stipulations of the company’s ToS, saying she wanted to create a followup video for her audience’s clarity.

She says the call was productive and that Splice had agreed to update its ToS to iron out flagged inconsistencies, and that she left feeling a positive resolution was had by all.

So far, this whole thing reads as annoying but not terribly surprising. Splice’s legal team likely came across Miss Krystle’s video and, because ToS are somehow afforded copyright protection, sent out a C&D claiming the reproduction of those ToS in her YouTube video was infringing. Again, Miss Krystle is an attorney, so I imagine the call she had with Splice included explaining how this use is likely covered by fair use. After all, it’s not as though she were copying Splice’s ToS to use as her own, which is where copyright claims over Terms of Service tend to come from. In any case, she indicates the call ended on a positive note.

And then Splice issued a copyright strike against her channel.

It was only the next day that she discovered her Top Music Attorney YouTube channel had been issued a copyright infringement takedown notice at the request of Splice, resulting in a harmful copyright strike. If a YouTube user receives three copyright strikes in 90 days, their account and channel is permanently terminated.

And now Splice has a problem. For starters, the original video Splice complained about is still up on her channel. And because of the C&D, and even more so the copyright strike, a whole lot more attention is being paid to that original video, to Miss Krystle’s follow-up video telling the story of the C&D and the copyright strike, and to the issues with Splice’s ToS as a whole. You have to assume that Splice took these actions because it wanted to keep these videos’ critiques of its ToS from getting public attention. In true Streisand Effect fashion, it achieved the exact opposite.

Which brings me to the question that everyone should be asking: just what is in these Terms of Service that Splice is so terrified its customers and potential customers will see?

Fortunately, a lot more people can now get the answer to that question from Miss Krystle, all because Splice tried to silence a critic to keep those terms hidden.

Filed Under: copyfraud, copyright, miss krystle, terms of service
Companies: splice, youtube

Ctrl-Alt-Speech: Over To EU, Elon

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Domonique Rai-Varming, Senior Director, Trust & Safety at Trustpilot. Together they cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Filed Under: artificial intelligence, content moderation, deepfakes, elon musk, machine learning, syria
Companies: amazon, etsy, expedia, trustpilot, twitter, x, youtube

Ninth Circuit Dumps Lawsuit Against YouTube Brought By Anti-Vaxxer Whose Account Was Terminated

from the YouTube-isn't-obligated-to-hold-your-microphone,-dumbass dept

These lawsuits don’t work. They just don’t. And yet, they’re filed seemingly all the time.

When YouTube decides as a private company that it would rather you take your stupid shit elsewhere, it’s allowed to do so. Its terms and conditions contain a phrase found in pretty much every ToS: “or for any other reason.” That means that even if you — say, anti-vax loudmouth [squints skeptically at court filings] Dr. Joseph Mercola — can’t find any explicit language in the terms and conditions that applies to your content, YouTube can still kick your account to the curb.

Then there’s Section 230 of the CDA, which immunizes service providers against lawsuits like these that try to claim there’s something legally wrong with the way services moderate content. This case never reaches that point in the legal discussion, but if it had, Section 230 would have ended the lawsuit as well.

But, as this short opinion [PDF] from the Ninth Circuit Appeals Court points out, YouTube actually had enumerated a rationale for this moderation decision. Just because Mercola didn’t agree with it doesn’t mean he has a legitimate cause of action. (h/t Eric Goldman)

The district court had it right when it made the first call, as recounted by the Ninth Circuit en route to its affirmation:

Mercola alleges that the Agreement’s Modification Clause required that YouTube provide it “reasonable advance notice” before YouTube terminated its account for allegedly violating YouTube’s Community Guidelines. The district court held that the Modification Clause did not override other provisions in the Agreement that allow YouTube to immediately take down content considered harmful to its users and that the Agreement did not give Mercola any right of access to the contents of a terminated account. It also found that the Agreement’s Limitations on Liability provision foreclosed relief.

Mercola thought the “reasonable advance notice” would net him a win in court since he apparently hadn’t received this “advance notice.” But, as the Ninth Circuit points out, other clauses in the same contract allowed YouTube to do what it did without doing anything remotely legally actionable.

However, the Agreement’s “Removal of Content” section states that if YouTube “reasonably believe[s]” that any content “may cause harm to YouTube, our users, or third parties,” it “may remove or take down that Content in our discretion,” and “will notify you with the reason for our action” unless doing so would breach the law, compromise an investigation, or cause harm. Also, the Agreement’s “Termination and Suspensions by YouTube for Cause” section states that YouTube may “suspend or terminate” an account if “you materially or repeatedly breach this Agreement” or if “we believe there has been conduct that creates (or could create) liability or harm to any user, other third party, YouTube or our Affiliates.”

In YouTube’s view, spreading anti-vax horseshit might “cause harm” to other users, in which case it was free to remove the content (and the account spreading it) without notice. If the court were to buy Mercola’s argument about the terms and conditions, YouTube would be prevented from protecting other users until it had “notified” the user creating the perceived harm, meaning the platform would be doing more to protect harmful users than to protect other users from harm. That would be even more problematic, which is why the language in the terms and conditions is either/or, rather than something more restrictive.

The Ninth Circuit points this out specifically, if somewhat problematically:

[T]o construe the Modification Clause to prohibit the immediate termination of an account that causes harm to others would be contrary to protecting the public. In September 2021, when YouTube terminated Mercola’s account, it was reasonable (even if incorrect) to consider “anti vaccine” postings to be harmful to the public.

That’s kind of weird. Hopefully, that’s not a Ninth Circuit judge going off script to express a personal opinion about vaccinations. I mean, this isn’t the Supreme Court. This is one of the more-respected lower courts.

As Eric Goldman points out in his post on the lawsuit, this unfortunate wording may be just that: unfortunate.

This is a non-precedential memo opinion, so I’m going to assume that the reference to “incorrect” was sloppily phrased and instead intended as a hypothetical (i.e., even if YouTube was incorrect)–and not as a declaration that it’s official Ninth Circuit policy that anti-vax postings do not harm the public (of course they do).

And that ends this idiotic lawsuit. The dismissal is affirmed, and the Ninth Circuit refuses to grant Mercola a chance to amend the lawsuit to pursue a heretofore unexamined legal theory. Mercola is completely out of luck: the lower court has already ruled his lawsuit is “barred as a matter of law,” so no amount of rewriting can save it.

This should never have been filed. Mercola is still free to spread misinformation about vaccines. He’ll just have to do it elsewhere. YouTube isn’t contractually obligated to provide him a platform for his stupidity.

Filed Under: 9th circuit, content moderation, free speech, joseph mercola, lawsuit, section 230
Companies: alphabet, youtube

YouTuber Has Video Demonetized Over Washing Machine Chime

from the money-laundering dept

It should not be controversial to state that, as it stands today, YouTube’s ContentID platform for policing copyright on YouTube videos is hopelessly broken. The system is wide open to abuse from bad actors who might lay claim to content that simply isn’t theirs, sometimes to the tune of raking in millions of dollars. ContentID is also abused by some in law enforcement to prevent recordings of police from showing up on YouTube. And then of course there are all the times that ContentID simply flags content that it shouldn’t, such as the sound of a cat purring or plain white noise.

And so it isn’t much of a surprise that these issues keep popping up. YouTuber Albino took to social media to complain about how he received a copyright strike for a let’s play video because, well, a home appliance made a noise.

On May 27, 2024, Norwegian YouTuber ‘Albino’ revealed that one of his six-hour playthroughs of Fallout: New Vegas had been given a strike due to supposedly including the song ‘Done’ by music artist Aduego.

However, this track was never actually in Albino’s video. Instead, the audio that plays at that particular point in his playthrough was the jingle from a Samsung washing machine, which plays when a wash cycle is complete.

Sadly, it’s even dumber than that. Apparently this recording by this particular “artist” isn’t a song at all, but just an upload of that same washing machine jingle, which has been on YouTube for nearly a decade. So, some rando records his washing machine jingle, uploads it to YouTube, registers it with ContentID, and goes around demonetizing other YouTube videos where the jingle plays. And, because of how ContentID is policed (or not), none of this is caught by anyone at all.

Albino also pointed out the myriad of comments criticizing Aduego underneath his video, with one viewer writing: “Did you record the Samsung washer, then upload it to YouTube with a content ID?” At the time of writing, it appears that Adeugo’s video has been either privated or removed from YouTube.

“This is the most egregious example of the MANY outright fraudulent content ID claims I’ve gotten over the years,” he wrote. “Are you guys doing anything to prevent this? It’s completely out of hand.”

YouTube’s response was standard fare. It indicated that Albino could dispute the strike, after which Aduego would have 30 days to respond. This, of course, would open Albino’s channel to the risk of being bounced off the platform completely. Whatever this is, it is obviously not good and sound enforcement of valid copyrights.

But we’ve had a million of these posts on the site over the years, and it doesn’t seem to be getting any better. At some point, YouTube is going to have to come to terms with the fact that its ContentID system is broken and come up with something better. If all of this can occur because of a washing machine, after all, there’s no hope for far more nuanced copyright claims and issues.

Filed Under: abuse, contentid, copyright, fraud, streaming, washing machine
Companies: samsung, youtube

Ctrl-Alt-Speech: Do You Really Want The Government In Your DMs?

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Filed Under: content moderation, deepfakes, digital services act, eu, india, ofcom, section 230, singapore
Companies: facebook, instagram, meta, tiktok, youtube

When You Need To Post A Lengthy Legal Disclaimer With Your Parody Song, You Know Copyright Is Broken

In a world where copyright law has run amok, even creating a silly parody song now requires a massive legal disclaimer to avoid getting sued. That’s the absurd reality we live in, as highlighted by the brilliant musical parody project “There I Ruined It.”

Musician Dustin Ballard creates hilarious videos, some of which reimagine popular songs in the style of wildly different artists, like Simon & Garfunkel singing “Baby Got Back” or the Beach Boys covering Jay-Z’s “99 Problems.” He appears to create the music himself, including singing the vocals, but uses an AI tool to adjust the vocal styles to match the artist he’s trying to parody. The results are comedic gold. However, Ballard felt the need to plaster his latest video with paragraphs of dense legalese just to avoid frivolous copyright strikes.

When our intellectual property system is so broken that it stifles obvious works of parody and creative expression, something has gone very wrong. Comedy and commentary are core parts of free speech, but overzealous copyright law is allowing corporations to censor first and ask questions later. And that’s no laughing matter.

If you haven’t yet watched the video above (and I promise you, it is totally worth it to watch), the last 15 seconds involve this long scrolling copyright disclaimer. It is apparently targeted at the likely mythical YouTube employee who might read it in assessing whether or not the song is protected speech under fair use.


And here’s a transcript:

The preceding was a work of parody which comments on the perceived misogynistic lyrical similarities between artists of two different eras: the Beach Boys and Jay-Z (Shawn Corey Carter). In the United States, parody is protected by the First Amendment under the Fair Use exception, which is governed by the factors enumerated in section 107 of the Copyright Act. This doctrine provides an affirmative defense for unauthorized uses that would otherwise amount to copyright infringement. Parody aside, copyrights generally expire 95 years after publication, so if you are reading this in the 22nd century, please disregard.

Anyhoo, in the unlikely event that an actual YouTube employee sees this, I’d be happy to sit down over coffee and talk about parody law. In Campbell v. Acuff-Rose Music, Inc., for example, the U.S. Supreme Court allowed 2 Live Crew to borrow from Roy Orbison’s “Pretty Woman” on grounds of parody. I would have loved to be a fly on the wall when the justices reviewed those filthy lyrics! All this to say, please spare me the trouble of attempting to dispute yet another frivolous copyright claim from my old pals at Universal Music Group, who continue to collect the majority of this channel’s revenue. You’re ruining parody for everyone.

In 2024, you shouldn’t need to have a law degree to post a humorous parody song.

But, that is the way of the world today. The combination of the DMCA’s “take this down or else” regime and YouTube’s willingness to cater to big entertainment companies through the way ContentID works allows bogus copyright claims to have a real impact in all sorts of awful ways.

We’ve said it before: copyright remains the one tool that allows for the censorship of content, but it’s supposed to be applied only to situations of actual infringement. But because Congress and the courts have decided that copyright sits in some sort of weird First Amendment-free zone, it allows for the removal of content before there is any adjudication of whether or not the content is actually infringing.

And that has been a real loss to culture. There’s a reason we have fair use. There’s a reason we allow people to create parodies. It’s because it adds to and improves our cultural heritage. The video above (assuming it’s still available) is an astoundingly wonderful cultural artifact. But it’s one that is greatly at risk due to abusive copyright claims.

Let’s also take this one step further. Tennessee just recently passed a new law, the ELVIS Act (Ensuring Likeness Voice and Image Security Act). This law expands the already problematic space of publicity rights based on a nonsense moral panic about AI and deepfakes. Because there’s an irrational (and mostly silly) fear of people taking the voice and likeness of musicians, this law broadly outlaws that.

While the ELVIS Act has an exemption for works deemed to be “fair use,” as with the rest of the discussion above, copyright law today seems to (incorrectly, in my opinion) take a “guilty until proven innocent” approach to copyright and fair use. That is, everything is set up to assume it’s infringing unless you can convince a court that it’s fair use, and that leads to all sorts of censorship.

So even if I think the video above is obviously fair use, if the Beach Boys decided to try to make use of the ELVIS Act to go after “There I Ruined It,” would it actually even be worth it for Ballard to defend the case? Most likely not.

And thus, another important avenue and marker of culture gets shut down. All in the name of what? Some weird, overly censorial belief in “control” over cultural works that are supposed to be spread far and wide, because that’s how culture becomes culture.

I hope that Ballard is able to continue making these lovely parodies and that they are able to be shared freely and widely. But just the fact that he felt it necessary to add that long disclaimer at the end really highlights just how stupid copyright has become and how much it is limiting and distorting culture.

You shouldn’t need a legal disclaimer just to create culture.

Filed Under: contentid, copyright, dmca, dustin ballard, fair use, parody, there i ruined it
Companies: youtube