dmca – Techdirt

Take-Two DMCAs Video Of GTA5 Mod For GTA6 Map Content

from the mod-warfare dept

Rockstar Games and its parent company, Take-Two Interactive, have been telling us who they are for years. And who they are, for our purposes, amounts to a game developer that both absolutely hates any leaked information about its games and one that has been perfectly willing to go to war with its own modding community. After suffering an intrusion by bad actors in 2022, a bunch of information and footage from the in-development Grand Theft Auto 6 leaked onto the internet. That leak has been bookended by Rockstar and Take-Two engaging in all kinds of DMCA takedowns for game mods and even saved game files for Grand Theft Auto titles and other franchises.

What do these two topics have in common? Well, they came together recently when Take-Two issued a takedown notice on YouTube videos in which one modder shows off his custom map, which seeks to import into GTA5 as much of the GTA6 map as could be derived from the leaks.

Modder ‘Dark Space’ had created a free-to-download GTA 5 map using leaked coordinate data and official trailer shots of GTA 6. He also uploaded gameplay footage of the mod to his YouTube channel. In January, the mod gained widespread attention as GTA fans, eager for a glimpse of the upcoming game, explored this fan-made recreation ahead of GTA 6’s official launch.

However, Dark Space confirmed that he recently received a takedown notice from YouTube.

We, and Dark Space, can but speculate as to the motivation behind the takedown. Perhaps Take-Two considers the details in the map to be spoilers of sorts, though it would seem the widely available leaked information about the forthcoming game and the trailers did the spoiling first. Perhaps it considers the map construction to be proprietary information, covered by copyright, and acted upon it. Or perhaps it’s simply a matter of lawyers lawyering.

But what isn’t up for debate is that the modding community continues to feel slighted by the company, even though any sane reading of the effects of these mods is that they benefit Take-Two.

He also criticised Take-Two’s handling of modders, stating, “When will these companies learn to stop attacking their own feet? It’s thanks to the community of players and modders that these companies can stand.”

He claimed that the company has a history of hiring private investigators and taking legal action against modders, rather than supporting their work. He pointed to past instances where Take-Two had sent private investigators to modders’ homes, filed lawsuits, and banned creators from making GTA-related content.

As an example, he cited the original GTA Trilogy on PC, where mods were essential for fixing game-breaking bugs and making the titles playable. He argued that instead of appreciating these contributions, Take-Two cracked down on modders instead of acknowledging their efforts.

It should go without saying that importing the game map, as best as can be recreated by a modder, into an older game does not replace the new game. In fact, the attention this mod and those like it have received is a symptom of the public’s thirst for the new game. The company could have used all of this as a free marketing tool for GTA6, if it wanted to. It’s not even that hard.

“Folks, go look at the map if you want. We know you’re hungry to play this game and we’re equally hungry for you to do so. This doesn’t reflect the entirety of the new map, the new game, or the experience you’ll have playing it, but whet your appetites because GTA6 is going to be great.”

It would have been that easy. But instead, the company has decided to once more slight the modding community that helps drive ongoing interest in Take-Two’s new and existing titles. Attacking, as Dark Space put it, their own feet.

Filed Under: copyright, dmca, gta, gta 5, gta 6, mods, video games
Companies: rockstar games, take 2 interactive

YouTube Apparently Unsure If Shakespeare Is In The Public Domain

from the to-be-or-not-to-be-in-the-public-domain dept

One of the darker threads of Walled Culture the book (free digital versions available) is how complex copyright enforcement systems can be abused, for example by sending Digital Millennium Copyright Act (DMCA) takedown requests for material that is perfectly legal. A recent post on the Public Citizen blog offers an extreme example of this blight. Here’s the summary of what happened:

When Julien Coallier sent a series of DMCA takedown requests contending that various print publications of Shakespeare’s plays, and YouTube videos of performances of those plays, infringed his purported copyright in those works, it should have been treated as a bad joke. After all, Shakespeare’s plays were published more than 400 years ago, and it is hard to imagine the[m] as being anything but public domain. Yet not only did YouTube take the demands seriously, it blew off those takedown targets who filed counter-notifications and who asserted their right to publish plainly public domain material.

There are several issues here. One concerns the cavalier manner in which YouTube dealt with this situation – sadly, by no means an isolated incident. As the Public Citizen post explains, one of the video takedown victims was John Underwood, who had posted on YouTube videos of Shakespeare performances by a local non-profit group called Shakespeare by the Sea. When he received notice that two of his videos had been removed because of a takedown notice sent by Coallier, Underwood followed the DMCA rules, and sent a counter-notice. He not unnaturally assumed that would resolve such a clear-cut case, not least because Shakespeare by the Sea assured him that it had not relied on Coallier’s claimed version of the Shakespeare plays for its performances. But YouTube ignored the official DMCA procedures and refused to acknowledge Underwood’s counter-notice, or even forward it to Coallier. This was not a one-off: other targets of Coallier’s takedowns had also had their counter-notices ignored by YouTube. So Underwood contacted Coallier directly:

In multiple emails, Coallier declined to explain why he thought Underwood’s videos copied Coallier’s “translations” of Shakespeare’s plays, despite being asked repeatedly. Instead, Coallier told them that Shakespeare is not in the public domain because he had been able to register a copyright in so-called English-language “translations” of every one of Shakespeare’s plays. Coallier also claimed that he can charge a five percent royalty on every performance.

This brings us to the second issue: how could the US Copyright Office grant Coallier’s copyright registration? The author of the Public Citizen post, Paul Levy, went to the trouble of obtaining copies of the copyright registration, and found that only two of Coallier’s “translations” of Shakespeare’s plays had been submitted:

Apparently, it was on the strength of these two “translations” that the Copyright Office granted a registration of Coallier’s copyright in three dozen “translated” plays – tragedies, comedies and histories – without receiving copies of any of the other works in which the Copyright Office was potentially granting a monopoly.

As to what Coallier’s translation amounted to, Levy sent a copy of Coallier’s work to a Shakespeare expert, Jan Powell:

It was Powell’s opinion that the translation was such a mess that no reputable Shakespeare company would perform a script based on Coallier’s work. In addition to the fact that Coallier’s scripts did away with the iambic pentameter that is the glory of Shakespeare’s plays, she found his “translation” to be a garbled mess.

Following the intervention of Public Citizen, YouTube suddenly started to respond. It accepted Underwood’s counternotice and forwarded it to Coallier, who did not sue Underwood for alleged infringement, as he could have done. Not content with seeing off this abuse of the DMCA takedown system, Public Citizen is going further:

This week, in concert with the Juelsgaard Intellectual Property and Innovation Clinic at Stanford Law School, we have sued Coallier seeking a declaratory judgment of non-infringement, and seeking relief for a DMCA wrongful takedown. Corey Donaldson of the Los Angeles area firm of Ferguson Case Orr Paterson is co-counsel in the case. In addition to securing relief for Underwood, we hope to spur the district court to invoke 17 U.S.C. § 411(b) to suggest to the Copyright Office that it reconsider its registration of Coallier’s copyright.

That’s good news, but it is utterly absurd that so much effort was required to deal with a situation that should never have arisen. The copyright in these “translations” of Shakespeare should never have been granted, not least because only two of the plays were submitted, and yet registration was granted for all the rest of them sight unseen. And YouTube should have followed the rules of the DMCA, which is in any case already strongly biased in favor of those alleging copyright infringement. As Levy concludes:

We also hope that YouTube will consider whether DMCA takedown notices should have to pass the laugh test before they are effected, and consider also how it responds to DMCA counter-notifications. Although I am grateful to the YouTube lawyers who responded so promptly to my inquiries, the system is not working as it should. Many YouTube content creators are hobbyists and amateurs, and do not have the same ability to reach a YouTube lawyer. Abuse of the DMCA for cheap censorship by bad actors who would never file a copyright lawsuit over their claims has long been noted (for example, this post from EFF, which sent Underwood to me for help). It should not take a request from a lawyer to get YouTube to follow the DMCA and take counternotices seriously.

This extraordinary saga of takedown notices for performances of Shakespeare shows that 27 years after it was passed, the DMCA is still not fit for purpose. The companies like Google that are tasked with implementing it often do so in the most desultory way. There is an underlying assumption that claimed infringements are valid, an injustice compounded by an arrogant indifference to the rights of ordinary citizens who find themselves caught up in a complex copyright system that is stacked against them.

Follow me @glynmoody on Mastodon and on Bluesky. Originally posted to Walled Culture.

Filed Under: copyright, counter notice, dmca, dmca takedowns, john underwood, julien coallier, public domain, shakespeare, us copyright office
Companies: youtube

Sony, MarkScan Go On ‘Bloodborne’ DMCA Blitz Of Fun, Fan-Made Creations

from the sony-hates-fun dept

It’s always interesting to me to see companies identify what they see as a threat by looking at what type of content they attempt to DMCA or otherwise disappear. When actual direct and flagrant wholesale copying of a digital product occurs, you can understand why the takedowns are issued. We might still want to argue that there are business model methods for making that less of an issue, but the logic is there. When companies instead look to silence criticism via DMCA takedowns, well, that’s more interesting. Why is the criticism seen as such a threat? The same goes for DMCAing hobbyists that stream their hobby. And when they go after fan-made creations that serve to celebrate the original IP of the company, why in the world is that seen as a threat?

On that last category, none of this is to suggest that companies aren’t within their rights to block fans from expressing their fandom using the company’s IP. But they also don’t have to. Just like Sony didn’t have to go on an absolute DMCA blitz over all kinds of Bloodborne content to ring in the new year. Sony is utilizing MarkScan as its enforcers for this. MarkScan has made it onto our pages before for both being a pain in the ass to reach for those targeted by the company and for making absolutely bullshit copyright claims against others.

Some are speculating that all of this action is occurring because a remake of Bloodborne might be on the way. Whether that is true or not I don’t know, but it doesn’t make some of these takedowns make any more sense. Why did Sony have to issue a takedown for a mod for the original game that made it run at 60 frames per second? Why did Sony have to force a name change for a game that essentially mashed up Bloodborne into kart-racing and “demaked” it into PlayStation 1 graphics? And, finally, even if the 2015 PS4 game is about to get an updated remake, why did it have to shut down a project to “demake” the original game, also in PS1-style graphics? Especially when that thing was released years ago?

Will Sony ever release a remake or remaster of Bloodborne, the Dark Souls successor that became one of our favorite games of 2015? Even Sony’s former games chief isn’t sure — but that isn’t stopping Sony’s copyright enforcers from shutting down fun in the meanwhile. Last week, it axed the 60fps mod that let the game finally run smoothly, and now it’s killed the fan-made “Bloodborne PSX” demake that reimagined Bloodborne as a blocky game for the original 1995 PlayStation.

It’s been over three years since Lilith “b0tster” Walther released her homage to early PlayStation games, and it’s only now that Sony is taking it down — or rather MarkScan, the enforcer that slapped the game with a copyright takedown notice that has now taken over its itch.io page.

As The Verge notes, Walther is also the maker of the kart-style game referenced above. She willingly changed the name of that game to Nightmare Kart when Sony reached out to her directly. In this case, there was no direct reach-out. Instead, MarkScan simply issued a takedown of the project page. MarkScan has also managed to take down some of Walther’s YouTube videos of the demake as well.

Is Sony within its rights here? Sure, of course. Are there roughly a zillion other, better ways it could handle this sort of thing? No doubt. And does MarkScan deserve one iota of trust that it will get this sort of thing right? Not based on the enforcer’s success rate, I’d say.

MarkScan submits millions upon millions of URL takedown requests on behalf of Sony, Amazon, Netflix, Crunchyroll, Novi Digital Entertainment and more, according to a Google transparency report; Google winds up removing around 47 percent of them.

Less than half of the takedown requests result in an actual takedown. Not great. And precisely how much time, energy, and effort is absolutely wasted on the other 53 percent?

The answer is both unknowable and likely a monstrous figure. But when there are no real penalties to the copyright carpetbomb routine, why should groups like MarkScan give a damn?

And why can’t companies like Sony figure out how to let fans be fans and make cool stuff?

Filed Under: bloodborne, copyright, dmca, fan made games, hobbyists, takedowns, video games
Companies: markscan, sony

It’s Doubtful United Healthcare Is Abusing The DMCA To Takedown Luigi Mangione Apparel, But Someone Is

from the not-how-any-of-this-works dept

I had seen this story before Christmas making the rounds on Bluesky, claiming that United Healthcare had sent DMCA takedowns to Teepublic to remove artist Rachel Kenaston’s illustration of Luigi Mangione, the guy arrested for shooting and killing United Healthcare CEO Brian Thompson.

While it does appear that Teepublic did, in fact, remove the image and claims to have received a DMCA notice from United Healthcare, I find it extremely unlikely that UHC actually sent a DMCA notice, given that they have no legitimate copyright claim over the image and enough lawyers who would know that. But we’ll get to that.

This case highlights a fundamental problem with the DMCA — it enables censorship by creating a system (backed by law) which allows anyone to demand content be removed from the internet with no real due process, putting heavy legal (governmental) pressure on companies to comply even if the claims are dubious. This arguably violates First Amendment rights by allowing the government to silence lawful speech.

While we can’t say definitively that UHC is abusing copyright law here, the fact that someone is able to do so in this case demonstrates the need to view copyright and the DMCA’s notice-and-takedown procedure in particular as a problematic tool for censorship.

404 Media got its hands on the actual email Kenaston received from Teepublic:

For reference, here is the removed image that Kenaston made:

It’s clearly an illustration based on the photo the NYPD released when attempting to identify Mangione, of him apparently smiling for someone working at the hostel he stayed at.

So, first off, obviously, the underlying image that the NYPD released would have extremely limited copyright protections which, if anything, would be assigned to the operator of the surveillance camera that took it. Kenaston’s illustration might also receive some fairly limited copyright protections for her artistic input.

But, obviously, none of that means that UHC would have any copyright interest at all. I doubt that UHC would have actually filed anything here, even if they don’t like the fact that a very large group of people appear to be supportive of Mangione. UHC have enough lawyers who understand IP law to know that this would be a totally bogus request. Of course there are many cases of companies sending such bogus requests, but those typically involve media operations or other IP-based companies, where unrelated content gets swept up by indiscriminate waves of takedowns (often through a third-party brand monitoring service). It seems similarly unlikely that UHC operates that kind of large DMCA takedown regime.

Also, TeePublic is misrepresenting the DMCA when it says it has no say in what stays on the site, or that it is “required” to remove the content. That’s simply false. The law does not require it, though it does create strong incentives for removal, by offering up a liability safe harbor for those that do remove. But companies are free to reject takedown notices if they don’t believe they are legit. It’s just that they might have to later defend that decision in court.

For what it’s worth, Teepublic is owned by RedBubble, and RedBubble has been taken to court many times over bogus claims of infringement. Indeed, I was an expert witness for them in past cases, so I know that the company has lawyers on staff who know full well that they can push back against bogus takedown claims. But also, I recognize that having fought out some expensive cases in court, they may take a much more “just pull it down so we don’t have to pay more lawyers” approach.

Going through the Lumen Database for takedowns using the Luigi Mangione name, I see that there are a bunch. Though, many of them seem to be people who made other stylized designs of Mangione and are mad that others have put them on t-shirts and hoodies. I question how many of the senders have significant copyright claims in designs like the following:

As we’ve been pointing out for decades, copyright is one of the very few tools in the toolbox that allows anyone to legally demand content be removed from the internet, and companies feel strongly compelled to do so.

Whether or not UHC is actually abusing copyright law this way, it’s clear that someone out there is, and that’s a very problematic feature of copyright law. The assumption that anything listed in a takedown notice is infringing, and the corresponding heavy-handed pressure to remove the content or face huge potential penalties, again reminds us why the DMCA is very questionable on First Amendment grounds.

The fact that someone is abusing it in this particular case is just a reminder of that, even if it’s not actually UHC doing the abusing.

Filed Under: 1st amendment, censorship, copyfraud, copyright, dmca, luigi mangione, notice and takedown, rachel kenaston
Companies: teepublic, united healthcare

Take It Down Act Has Best Of Intentions, Worst Of Mechanisms

from the not-the-way-to-fix-this dept

You may have heard that the US government has a bit of a mess on its hands after House Speaker Mike Johnson worked out a somewhat carefully crafted compromise continuing resolution funding plan to keep the government open, only to have it collapse after Elon Musk screamed about how bad it was and how anyone who voted for it should be voted out of office.

Lots of very, very stupid people came up with lots of very, very stupid reasons for why the continuing resolution should have been rejected, and I imagine most of them don’t realize what it is they’re actually asking for, or how much harm it does when the government is shut down. Elon Musk isn’t going to suffer if the government is shut down, but lots of others will.

That said, I actually appreciate the underlying message that this is a stupid way to run the government, where Congress has to keep playing chicken with the debt ceiling for a budget that has already been approved, so that blowhards and know-nothings can fight over random shit just to keep the basics of the government functioning properly.

Amidst the recent political wrangling over the continuing resolution to keep the government funded, a controversial bill called the TAKE IT DOWN Act was quietly inserted into the continuing resolution at the last minute (earlier in the week I had been told it wouldn’t be included). Sponsored by Senators Ted Cruz and Amy Klobuchar, the bill aims to make it easier for victims of non-consensual intimate imagery (including such imagery generated by online AI tools) to get that content removed from online platforms. While well-intentioned, the bill as currently written raises significant concerns about potential abuse and infringement on free speech.

To be clear, the bill is trying to do something good: enabling people to get non-consensual intimate imagery taken down more easily, with a specific focus on recognizable computer-generated imagery, rather than just actual photographs. But there are significant problems with the methodology here. Even if we agree that the sharing of such imagery is a real problem and should be, at the very least, socially unacceptable, any time you are creating a system to enable content to be taken down under legal threat, you also have to recognize that such a system will inevitably be abused.

And the authors and supporters of TAKE IT DOWN seem to have completely ignored that possibility. It creates a system where someone who claims to be a victim of such sharing can send a notice that effectively requires a website to remove the targeted information.

Upon receiving a valid removal request from an identifiable individual (or an authorized person acting on behalf of such individual) using the process described in paragraph (1)(A)(ii), a covered platform shall, as soon as possible, but not later than 48 hours after receiving such request—

(A) remove the intimate visual depiction; and

(B) make reasonable efforts to identify and remove any known identical copies of such depiction.

The law applies to any online or mobile service that “provides a forum for user-generated content, including messages, videos, images, games, and audio files.” This means the law would impact not just big social media companies, but also small forums, hobby sites, and any other online community where users can share content.

Those forums would then be required to remove any content if they receive a “valid removal request” within 48 hours while also making “reasonable efforts to identify and remove any known identical copies of such depiction.” What exactly constitutes “reasonable efforts” is left vague, but it’s not hard to imagine this meaning platforms would have to implement costly and error-prone automated content matching systems. For small sites run by volunteers, that’s simply not feasible.

But nothing in the law contemplates false notices. And that’s a huge problem. The only current law in the US that has a similar notice and takedown scheme is the DMCA, and, as we’ve been describing for years, the DMCA’s notice-and-takedown provision is widely and repeatedly abused by people who want to take down perfectly legitimate content.

There have been organized attempts to flood systems with tens of thousands of bogus DMCA notices. A huge 2016 study found that the system is so frequently abused to remove non-infringing works as to question the validity of the entire notice-and-takedown procedure. And that’s the DMCA which in theory has a clause that is supposed to punish fraudulent takedown notices (even if that’s rarely effective).

Here, the law doesn’t even contemplate such a system. Instead, it just assumes all notices will be valid.

On top of that, requiring covered platforms to “identify and remove any known identical copies” suggests that basically every website will have to purchase potentially expensive proactive scanning software that can match images, whether through hashes or otherwise. Yes, Meta and Google can do that kind of thing (and already do!). But the person who runs a local book club forum or a citywide gardening forum isn’t going to be able to do that kind of thing.

The folks at the Center for Democracy and Technology (CDT) recently wrote up an analysis of the law that calls out these problems:

The TAKE IT DOWN Act requires covered platforms, as soon as possible but not later than 48 hours after receiving a valid request, to remove reported NDII and to make reasonable efforts to identify and remove any known identical copies of such depictions. Doing so at scale, and in that timeframe, would require the widespread use of automated content detection techniques such as hash matching. Hashes are “digital fingerprints” that can be used by platforms to detect known images across their services once the image has been distributed and assists in removal of the identified content if it violates the platform’s use policy or the law. Many platforms already use hash matching for known NDII, child sexual abuse material (CSAM), and terrorist and violent extremist content, though none of these processes is currently required by U.S. law. While TAKE IT DOWN does not expressly mandate the use of hash matching, since services already commonly use the technology to identify known-violating content, it would likely be understood to be a “reasonable effort to identify and remove” known NDII under the bill.
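To make the hash-matching idea CDT describes concrete, here is a minimal Python sketch of the exact-match version: compute a “digital fingerprint” of known-violating content once, then check new uploads against the registry. All names here are hypothetical illustrations, not any platform’s actual API, and real systems (PhotoDNA, Meta’s PDQ) use perceptual hashes rather than the exact cryptographic hash shown here, precisely because an exact hash breaks if even one byte changes.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact-match 'digital fingerprint': SHA-256 of the raw bytes."""
    return hashlib.sha256(data).hexdigest()

class HashMatcher:
    """Toy registry of fingerprints for known reported content."""

    def __init__(self) -> None:
        self._known: set[str] = set()

    def register(self, data: bytes) -> None:
        # Store only the hash, never the reported content itself.
        self._known.add(fingerprint(data))

    def is_known(self, data: bytes) -> bool:
        # An upload matches only if it is a byte-for-byte identical copy.
        return fingerprint(data) in self._known

matcher = HashMatcher()
matcher.register(b"reported-image-bytes")

print(matcher.is_known(b"reported-image-bytes"))   # identical copy -> True
print(matcher.is_known(b"reported-image-bytes!"))  # any change at all -> False
```

The second check illustrates the limitation: re-encoding, cropping, or a one-pixel edit defeats an exact hash entirely, which is why production systems rely on perceptual hashing that tolerates such changes — and why building or licensing that tooling is far beyond a volunteer-run forum.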

As currently drafted, however, the TAKE IT DOWN Act raises complex questions implicating the First Amendment that must be addressed before final passage. As a general matter, a government mandate for a platform to take down constitutionally protected speech after receiving notice would be subject to close First Amendment scrutiny. The question is whether a narrowly drawn mandate focused on NDII with appropriate protections could pass muster. Although some NDII falls within a category of speech outside of First Amendment protection such as obscenity or defamation, at least some NDII that would be subject to the Act’s takedown provisions, even though unquestionably harmful, is likely protected by the First Amendment. For example, unlike the proposed Act’s criminal provisions, the takedown provision would apply to NDII even when it was a matter of public concern. Moreover, the takedown obligation would apply to all reported content upon receipt of notice, before any court has adjudicated whether the reported image constitutes NDII or violates federal law, let alone whether and how the First Amendment may apply. Legally requiring such take-down without a court order implicates the First Amendment.

As CDT notes, at least adding some “guardrails” against abuse of the takedown process could help deal with the First Amendment problems of the bill:

To increase the chance of surviving constitutional scrutiny, the takedown provisions in the TAKE IT DOWN Act should be more narrowly tailored and include more guardrails. The Act currently does not include many of the DMCA’s guardrails intended to prevent abusive or malicious takedown requests. Even with those guardrails, complainants routinely abuse the DMCA takedown process, leading to the censorship of constitutionally-protected information and criticism. Under current processes, for example, complainants have successfully used the DMCA to take down negative video game reviews, silence parody, and shut down civil society YouTube accounts. The TAKE IT DOWN Act risks repeating this abuse by not expressly exempting commercial pornographic content from the takedown mechanism, only excluding matters of public concern from its criminal prohibitions (but not the takedown mechanism), and not including other protections, such as requiring complainants to attest under penalty of perjury that they are authorized to file a notice on a person’s behalf and other appropriate safeguards. While an NDII takedown mechanism should minimize burden on victims, such steps will mitigate the risks of abuse and the removal of content that cannot or should not be restricted from publication under the takedown mechanism.

The rise of AI-powered “nudify” apps and similar tools has understandably increased the urgency to address the creation and spread of non-consensual intimate imagery. But as concerning as that problem is, rushed and overly broad legislation like the TAKE IT DOWN Act risks causing its own harms. By failing to include robust safeguards against abuse, this bill would create a sprawling extrajudicial takedown system ripe for exploitation and suppression of legitimate speech.

Cramming such a consequential and constitutionally dubious measure into a must-pass spending bill is a disturbing way to legislate. If Congress truly wants to tackle this issue, it needs to slow down, consider the risks, and craft a narrower solution that doesn’t sacrifice crucial free speech protections in the name of expediency. Rushing to regulate away the problem, no matter how well-intentioned, will likely only create new problems, while simultaneously setting the extremely problematic general expectation that for any content Congress disapproves of, it can create laws that require removals.

That’s a dangerous road to start down, no matter how noble the initial cause may be.

Filed Under: 1st amendment, amy klobuchar, continuing resolution, dmca, free speech, non-consensual intimate imagery, notice and takedown, nudify, take it down act, ted cruz

Square Enix Appears To Be Using The DMCA Takedown Process To Silence Criticism

We’ve talked plenty about the ways that the DMCA process specifically is wide open for fraud and abuse. There are plenty of forms this sort of thing can take, of course, but one of the more troubling among them is the use of the DMCA process specifically to disappear critical commentary that a copyright holder doesn’t like. Typically you see this sort of thing activated in a way that at least straddles the line of what copyright law actually allows. For instance, you might have a copyright holder that is generally okay with some uses of their work online, but then turns to the DMCA process when those uses come along with criticism. A selective enforcement of copyright law based on undesired commentary, in other words.

But all of that nuance in approach appears to have gone out the window when it comes to how Square Enix is using the DMCA process to take down commentary on a Reddit thread dedicated to the Life is Strange franchise.

The subreddit r/pricefield is a community dedicated to the Life Is Strange franchise, primarily shipping the lead character of Max with her love interest Chloe from the first game. It has 12,000 members, many of whom have engaged in discussions critical of the decision to omit Chloe from Double Exposure. The subreddit is an independent space for Life is Strange fans to talk fan theories, share fan art and discuss the games, and is wholly unaffiliated from Square Enix. These fans noticed that more and more of their posts and comments were being taken down by Reddit, after a third party began issuing copyright violations.

In response, and fearing that they risked further penalties from Reddit, moderators of the subreddit published an open letter to Square Enix, stating that they believe someone at the company, or developer Deck Nine, was ‘wilfully removing non-infringing content for reasons other than infringement’. They believe this is happening because Square Enix is unhappy with not having editorial control over the page.

The open letter is long and detailed and worth a read on its own. However, some of the very specific allegations of how Square is abusing the DMCA process are detailed in links within it. The context here is that Square has also issued takedowns before and after the release of Life is Strange: Double Exposure for leaked and spoiler content. I still think that’s stupid, but that’s the sort of line-straddling to which I referred above.

But if you actually dig into what else the company is issuing takedowns over, some of it appears to be purely publicly available information and the commentary around it.

I haven’t posted cause for my mental health and cause the main sub banned me over some small shit, but tell me how a certain third party submitted a takedown notice for me. I got one of these before for sharing leaks, which was valid, then I removed all my main posts talking about leaks.

But the topic they want me to take down is just a recap of dev comments????

This isn’t leaked content, this isn’t shit from DE, this is THEIR COMMENTS TO THE PLAYERBASE I can’t see the images now so I don’t remember what I shared for the dev comments, but since when were publicly made comments eligible for takedown notices???? I didn’t include a leak or shit from the actual game, did I????

Even if some small amount of game content was included in this post, it would have been used in support of the commentary wrapped around it. You know, fair use. And whatever is going on here, it isn’t happening in a vacuum. Square has a well-documented history of shutting down fan content, sometimes doing so in a nearly sadistic way, and it even has a history of taking punitive action based on reviews and commentary for its games that it doesn’t like. In other words, this sort of activity tracks with Square’s general behavior towards its fans.

Now, there are punishments for bad-faith DMCA takedowns, but they are rarely invoked. And it sure would be nice if platforms like Reddit both dug into whether content like this is actually infringing and provided the person having content taken down with a more detailed notice spelling out what was actually considered copyright infringement.

But what we know for sure is that Square Enix has such heartburn over critiques of its games that it is willing to use the DMCA takedown process to try to silence it. And whatever else that may be, it certainly is shitty.

Filed Under: content removals, copyright, criticism, dmca, suppressing speech
Companies: square enix

Jawboning In Plain Sight: The Unconstitutional Censorship Tolerated By The DMCA

from the about-time-we-noticed dept

For better or worse, jawboning has been a hot topic recently, and it’s unlikely that interest will fade any time soon. Jawboning, in broad strokes, is when the government pressures a third party to chill the speech of another instead of going after the speech directly. Because the First Amendment says that the government cannot go after speech directly, this approach can at first seem like the “one easy trick” that would let the government affect the speech it wants to affect while getting away with it constitutionally. But as the Supreme Court reminded us earlier this year in NRA v. Vullo, this sort of end-run around the First Amendment is not actually constitutional. Pressuring an intermediary to punish someone else’s speech is no better than trying to punish it directly.

True, not every accusation of “jawboning!” has been legitimate; Internet intermediaries are entitled to make their own decisions about what user expression to facilitate or remove. But when user expression gets removed, and the removal was not the volitional choice of the platform, then there are reasons to be concerned about the constitutionality of whatever legal pressure on the intermediary caused the removal.

Which is why there should be concern about Section 512 of the Digital Millennium Copyright Act and how it operates to force intermediaries to act against users and their speech, whether they would want to or not, and whether the targeted speech is wrongful or not. Because when resisting a takedown notice can cost them their safe harbor protection and potentially expose them to crippling liability, then the choice to acquiesce to the takedown demand is really no choice at all. Instead it’s jawboning: using law to force the third party to act against speech in order to avoid the constitutional protections the speech should have enjoyed.

This dynamic is what this white paper I’ve written with the support of the R Street Institute explores: how the DMCA, as currently written and interpreted, creates a jawboning problem for online speech. It looks at the 512(a) and (c) safe harbors in particular, and the role that takedown notices have in forcing the elimination of user expression and, in an increasing number of cases, users too, all without due process. It notes how the DMCA as currently drafted and interpreted allows and even encourages using the DMCA’s takedown notice system as a tool to censor, such as through the toothless way Section 512(f) has been construed and the expansive way the termination provision of 512(i) has been read.

Importantly, the paper does not suggest just trashing the DMCA, because statutory protection of Internet intermediaries is critically important. But it suggests that this protection should be more durable and reliable and not come at the expense of the very user speech statutory protection is necessary to foster. And it points out that the true culprit here may be copyright law itself and the extremely expansive doctrines of secondary liability that courts have taken upon themselves to write into the copyright statute. Because the problem with jawboning is the legal pressure it puts on an intermediary, and it is this undue legal pressure that makes intermediaries vulnerable to being co-opted to work against the very speech they exist (and we all need them to exist) to facilitate.

Of course, the question could fairly be asked, “Why now?” After all, the DMCA has been working its unconstitutional way for a quarter of a century, and we’ve been tolerating it. But tolerating the intolerable does not make it tolerable. That the DMCA has been doing its jawboning business all this time does not mean there is no exigent constitutional problem demanding attention. It just means it’s time to take notice and finally do something about it, especially while so much attention is being given to other ways the government is tempted to affect online speech with similar intermediary pressure.

Furthermore, the DMCA’s jawboning problem has gotten worse over time: while the law had issues as originally written, court cases that have followed, particularly with regard to 512(f) and (i), as well as secondary liability, have exacerbated the statute’s inherent flaws. Meanwhile, the Supreme Court’s decisions in Vullo, Moody, and Murthy have helped provide a contemporary framework for recognizing and responding to jawboning, and those decisions only came out this year. This paper now applies them to a problem that has long been brewing.

And, in any case, better late than never, especially as long as First Amendment rights remain threatened.

Filed Under: 1st amendment, censorship, copyright, dmca, dmca 512, free speech, jawboning, notice and takedown

from the that's-not-how-any-of-this-works dept

I get that a lot of people don’t like the big AI companies and how they scrape the web. But these copyright lawsuits being filed against them are absolute garbage. And you want that to be the case, because if it goes the other way, it will do real damage to the open web by further entrenching the largest companies. If you don’t like the AI companies, find another path, because copyright is not the answer.

So far, we’ve seen that these cases aren’t doing all that well, though many are still ongoing.

Last week, a judge tossed out one of the early ones against OpenAI, brought by Raw Story and Alternet.

Part of the problem is that these lawsuits assume, incorrectly, that these AI services really are, as some people falsely call them, “plagiarism machines.” The assumption is that they’re just copying everything and then handing out snippets of it.

But that’s not how it works. It is much more akin to reading all these works and then being able to make suggestions based on an understanding of how similar things kinda look, though from memory, not from having access to the originals.

Some of this case focused on whether or not OpenAI removed copyright management information (CMI) from the works that they were being trained on. This always felt like an extreme long shot, and the court finds Raw Story’s arguments wholly unconvincing in part because they don’t show any work that OpenAI distributed without their copyright management info.

For one thing, Plaintiffs are wrong that Section 1202 “grant[s] the copyright owner the sole prerogative to decide how future iterations of the work may differ from the version the owner published.” Other provisions of the Copyright Act afford such protections, see 17 U.S.C. § 106, but not Section 1202. Section 1202 protects copyright owners from specified interferences with the integrity of a work’s CMI. In other words, Defendants may, absent permission, reproduce or even create derivatives of Plaintiffs’ works, without incurring liability under Section 1202, as long as Defendants keep Plaintiffs’ CMI intact. Indeed, the legislative history of the DMCA indicates that the Act’s purpose was not to guard against property-based injury. Rather, it was to “ensure the integrity of the electronic marketplace by preventing fraud and misinformation,” and to bring the United States into compliance with its obligations to do so under the World Intellectual Property Organization (WIPO) Copyright Treaty, art. 12(1) (“Obligations concerning Rights Management Information”) and WIPO Performances and Phonograms Treaty….

Moreover, I am not convinced that the mere removal of identifying information from a copyrighted work-absent dissemination-has any historical or common-law analogue.

Then there’s the bigger point, which is that the judge, Colleen McMahon, has a better understanding of how ChatGPT works than the plaintiffs and notes that just because ChatGPT was trained on pretty much the entire internet, that doesn’t mean it’s going to infringe on Raw Story’s copyright:

Plaintiffs allege that ChatGPT has been trained on “a scrape of most of the internet,” Compl. ¶ 29, which includes massive amounts of information from innumerable sources on almost any given subject. Plaintiffs have nowhere alleged that the information in their articles is copyrighted, nor could they do so. When a user inputs a question into ChatGPT, ChatGPT synthesizes the relevant information in its repository into an answer. Given the quantity of information contained in the repository, the likelihood that ChatGPT would output plagiarized content from one of Plaintiffs’ articles seems remote.

Finally, the judge basically says, “Look, I get it, you’re upset that ChatGPT read your stuff, but you don’t have an actual legal claim here.”

Let us be clear about what is really at stake here. The alleged injury for which Plaintiffs truly seek redress is not the exclusion of CMI from Defendants’ training sets, but rather Defendants’ use of Plaintiffs’ articles to develop ChatGPT without compensation to Plaintiffs. See Compl. ¶ 57 (“The OpenAI Defendants have acknowledged that use of copyright-protected works to train ChatGPT requires a license to that content, and in some instances, have entered licensing agreements with large copyright owners … They are also in licensing talks with other copyright owners in the news industry, but have offered no compensation to Plaintiffs.”). Whether or not that type of injury satisfies the injury-in-fact requirement, it is not the type of harm that has been “elevated” by Section 1202(b)(i) of the DMCA. See Spokeo, 578 U.S. at 341 (Congress may “elevate to the status of legally cognizable injuries, de facto injuries that were previously inadequate in law.”). Whether there is another statute or legal theory that does elevate this type of harm remains to be seen. But that question is not before the Court today.

While the judge dismisses the case without prejudice and says they can try again, it would appear that she is skeptical they could do so with any reasonable chance of success:

In the event of dismissal Plaintiffs seek leave to file an amended complaint. I cannot ascertain whether amendment would be futile without seeing a proposed amended pleading. I am skeptical about Plaintiffs’ ability to allege a cognizable injury but, at least as to injunctive relief, I am prepared to consider an amended pleading.

I totally get why publishers are annoyed and why they keep suing. But copyright is the wrong tool for the job. Hopefully, more courts will make this clear and we can get past all of these lawsuits.

Filed Under: ai, cmi, copyright, dmca, generative ai, reading
Companies: alternet, openai, raw story

VGHF, Libraries Lose Again On DMCA Exemption Request To Preserve Old Video Games

from the mine-mine-mine dept

Another lobbyist win over common sense, it seems. Earlier this year, we discussed a group of video game preservationists, led by the Video Game History Foundation, seeking DMCA exemptions that would allow groups to curate, preserve, and make available for streaming antiquated video games for purposes of study. The chief opposition to the request came from the Entertainment Software Association (ESA), a lobbying group that has staunchly opposed any carveouts in copyright law that would allow for these sorts of preservation and study efforts.

Now, if there was one key takeaway from that last post, it’s the following: the ESA and groups like it are very good at saying “no”, but absolutely terrible at offering any alternative they would support for doing this sort of preservation work. The video game space is one in which the overwhelming majority of titles published have not been preserved in any meaningful way. If those titles are allowed to simply disappear into the ether, it flatly negates the bargain at the heart of copyright law: a limited monopoly on creative output in exchange for that output eventually entering the public domain. Disappeared content cannot enter the public domain.

Unfortunately, thanks to those lobbying efforts that offer all roadblocks and no solutions, the US Copyright Office has once again denied the request for these copyright carveouts.

In announcing its decision, the Register of Copyrights for the Library of Congress sided with the Entertainment Software Association and others who argued that the proposed remote access could serve as a legal loophole for a free-to-access “online arcade” that could harm the market for classic gaming re-releases. This argument resonated with the Copyright Office despite a VGHF study that found 87 percent of those older game titles are currently out of print.

“While proponents are correct that some older games will not have a reissue market, they concede there is a ‘healthy’ market for other reissued games and that the industry has been making ‘greater concerted efforts’ to reissue games,” the Register writes in her decision. “Further, while the Register appreciates that proponents have suggested broad safeguards that could deter recreational uses of video games in some cases, she believes that such requirements are not specific enough to conclude that they would prevent market harms.”

The Copyright Office went on to note that, while this carveout already exists for purely functional software, the expressive nature of video games makes them different. But that’s fairly silly. There are already carveouts to copyright law for expressive works, specifically when it comes to retaining them for preservation and study efforts. That’s essentially how, you know, libraries work. This all comes down to opening those avenues up remotely, via streaming or remote sharing. Why it should be just fine for researchers to hop on an airplane to sit in a university library and study these games, but suddenly verboten to do so remotely, is flatly beyond me, especially if there are safeguards in place to keep this all from turning into some free-for-all remote arcade.

And then there is the additional confusion of the Copyright Office arguing that part of its concern is over the association of emulation software with piracy. In a particularly laughable bit within its decision, the Copyright Office cited as its source of this association the founder of the VGHF himself, and the citation appears to have been taken entirely out of context.

In an odd footnote, the Register also notes that emulation of classic game consoles, while not infringing in its own right, has been “historically associated with piracy,” thus “rais[ing] a potential concern” for any emulated remote access to library game catalogs. That footnote paradoxically cites Video Game History Foundation (VGHF) founder and director Frank Cifaldi’s 2016 Game Developers Conference talk on the demonization of emulation and its importance to video game preservation.

“The moment I became the Joker is when someone in charge of copyright law watched my GDC talk about how it’s wrong to associate emulation with piracy and their takeaway was ’emulation is associated with piracy,'” Cifaldi quipped in a social media post.

It’s valid to wonder aloud whether the Copyright Office has any freaking idea what in the hell it’s talking about at this point. Or whether, as at least one proponent of the carveouts quipped, the government was even taking the request all that seriously.

Lawyer Kendra Albert, who argued vociferously in favor of the proposed exemption earlier this year, wrote on social media that they were “gutted by the result… Speaking on behalf of only myself, and not any of my clients, I do believe we made the best case we could that scholarly access to video games that are not commercially available does not harm the market. I do not believe that this evidence was seriously engaged with by the Copyright Office.”

Again, silly. Researchers in other mediums, such as books and films, already have digital access to their subjects of study in many cases. For some reason, despite its acknowledgement that video games are likewise works of expressive art, the Copyright Office has simply decided that gaming is to be treated differently.

Because reasons, I guess.

Filed Under: archival, copyright office, dmca, preservation, triennial review, video games
Companies: esa, video game history foundation

Vietnamese Duo Hit With Injunction After 117,000 Bogus DMCA Claims

from the maybe-just-make-better-t-shirts? dept

While we still lament the fact that the DMCA’s Section 512(f) has no real teeth to punish people for filing bogus DMCA takedown notices, at least some companies are still trying to use it against the most egregious offenders. Last year, Google went after two people in Vietnam, who Google accused of creating at least 65 Google accounts and then using them to send an astounding 117,000 bogus copyright claims.

Apparently, this was the strategy used by the two individuals, Nguyen Van Duc and Pham Van Thien, to try to remove competitors hawking similar t-shirts to the ones they were selling:

Over the last few years and continuing to the present, Defendants—led by two individuals, Defendants Nguyen and Pham—have created at least 65 Google accounts so they could submit thousands of fraudulent notices of copyright infringement against more than 117,000 third-party website URLs. Defendants appear to be connected with websites selling printed t-shirts, and their unlawful conduct aims to remove competing third-party sellers from Google Search results. Defendants have maliciously and illegally exploited Google’s policies and procedures under the DMCA to sabotage and harm their competitors.

Perhaps not surprisingly, the defendants chose to ignore the lawsuit. The court let Google serve them both via email and SMS, and after reviewing all the details, determined that Google kinda had a point about these jackasses. And now, the judge has entered a default judgment, enjoining the defendants from sending more bogus copyright notices.

IT IS ORDERED that Defendants and their agents, employees, successors, and assigns, and all other persons acting in concert with or at the direction of Defendants, are hereby permanently enjoined from the following:

1. Submitting any notifications of copyright infringement or takedown requests to Google based on false assertions of right of copyright ownership.

2. Creating or attempting to create any Google accounts.

3. Using any Google products or services to promote any of Defendants’ websites or products.

4. Using any Google products or services to harm or attempt to harm any third parties, including without limitation Google’s Search Ads customers.

5. Assisting, aiding, or abetting any other person or entity in engaging or performing any of the activities described in subparagraphs (1) through (4) above.

Some might argue that this is all kinda pointless. The defendants ignored the case entirely. They had to be served via email, and the judgment is a default. But, still, it’s important to call out those who are abusing the legal system in such a way and establish that such activities will not be tolerated. So even if this particular result doesn’t lead to much, it’s a useful signal reminding people who are drawn to such abuses to maybe think again.

Filed Under: 512(f), bogus copyright claims, bogus dmca takedowns, copyright, dmca, search
Companies: google