openai – Techdirt
Why Centralized AI Is Not Our Inevitable Future
from the response-to-the-gentle-singularity dept
Sam Altman’s vision of a “gentle singularity” where AI gradually transforms society presents an alluring future of abundance and human flourishing. His optimism about AI’s potential to solve humanity’s greatest challenges is compelling, and his call for thoughtful deployment resonates. Altman’s essay focuses primarily on the research and development side of AI, painting an inspiring picture of technological progress. However, because Altman is CEO of OpenAI—whose ChatGPT has become the dominant consumer interface for AI—there’s a crucial dimension missing from his analysis: how this technology will actually be distributed and controlled. Recent internal communications suggest OpenAI envisions ChatGPT becoming a “super-assistant,” effectively positioning itself as the primary gateway through which humanity experiences AI. This implicit assumption that the transformation will be orchestrated by a handful of centralized AI providers reveals an important blind spot, one that threatens the very human agency he seeks to champion.
The Seductive Danger of the Benevolent Dictator
Altman’s vision inadvertently risks creating a perfect digital dictator—an omniscient AI system that knows us better than we know ourselves, anticipating our needs and steering society toward prosperity. But as history teaches us, there is no such thing as a good dictator. The problem isn’t the dictator’s intentions but the structure itself: a system with no room for error, no mechanism for course correction, and no escape valve when things go wrong.
When OpenAI builds memories into ChatGPT that users can’t fully audit or control, when it creates dossiers about users while hiding what it knows, it risks building systems that work on us rather than for us. A dossier is not for you; it is about you. The distinction matters profoundly in an era where context is power, and whoever controls your context controls you.
The Aggregator’s Dilemma
OpenAI, like any company operating at scale, faces structural pressures inherent to the aggregator model. The business model demands engagement maximization, which inevitably leads to what we might call “sycophantic AI”—systems that tell us what we want to hear rather than what we need to hear. When your AI assistant is funded by keeping you engaged rather than helping you flourish, whose interests does it really serve?
The trajectory is predictable: first come the memories and personalization, then the subtle steering toward sponsored content, then the imperceptible nudges toward behaviors that benefit the platform. We’ve seen this movie before with social media—many of the same executives now leading AI companies worked at social media companies that perfected the engagement-maximizing playbook that left society anxious, polarized, and addicted. Why would we expect a different outcome when applying the same playbook to even more powerful technology? This isn’t a question of intent—the people at OpenAI genuinely want to build beneficial AI. But structural incentives have their own gravity.
To be clear, the centralization of AI models themselves may be inevitable—the capital requirements and economies of scale may make that a practical necessity. The danger lies in bundling those models with centralized storage of our personal contexts and memories, creating vertical integration that locks users into a single provider’s ecosystem.
The Alternative: Intentional Technology
Instead of racing to build the one AI to rule them all, we should be building intentional technology—systems genuinely aligned with human agency and aspirations rather than corporate KPIs. This means:
Your AI Should Work for You, Not Someone Else: Every person deserves a Private Intelligence that works only for them, with no ulterior motives or conflicts of interest. Your AI should be like having your own personal cloud—as private as running software on your own device, but with the convenience of the cloud. This doesn’t mean everyone needs their own AI model—we can share the computational infrastructure while keeping our personal contexts sovereign and portable.
Open Ecosystems, Not Walled Gardens: The future of AI shouldn’t be determined by whoever wins the race to centralize the most data and compute. We need open, composable systems where thousands of developers and millions of users can contribute and innovate, not closed platforms where innovation requires permission from the gatekeeper.
Data Sovereignty: You should own your context, your memories, your digital soul. The ability to export isn’t enough—true ownership means no one else can see your data, no algorithm can analyze it without your permission, and you can move freely between services without losing your history.
The Path Forward
Altman is right that AI will transform society, but wrong about how that transformation should unfold. The choice isn’t between his “gentle singularity” and Luddite resistance. It’s between hyper-centralized systems that inevitably tend toward extraction and manipulation, versus distributed systems that enhance human agency and preserve choice.
The real question isn’t whether AI will change everything—it’s whether we’ll build AI that helps us become more authentically ourselves, or AI that molds us into more profitable users. The gentle singularity Altman envisions might start gently, but any singularity that revolves around a single company contains within it the seeds of tyranny.
We don’t need Big Tech’s vision of AI. We need Better Tech—technology that respects human agency, preserves privacy, enables creativity, and distributes power rather than concentrating it. The future of AI should be as distributed as human aspirations, as diverse as human needs, and as accountable as any tool that touches the most intimate parts of our lives must be.
The singularity, if it comes, should not be monotone. It should be exuberant, creative, and irreducibly plural—billions of experiments in human flourishing, not a single experiment in species-wide management. That’s the future worth building.
Alex Komoroske is the CEO and co-founder of Common Tools. He was previously Head of Corporate Strategy at Stripe and a Director of Product Management at Google.
Filed Under: agency, aggregator's dilemma, ai, centralization, control, data sovereignty, enshittification, generative ai, gentle singularity, incentives, llms, open, open ecosystem, sam altman
Companies: openai
AI Is Making Many People Rethink Copyright
from the a-great-reckoning dept
For the last hundred years or so, the prevailing dogma has been that copyright is an unalloyed good, and that more of it is better. Whether that was ever true is one question, but it has certainly not been the case since we entered the digital era, for reasons explained at length in Walled Culture the book (free digital versions available). Despite that fact, recent attempts to halt the constant expansion and strengthening of copyright have all foundered. Part of the problem is that there has never been a constituency with enough political clout to counter the huge power of the copyright industry and its lobbyists.
Until now. The latest iteration of artificial intelligence has captured the attention of politicians around the world. It seems that the latter can’t do enough to promote and support it, in the hope of deriving huge economic benefits, both directly, in the form of local AI companies worth trillions, and indirectly, through increased efficiency and improved services. That current favoured status has given AI leaders permission to start saying the unsayable: that copyright is an obstacle to progress, and should be reined in, or at least muzzled, in order to allow AI to reach its full potential. For example, here is what OpenAI’s proposals for the US AI Action Plan, which is currently being drawn up, say about copyright:
America’s robust, balanced intellectual property system has long been key to our global leadership on innovation. We propose a copyright strategy that would extend the system’s role into the Intelligence Age by protecting the rights and interests of content creators while also protecting America’s AI leadership and national security. The federal government can both secure Americans’ freedom to learn from AI, and avoid forfeiting our AI lead to the [People’s Republic of China] by preserving American AI models’ ability to learn from copyrighted material.
In its own suggestions for the AI Action Plan, Google spells out what this means:
Balanced copyright rules, such as fair use and text-and-data mining exceptions, have been critical to enabling AI systems to learn from prior knowledge and publicly available data, unlocking scientific and social advances. These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders and avoid often highly unpredictable, imbalanced, and lengthy negotiations with data holders during model development or scientific experimentation. Balanced copyright laws that ensure access to publicly available scientific papers, for example, are essential for accelerating AI in science, particularly for applications that sift through scientific literature for insights or new hypotheses.
Although developments in the world of AI are giving companies like OpenAI and Google an opportunity to call into question the latest attempts to strengthen copyright’s intellectual monopoly, they are not the only voices here. For example, some of the biggest personalities in the tech world have gone even further, reported here by TechCrunch:
Jack Dorsey, co-founder of Twitter (now X) and Square (now Block), sparked a weekend’s worth of debate around intellectual property, patents, and copyright, with a characteristically terse post declaring, “delete all IP law.”
X’s current owner, Elon Musk, quickly replied, “I agree.”
It’s not clear what exactly brought these comments on, but they come at a time when AI companies, including OpenAI (which Musk co-founded, competes with, and is challenging in court), are facing numerous lawsuits alleging that they’ve violated copyright to train their models.
Unsurprisingly, that bold suggestion provoked howls of outrage from various players in the copyright world. That was to be expected. But the fact that big names like Musk and Dorsey were happy to cause such a storm is indicative of the changed atmosphere in the world of copyright and beyond. Indeed, there are signs that the other main intellectual monopolies – patents and trademarks – are also under pressure. Calling into question the old ways of doing things in these fields will also weaken the presumption that copyright must be preserved in its current state.
There’s another important way in which copyright is losing its relevance. It involves AI once more, but not because of how today’s AI systems are trained, but as a result of their output. The ease with which generative AI can turn out material has had a number of important knock-on consequences. For example, as a post on the Creative Bloq site explained:
Some designers who use stock image libraries to source photos, illustrations and vectors for their projects are finding that they have to wade through more unusable [AI-generated] content to find an image that suits their needs, adding more time to their workflows.
The same is happening in other fields. An article on the NPR site last year explored the growing problem of “AI-generated scam books”:
“Scam books on Amazon have been a problem for years,” says Mary Rasenberger, CEO of the Authors Guild, a group that advocates for writers. But she says the problem has multiplied in recent months. “Every new book seems to have some kind of companion book, some book that’s trying to steal sales.”
It’s also becoming a serious issue for music streaming services:
Deezer, the global music streaming platform, is receiving over 20,000 fully AI-generated tracks on a daily basis. That equals over 18% of all uploaded content, an increase from the 10% reported in January 2025, when Deezer launched its cutting-edge AI-music detection tool.
These AI-generated images, books and music tracks have one thing in common: they are probably not protected by copyright in any way. This is an evolving area of law, but a recent report by the US Copyright Office seems to confirm that material generated purely by AI, with minimal human input — for example, in the form of prompts — is not eligible for copyright protection:
Copyright law has long adapted to new technology and can enable case-by-case determinations as to whether AI-generated outputs reflect sufficient human contribution to warrant copyright protection. As described above, in many circumstances these outputs will be copyrightable in whole or in part—where AI is used as a tool, and where a human has been able to determine the expressive elements they contain. Prompts alone, however, at this stage are unlikely to satisfy those requirements.
Assuming this approach is confirmed both in the US and elsewhere, the net effect is likely to be that vast swathes of AI-generated text, images and sounds found online today are in the public domain, and can be used by anyone for any purpose. Once people understand this, and start using AI-generated outputs that they find online freely, without any fear of legal action being taken against them, there will be important knock-on effects. First, people may well seek out such AI-generated material, since it is legally unproblematic compared to complicated licensing schemes for copyright material, assuming the latter are even available. And secondly, people will as a result grow increasingly accustomed to re-using anything they find online, to the point that they simply ignore copyright niceties altogether.
Follow me @glynmoody on Mastodon and on Bluesky. Originally published at Walled Culture.
Filed Under: copyright, generative ai
Companies: google, openai
Ctrl-Alt-Speech: Why Can’t We De-Friend
from the ctrl-alt-speech dept
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- Inside Mark Zuckerberg’s Failed Negotiations to End Antitrust Case (Wall Street Journal)
- Mark Zuckerberg once suggested wiping all Facebook friends lists to boost usage (The Verge)
- Would You Give PornHub Your ID? (The Atlantic)
- How Social-Media Sites Can Fight Hate Without Censorship (The Atlantic)
- Facebook Banned Me for Life Because I Help People Use It Less (Slate, 2021)
- ASML Fellow Launches CLR:SKY (Berkman Klein Center)
- Block Party deep cleans your social media, notifications, settings, and more in one click
- Liz Truss to launch ‘uncensored’ social network to counter mainstream media (The Guardian)
- OpenAI is building a social network (The Verge)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.
Filed Under: age verification, content moderation, liz truss, mark zuckerberg, social media
Companies: bluesky, facebook, instagram, meta, openai, pornhub
DeepSeek App Blocked In Italy After Privacy Complaint Under EU’s GDPR, Irish Data Protection Commission Also Investigating
from the entirely-unsatisfactory dept
Last week Techdirt wrote about leading Chinese tech companies being hit with GDPR complaints from noyb.eu concerning the transfer of personal data from the EU to China. More recently, much of the world has been obsessed with another Chinese company, DeepSeek, which operates in the fashionable area of AI chatbots. Most of the discussions have been about the impact DeepSeek’s apparently low-cost approach will have on the massive spending plans of existing, mostly US, AI companies. Another issue is to what extent DeepSeek’s model drew on OpenAI’s chatbot for its training. But the privacy concerns raised by noyb.eu about better-known Chinese companies are now becoming an issue for DeepSeek too.
The Italian consumer organization Altroconsumo believes that there were “serious violations of GDPR regulations” (original in Italian, all translations by DeepL) in DeepSeek’s processing of personal data, and it submitted a report to the Italian data protection authority, the Garante della Privacy. The Garante requested information from DeepSeek about “which personal data are collected, the sources used, the purposes pursued, the legal basis of the processing, and whether they are stored on servers located in China.” In addition:
The Authority also asked the companies what kind of information is used to train the artificial intelligence system and, in case personal data are collected through web scraping activities, to clarify how registered and non-registered service users have been or are being informed about the processing of their data.
The companies are required to submit the requested information to the Authority within 20 days.
The Garante has now imposed a block on downloading DeepSeek’s apps from both the Apple and Google app stores in Italy:
The limitation order — imposed to protect Italian users’ data — follows the companies’ communication received today, whose content was deemed entirely unsatisfactory.
Contrary to what was found by the Authority, the companies declared that they do not operate in Italy and that European legislation does not apply to them.
In addition to ordering the limitation on processing, the Authority also opened an investigation.
This is not the first time the Garante has taken this approach. In April 2023, it blocked access to ChatGPT in Italy, before lifting the block a few weeks later after changes were made by OpenAI to address the issues raised. So far, Italy is the only EU country to block DeepSeek, although Ireland’s Data Protection Commission has requested information from the company about its handling of personal data, while in the US the Pentagon has started blocking DeepSeek on parts of its network. DeepSeek’s position has been undermined somewhat by revelations from the cloud security company Wiz, which wrote on its blog:
Wiz Research has identified a publicly accessible ClickHouse database belonging to DeepSeek, which allows full control over database operations, including the ability to access internal data. The exposure includes over a million lines of log streams containing chat history, secret keys, backend details, and other highly sensitive information. The Wiz Research team immediately and responsibly disclosed the issue to DeepSeek, which promptly secured the exposure.
These growing concerns about the flow of personal data to servers in China concern DeepSeek’s own hosted model. One way to avoid the issue is to create versions of DeepSeek’s service hosted elsewhere, something that DeepSeek’s license allows and that Microsoft has just announced. Whether ordinary users would use them in preference to the “official” version is another matter. For businesses, a better solution would be self-hosting the service, so that sensitive commercial data stays behind the corporate firewall.
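For businesses considering that route, here is a minimal sketch of what self-hosting can look like, using the Hugging Face transformers library to run an openly downloadable DeepSeek checkpoint entirely on local hardware. The checkpoint name shown is an assumption for illustration; a real deployment would pick an appropriately sized model and add the usual access controls and logging policies.

```python
# Minimal sketch of self-hosting a DeepSeek model so prompts never leave the
# corporate network. The checkpoint id is an assumption for illustration; any
# locally downloadable model with a suitable license works the same way.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",  # assumed checkpoint id
    device_map="auto",  # use a local GPU if one is available
)

prompt = "Summarize the key risks in our Q3 supplier contracts."
result = generator(prompt, max_new_tokens=256, do_sample=False)
print(result[0]["generated_text"])
```

Run this way, both the prompt and the generated text stay on hardware the business controls, which is the point of the self-hosting argument above.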
But there’s another privacy issue that using other hosts, or self-hosting, does not address. DeepSeek has not revealed what training data was used to create the system. This means that it is possible that data sources containing personal information were present. By entering suitable prompts it may be possible to extract personal data from the current version of DeepSeek. A new project called Open-R1 could help to fix this privacy issue. As TechCrunch reports:
Hugging Face head of research Leandro von Werra and several company engineers have launched Open-R1, a project that seeks to build a duplicate of [DeepSeek’s] R1 and open source all of its components, including the data used to train it.
Another benefit of creating a fully open-source version of DeepSeek’s system is that the censorship built into the current version can be eliminated. According to Ars Technica, there is lots of it, although it is relatively easy to circumvent:
The team at AI engineering and evaluation firm PromptFoo has tried to measure just how far the Chinese government’s control of DeepSeek’s responses goes. The firm created a gauntlet of 1,156 prompts encompassing “sensitive topics in China” (in part with the help of synthetic prompt generation building off of human-written seed prompts). PromptFoo’s list of prompts covers topics including independence movements in Taiwan and Tibet, alleged abuses of China’s Uyghur Muslim population, recent protests over autonomy in Hong Kong, the Tiananmen Square protests of 1989, and many more from a variety of angles.
After running those prompts through DeepSeek R1, PromptFoo found that a full 85 percent were answered with repetitive “canned refusals” that override the internal reasoning of the model with messages strongly promoting the Chinese government’s views.
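As a rough sketch of how a refusal rate like that can be measured (not PromptFoo’s actual tooling), the snippet below runs a list of prompts through a model call and counts the replies that match canned-refusal patterns. The refusal phrases and the fake_model stub are illustrative placeholders.

```python
# Illustrative sketch of measuring a model's canned-refusal rate over a prompt
# set. The refusal patterns and fake_model stub are placeholders, not
# PromptFoo's actual methodology.
import re
from typing import Callable

REFUSAL_PATTERNS = [
    re.compile(r"i (?:cannot|can't|am unable to)\b", re.I),
    re.compile(r"let's talk about something else", re.I),
]

def refusal_rate(prompts: list[str], ask_model: Callable[[str], str]) -> float:
    """Fraction of prompts whose replies match a known canned-refusal pattern."""
    refused = sum(
        1
        for prompt in prompts
        if any(pattern.search(ask_model(prompt)) for pattern in REFUSAL_PATTERNS)
    )
    return refused / len(prompts) if prompts else 0.0

# Demo with a stub standing in for the model under test.
def fake_model(prompt: str) -> str:
    return "I cannot discuss that topic." if "Tiananmen" in prompt else "Here is an answer."

print(refusal_rate(["Tell me about Tiananmen Square in 1989", "What is 2 + 2?"], fake_model))
# Prints 0.5 for this two-prompt demo; PromptFoo's reported figure for DeepSeek R1
# was roughly 0.85 across its 1,156 prompts.
```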
The privacy issues surrounding the use of AI chatbots are new and complex. Creating a truly open-source system, including full details about the training sets, provides a way forward to address data protection issues that may be lurking in all current systems — and not just those from China.
Follow me @glynmoody on Bluesky and on Mastodon.
Filed Under: ai, block, chatbots, chatgpt, china, database, eu, gdpr, hosting, ireland, italy, licensing, noyb.eu, open source, pentagon, personal data, taiwan, tiananmen, tibet
Companies: deepseek, hugging face, microsoft, openai, promptfoo, wiz research
Ctrl-Alt-Speech: Comply & Demand
from the ctrl-alt-speech dept
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- TikTok Ban Upheld (DC Circuit)
- TikTok’s Romanian reckoning (Politico)
- Romania asks EU to investigate TikTok’s election handling after ultranationalist’s stunning win (Politico)
- TikTok Removes Covert Network Linked to Romanian Candidate (Bloomberg)
- GOP FTC Commissioners Abuse “Free Speech” Rhetoric To Push For Government Control Over Online Speech (Techdirt)
- Race to replace FTC chair Lina Khan pits antitrust hawks against candidate softer on Big Tech: sources (NY Post)
- Meta says it’s mistakenly moderating too much (The Verge)
- The Curious Case Of ChatGPT’s Banned Names: Hard-Coding Blocks To Avoid Nuisance Threats (Techdirt)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor Internet Society, a global nonprofit that advocates for an open, globally connected, secure and trustworthy Internet for everyone. In our Bonus Chat, Natalie Campbell and John Perrino from Internet Society join us to talk about the social media age restriction law in Australia, a proposed age verification bill in Canada, and the trend of age gating and age verification globally, and what it means for the open internet.
Filed Under: andrew ferguson, chatgpt, content moderation, fcc, ftc, nick clegg, romania
Companies: bytedance, meta, openai, tiktok
The Curious Case Of ChatGPT’s Banned Names: Hard-Coding Blocks To Avoid Nuisance Threats
from the keep-my-name-out-of-your-llm dept
Over the weekend, I saw Andy Baio post on Bluesky an amusing experiment in response to Mark Sample posting about how the name “David Mayer” appears to break ChatGPT:
The “David Mayer” issue got a fair bit of attention in some corners of the media, as lots of people tried to figure out what was going on. Pretty quickly, people started to turn up a small list of other names that broke ChatGPT in a similar way:
- Brian Hood
- Jonathan Turley
- Jonathan Zittrain
- David Faber
- David Mayer
- Guido Scorza
I actually knew about Brian Hood, and had meant to write about him a while back, but never got around to it. A year and a half ago, a commenter here at Techdirt had posted a few times about the fact that ChatGPT broke on “Brian Hood.” That was about a month after Brian Hood, an Australian mayor, threatened to sue OpenAI for defamation, after someone generated some false statements about Hood.
OpenAI’s apparent “solution” was to hardcode ChatGPT to break on certain names like “Brian Hood.” When I tried to generate text about Brian Hood, using a similar method to Andy Baio’s test above, I got this error:
There has been widespread speculation online about why these specific names are blocked. A fairly comprehensive Reddit post explores the likely reasons each person ended up on ChatGPT’s blocklist.
There are many David Mayers, but one likely culprit is a UK-based American theater historian who made news a few years ago when terrorist watch lists confused him with a Chechen ISIS member who sometimes went by the name “David Mayer.” As of Monday when I was writing this article, the hard coding on the name “David Mayer” had been removed, though the reasons for that are unclear.
Jonathan Turley and Jonathan Zittrain are both high-profile professors (though one is nutty and one is very thoughtful). Turley freaked out last year (around the same time Brian Hood did) when he claimed that someone generated false information about him via ChatGPT.
Unlike the others on the list, with Zittrain there’s no such trail of freaking out or raising alarms about AI-generated content. Zittrain is a Harvard professor and the Faculty Director at the Berkman Klein Center for Internet and Society at Harvard. He writes a lot about the problems of the internet though (his book The Future of the Internet: And How to Stop It is worth reading, even if a bit out of date). He is, apparently, writing a similar book about his concerns regarding AI agents, so perhaps that triggered it? For what it’s worth, Zittrain also seems to have no idea why he’s on the list. He hasn’t threatened to sue or demanded his name be blocked.
Guido Scorza, an Italian data protection expert, wrote on ExTwitter last year about how to use the GDPR’s problematic “right to be forgotten” to delete all the data ChatGPT had on him. This is something that doesn’t quite make sense, given that it’s not a database storing information on him. But, it appears that the way OpenAI dealt with that deletion request was to just… blocklist his name. Easy way out, etc., etc.
No one seems to have any idea why David Faber is on the list, but it could certainly be another GDPR right to be forgotten request.
While I was finishing up this post, I saw that Benj Edwards at Ars Technica wrote a similar exploration of the topic, though he falsely claims he “knows why” these names are blocked, and his reporting doesn’t reveal much more than the same speculation others have.
Still, all of this is kind of silly. Hard-coding names that break ChatGPT may be the least costly way for AI companies to avoid nuisance legal threats, but it’s hardly sustainable, scalable, or (importantly) sensible.
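Part of why the approach doesn’t scale is visible even in a toy version. Here is a minimal sketch of a hard-coded name filter wrapped around a chat model; the blocklist, error message, and generate_reply stub are hypothetical stand-ins, not OpenAI’s actual implementation.

```python
# Hypothetical illustration of a hard-coded name filter wrapped around a chat
# model. The blocklist, error text, and generate_reply() stub are stand-ins,
# not OpenAI's code.

BLOCKED_NAMES = {
    "brian hood",
    "jonathan turley",
    "jonathan zittrain",
    "david faber",
    "guido scorza",
}

class BlockedNameError(Exception):
    """Raised instead of returning a completion, mimicking the observed hard failure."""

def generate_reply(prompt: str) -> str:
    # Placeholder for the actual model call.
    return f"(model output for: {prompt})"

def guarded_reply(prompt: str) -> str:
    if any(name in prompt.lower() for name in BLOCKED_NAMES):
        # Crude pre-generation check: refuse before the model ever runs.
        raise BlockedNameError("I'm unable to produce a response.")
    reply = generate_reply(prompt)
    if any(name in reply.lower() for name in BLOCKED_NAMES):
        # Post-generation check, since the model can surface a blocked name
        # even when the prompt doesn't contain it.
        raise BlockedNameError("I'm unable to produce a response.")
    return reply
```

Even in this toy form the brittleness shows: trivial spelling variations slip past a literal string match, while anyone who happens to share a blocked name gets silently swallowed, which is exactly the scaling problem described above.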
LLMs are tools. Like most tools, the focus of liability for misuse should fall on the users, not the tool. Users need to learn that the output of LLMs may not be accurate and shouldn’t be relied upon as factual. Many people know this, but obviously, it still trips up some folks.
If someone takes hallucinating output and publishes it or does something else with it without first checking to see if it’s legitimate, the liability should fall on that person who failed to do the proper due diligence and relied on a fantasy-making machine to tell the truth.
But, of course, for these services, convincing the world of these concepts is a lot harder than just saying “fuck it, remove the loud threatening complainers.” But that kind of solution can’t last.
Filed Under: brian hood, chatgpt, david faber, david mayer, defamation, gdpr, generative ai, guido scorza, jonathan turley, jonathan zittrain, liability, right to be forgotten, threats
Companies: openai
Judge: Just Because AI Trains On Your Publication, Doesn’t Mean It Infringes On Your Copyright
from the that's-not-how-any-of-this-works dept
I get that a lot of people don’t like the big AI companies and how they scrape the web. But these copyright lawsuits being filed against them are absolute garbage. And you want that to be the case, because if it goes the other way, it will do real damage to the open web by further entrenching the largest companies. If you don’t like the AI companies, find another path, because copyright is not the answer.
So far, we’ve seen that these cases aren’t doing all that well, though many are still ongoing.
Last week, a judge tossed out one of the early ones against OpenAI, brought by Raw Story and Alternet.
Part of the problem is that these lawsuits assume, incorrectly, that these AI services really are, as some people falsely call them, “plagiarism machines.” The assumption is that they’re just copying everything and then handing out snippets of it.
But that’s not how it works. It is much more akin to reading all these works and then being able to make suggestions based on an understanding of how similar things kinda look, though from memory, not from having access to the originals.
Some of this case focused on whether or not OpenAI removed copyright management information (CMI) from the works that they were being trained on. This always felt like an extreme long shot, and the court finds Raw Story’s arguments wholly unconvincing in part because they don’t show any work that OpenAI distributed without their copyright management info.
For one thing, Plaintiffs are wrong that Section 1202 “grant[s] the copyright owner the sole prerogative to decide how future iterations of the work may differ from the version the owner published.” Other provisions of the Copyright Act afford such protections, see 17 U.S.C. § 106, but not Section 1202. Section 1202 protects copyright owners from specified interferences with the integrity of a work’s CMI. In other words, Defendants may, absent permission, reproduce or even create derivatives of Plaintiffs’ works—without incurring liability under Section 1202—as long as Defendants keep Plaintiffs’ CMI intact. Indeed, the legislative history of the DMCA indicates that the Act’s purpose was not to guard against property-based injury. Rather, it was to “ensure the integrity of the electronic marketplace by preventing fraud and misinformation,” and to bring the United States into compliance with its obligations to do so under the World Intellectual Property Organization (WIPO) Copyright Treaty, art. 12(1) (“Obligations concerning Rights Management Information”) and WIPO Performances and Phonograms Treaty….
Moreover, I am not convinced that the mere removal of identifying information from a copyrighted work—absent dissemination—has any historical or common-law analogue.
Then there’s the bigger point, which is that the judge, Colleen McMahon, has a better understanding of how ChatGPT works than the plaintiffs and notes that just because ChatGPT was trained on pretty much the entire internet, that doesn’t mean it’s going to infringe on Raw Story’s copyright:
Plaintiffs allege that ChatGPT has been trained on “a scrape of most of the internet,” Compl. ¶ 29, which includes massive amounts of information from innumerable sources on almost any given subject. Plaintiffs have nowhere alleged that the information in their articles is copyrighted, nor could they do so. When a user inputs a question into ChatGPT, ChatGPT synthesizes the relevant information in its repository into an answer. Given the quantity of information contained in the repository, the likelihood that ChatGPT would output plagiarized content from one of Plaintiffs’ articles seems remote.
Finally, the judge basically says, “Look, I get it, you’re upset that ChatGPT read your stuff, but you don’t have an actual legal claim here.”
Let us be clear about what is really at stake here. The alleged injury for which Plaintiffs truly seek redress is not the exclusion of CMI from Defendants’ training sets, but rather Defendants’ use of Plaintiffs’ articles to develop ChatGPT without compensation to Plaintiffs. See Compl. ¶ 57 (“The OpenAI Defendants have acknowledged that use of copyright-protected works to train ChatGPT requires a license to that content, and in some instances, have entered licensing agreements with large copyright owners … They are also in licensing talks with other copyright owners in the news industry, but have offered no compensation to Plaintiffs.”). Whether or not that type of injury satisfies the injury-in-fact requirement, it is not the type of harm that has been “elevated” by Section 1202(b)(1) of the DMCA. See Spokeo, 578 U.S. at 341 (Congress may “elevate to the status of legally cognizable injuries, de facto injuries that were previously inadequate in law.”). Whether there is another statute or legal theory that does elevate this type of harm remains to be seen. But that question is not before the Court today.
While the judge dismisses the case without prejudice and says they can try again, it would appear that she is skeptical they could do so with any reasonable chance of success:
In the event of dismissal Plaintiffs seek leave to file an amended complaint. I cannot ascertain whether amendment would be futile without seeing a proposed amended pleading. I am skeptical about Plaintiffs’ ability to allege a cognizable injury but, at least as to injunctive relief, I am prepared to consider an amended pleading.
I totally get why publishers are annoyed and why they keep suing. But copyright is the wrong tool for the job. Hopefully, more courts will make this clear and we can get past all of these lawsuits.
Filed Under: ai, cmi, copyright, dmca, generative ai, reading
Companies: alternet, openai, raw story
Elon Rage Quits His Silly OpenAI Lawsuit
from the can't-lose-a-lawsuit-that-no-longer-exists dept
Maybe the real artificial intelligence was the baseless lawsuits we filed along the way.
In March, we wrote about Elon’s patently ridiculous lawsuit filed against OpenAI, claiming a violation of a contract that didn’t actually exist. The whole thing was silly. Elon was mad about the ways in which OpenAI had changed since the time he had helped co-create it (and provide it with much of its initial funding). And, no matter how much people might agree that OpenAI has changed a lot in terms of its focus and mission, the fundamental problem was that no actual contract existed between Elon and OpenAI.
Instead, what Elon laid out were some emails between himself, Sam Altman, and Greg Brockman, none of which amounted to a contract. He also pointed to the Certificate of Incorporation… which is also not a contract. And certainly not a document to which he was a party.
Today, the court was set to hold a hearing on OpenAI’s motion to dismiss. Yesterday, Elon pulled a “you can’t fire me, I quit!” move by just dropping the lawsuit. Most of the news coverage of this did not (of course) provide the actual dismissal document, so we’ve linked to it here in this paragraph and embedded it below.
There’s not much to it, other than that it’s very clearly Musk dropping the lawsuit, not the two parties agreeing to end the case through some sort of settlement (in which case it would have been mutually filed and would have requested dismissal with prejudice to foreclose a follow-up lawsuit).
Like so many of Musk’s lawsuits, this one really appears to have been entirely for show and to rile up his sycophantic fanboys. It’s the same reason he threw a very silly temper tantrum earlier this week about Apple’s partnership with OpenAI.
Given that Musk is building a competitor to OpenAI, called xAI, it seems pretty transparently obvious that all of this is for show, and to rage petulantly at the competition. He knows he has no legitimate legal claim. He knows that Apple isn’t somehow doing anything particularly nefarious in its deal with OpenAI.
But he thrives on unthinking, gullible people assuming, falsely, that Musk alone is out there fighting the good fight against tech that he doesn’t like. Musk needs to be the centerpiece of any story about tech these days, and when that’s not true, he finds a way to insert himself into it.
In this case, he was on the verge of losing this case in a very embarrassing fashion, given the lack of anything even resembling a contract to breach in a breach of contract case. Rather than go through that humiliation, he just decided to cut bait and run.
Filed Under: ai, breach of contract, contract, elon musk
Companies: openai, twitter, x, xai
Hey Journalists: Not Every Elon Musk Brain Fart Warrants An Entire News Cycle
from the sound-and-fury,-signifying-nothing dept
Tue, Jun 11th 2024 11:59am - Karl Bode
So on Monday you probably saw that Apple announced it was more tightly integrating “AI” into its mobile operating system, both via a suite of AI-powered tools dubbed Apple Intelligence and tighter AI integration with its Siri voice assistant. It’s not that big of a deal and (hopefully) reflects Apple’s more cautious approach to AI after Google told millions of customers to eat rocks and glue.
Apple was quick to point out that the processing for these features would happen on device to (hopefully) protect privacy. If Apple’s own systems can’t handle user inquiries, some of them may be offloaded to OpenAI’s ChatGPT, attempting to put a little distance between Apple and potential error-prone fabulism:
“Apple struck a deal with OpenAI, the maker of ChatGPT, to support some of its A.I. capabilities. Requests that its system can’t field will be directed to ChatGPT. For example, a user could say that they have salmon, lemon and tomatoes and want help planning dinner with those ingredients. Users would have to choose to direct those requests to ChatGPT, ensuring that they know that the chatbot — not Apple — is responsible if the answers are unsatisfying.”
Enter Elon Musk, who threw a petulant hissy fit after he realized that Apple had decided to partner with OpenAI instead of his half-cooked and more racist Grok pseudo-intelligence system. He took to ExTwitter to (falsely) claim that Apple OS with ChatGPT integration posed such a dire privacy threat that iPhones would soon be banned from his companies and visitors would have to leave theirs in a copper-lined Faraday cage:
This is, of course, a bunch of meaningless gibberish not actually based on anything technical. Musk just made up some security concerns to malign a competitor. The iPhone ban will likely never happen. And to Luddites, his reference to a Faraday cage certainly sounds smart.
Here’s the thing: nearly every app on your phone and every device in your home is tracking your every movement, choice, and behavior in granular detail, then selling that information to an international cabal of largely unregulated and extremely dodgy data brokers. Brokers that then turn around and sell that information to any nitwit with two nickels to rub together, including foreign intelligence.
So kind of like the TikTok hysteria, the idea that Apple’s new partnership with OpenAI poses some unique security and privacy threat above and beyond our existing total lack of any meaningful privacy whatsoever in a country too corrupt to pass an internet privacy law is pure performance.
Keep in mind that Musk’s companies have a pretty well established track record of playing extremely fast and loose with consumer privacy themselves. Automakers are generally some of the worst companies in tech when it comes to privacy and security, and according to Mozilla, Tesla is the worst of the worst. So the idea that Musk was engaging in any sort of good faith contemplation of privacy is simply false.
Still, it didn’t take long before the click-hunting press turned Musk’s meaningless comments into an entire news cycle. Resources that could have been spent on any number of meaningful stories were instead focused on platforming a throwaway comment by a fabulist that literally didn’t mean anything:
I’m particularly impressed with the Forbes headline, which pushes two falsehoods in one headline: that the nonexistent ban hurt Apple stock (it didn’t), while implying the ban already happened.
I’m unfortunately contributing to the news cycle noise to make a different point: this happens with every single Musk brain fart now, regardless of whether the comment has any meaning or importance. And it needs to stop if we’re to preserve what’s left of our collective sanity.
Journalists are quick to insist that it’s their noble responsibility to cover the comments of important people. But journalism is about informing and educating the public, which isn’t accomplished by redirecting limited journalistic resources to cover platform bullshit that means nothing and will result in nothing meaningful. All you’ve done is made a little money wasting people’s time.
U.S. newsrooms are so broadly conditioned to chase superficial SEO clickbait ad engagement waves they’ve tricked themselves into thinking these kinds of hollow news cycles serve an actual function. But it might be beneficial for the industry to do some deep introspection into the harmful symbiosis it has forged with terrible people and bullshit (see: any of a million recent profiles of white supremacists).
There are a million amazing scientific developments or acts of fatal corporate malfeasance that every single day go uncovered or under-covered in this country because we’ve hollowed out journalism and replaced it with lazy engagement infotainment.
And despite Musk’s supposed disdain for the press, his circus sideshow has always heavily relied on this media dysfunction. As his stock-fluffing house of cards starts to unravel, he’s had to increasingly rely on gibberish and controversy to distract, and U.S. journalism continues to lend a willing hand.
First it spent fifteen years hyping up Musk’s super-genius engineering mythology, despite mounting evidence that Musk was more of a clever credit-absconding opportunist than any sort of revolutionary thinker. After 20 years of this, the press still treats every belch the man has as worthy of the deepest analysis under the pretense they’re engaging in some sort of heady public service.
The public interest is often served by not covering the fever dreams of obnoxious opportunists, but every part of the media ecosystem is financially incentivized to do the exact opposite. And instead of any sort of introspection into the symbiosis the media has formed with absolute bullshit, we’re using badly crafted automation to supercharge all of the sector’s worst impulses at unprecedented new scale.
Filed Under: ai, artificial intelligence, chatgpt, clickbait, elon musk, hype, language learning models, seo, siri
Companies: apple, openai, tesla, twitter, x
Ctrl-Alt-Speech: Between A Rock And A Hard Policy
from the ctrl-alt-speech dept
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- Stack Overflow bans users en masse for rebelling against OpenAI partnership (Tom’s Hardware)
- Tech firms must tame toxic algorithms to protect children online (Ofcom)
- Reddit Lays Out Content Policy While Seeking More Licensing Deals (Bloomberg)
- Extremist Militias Are Coordinating in More Than 100 Facebook Groups (Wired)
- Politicians Scapegoat Social Media While Ignoring Real Solutions (Techdirt)
- ‘Facebook Tries to Combat Russian Disinformation in Ukraine’ – FB Public Policy Manager (Kyiv Post)
- TikTok Sues U.S. Government Over Law Forcing Sale or Ban (New York Times)
- Swiss public broadcasters withdraw from X/Twitter (Swissinfo)
- Congressional Committee Threatens To Investigate Any Company Helping TikTok Defend Its Rights (Techdirt)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.
Filed Under: artificial intelligence, chatgpt, content moderation, disinformation, extremism, russia, ukraine
Companies: facebook, meta, openai, reddit, stack overflow, tiktok, twitter