The OpenAI Endgame

Since the New York Times sued OpenAI for infringing its copyrights by using Times content for training, everyone involved with AI has been wondering about the consequences. How will this lawsuit play out? And, more importantly, how will the outcome affect the way we train and use large language models?

There are two components to this suit. First, it was possible to get ChatGPT to reproduce some Times articles very close to verbatim. That’s fairly clearly copyright infringement, though there are still important questions that could influence the outcome of the case. Reproducing the New York Times clearly isn’t the intent of ChatGPT, and OpenAI appears to have modified ChatGPT’s guardrails to make generating infringing content more difficult, though probably not impossible. Is this enough to limit any damages? It’s not clear that anybody has used ChatGPT to avoid paying for a NYT subscription. Second, the examples in a case like this are always cherry-picked. While the Times can clearly show that OpenAI can reproduce some articles, can it reproduce any article from the _Times_’ archive? Could I get ChatGPT to produce an article from page 37 of the September 18, 1947 issue? Or, for that matter, an article from the Chicago Tribune or the Boston Globe? Is the entire corpus available (I doubt it), or just certain random articles? I don’t know, and given that OpenAI has modified GPT to reduce the possibility of infringement, it’s almost certainly too late to do that experiment. The courts will have to decide whether inadvertent, inconsequential, or unpredictable reproduction meets the legal definition of copyright infringement.


The more important claim is that training a model on copyrighted content is infringement, whether or not the model is capable of reproducing that training data in its output. An inept and clumsy version of this claim was made by Sarah Silverman and others in a suit that was dismissed. The Authors Guild has its own version of this lawsuit, and it is working on a licensing model that would allow its members to opt in to a single licensing agreement. The outcome of this case could have many side effects, since it would essentially allow publishers to charge not just for the texts they produce but for how those texts are used.

It is difficult to predict what the outcome will be, though easy enough to guess. Here’s mine. OpenAI will settle with the New York Times out of court, and we won’t get a ruling. This settlement will have important consequences: it will set a de facto price on training data. And that price will no doubt be high. Perhaps not as high as the Times would like (there are rumors that OpenAI has offered something in the range of $1 million to $5 million), but sufficiently high to deter OpenAI’s competitors. $1M is not, in and of itself, a terribly high price, and the Times reportedly thinks that it’s way too low; but realize that OpenAI will have to pay a similar amount to almost every major newspaper publisher worldwide, in addition to organizations like the Authors Guild, technical journal publishers, magazine publishers, and many other content owners. The total bill is likely to be close to $1 billion, if not more, and as models need to be updated, at least some of it will be a recurring cost. I suspect that OpenAI would have difficulty going higher, even given Microsoft’s investments; whatever else you may think of this strategy, OpenAI has to think about the total cost.
I doubt that they are close to profitable; they appear to be running on an Uber-like business plan, in which they spend heavily to buy the market without regard for running a sustainable business. But even with that business model, billion-dollar expenses have to raise the eyebrows of partners like Microsoft.
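The back-of-envelope arithmetic behind that $1 billion figure can be sketched. The per-licensor fee comes from the rumored $1 million to $5 million range reported above; the counts of licensors are purely illustrative assumptions, not figures from the article or the lawsuit:

```python
# Back-of-envelope estimate of a total licensing bill.
# The per-licensor fee uses the upper end of the rumored $1M-$5M range;
# every count below is an illustrative assumption.

per_licensor_fee = 5_000_000

licensors = {
    "major newspaper publishers": 100,
    "author/rights organizations": 10,
    "technical journal publishers": 30,
    "magazine publishers": 50,
}

total = per_licensor_fee * sum(licensors.values())
print(f"Estimated one-time licensing bill: ${total:,}")
```

Under these assumptions the total comes out just under $1 billion, and since models are periodically retrained, some fraction of it would recur.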

The Times, on the other hand, appears to be making a common mistake: overvaluing its data. Yes, it has a large archive, but what is the value of old news? Furthermore, in almost any application, but especially in AI, the value of data isn’t the data itself; it’s the correlations between different datasets. The Times doesn’t own those correlations any more than I own the correlations between my browsing data and Tim O’Reilly’s. But those correlations are precisely what’s valuable to OpenAI and others building data-driven products.

Once OpenAI has set the price of copyrighted training data at $1B or thereabouts, other model developers will need to pay similar amounts to license their training data: Google, Microsoft (for whatever independently developed models they have), Facebook, Amazon, and Apple. Those companies can afford it. Smaller startups (including companies like Anthropic and Cohere) will be priced out, along with every open source effort. By settling, OpenAI will eliminate much of its competition. And the good news for OpenAI is that even if it doesn’t settle, it still might lose the case. It would probably end up paying more, but the effect on its competition would be the same. Not only that: the Times and other publishers would be responsible for enforcing this “agreement.” They’d be responsible for negotiating with other groups that want to use their content and suing those they can’t agree with. OpenAI keeps its hands clean, and its legal budget unspent. It can win by losing, and if so, does it have any real incentive to win?

Unfortunately, OpenAI is right in claiming that a good model can’t be trained without copyrighted data (although Sam Altman, OpenAI’s CEO, has also said the opposite). Yes, we have substantial libraries of public domain literature, plus Wikipedia, plus papers on ArXiv, but a language model trained only on that data would produce text that sounds like a cross between 19th-century novels and scientific papers, and that’s not a pleasant thought. The problem isn’t just text generation; will a language model whose training data has been limited to copyright-free sources require prompts to be written in an early-20th- or 19th-century style? Newspapers and other copyrighted material are an excellent source of well-edited, grammatically correct modern language. It is unreasonable to believe that a good model for modern languages can be built from sources that have fallen out of copyright.

Requiring model-building organizations to purchase the rights to their training data would inevitably leave generative AI in the hands of a small number of unassailable monopolies. (We won’t address what can or can’t be done with copyrighted material, but we will say that copyright law says nothing at all about the source of the material: you can buy it legally, borrow it from a friend, steal it, find it in the trash—none of this has any bearing on copyright infringement.) One of the participants at the WEF roundtable The Expanding Universe of Generative Models reported that Altman has said that he doesn’t see the need for more than one foundation model. That’s not unexpected, given my guess that his strategy is built around minimizing competition. But this is chilling: if all AI applications go through one of a small group of monopolists, can we trust those monopolists to deal honestly with issues of bias? AI developers have said a lot about “alignment,” but discussions of alignment always seem to sidestep more immediate issues like race and gender-based bias. Will it be possible to develop specialized applications (for example, O’Reilly Answers) that require training on a specific dataset? I’m sure the monopolists would say “of course, those can be built by fine tuning our foundation models”; but do we know whether that’s the best way to build those applications? Or whether smaller companies will be able to afford to build those applications, once the monopolists have succeeded in buying the market? Remember: Uber was once inexpensive.

If model development is limited to a few wealthy companies, its future will be bleak. The outcomes of these copyright lawsuits won’t just apply to the current generation of Transformer-based models; they will apply to any model that needs training data. Limiting model building to a small number of companies will eliminate most academic research. It would certainly be possible for most research universities to build a training corpus from content they acquired legitimately: any good library will have the Times and other newspapers on microfilm, which can be converted to text with OCR. But if the law specifies how copyrighted material can be used, research applications based on material a university has legitimately purchased may not be possible. It won’t be possible to develop open source models like Mistral and Mixtral—the funding to acquire training data won’t be there—which means that the smaller models that don’t require a massive server farm with power-hungry GPUs won’t exist. Many of these smaller models can run on a modern laptop, which makes them ideal platforms for developing AI-powered applications. Will that be possible in the future? Or will innovation only be possible through entrenched monopolies?

Open source AI has been the victim of a lot of fear-mongering lately. However, the idea that open source AI will be used irresponsibly to develop hostile applications that are inimical to human well-being gets the problem precisely wrong. Yes, open source will be used irresponsibly—as has every tool that has ever been invented. However, we know that hostile applications will be developed, and are already being developed: in military laboratories, in government laboratories, and at any number of companies. Open source gives us a chance to see what is going on behind those locked doors: to understand AI’s capabilities and possibly even to anticipate abuse of AI and prepare defenses. Handicapping open source AI doesn’t “protect” us from anything; it prevents us from becoming aware of threats and developing countermeasures.

Transparency is important, and proprietary models will always lag open source models in transparency. Open source has always been about source code rather than data, but that is changing. OpenAI’s GPT-4 scores surprisingly well on Stanford’s Foundation Model Transparency Index but still lags behind the leading open source models (Meta’s LLaMA and BigScience’s BLOOM). However, it isn’t the total score that’s important; it’s the “upstream” score, which includes sources of training data, and on this the proprietary models aren’t close. Without data transparency, how will it be possible to understand the biases that are built into any model? Understanding those biases will be important to addressing the harms that models are doing now, not hypothetical harms that might arise from sci-fi superintelligence. Limiting AI development to a few wealthy players who make private agreements with publishers ensures that training data will never be open.

What will AI be in the future? Will there be a proliferation of models? Will AI users, both corporate and individuals, be able to build tools that serve them? Or will we be stuck with a small number of AI models running in the cloud and being billed by the transaction, where we never really understand what the model is doing or what its capabilities are? That’s what the endgame to the legal battle between OpenAI and the Times is all about.