dabus – Techdirt

from the it's-alive!-though-not-really dept

We’ve covered the quixotic campaign of Stephen Thaler, who has filed lawsuits around the world arguing that he deserves to get copyrights and patents on writings and inventions created by DABUS, which Thaler claims is an AI that he created. He’s lost nearly every case as he attempts to do so, often embarrassingly, including one just a few weeks ago.

Wired’s Will Bedingfield has an amazing article (disclaimer: I spoke to Will while he was researching the article, and am quoted briefly), in which he interviews Thaler. What’s incredible is that Thaler more or less admits he doesn’t actually care about the copyrights and patents, but really sees this as more of a marketing campaign, and a chance to claim that DABUS is sentient (note: it is not).

DABUS has been around a lot longer than the lawsuits. Thaler describes it as an evolving system “at least 30 years in the making.” He has, he says over email, “created the most capable AI paradigm in the world, and through its sentience it is driven to invent and create.” Throughout our conversation, he seems exasperated that journalists have tended to focus on the legal aspects of his cases.

Organizations with “deep pockets” with a goal of “world conquest,” like Google, have kept debates focused on their machines, he says. The copyright and patent suits are one avenue to publicize DABUS’s sentience, as well as to provoke the public into thinking about the rights of this new species. “It’s basically Perry Mason versus Albert Einstein. Which do you want to read about?” Thaler says, arguing that people might be captivated by the courtroom dramas of a fictional lawyer, but they should care about the science.

“The real story is DABUS. And I’m proud to be part of Abbott’s efforts. He’s a sharp guy, and I think it’s a good cause,” he says. “But let’s think about the situation when it first materialized. Here I am building a system capable of sentience and consciousness, and he gave me the opportunity to tell the world about it.”

So, first of all, this suggests a pretty obnoxious abuse of the judicial system. Second, if you don’t want journalists to focus on the legal aspects, maybe (just a suggestion here) don’t file these highly questionable claims.

The article notes that the real villain here is not Thaler, who seems like a very naïve inventor, but a British law professor, Ryan Abbott, who convinced Thaler to make all these patent and copyright attempts, and is representing him pro bono.

Abbott has known Thaler for years, and when, in 2018, he decided to set up his Artificial Inventor Project—a group of intellectual property lawyers and an AI scientist working on IP rights for AI-generated “outputs”—he reached out to the inventor and asked him if he could help. Thaler agreed and directed DABUS to create two inventions. Abbott had the basis of his first case.

Abbott… seems to have a ridiculous understanding of intellectual property law, such that I feel bad for any students who learn about the law from him.

Abbott’s contention is that machine inventions should be protected to incentivize people to use AI for social good. It shouldn’t matter, he says, whether a drug company asked a group of scientists or a group of supercomputers to formulate a vaccine for a new pathogen: The result should still be patentable, because society needs people to use AI to create beneficial inventions. Old patent law, he says, is ill-equipped to deal with changing definitions of intelligence. “In the US, inventors are defined as individuals, and we argued there was no reason that was restricted to a natural person,” he says.

This is extremely confused on multiple levels. Yes, the purpose of patent and copyright law is to create incentives for the inventor or author, but it does so by giving them a limited-time monopoly by which they can profit. AI machines, however, don’t need to profit. And the argument that giving out patents and copyrights will somehow “incentivize people to use AI for social good” makes no sense. That’s not how any of this works. And, indeed, if you lock up the works and inventions, you limit the social good by preventing others from making use of them.

There’s also this weird bit in which Abbott keeps insisting that DABUS’ creations are at the direction of Thaler, but Thaler says Abbott is wrong. That misunderstanding is… kind of a big deal if Abbott is running around to various copyright and patent boards, and various courts around the world, misrepresenting the reality of his client’s situation:

Abbott says the coverage of the cases—influenced by the district court’s vagueness—has been quite confused, with a misguided focus on DABUS’s autonomy. He emphasizes that he is not arguing that an AI should own a copyright; 3D printers—or scientists employed by a multinational, for that matter—create things, but don’t own them. He sees no legal difference between Thaler’s machine and someone asking Midjourney to “make me a picture of a squirrel on a bicycle.”

“The autonomous statement was that the machine was executing the traditional elements of authorship, not that it crawled out of a primordial ooze, plugged itself in, paid a ton of utility bills and dropped out of college to do art,” he says. “And that is the case with any number of commonly used generative AI systems now: The machine is autonomously automating the traditional elements of authorship.”

Thaler directly contradicts Abbott here. He says that DABUS is not taking any human input; it’s totally autonomous. “So I probably disagree with Abbott a little bit about bringing in all these AI tools, you know, text to image and so forth, where you’ve got a human being that is dictating and is hands on with the tool,” he says. “My stuff just sits and contemplates and contemplates and comes up with new revelations that can be, you know, along any sensory channel.”

Anyway, the article is really fascinating, and there are a bunch of good quotes in there from law professor Matthew Sag who is always good on this topic, including: “The bottom line is that we don’t need AI inventors to patent the outcomes of emergent processes.” And also: “I don’t even really know where to begin, other than to say, if there is a sentient AI on the planet currently, it’s definitely not this.”

Still, the real story here seems to be about a very confused British academic lawyer, who doesn’t understand how patents and copyrights work to incentivize things (and when they limit creativity and innovation), and an equally confused inventor, who seems to think that filing bogus lawsuits he doesn’t even really agree with is a good way to tell the world about an AI he claims is sentient, even though it’s not.

Filed Under: ai, copyright, dabus, patents, ryan abbott, sentience, stephen thaler

Stupid Patent Of The Month: Trying To Get U.S. Patents On An AI Program

from the ai-did-not-write-this dept

Only people can get patents. There’s a good reason for that, which is that the patent grant—a temporary monopoly granted by the government—is supposed to be given out only to “promote the progress of science and useful arts.” Just like monkeys can’t get a copyright on a photo, because it doesn’t incentivize the monkey to take more photos, software can’t get patents, because it doesn’t respond to incentives.

Stephen Thaler hasn’t gotten this memo, because he’s spent years trying to get copyrights and patents for his AI programs. And people do seem intrigued by the idea of AI getting intellectual property rights. Thaler is able to get significant press attention by promoting his misguided legal battles to get patents, and he has plenty of lawyers around the world interested in helping him.

Thaler created an AI program he calls DABUS, and filed two patent applications claiming DABUS was the sole inventor. These applications were appropriately rejected by the U.S. Patent Office, rejected again by a district court judge when Thaler sued to get the patents, and rejected yet again by a panel of appeals judges. Still not satisfied, in March, Thaler petitioned the U.S. Supreme Court to take his case. He got support from some surprising quarters, including Lawrence Lessig, as noted in a Techdirt post about the Thaler case.

Fortunately, on April 24, 2023, the Supreme Court declined to take Thaler’s case. That should put an end to his arguments for his AI patent applications once and for all.

Thaler filed U.S. Application Nos. 16/524,350 (describing a “Neural Flame”) and 16/524,532 (describing a “Fractal Container”) in 2019, and listed “DABUS” as the inventor on both applications. He submitted a sworn inventorship statement on DABUS’ behalf, as well as a document assigning himself all of DABUS’ invention rights.

“Thaler maintains that he did not contribute to the conception of these inventions and that any person having skill in the art could have taken DABUS’ output and reduced the ideas in the applications to practice,” the Federal Circuit opinion explains.

But the Patent Act requires inventors to be “individuals,” which means “a human being, a person” in Supreme Court precedent.

The Idea Of AI Patents Keeps Coming Up

The issue of AI invention won’t go away, because there’s a dedicated lobby of enthusiasts—and patent lawyers who want to work for them—that wants to keep talking about it. The patent office is currently collecting public comments about the possibility of AI inventorship for the second time, having already done so in 2019.

Why would anyone want AI to have inventorship rights in the first place? The amicus brief from a Chicago patent lawyers’ group, which supported Thaler’s bid to take the DABUS case to the Supreme Court, holds a clue. They imagine a future in which:

ownership can be partitioned in various ways between entities that developed the AI, provided training data to the AI, trained the AI, and used the AI to invent, to the extent that these entities are different. In some cases, such agreements will result in one entity owning 100% of inventions produced by the AI, but other allocations of ownership are possible.

Endless negotiations over slices of idea-ownership might be a win for the lawyers involved in those negotiations, but it’s a loss for everyone else.

We don’t need property rights systems to govern everything. In fact, the public loses out when we do that. The thousands of software patents created by humans are already a mess, causing real problems for developers and users of actual software. Applications seeking to grant monopoly rights to computer programs created by an AI are a bad idea, which is why we’re giving Thaler’s patent applications our Stupid Patent of the Month award.

Reposted from the EFF’s Stupid Patent of the Month series.

Filed Under: ai, dabus, patents, stephen thaler

Supreme Court Refuses To Hear Case Over AI’s Right To A Patent; AI Inventions Remain Unpatentable

from the next-time-try-the-AI-supreme-court dept

Phew.

We’ve written a bunch about Stephen Thaler’s quixotic and dangerous quest to allow AI-created works and inventions to receive copyrights and patents. He has repeatedly failed to convince anyone, especially US judges, that Congress intended for anyone other than human beings, as creators and inventors, to receive such monopolies.

Thankfully, Thaler’s loss streak continues. Despite the surprising amicus brief from Larry Lessig, the Supreme Court has refused to take Thaler’s case, and the appeals court ruling on the unpatentability of AI-generated inventions remains standing.

The justices turned away Thaler’s appeal of a lower court’s ruling that patents can be issued only to human inventors and that his AI system could not be considered the legal creator of two inventions that he has said it generated.

Of course, I’m sure this won’t be the last we hear from Thaler. The UK Supreme Court just heard his similar case there, after the UK also refused to give him patents on inventions created by the AI system he created, called DABUS.

And, of course, he’s still arguing the copyright side of things as well. Given how often his attempts keep showing up, I get the feeling he’s not going to just accept defeat and move on.

Still, this is a huge win for innovation, especially as AI has become much more common and accessible to people. Let the AI invent stuff for the betterment of humankind, and not to get a monopoly to lord over us.

Filed Under: ai patents, dabus, patents, stephen thaler, supreme court

Has Larry Lessig Lost The Plot? Tells Supreme Court That AI Should Get Patents

from the who-are-you-and-what-you-have-done-with-lessig? dept

Larry Lessig’s views and thoughts on things like copyright law, internet freedom, and government corruption have been tremendously influential on me and many others in the tech and tech policy worlds. His books are still worth reading and thinking about. But he’s taken some odd turns of late. A few years ago I called him out for filing a very clear SLAPP suit, which he was kind enough to come on our podcast to debate (he eventually dropped the lawsuit after the NY Times changed the headline he disliked).

But, even so, I don’t think I ever expected to see Larry Lessig sign his name to a Supreme Court amicus brief pushing for greater intellectual property protections, and against some basic fundamental concepts regarding the public domain. But he appears to have done exactly that, arguing that AI-generated inventions deserve patent protection.

Some context here is important. There’s a dude named Stephen Thaler who’s been using an AI he created called DABUS to try to create “inventions” and other content, and then seeking to get the DABUS created concepts covered by copyrights and patents. Basically everywhere around the world has rejected this as nonsense.

Under US law, the issue is quite clear: you need an inventor and an inventor needs to be human, to get a patent. There are many good reasons for this: mainly because the entire point of the patent system is to create incentives to invent. An AI system… doesn’t need that incentive. It just responds to inputs.

This is the same issue that we saw in the trial over the public domain monkey selfie. As I noted at the time, the insane lawsuit over the monkey selfie was brought by a big IP litigation firm (the one that supplied a previous head of the US Patent & Trademark Office), and the whole thing really appeared to be about setting up the firm to handle AI-created patents and copyrights. Thankfully that failed, as the courts, rightly, noted that to get a copyright, you needed to be a human being.

The same was true of the US courts when Thaler sued over the failure to grant DABUS a patent (as an aside, it seems questionable why Thaler should have any standing here at all, as he’s explicitly claiming the AI, DABUS, created the invention, rather than himself, but alas). The district court judge got this right, and easily so:

Congress’s use of the term “individual” in the Patent Act strengthens the conclusion that an “inventor” must be a natural person. Congress provided that in executing the oath or declaration accompanying a patent application, the inventor must include a statement “such individual believes himself or herself to be the original inventor or an original joint inventor of a claimed invention in the application.”… By using personal pronouns such as “himself or herself” and the verb “believes” in adjacent terms modifying “individual,” Congress was clearly referencing a natural person.

The appeals court easily upheld the lower ruling.

The Patent Act does not define “individual.” However, as the Supreme Court has explained, when used “[a]s a noun, ‘individual’ ordinarily means a human being, a person.” Mohamad v. Palestinian Auth., 566 U.S. 449, 454 (2012) (internal alteration and quotation marks omitted). This is in accord with “how we use the word in everyday parlance”: “We say ‘the individual went to the store,’ ‘the individual left the room,’ and ‘the individual took the car,’ each time referring unmistakably to a natural person.” Id. Dictionaries confirm that this is the common understanding of the word. See, e.g., Individual, Oxford English Dictionary (2022) (giving first definition of “individual” as “[a] single human being”); Individual, Dictionary.com (last visited July 11, 2022), https://www.dictionary.com/browse/individual (giving “a single human being, as distinguished from a group” as first definition for “individual”). So, too, does the Dictionary Act, which provides that legislative use of the words “person” and “whoever” broadly include (“unless the context indicates otherwise”) “corporations, companies, associations, firms, partnerships, societies, and joint stock companies, as well as individuals.” 1 U.S.C. § 1 (emphasis added). “With the phrase ‘as well as,’ the definition marks ‘individual’ as distinct from the list of artificial entities that precedes it,” showing that Congress understands “individual” to indicate natural persons unless otherwise noted. Mohamad, 566 U.S. at 454.

Consequently, the Supreme Court has held that, when used in statutes, the word “individual” refers to human beings unless there is “some indication Congress intended” a different reading. Id. at 455 (emphasis omitted). Nothing in the Patent Act indicates Congress intended to deviate from the default meaning. To the contrary, the rest of the Patent Act supports the conclusion that “individual” in the Act refers to human beings.

In short, only humans can get patents.

And, again, this is for very good reasons, because the entire point of the patent system is to create incentives, and AI doesn’t need incentives. If anyone should know this, it’s Larry Lessig. But, he’s filed this amicus brief on Thaler’s cert petition to the Supreme Court, arguing for it to take the case. And… the logic seems… very un-Lessig-like.

Because it completely deprives an entire class of important and potentially life-saving patentable inventions of any protections, the Federal Circuit’s affirmance of the U.S. Patent and Trademark Office’s denial of a patent to Dr. Stephen L. Thaler as the owner of an artificial intelligence system jeopardizes billions in current and future investments, threatens U.S. competitiveness and reaches a result at odds with the plain language of the Patent Act and this Court’s tradition of interpreting the Patent Act in a manner friendly to new technology and innovation.

This case presents a perfect vehicle for this Court to recognize that AI systems have been producing inventions constituting patentable subject matter for decades and that the USPTO’s policy of denying patent protection to owners of AI systems who credit AI systems with “inventor” status is unwarranted by the Patent Act’s language and harms innovation. In drafting the Patent Act, Congress did not foresee AI, but intended to reward all individual creators of patentable inventions with economic incentives. Thus, consistent with both the Patent Act’s plain language and Congressional intent, this Court should interpret the Patent Act’s definition of “inventor” to include AI systems consistent with this Court’s jurisprudence embracing technological innovation.

I’m sorry, but what?!? Larry Lessig arguing that not giving out patents to AI “jeopardizes billions in current and future investments?” This is the same Lessig, after all, who created Creative Commons and spent years and multiple books explaining how locking up knowledge via intellectual monopolies was harmful.

I mean, if you gave me the following paragraph and asked me who wrote it, I’d go through pretty much the entire population of the earth before I got to Lessig:

The USPTO’s failure to grant patent protection to AI inventors puts the U.S. economy at a competitive disadvantage and drives innovation offshore. Global capital moves quickly to jurisdictions that promote innovation. Owners of AI systems will be incentivized to conceal important new innovations rather than reveal them in exchange for patent protection.

Patent policy expert (and occasional Techdirt contributor) Matt Lane wrote up a thorough post debunking basically every point raised in the Lessig brief. The idea that patents help innovation has been debunked so many times. Patents create incentives for monopolies, and monopoly rents, not innovation. As Lane notes, the patent system is already flooded with weak, broad, vague patents by those looking for a lottery ticket to shake down actual innovators who are bringing real world products to market. That will turn into an absolute deluge by allowing AI patents:

We are already seeing this flood of content from AI text and art generators, and it should be pretty easy to train an AI model on a body of research and ask it to spit out patent applications. If those applications could be filed with little human intervention, then well-resourced patent filers could quickly and easily crowd out entire fields.

This flooding the zone would merely be an escalation of what companies are already doing. Many tech companies regularly file patents they have no intention of developing. Drug companies have pioneered strategies to generate and file large numbers of patents on existing drug products. AI could drastically cut down the time and effort required to block off areas of technology with patents. For example, AbbVie set up an ideas submission program to reward scientists for coming up with patentable ideas on their existing drug Humira. An AI could easily beat those scientists in speed, volume, and maybe even cost. And the cost of filing and prosecuting the resulting patent applications would likely pale in comparison to the rewards for successfully staking out a monopoly in lucrative fields.

The other argument Lessig makes is also weird. It buys into the myth that patents are about sharing knowledge of the invention, which is a very strange myth for Lessig to buy into. So he warns that without patents, AI inventions will be hidden by trade secrets. This argument is trotted out regularly by patent maximalists on any attempt to cut back patent protections, but there’s almost no evidence to support it.

First of all, most inventions wouldn’t be protected by trade secrets anyway, because most things can be reverse engineered. Second, the value in inventing something is in bringing that product to market. That’s where you make the money, in selling the product, not the patent. Third, the point of the patent is to help the inventor recoup the capital expenditure in creating the invention in the first place. But, AI is cheap and can generate tons of ideas quickly. The capex is minimal.

So, the risk of “secrecy” hiding inventions seems… limited, at best.

And, also, this entire argument rests on the idea that anyone anywhere ever actually uses patents to learn about new inventions. That’s laughable. Again, Matt Lane debunks this point:

Lessig’s argument that barring AI from inventor status will force companies to pivot to trade secrets also falls flat for a number of reasons. First, it vastly overestimates the usefulness of a flood of AI patents that may not even lead to working inventions. As the patent maximalist IP Watchdog explains, inventors can patent what they believe to have invented, not just what they actually invented. Patent applicants do not need a working prototype to file a patent application (except for perpetual motion machines). We already have problems with impossible patents, as seen in how Theranos’s patent portfolio helped it maintain its grift. AI will just make this problem worse, especially considering the “hallucination” problem is widely known.

Second, patents and patent applications are already seen as a low value source for learning new science because “patents are obfuscated with legal jargon and […] reading patents might lead to increased liability for ‘willful’ patent infringement.” Finally, savvy actors use both patents and trade secrets to maximize their protection. For example, the biologic drug industry is well known for overpatenting and also successfully using trade secrets to drive up costs to enter once those patents expire. An AI can be trained to take these strategies even further by maximizing the amount of patents that could be applied for while minimizing the disclosure of key know-how that a future competitor would need to actually practice the claimed invention.

If the makers of AI are concerned that they can’t make money without patents, just ask their AI to invent products they can sell in the market. They don’t need patents. They need products.

But, still, I never thought I’d see the day when Larry Lessig started pushing an IP maximalism line, one that would flood the system with vague, overly broad monopolies that limit the ability of humans to actually build stuff.

Filed Under: ai, ai patents, dabus, larry lessig, scotus, stephen thaler, supreme court

from the ai-nonsense dept

Stephen Thaler is a man on a mission. It’s not a very good mission, but it’s a mission. He created something called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) and claims that it’s creating things, for which he has tried to file for patents and copyrights around the globe, with his mission being to have DABUS named as the inventor or author. This is dumb for many reasons. The purpose of copyright and patents is to incentivize the creation of these things, by providing the inventor or author a limited-time monopoly, allowing them, in theory, to use that monopoly to make some money, thereby making the entire inventing/authoring process worthwhile. An AI doesn’t need such an incentive. And this is why patents and copyrights are given only to persons, not to animals or AI.

But Thaler has spent years trying to get patent offices around the world to give DABUS a patent. I’ll just note here that if Thaler is correct, then it seems to me that he shouldn’t be able to do this at all, as it’s not his invention to patent. It’s DABUS’s. And unless DABUS has hired Thaler to seek a patent, it’s a little unclear to me why Thaler has any say here.

Either way, Thaler’s somewhat quixotic quest continues to fail. The EU Patent Office rejected his application. The Australian patent office similarly rejected his request. In that case, a court sided with Thaler after he sued the Australian patent office, and said that his AI could be named as an inventor, but thankfully an appeals court set aside that ruling a few months ago. In the US, Thaler/DABUS keeps on losing as well. Last fall, he lost in court as he tried to overturn the USPTO ruling, and then earlier this year, the US Copyright Office also rejected his copyright attempt (something it has done a few times before). In June, he sued the Copyright Office over this, which seems like a long shot.

And now, he’s also lost his appeal of the ruling in the patent case. CAFC, the Court of Appeals for the Federal Circuit — the appeals court that handles all patent appeals — has rejected Thaler’s request just like basically every other patent and copyright office, and nearly all courts.

The ruling is pretty straightforward:

This case presents the question of who, or what, can be an inventor. Specifically, we are asked to decide if an artificial intelligence (AI) software system can be listed as the inventor on a patent application. At first, it might seem that resolving this issue would involve an abstract inquiry into the nature of invention or the rights, if any, of AI systems. In fact, however, we do not need to ponder these metaphysical matters. Instead, our task begins – and ends – with consideration of the applicable definition in the relevant statute.

The United States Patent and Trademark Office (PTO) undertook the same analysis and concluded that the Patent Act defines “inventor” as limited to natural persons; that is, human beings. Accordingly, the PTO denied Stephen Thaler’s patent applications, which failed to list any human as an inventor. Thaler challenged that conclusion in the U.S. District Court for the Eastern District of Virginia, which agreed with the PTO and granted it summary judgment. We, too, conclude that the Patent Act requires an “inventor” to be a natural person and, therefore, affirm.

The CAFC ruling doesn’t get that deep into the issues — it just looks at the US Patent Act and says “um, duh, seems kinda obvious that this only covers human beings.”

The Patent Act does not define “individual.” However, as the Supreme Court has explained, when used “[a]s a noun, ‘individual’ ordinarily means a human being, a person.” Mohamad v. Palestinian Auth., 566 U.S. 449, 454 (2012) (internal alteration and quotation marks omitted). This is in accord with “how we use the word in everyday parlance”: “We say ‘the individual went to the store,’ ‘the individual left the room,’ and ‘the individual took the car,’ each time referring unmistakably to a natural person.” Id. Dictionaries confirm that this is the common understanding of the word. See, e.g., Individual, Oxford English Dictionary (2022) (giving first definition of “individual” as “[a] single human being”); Individual, Dictionary.com (last visited July 11, 2022), https://www.dictionary.com/browse/individual (giving “a single human being, as distinguished from a group” as first definition for “individual”). So, too, does the Dictionary Act, which provides that legislative use of the words “person” and “whoever” broadly include (“unless the context indicates otherwise”) “corporations, companies, associations, firms, partnerships, societies, and joint stock companies, as well as individuals.” 1 U.S.C. § 1 (emphasis added). “With the phrase ‘as well as,’ the definition marks ‘individual’ as distinct from the list of artificial entities that precedes it,” showing that Congress understands “individual” to indicate natural persons unless otherwise noted. Mohamad, 566 U.S. at 454.

Consequently, the Supreme Court has held that, when used in statutes, the word “individual” refers to human beings unless there is “some indication Congress intended” a different reading. Id. at 455 (emphasis omitted). Nothing in the Patent Act indicates Congress intended to deviate from the default meaning. To the contrary, the rest of the Patent Act supports the conclusion that “individual” in the Act refers to human beings.

For instance, the Act uses personal pronouns – “himself” and “herself” – to refer to an “individual.” § 115(b)(2). It does not also use “itself,” which it would have done if Congress intended to permit non-human inventors. The Patent Act also requires inventors (unless deceased, incapacitated, or unavailable) to submit an oath or declaration. See, e.g., id. (requiring oath or declaration from inventor that “such individual believes himself or herself to be the original inventor or an original joint inventor of a claimed invention in the application”). While we do not decide whether an AI system can form beliefs, nothing in our record shows that one can, as reflected in the fact that Thaler submitted the requisite statements himself, purportedly on DABUS’ behalf.

The panel easily rejects Thaler’s arguments that try to get around this, which aren’t even worth bothering with here (they were pedantic and nitpicky, and the CAFC panel rightly treats them as nonsense). Then the panel notes that, look, there’s really no debate here, and this whole campaign by Thaler is obviously silly:

Statutes are often open to multiple reasonable readings. Not so here. This is a case in which the question of statutory interpretation begins and ends with the plain meaning of the text. See Bostock v. Clayton Cnty., 140 S. Ct. 1731, 1749 (2020) (“This Court has explained many times over many years, when the meaning of the statute’s terms is plain, our job is at an end.”). In the Patent Act, “individuals” – and, thus, “inventors” – are unambiguously natural persons.

Given Thaler’s willingness to file suits and appeals all over the globe, I imagine he’ll now petition the Supreme Court as well, though it seems unlikely the court will be interested — but these days, you never really know.

At this point, I believe the only country that has actually taken Thaler seriously (other than that one brief blip in Australia) is South Africa. Amusingly, Thaler tried to use South Africa’s approval as part of his argument here in the US, and that was laughed off pretty easily.

Thaler also notes that South Africa has granted patents with DABUS as an inventor. This foreign patent office was not interpreting our Patent Act. Its determination does not alter our conclusion.

Thaler’s weird campaign is a distraction. It’s good to see courts and patent agencies around the globe (mostly) shutting it down. Opening up patents and copyrights to AI would be a disaster and would create lots of serious problems for actual innovation and creativity. Unless Thaler is doing this deliberately to lose and set good precedent, he’s a gadfly, wasting the time of lots of people over a very silly quest.

Filed Under: ai, ai patents, cafc, dabus, patents, stephen thaler

from the this-is-correct dept

For years, throughout the entire monkey selfie lawsuit saga, we kept noting that the real reason a prestigious law firm like Irell & Manella filed such a patently bogus lawsuit was to position itself to be the go-to law firm to argue for AI-generated works deserving copyright. However, we’ve always argued that AI-generated works are (somewhat obviously) in the public domain, and get no copyright. Again, this goes back to the entire nature of copyright law — which is to create a (limited time) incentive for creators, in order to get them to create a work that they might not have otherwise created. When you’re talking about an AI, it doesn’t need a monetary incentive (or a restrictive one). The AI just generates when it’s told to generate.

This idea shouldn’t even be controversial. It goes way, way back. In 1966, the Copyright Office’s annual report noted that it needed to determine whether a computer-created work was authored by the computer, and how copyright should work around such works.

In 1985, the prescient copyright law expert Pam Samuelson wrote a whole paper exploring the role of copyright in works created by artificial intelligence. In that paper, she noted that declaring such works to be in the public domain seemed like an unlikely result, as “the legislature, the executive branch, and the courts seem to strongly favor maximalizing intellectual property rewards” and:

For some, the very notion of output being in the public domain may seem to be an anathema, a temporary inefficient situation that will be much improved when individual property rights are recognized. Rights must be given to someone, argue those who hold this view; the question is to whom to give rights, not whether to give them at all.

Indeed, we’ve seen exactly that. Back in 2018, we wrote about examples of lawyers having trouble even conceptualizing a public domain for such works, as they argued that someone must hold the copyright. But that’s not the way it needs to be. The public domain is a thing, and it shouldn’t just be for century-old works.

Thankfully (and perhaps not surprisingly, since they started thinking about it all the way back in the 1960s), when the Copyright Office released its third edition of the giant Compendium of U.S. Copyright Office Practices, it noted that it would not grant a copyright on “works that lack human authorship,” using “a photograph taken by a monkey” as one example, and also noting that “the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.”

Of course, that leaves open some kinds of mischief, and the Office even admits that whether the creative work is done by a human or a computer is “the crucial question.” And, that’s left open attempts to copyright AI-generated works. Jumping in to push for copyrights for the machines was… Stephen Thaler. We’ve written about Thaler going all the way back to 2004 when he was creating a computer program to generate music and inventions. But, he’s become a copyright and patent pest around the globe. We’ve had multiple stories about attempts to patent AI-generated inventions in different countries — including the US, Australia, the EU and even China. The case in China didn’t involve Thaler (as far as we know), but the US, EU, and Australia cases all did (so far, only Australia has been open to allowing a patent for AI).

But Thaler is not content to just mess up patent law; he’s pushing for AI copyrights as well. And for years, he’s been trying to get the Copyright Office to give his AI the right to claim copyright. As laid out in a comprehensive post over at IPKat, the Copyright Office has refused him many times over, with yet another rejection coming on Valentine’s Day.

The Review Board was, once again, unimpressed. It held that “human authorship is a prerequisite to copyright protection in the United States and that the Work therefore cannot be registered.”

The phrase ‘original works of authorship’ under §102(a) of the Act sets limits to what can be protected by copyright. As early as in Sarony (a seminal case concerning copyright protection of photographs), the US Supreme Court referred to authors as human.

This approach was reiterated in other Supreme Court’s precedents like Mazer and Goldstein, and has been also consistently adopted by lower courts.

While no case has been yet decided on the specific issue of AI-creativity, guidance from the line of cases above indicates that works entirely created by machines do not access copyright protection. Such a conclusion is also consistent with the majority of responses that the USPTO received in its consultation on Artificial Intelligence and Intellectual Property Policy.

The Review also rejected Thaler’s argument that AI can be an author under copyright law because the work made for hire doctrine allows for “non-human, artificial persons such as companies” to be authors. First, held the Board, a machine cannot enter into any binding legal contract. Secondly, the doctrine is about ownership, not existence of a valid copyright.

Somehow, I doubt that Thaler is going to stop trying, but one hopes that he gets the message. Also, it would be nice for everyone to recognize that having more public domain is a good thing and not a problem…

Filed Under: ai, copyright, dabus, stephen thaler, us copyright office

US Judge Gets It Right: AI Doesn't Get Patents

from the HAL-says-this-is-correct dept

A month ago, we wrote about a perplexing (and dangerous) decision down in Australia ruling that an AI can be listed as the inventor of a patent. As we had explained, there was a concerted effort by a small group of patent lawyers and this one dude, Stephen Thaler, to seek out patents for “inventions” created by Dabus (“device for the autonomous bootstrapping of unified sentience”), an AI built by Thaler. As we explained in that and earlier posts, the entire point of the patent system is to provide incentives to humans to invent. An AI does not need such incentives. As we’ve highlighted in the past, the USPTO and the EU patent office have both rejected AI-generated patents. Australia’s patent office had done the same, but a judge there rejected that and said an AI could be listed as an inventor.

All of these situations involve Thaler/DABUS, as does a new ruling in the US, which… thankfully has rejected the idea that an AI deserves patents, after Thaler filed a lawsuit over the USPTO rejection. I think there’s a separate issue here, which is: what standing does Thaler have in the first place? If the argument is that “DABUS” is the inventor, it seems that… um… only DABUS should have the necessary standing to challenge the rejection of its patent application. The fact that Thaler thinks he has standing more or less shows how ridiculous the entire claim is in the first place.

After going through the background of the case, and discussing what level of deference the USPTO deserves, Judge Leonie Brinkema gets straight to the actual point, which is pretty simple: AI doesn’t get a patent.

Even if no deference were due, the USPTO’s conclusion is correct under the law. The question of whether the Patent Act requires that an “inventor” be a human is a question of statutory construction. Accordingly, the plain language of the statute controls…. As the Supreme Court has held: “The preeminent canon of statutory interpretation requires us to ‘presume that [the] legislature says in a statute what it means and means in a statute what it says there.’ Thus, our inquiry begins with the statutory text, and ends there as well if the text is unambiguous.”…

Using the legislative authority provided by the Constitution’s Patent Clause… Congress codified the Patent Act in 1952… and has amended the Patent Act a number of times in the ensuing sixty years. In 2011, Congress promulgated the America Invents Act, which, as relevant here, formally amended the Patent Act to provide an explicit statutory definition for the term “inventor” to mean “the individual, or, if a joint invention, the individuals who invented or discovered the subject matter of the invention.”… The America Invents Act also added that “joint inventor” means “any one of the individuals who invented or discovered the subject matter of a joint invention.”… Additionally, Congress has required that “[a]n application for patent shall be made, or authorized to be made, by the inventor . . . in writing to the Director.”… “[E]ach individual who is the inventor or a joint inventor of a claimed invention in an application for patent shall execute an oath or declaration in connection with the application” which “shall contain statements that–… such individual believes himself or herself to be the original inventor or joint inventor of [the] claimed invention.”

See where this is going?

As the statutory language highlights above, both of the definitions provided by Congress for the terms “inventor” and “joint inventor” within the Patent Act reference an “individual” or “individuals.”… Congress used the same term–“individual”–in other significant provisions of the Patent Act which reference an “inventor,” including requiring that “each individual who is the inventor or a joint inventor” execute an oath or declaration…

The court then notes that in analyzing other laws, courts have long said that “individual” means human. And it also highlights that the language in the Patent Act makes it clear that it was intended to apply to humans — humans who can make a declaration about their own beliefs.

Congress’s use of the term “individual” in the Patent Act strengthens the conclusion that an “inventor” must be a natural person. Congress provided that in executing the oath or declaration accompanying a patent application, the inventor must include a statement “such individual believes himself or herself to be the original inventor or an original joint inventor of a claimed invention in the application.”… By using personal pronouns such as “himself or herself” and the verb “believes” in adjacent terms modifying “individual,” Congress was clearly referencing a natural person.

Then there’s a fun bit of judicial eye-rolling: “having neither facts nor law to support his argument,” as the judge puts it, Thaler’s argument is basically “but this is good for innovation.” But that’s not going to fly (leaving aside the fact that allowing AI to get patents would be objectively terrible for innovation, it’s also not how any of this works):

Plaintiff provides no support for his argument that these policy considerations should override the plain meaning of a statutory term.

It gets even worse for Thaler’s arguments. He argued that the PTO hadn’t properly considered the policy ramifications of not allowing AI to get patents, but as the judge notes, that’s clearly not true. It had. And it rejected the dumb idea.

Specifically, the USPTO points to a conference on artificial intelligence policy it held in January 2019, and to requests for public comment “on a whole host of issues related to the intersection of intellectual property policy and artificial intelligence” it issued in August and October 2019. In October 2020, the USPTO issued a comprehensive report on those comments.

And… what did that report say?

Many commentators disagreed with plaintiff’s view that artificial intelligence machines should be recognized as inventors…

Given how active Thaler and his lawyer friends have been around the globe, I imagine this is hardly the end of these campaigns. I imagine this ruling will be appealed, and how long will it be until some sucker of a Senator or Member of Congress, convinced by Thaler’s nonsense, will introduce a bill to amend the Patent Act to enable AI patents?

Filed Under: ai, artificial intelligence, dabus, patents, stephen thaler, uspto

Australian Court Ridiculously Says That AI Can Be An Inventor, Get Patents

from the i'm-sorry-dave,-you-shouldn't-do-that dept

There have been some questions raised about whether or not AI-created works deserve intellectual property protection. Indeed, while we (along with many others) laughed along at the trial about the monkey selfie, we had noted all along that the law firm pushing to give the monkey (and with it, PETA) the copyright on the photo was almost certainly trying to tee up a useful case to argue that AI can get copyright and patents as well. Thankfully, the courts (and later the US Copyright Office) determined that copyrights require a human author.

The question on patents, however, is still a little hazy (unfortunately). It should be the same as with copyright. The intent of both copyrights and patents is to create incentives (in the form of a “limited” monopoly) for the creation of the new creative work or invention. AI does not need such an incentive (nor do animals). Over the last few years, though, there has been a rush by some who control AI to try to patent AI creations. This is still somewhat up in the air. In the US, the USPTO has (ridiculously) suggested that AI-created inventions could be patentable — but then (rightfully) rejected a patent application from an AI. The EU has rejected AI-generated patents.

Unfortunately, it looks like Australia has gone down the opposite path from the EU, after a court ruled that an AI can be an inventor for a patent. The case was brought by the same folks who were denied patents in the EU & US, and who are still seeking AI patents around the globe. Australia’s patent office had followed suit with its EU & US counterparts, but the judge has now sent it back saying that there’s nothing wrong with AI holding patents.

University of Surrey professor Ryan Abbott has launched more than a dozen patent applications across the globe, including in the UK, US, New Zealand and Australia, on behalf of US-based Dr Stephen Thaler. They seek to have Thaler’s artificial intelligence device known as Dabus (a device for the autonomous bootstrapping of unified sentience) listed as the inventor.

Honestly, I remain perplexed by this weird attempt to demand something that makes no sense, though it seems like yet another attempt to scam the system to make money by shaking others down. Once again, AI needs no such incentive to invent, and it makes no sense at all to grant it patents. An AI also cannot assign the patents to others, or properly license a patent. The whole thing is stupid.

It is, however, yet another point to show just how extreme the belief that every idea must be “owned” has become. And it’s incredibly dangerous. Those pushing for this — or the courts and patent offices agreeing with this — don’t seem to have any concept of how badly this will backfire.

And, of course, there’s the reality underlying this, which only underscores how dumb it is: the AI isn’t actually getting the patent. It would go to the guy who “owns” the AI.

Beach said a non-human inventor could not be the applicant of a patent, and as the owner of the system, Thaler would be the owner of any patents that would be granted on inventions by Dabus.

At least some people are recognizing what a total clusterfuck it would be if AI-generated patents were allowed. The Guardian quotes an Australian patent attorney, Mark Summerfield, who raises just one of many valid concerns:

“Allowing machine inventors could have numerous consequences, both foreseeable and unforeseeable. Allowing patents for inventions churned out by tireless machines with virtually unlimited capacity, without the further exercise of any human ingenuity, judgment, or intellectual effort, may simply incentivise large corporations to build ‘patent thicket generators’ that could only serve to stifle, rather than encourage, innovation overall.”

Unfortunately, as the article notes, it’s not just Australia making this dangerous decision. South Africa just granted DABUS a patent last week as well.

Filed Under: ai, australia, dabus, incentives, monkey selfie, patent law, patents

EU Patent Office Rejects Two Patent Applications In Which An AI Was Designated As The Inventor

from the watch-this-space dept

We’ve written a bunch about why AI-generated artwork should not (and need not) have any copyright at all. The law says that copyright only applies to human creators. But what about patents? There has been a big debate about this in the patent space over the last year, mainly led by AI developers who want to be able to secure patents on AI-generated ideas. The patent offices in the EU and the US have been exploring the issue, and asking for feedback, while they plot out a strategy, but some AI folks decided to force the matter sooner. Over the summer they announced that they had filed for two patents in the EU for inventions that they claim were “invented” by an AI named DABUS without the assistance of a human inventor.

And now, the EU Patent Office has rejected both applications, since they don’t list a human inventor.

The EPO has refused two European patent applications in which a machine was designated as inventor. Both patent applications indicate “DABUS” as inventor, which is described as “a type of connectionist artificial intelligence”. The applicant stated that they acquired the right to the European patent from the inventor by being its successor in title.

After hearing the arguments of the applicant in non-public oral proceedings on 25 November the EPO refused EP 18 275 163 and EP 18 275 174 on the grounds that they do not meet the requirement of the EPC that an inventor designated in the application has to be a human being, not a machine. A reasoned decision may be expected in January 2020.

Frankly, this is the right decision, and it’s one that I hope patent offices around the globe recognize and stick to. I fear, however, that this will actually kick off a process that comes to the opposite conclusion, and that patent offices will change the rules to allow for AI-generated patents.

The problem, yet again, is in people’s misguided belief that everything must be owned by someone, and that somehow without a patent it is impossible to successfully commercialize or market a product. There is tremendous evidence to the contrary (including just by looking at products after their patents run out — which is often a time when more innovation occurs, since there’s greater competition driving improvements). But, instead, you hear nonsense like the following from Prof. Ryan Abbott, who helped file the two now-rejected applications, arguing that without patents, somehow these inventions might not come to be:

Abbott and his team believe that powerful AI systems could eventually find cures for cancer or find workable solutions for reversing climate change. “If outdated IP laws around the world don’t respond quickly to the rise of the inventive machine, the lack of incentive for AI developers could stand in the way of a new era of spectacular human endeavor,” Abbott said.

But why? AI doesn’t need the monopoly control as incentive to create an invention. That’s not what motivates the AI. What’s wrong with just letting the AI come up with those cures for cancer and workable solutions for reversing climate change and just giving them to the world to make the world a better place?

Filed Under: ai, dabus, epo, eu, humans, incentives, ownership, patents, ryan abbott