
The Problems Of The NCMEC CyberTipline Apply To All Stakeholders

from the no-easy-answers dept

The NCMEC CyberTipline's failure to combat child sexual abuse material (CSAM) as effectively as it could is extremely frustrating. But as you look at the details, you realize there just aren't any particularly easy fixes. While there are a few areas that could improve things at the margins, the deeper you look, the more challenging the whole setup turns out to be. There aren't any easy answers.

And that sucks, because Congress and the media often expect easy answers to complex problems, and here those answers may simply not exist.

This is the second post about the Stanford Internet Observatory's report on the NCMEC CyberTipline, the somewhat useful, but tragically limited, main channel through which investigations of CSAM online begin. In the first post, we discussed the structure of the system, and how the incentives around law enforcement are a big part of what makes the system less impactful than it otherwise might be.

In this post, I want to dig a little deeper into the specific challenges in making the CyberTipline work better.

The Constitution

I'm not saying that the Constitution is a problem, but it represents a challenge here. In the first post, I briefly mentioned Jeff Kosseff's important article about how the Fourth Amendment and the structure of NCMEC make things tricky, but it's worth digging in a bit here to understand the details.

The US government set up NCMEC as a private non-profit in part because, if it were a government agency doing this work, there would be significant Fourth Amendment concerns about evidence collected without a warrant. If it were a government agency, the law could not require companies to hand over the info without a warrant.

So, Congress did a kind of two-step dance here: they set up this “private” non-profit, and then created a law that requires companies that come across CSAM online to report it to the organization. And all of this seems to rely on a kind of fiction that if we pretend NCMEC isn’t a government agent, then there’s no 4th Amendment issue.

From the Stanford report:

The government agent doctrine explains why Section 2258A allows, but does not require, online platforms to search for CSAM. Indeed, the statute includes an express disclaimer that it does not require any affirmative searching or monitoring. Many U.S. platforms nevertheless proactively monitor their services for CSAM, yielding millions of CyberTipline reports per year. Those searches’ legality hinges on their voluntariness. The Fourth Amendment prohibits unreasonable searches and seizures by the government; warrantless searches are typically considered unreasonable. The Fourth Amendment doesn’t generally bind private parties, however the government may not sidestep the Fourth Amendment by making a private entity conduct a search that it could not constitutionally do itself. If a private party acts as the government’s “instrument or agent” rather than “on his own initiative” in conducting a search, then the Fourth Amendment does apply to the search. That’s the case where a statute either mandates a private party to search or “so strongly encourages a private party to conduct a search that the search is not primarily the result of private initiative.” And it’s also true in situations where, with the government’s knowledge or acquiescence, a private actor carries out a search primarily to assist the government rather than to further its own purposes, though this is a case-by-case analysis for which the factors evaluated vary by court.

Without a warrant, searches by government agents are generally unconstitutional. The usual remedy for an unconstitutional search is for a court to throw out all evidence obtained as a result of it (the so-called “exclusionary rule”). If a platform acts as a government agent when searching a user’s account for CSAM, there is a risk that the resulting evidence could not be introduced against the user in court, making a conviction (or plea bargain) harder for the prosecution to obtain. This is why Section 2258A does not and could not require online platforms to search for CSAM: it would be unconstitutional and self-defeating.

In CSAM cases involving CyberTipline reports, defendants have tried unsuccessfully to characterize platforms as government agents whose searches were compelled by Section 2258A and/or by particular government agencies or investigators. But courts, pointing to the statute’s express disclaimer language (and, often, the testimony of investigators and platform employees), have repeatedly held that platforms are not government agents and their CSAM searches were voluntary choices motivated mainly by their own business interests in keeping such repellent material off their services.

So, it's quite important that the service providers finding and reporting CSAM are not seen as agents of the government. If they were, it would destroy the ability to use that evidence in prosecuting cases. And, as the report notes, it's also why it would be a terrible idea to require social media companies to proactively hunt down CSAM. If the government required it, it would effectively light all that evidence on fire and prevent it from being used in prosecutions.

That said, the courts (including in a ruling by Neil Gorsuch while he was on the appeals court) have made it clear that, while platforms may not be government agents, it’s pretty damn clear that NCMEC and the CyberTipline are. And that creates some difficulties.

In a landmark case called Ackerman, one federal appeals court held that NCMEC is a “governmental entity or agent.” Writing for the Tenth Circuit panel, then-judge Neil Gorsuch concluded that NCMEC counts as a government entity in light of NCMEC’s authorizing statutes and the functions Congress gave it to perform, particularly its CyberTipline functions. Even if NCMEC isn’t itself a governmental entity, the court continued, it acted as an agent of the government in opening and viewing the defendant’s email and four attached images that the online platform had (as required) reported to NCMEC. The court ruled that those actions by NCMEC were a warrantless search that rendered the images inadmissible as evidence. Ackerman followed a trial court-level decision, Keith, which had also deemed NCMEC a government agent: its review of reported images served law enforcement interests, it operated the CyberTipline for public not private interests, and the government exerts control over NCMEC including its funding and legal obligations. As an appellate-level decision, Ackerman carries more weight than Keith, but both have proved influential.

The private search doctrine is the other Fourth Amendment doctrine commonly raised in CSAM cases. It determines what the government or its agents may view without a warrant upon receiving a CyberTipline report from a platform. As said, the Fourth Amendment generally does not apply to searches by private parties. “If a private party conducted an initial search independent of any agency relationship with the government,” the private search doctrine allows law enforcement (or NCMEC) to repeat the same search so long as they do not exceed the original private search’s scope. Thus, if a platform reports CSAM that its searches had flagged, NCMEC and law enforcement may open and view the files without a warrant so long as someone at the platform had done so already. The CyberTipline form lets the reporting platform indicate which attached files it has reviewed, if any, and which files were publicly available.

For files that were not opened by the platform (such as where a CyberTipline submission is automated without any human review), Ackerman and a 2021 Ninth Circuit case called Wilson hold that the private search exception does not apply, meaning the government or its agents (i.e., NCMEC) may not open the unopened files without a warrant. Wilson disagreed with the position, adopted by two other appeals-court decisions, that investigators’ warrantless opening of unopened files is permissible if the files are hash matches for files that had previously been viewed and confirmed as CSAM by platform personnel. Ackerman concluded by predicting that law enforcement “will struggle not at all to obtain warrants to open emails when the facts in hand suggest, as they surely did here, that a crime against a child has taken place.”

To sum up: Online platforms’ compliance with their CyberTipline reporting obligations does not convert them into government agents so long as they act voluntarily in searching their platforms for CSAM. That voluntariness is crucial to maintaining the legal viability of the millions of reports platforms make to the CyberTipline each year. This imperative shapes the interactions between platforms and U.S.-based legislatures, law enforcement, and NCMEC. Government authorities must avoid crossing the line into telling or impermissibly pressuring platforms to search for CSAM or what to search for and report. Similarly, platforms have an incentive to maintain their CSAM searches’ independence from government influence and to justify those searches on rationales “separate from assisting law enforcement.” When platforms (voluntarily) report suspected CSAM to the CyberTipline, Ackerman and Wilson interpret the private search doctrine to let law enforcement and NCMEC warrantlessly open and view only user files that had first been opened by platform personnel before submitting the tip or were publicly available.

This is all pretty important in making sure that the whole system stays on the right side of the 4th Amendment. As much as some people really want to force social media companies to proactively search for and report CSAM, mandating that creates real problems under the 4th Amendment.

As for the NCMEC and law enforcement side of things, the requirement to get a warrant for unopened communications remains important. But, as noted below, sometimes law enforcement doesn’t want to get a warrant. If you’ve been reading Techdirt for any length of time, this shouldn’t surprise you. We see all sorts of areas where law enforcement refuses to take that basic step of getting a warrant.

That framing is key to understanding the rest of this, including where each of the stakeholders falls down. Let's start with the biggest problem of all: where law enforcement fails.

Law Enforcement

In the first article on this report, we noted that the incentive structure has made it such that law enforcement often tries to evade this entire process. Sometimes officers don't want to go through the process of getting warrants. Some don't want to affiliate with the ICAC task forces because they feel it puts too much of a burden on them, and figure that if they don't take care of it, someone else on the task force will. And sometimes they don't want to deal with CyberTipline reports at all, because they're afraid that if they're too slow after getting a report, they might face liability.

Most of these issues seem to boil down to law enforcement not wanting to do its job.

But the report details some of the other challenges for law enforcement. And it starts with just how many reports are coming in:

Almost across the board law enforcement expressed stress over their inability to fully investigate all CyberTipline reports due to constraints in time and resources. An ICAC Task Force officer said “You have a stack [of CyberTipline reports] on your desk and you have to be ok with not getting to it all today. There is a kid in there, it’s really quite horrible.” A single Task Force detective focused on internet crimes against children may be personally responsible for 2,000 CyberTipline reports each year. That detective is responsible for working through all of their tips and either sending them out to affiliates or investigating them personally. This process involves reading the tip, assessing whether a crime was committed, and determining jurisdiction; just determining jurisdiction might necessitate multiple subpoenas. Some reports are sent out to affiliates and some are fully investigated by detectives at the Task Force.

An officer at a Task Force with a relatively high CyberTipline report arrest rate said “we are stretched incredibly thin like everyone.” An officer in a local police department said they were personally responsible for 240 reports a year, and that all of them were actionable. When asked if they felt overwhelmed by this volume, they said yes. While some tips involve self-generated content requiring only outreach to the child, many necessitate numerous search warrants. Another officer, operating in a city with a population of 100,000, reported receiving 18–50 CyberTipline reports annually, actively investigating around 12 at any given time. “You have to manage that between other egregious crimes like homicides,” they said. This report will not extensively cover the issue of volume and law enforcement capacity, as this challenge is already well-documented and detailed in the 2021 U.S. Department of Homeland Security commissioned report, in Cullen et al., and in a 2020 Government Accountability Office report. “People think this is a one-in-a-million thing,” a Task Force officer said. “What they don’t know is that this is a crime of secrecy, and could be happening at four of your neighbors’ houses.”

And of course, making social media platforms more liable doesn't fix much here. If anything, it makes things worse, because it encourages even more reporting by the platforms, which only further overloads law enforcement.

Given all those reports the cops are receiving, you’d hope they had a good system for managing them. But your hope would not be fulfilled:

Law enforcement pick a certain percentage of reports to investigate. The selection is not done in a very scientific way—one respondent described it as “They hold their finger up in the air to feel the wind.” An ICAC Task Force officer said triage is more of an art than a science. They said that with experience you get a feel for whether a case will have legs, but that you can never be certain, and yet you still have to prioritize something.

That seems less than ideal.

Another problem, though, is that a lot of the reports are not prosecutable at all. Because of the incentives discussed in the first post, apparently certain known memes get reported to the CyberTipline quite frequently, and police feel they just clog up the system. But because the platforms fear significant liability if they don’t report those memes, they keep reporting them.

U.S. law requires that platforms report this content if they find it, and that NCMEC send every report to law enforcement. When NCMEC knows a report contains viral content or memes they will label it “informational,” a category that U.S. law enforcement typically interpret as meaning the report can be ignored, but not all such reports get labeled “informational.” Additionally there are an abundance of “age difficult” reports that are unlikely to lead to prosecution. Law enforcement may have policies requiring some level of investigation or at least processing into all noninformational reports. Consequently, officers often feel inundated with reports unlikely to result in prosecution. In this scenario, neither the platforms, NCMEC, nor law enforcement agencies feel comfortable explicitly ignoring certain types of reports. An employee from a platform that is relatively new to NCMEC reporting expressed the belief that “It’s best to over-report, that’s what we think.”

At best, this seems to annoy law enforcement, but it’s a function of how the system works:

An officer expressed frustration over platforms submitting CyberTipline reports that, in their view, obviously involve adults: “Tech companies have the ability to […] determine with a high level of certainty if it’s an adult, and they need to stop sending [tips of adults].” This respondent also expressed a desire that NCMEC do more filtering in this regard. While NCMEC could probably do this to some extent, they are again limited by the fact that they cannot view an image if the platform did not check the “reviewed” box (Figure 5.3 on page 26). NCMEC’s inability to use cloud services also makes it difficult for them to use machine learning age classifiers. When we asked NCMEC about the hurdles they face, they raised the “firehose of I’ll just report everything” problem.

Again, this all seems pretty messy. Of course you want companies to report anything they find that might be CSAM. And, of course, you want NCMEC to pass those reports on to law enforcement. But the end result is overwhelmed law enforcement, with no clear triage process, dealing with a lot of reports that were sent out of an abundance of caution but are not at all useful.

And, of course, there are other challenges that policymakers probably don’t think about. For example: how do you deal with hacked accounts? How much information is it right for the company to share with law enforcement?

One law enforcement officer provided an interesting example of a type of report he found frustrating: he said he frequently gets reports from one platform where an account was hacked and then used to share CSAM. This platform provided the dates of multiple password changes in the report, which the officer interpreted as indicating the account had been hacked. Despite this, they felt obligated to investigate the original account holder. In a recent incident they described, they were correct that the account had been hacked. They expressed that if the platform explicitly stated their suspicion in the narrative section of the report, such as by saying something like “we think this account may have been hacked,” they would then feel comfortable de-prioritizing these tips. We subsequently learned from another respondent that this platform provides time stamps for password changes for all of their reports, putting the burden on law enforcement to assess whether the password changes were of normal frequency, or whether they reflected suspicious activity.

With that said, the officer raised a valid issue: whether platforms should include their interpretation of the information they are reporting. One platform employee we interviewed who had previously worked in law enforcement acknowledged that they would have found the platform’s unwillingness to explicitly state their hunch frustrating as well. However, in their current role they also would not have been comfortable sharing a hunch in a tip: “I have preached to the team that anything they report to NCMEC, including contextual information, needs to be 100% accurate and devoid of personal interpretation as much as possible, in part because it may be quoted in legal process and case reports down the line.” They said if a platform states one thing in a tip, but law enforcement discovers that is not the case, that could make it more difficult for law enforcement to prosecute, and could even ruin their case. Relatedly, a former platform employee said some platforms believe if they provide detailed information in their reports courts may find the reports inadmissible. Another platform employee said they avoid sharing such hunches for fear of it creating “some degree of liability [even if ] not legal liability” if they get it wrong

The report details how local prosecutors are also loath to bring cases, because it's tricky to find a jury that can handle a CSAM case:

It is not just police chiefs who may shy away from CSAM cases. An assistant U.S. attorney said that potential jurors will disqualify themselves from jury duty to avoid having to think about and potentially view CSAM. As a result, it can take longer than normal to find a sufficient number of jurors, deterring prosecutors from taking such cases to trial. There is a tricky balance to strike in how much content to show jurors, but viewing content may be necessary. While there are many tools to mitigate the effect of viewing CSAM for law enforcement and platform moderators, in this case the goal is to ensure that those viewing the content understand the horror. The assistant U.S. attorney said that they receive victim consent before showing the content in the context of a trial. Judges may also not want to view content, and may not need to if the content is not contested, but seeing it can be important as it may shape sentencing decisions.

There are also law enforcement issues outside the US. As noted in the first article, NCMEC has become the de facto global reporting center, because so many companies are based in the US and report there. The CyberTipline tries to share reports with foreign law enforcement too, but that's difficult:

For example, in the European Union, companies’ legal ability to voluntarily scan for CSAM required the passage of a special exception to the EU’s so-called “ePrivacy Directive”. Plus, against a background where companies are supposed to retain personal data no longer than reasonably necessary, EU member states’ data retention laws have repeatedly been struck down on privacy grounds by the courts for retention periods as short as four or ten weeks (as in Germany) and as long as a year (as in France). As a result, even if a CyberTipline report had an IP address that was linked to a specific individual and their physical address at the time of the report, it may not be possible to retrieve that information after some amount of time.

Law enforcement agencies abroad have varying approaches to CyberTipline reports and triage. Some law enforcement agencies will say if they get 500 CyberTipline reports a year, that will be 500 cases. Another country might receive 40,000 CyberTipline reports that led to just 150 search warrants. In some countries the rate of tips leading to arrests is lower than in the U.S. Some countries may find that many of their CyberTipline reports are not violations of domestic law. The age of consent may be lower than in the U.S., for example. In 2021 Belgium received about 15,000 CyberTipline reports, but only 40% contained content that violated Belgium law

And in lower income countries, the problems can be even worse, including confusion about how the entire CyberTipline process works.

We interviewed two individuals in Mexico who outlined a litany of obstacles to investigating CyberTipline reports even where a child is known to be in imminent danger. Mexican federal law enforcement have a small team of people who work to process the reports (in 2023 Mexico received 717,468 tips), and there is little rotation. There are people on this team who have been viewing CyberTipline reports day in and day out for a decade. One respondent suggested that recent laws in Mexico have resulted in most CyberTipline reports needing to be investigated at the state level, but many states lack the know-how to investigate these tips. Mexico also has rules that require only specific professionals to assess the age of individuals in media, and it can take months to receive assessments from these individuals, which is required even if the image is of a toddler

The investigator also noted that judges often will not admit CyberTipline reports as evidence because they were provided proactively and not via a court order as part of an investigation. They may not understand that legally U.S. platforms must report content to NCMEC and that the tips are not an extrajudicial invasion of privacy. As a result, officers may need a court order to obtain information that they already have in the CyberTipline report, confusing platforms who receive requests for data they put in a report a year ago. This issue is not unique to Mexico; NCMEC staff told us that they see “jaws drop” in other countries during trainings when they inform participants about U.S. federal law that requires platforms to report CSAM.

NCMEC Itself

The report also details some of the limitations of NCMEC and the CyberTipline itself, some of which are imposed by law (and where it seems like the law should be updated).

There appears to be a big issue with repeat reports, where NCMEC needs to “deconflict” them, but has limited technology to do so:

Improvements to the entity matching process would improve CyberTipline report prioritization processes and detection, but implementation is not always as straightforward as it might appear. The current automated entity matching process is based solely on exact matches. Introducing fuzzy matching, which would catch similarity between, for example, bobsmithlovescats1 and bobsmithlovescats2, could be useful in identifying situations where a user, after suspension, creates a new account with an only slightly altered username. With a more expansive entity matching system, a law enforcement officer proposed that tips could gain higher priority if certain identifiers are found across multiple tips. This process, however, may also require an analyst in the loop to assess whether a fuzzy match is meaningful.

It is common to hear of instances where detectives received dozens of separate tips for the same offender. For instance, the Belgium Federal Police noted receiving over 500 distinct CyberTipline reports about a single offender within a span of five months. This situation can arise when a platform automatically submits a tip each time a user attempts to upload CSAM; if the same individual tries to upload the same CSAM 60 times, it could result in 60 separate tips. Complications also arise if the offender uses a Virtual Private Network (VPN); the tips may be distributed across different law enforcement agencies. One respondent told us that a major challenge is ensuring that all tips concerning the same offender are directed to the same agency and that the detective handling them is aware that these numerous tips pertain to a single individual.
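To make the report's fuzzy-matching example concrete, here is a minimal sketch of how it might work, using only Python's standard library. The usernames and threshold below are hypothetical; a real system would presumably combine string similarity with other identifiers and, as the report suggests, keep an analyst in the loop to judge whether a match is meaningful.

    from difflib import SequenceMatcher

    def similarity(a: str, b: str) -> float:
        """Return a rough 0-1 similarity score between two identifiers."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def fuzzy_matches(new_id, known_ids, threshold=0.9):
        """Flag previously seen identifiers that nearly match a new one.

        Exact matching would miss 'bobsmithlovescats2' after a suspension of
        'bobsmithlovescats1'; a similarity threshold catches it, at the cost
        of some false positives a human analyst would need to review.
        """
        return [
            (known, round(similarity(new_id, known), 3))
            for known in known_ids
            if known != new_id and similarity(new_id, known) >= threshold
        ]

    # Hypothetical identifiers, purely for illustration.
    known = ["bobsmithlovescats1", "janedoe_2020", "catlover99"]
    print(fuzzy_matches("bobsmithlovescats2", known))
    # [('bobsmithlovescats1', 0.944)]

The same kind of grouping, applied across tips (by normalized username, email, or file hash), is what would let dozens of reports about a single offender land on one detective's desk as one case instead of sixty.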

As the report notes, there are a variety of challenges, both economic and legal, in enabling NCMEC to upgrade its technology:

First, NCMEC operates with a limited budget and as a nonprofit they may not be able to compete with industry salaries for qualified technical staff. The status quo may be “understandable given resource constraints, but the pace at which industry moves is a mismatch with NCMEC’s pace.” Additionally, NCMEC must also balance prioritizing improving the CyberTipline’s technical infrastructure with the need to maintain the existing infrastructure, review tips, or execute other non-Tipline projects at the organization. Finally, NCMEC is feeding information to law enforcement, which work within bureaucracies that are also slow to update their technology. A change in how NCMEC reports CyberTipline information may also require law enforcement agencies to change or adjust their systems for receiving that information.

NCMEC also faces another technical constraint not shared with most technology companies: because the CyberTipline processes harmful and illegal content, it cannot be housed on commercially available cloud services. While NCMEC has limited legal liability for hosting CSAM, other entities currently do not, which constrains NCMEC’s ability to work with outside vendors. Inability to transfer data to cloud services makes some of NCMEC’s work more resource intensive and therefore stymies some technical developments. Cloud services provide access to proprietary machine learning models, hardware-accelerated machine learning training and inference, on-demand resource availability and easier to use services. For example, with CyberTipline files in the cloud, NCMEC could more easily conduct facial recognition at scale and match photos from the missing children side of their work with CyberTipline files. Access to cloud services could potentially allow for scaled detection of AI-generated images and more generally make it easier for NCMEC to take advantage of existing machine learning classifiers. Moving millions of CSAM files to cloud services is not without risks, and reasonable people disagree about whether the benefits outweigh the risks. For example, using a cloud facial recognition service would mean that a third party service likely has access to the image. There are a number of pending bills in Congress that, if passed, would enable NCMEC to use cloud services for the CyberTipline while providing the necessary legal protections to the cloud hosting providers.

Platforms

And, yes, there are some concerns about the platforms. But while public discussion seems to focus almost exclusively on where people think that platforms have failed to take this issue seriously, the report suggests the failures of platforms are much more limited.

The report notes that it's a bit tricky to get platforms up and running with CyberTipline reporting, and that while NCMEC does some onboarding, it keeps that help very limited to avoid the 4th Amendment concerns discussed above.

And, again, some of the problem with onboarding is due to outdated tech on NCMEC’s side. I mean… XML? Really?

Once NCMEC provides a platform with an API key and the corresponding manual, integrating their workflow with the reporting API can still present challenges. The API is XML-based, which requires considerably more code to integrate with than simpler JSON-based APIs and may be unfamiliar to younger developers. NCMEC is aware that this is an issue. “Surprisingly large companies are using the manual form,” one respondent said. One respondent at a small platform had a more moderate view; he thought the API was fine and the documentation “good.” But another respondent called the API “crap.”
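For what it's worth, the XML-versus-JSON complaint is easy to illustrate. Here is a rough sketch in Python; the field names below are made up for illustration and are not the actual CyberTipline schema.

    import json
    import xml.etree.ElementTree as ET

    # Hypothetical report fields; NOT the real CyberTipline schema.
    report = {"incident_type": "csam", "file_viewed": True, "ip_address": "203.0.113.7"}

    # JSON: one call in each direction, and types like booleans round-trip automatically.
    json_body = json.dumps(report)
    parsed = json.loads(json_body)

    # XML: the payload is assembled element by element, everything becomes a
    # string, and the consumer has to know the expected types and nesting.
    root = ET.Element("report")
    for key, value in report.items():
        child = ET.SubElement(root, key)
        child.text = str(value).lower() if isinstance(value, bool) else str(value)
    xml_body = ET.tostring(root, encoding="unicode")

    print(json_body)  # {"incident_type": "csam", "file_viewed": true, ...}
    print(xml_body)   # <report><incident_type>csam</incident_type>...</report>

None of that is insurmountable, but multiplied across attachments, schemas, namespaces, and error handling, it is the kind of friction that helps explain why "surprisingly large companies" fall back on the manual form.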

There are also challenges under the law about what needs to be reported. As noted above and in the first article, that can often lead to over-reporting. But it can also make things difficult for companies trying to make determinations.

Platforms will additionally face policy decisions. While prohibiting illegal content is a standard approach, platforms often lack specific guidelines for moderators on how to interpret nuanced legal terms such as “lascivious exhibition.” This term is crucial for differentiating between, for example, an innocent photo of a baby in a bathtub, and a similar photo that appears designed to show the baby in a way that would be sexually arousing to a certain type of viewer. Trust and safety employees will need to develop these policies and train moderators.

And, of course, as has been widely discussed elsewhere, it’s not great that platforms have to hire human beings and expose them to this kind of content.

However, the biggest issue with reporting seems to be not whether companies are willing to report, but how much information they pass along. And again, the problem here is not so much an unwillingness to cooperate as the incentives.

Memes and viral content pose a huge challenge for CyberTipline stakeholders. In the best case scenario, a platform checks the “Potential Meme” box and NCMEC automatically sends the report to an ICAC Task Force as “informational,” which appears to mean that no one at the Task Force needs to look at the report.

In practice, a platform may not check the “Potential Meme” box (possibly due to fixable process issues or minor changes in the image that change the hash value) and also not check the “File Viewed by Company” box. In this case NCMEC is unable to view the file, due to the Ackerman and Wilson decisions as discussed in Chapter 3. A Task Force could view the file without a search warrant and realize it is a meme, but even in that scenario it takes several minutes to close out the report. At many Task Forces there are multiple fields that have to be entered to close the report, and if Task Forces are receiving hundreds of reports of memes this becomes hugely time consuming. Sometimes, however, law enforcement may not realize the report is a meme until they have invested valuable time into getting a search warrant to view the report.

NCMEC recently introduced the ability for platforms to “batch report” memes after receiving confirmation from NCMEC that that meme is not actionable. This lets NCMEC label the whole batch as informational, which reduces the burden on law enforcement

We heard about an example where a platform classified a meme as CSAM, but NCMEC (and at least one law enforcement officer we spoke to about this meme) did not classify it as CSAM. NCMEC told the platform they did not classify the meme as CSAM, but according to NCMEC the platform said because they do consider it CSAM they were going to continue to report it. Because the platform is not consistently checking the “Potential Meme” box, law enforcement are still receiving it at scale and spending substantial time closing out these reports.

There is a related challenge when a platform neglects to mark content as “viral”. Most viral images are shared in outrage, not with an intent to harm. However, these viral images can be very graphic. The omission of the “viral” label can lead law enforcement to mistakenly prioritize these cases, unaware that the surge in reports stems from multiple individuals sharing the same image in dismay.

We spoke to one platform employee about the general challenge of a platform deeming a meme CSAM while NCMEC or law enforcement agencies disagree. They noted that everyone is doing their best to apply the Dost test. Additionally, there is no mechanism to get an assurance that a file is not CSAM: “No one blesses you and says you’ve done what you need to do. It’s a very unsettling place to be.” They added that different juries might come to different conclusions about what counts as CSAM, and if a platform fails to report a file that is later deemed CSAM the platform could be fined $300,000 and face significant public backlash: “The incentive is to make smart, conservative decisions.”

This is all pretty fascinating, and suggests that while there may be ways to improve things, it’s difficult to structure things right and make the incentives align properly.

And, again, the same incentives pressure the platforms to just overreport, no matter what:

Once a platform integrates with NCMEC’s CyberTipline reporting API, they are incentivized to overreport. Consider an explicit image of a 22-year-old who looks like they could be 17: if a platform identified the content internally but did not file a report and it turned out to be a 17-year-old, they may have broken the law. In such cases, they will err on the side of caution and report the image. Platform incentives are to report any content that they think is violative of the law, even if it has a low probability of prosecution. This conservative approach will also lead to reports from what Meta describes as “non-malicious users”—for instance, individuals sharing CSAM in outrage. Although such reports could theoretically yield new findings, such as uncovering previously unknown content, it is more likely that they overload the system with extraneous reports

All in all, the real lesson to be taken from this report is that this shit is super complicated, like all of trust & safety, and tradeoffs abound. But here it's way more fraught than in most cases, in terms of the seriousness of the issue, the potential for real harm, and the potentially destructive criminal penalties involved.

The report has some recommendations, though they mostly seem to deal with things at the margins: increase funding for NCMEC, allow it to update its technology (and hire the staff to do so), and provide more information to help platforms get set up.

Of course, what's notable is that this does not include things like "make platforms liable for any mistake they make." This is because, as the report shows, most platforms seem to take this stuff pretty seriously already, and the liability is already very clear, to the point that they often over-report to avoid it, which actually makes the results worse by overwhelming both NCMEC and law enforcement.

All in all, this report is a hugely important contribution to this discussion, providing a ton of real-world information about the CyberTipline that was previously known basically only to people working on it, leaving many observers, media, and policymakers in the dark.

It would be nice if Congress read this report and understood the issues. However, when it comes to things like CSAM, expecting anyone to bother reading a big report and grappling with the tradeoffs and nuances is probably asking too much.

Filed Under: csam, cybertipline, incentives, overreporting
Companies: ncmec

Our Online Child Abuse Reporting System Is Overwhelmed, Because The Incentives Are Screwed Up & No One Seems To Be Able To Fix Them

from the mismatched-incentives-are-the-root-of-all-problems dept

The system meant to stop online child exploitation is failing — and misaligned incentives are to blame. Unfortunately, today's political solutions, like KOSA and STOP CSAM, don't even begin to grapple with any of this. Instead, they would put in place measures that could make the incentives even worse.

The Stanford Internet Observatory has spent the last few months doing a very deep dive on how the CyberTipline works (and where it struggles). It has released a big and important report detailing its findings. In writing up this post about it, I kept adding more and more, to the point that I finally decided it made sense to split it up into two separate posts to keep things manageable.

This first post covers the higher-level issues: what the system is, why it works the way it does, how the incentive structure of the system is completely messed up (even if it was set up with good intentions), and how that has contributed to the problem. A follow-up post will cover the more specific challenges facing NCMEC itself, law enforcement, and the internet platforms themselves (which often take the blame for CSAM, when that seems extremely misguided).

There is a lot of misinformation out there about the best way to fight and stop the creation and spread of child sexual abuse material (CSAM). It’s unfortunate because it’s a very real and very serious problem. Yet the discussion about it is often so disconnected from reality as to be not just unhelpful, but potentially harmful.

In the US, the system that was set up is the CyberTipline, run by NCMEC, the National Center for Missing & Exploited Children. It's a private non-profit, but it has a close connection with the US government, which helped create it. At times, there has been some confusion about whether or not NCMEC is a government agent. The entire setup was designed to keep it non-governmental, to avoid any 4th Amendment issues with the information it collects, but courts haven't always seen it that way, which makes things tricky (even as the 4th Amendment remains important).

And while the system was designed for the US, it has become a de facto global system, since so many of the companies are US-based, and NCMEC will, when it can, send relevant details to foreign law enforcement as well (though, as the report details, that doesn't always work well).

The main role CyberTipline has taken on is coordination. It takes in reports of CSAM (mostly, but not entirely, from internet platforms) and then, when relevant, hands off the necessary details to the (hopefully) correct law enforcement agency to handle things.

Companies that host user-generated content have certain legal requirements to report CSAM to the CyberTipline. As we discussed in a recent podcast, this role as a “mandatory reporter” is important in providing useful information to allow law enforcement to step in and actually stop abusive behavior. Because of the “government agent” issue, it would be unconstitutional to require social media platforms to proactively search for and identify CSAM (though many do use tools to do this). However, if they do find some, they must report it.

Unfortunately, the mandatory reporting has also allowed the media and politicians to use the number of reports sent in by social media companies in a misleading manner, suggesting that the mere fact that these companies find and report to NCMEC means that they’re not doing enough to stop CSAM on their platforms.

This is problematic because it creates a dangerous incentive, suggesting that internet services should actually not report the CSAM they find, as politicians and the media will falsely portray a high report count as a sign that platforms aren't taking this seriously. The reality is that the failure to take things seriously comes from the small number of platforms (hi, Telegram!) that don't report CSAM at all.

Some of us on the outside have thought that the real issue was that NCMEC and law enforcement, on the receiving end, had been unable to do enough that was productive with those reports. It seemed convenient for the media and politicians to just blame social media companies for doing what they're supposed to do (reporting CSAM), ignoring that what happens on the back end of the system might be the real problem. That's why things like Senator Ron Wyden's Invest in Child Safety Act seemed like a better approach than things like KOSA or the STOP CSAM Act.

That’s because the approach of KOSA/STOP CSAM and some other bills is basically to add liability to social media companies. (These companies already do a ton to prevent CSAM from appearing on the platform and alert law enforcement via the CyberTipline when they do find stuff.) But that’s useless if those receiving the reports aren’t able to do much with them.

What becomes clear from this report is that while there are absolutely failures on the law enforcement side, some of that is effectively baked into the incentive structure of the system.

In short, the report shows that the CyberTipline is very helpful in engaging law enforcement to stop some child sexual abuse, but it’s not as helpful as it might otherwise be:

Estimates of how many CyberTipline reports lead to arrests in the U.S. range from 5% to 7.6%

This number may sound low, but I’ve been told it’s not as bad as it sounds. First of all, when a large number of the reports are for content that is overseas and not in the US, it’s more difficult for law enforcement here to do much about it (though, again, the report details some suggestions on how to improve this). Second, some of the content may be very old, where the victim was identified years (or even decades) ago, and where there’s less that law enforcement can do today. Third, there is a question of prioritization, with it being a higher priority to target those directly abusing children. But, still, as the report notes, almost everyone thinks that the arrest number could go higher if there were more resources in place:

Empirically, it is unknown what percent of reports, if fully investigated, would lead to the discovery of a person conducting hands-on abuse of a child. On the one hand, as an employee of a U.S. federal department said, “Not all tips need to lead to prosecution […] it’s like a 911 system.” On the other hand, there is a sense from our respondents—who hold a wide array of beliefs about law enforcement—that this number should be higher. There is a perception that more than 5% of reports, if fully investigated, would lead to the discovery of hands-on abuse.

The report definitely suggests that if NCMEC had more resources dedicated to the CyberTipline, it could be more effective:

NCMEC has faced challenges in rapidly implementing technological improvements that would aid law enforcement in triage. NCMEC faces resource constraints that impact salaries, leading to difficulties in retaining personnel who are often poached by industry trust and safety teams.

There appear to be opportunities to enrich CyberTipline reports with external data that could help law enforcement more accurately triage tips, but NCMEC lacks sufficient technical staff to implement these infrastructure improvements in a timely manner. Data privacy concerns also affect the speed of this work.

But, before we get into the specific areas where things can be improved in the follow-up post, I thought it was important to highlight how the incentives of this system contribute to the problem, where there isn’t necessarily an easy solution.

While companies (Meta, mainly, since it submits, by a very wide margin, the largest number of reports to the CyberTipline) keep getting blamed for failing to stop CSAM because of their large report numbers, most companies have very strong incentives to report anything they find. This is because the cost of not reporting something they should have reported is massive (criminal penalties), whereas the cost of over-reporting is nothing to the companies. That means there's an issue with overreporting.

Of course, there is a real cost here. CyberTipline employees get overwhelmed, and that can mean that reports that should get prioritized and passed on to law enforcement don’t. So you can argue that while the cost of over-reporting is “nothing” to the companies, the cost to victims and society at large can be quite large.

That’s an important mismatch.

But the broken incentives go further as well. When NCMEC hands off reports to law enforcement, they often go through a local ICAC (Internet Crimes Against Children) task force, which helps triage them and find the right state or local law enforcement agency to handle each report. Law enforcement agencies that are "affiliated" with ICACs receive special training on how to handle reports from the CyberTipline. But, apparently, at least some of them feel that it's just too much work or too burdensome to investigate. That means some law enforcement agencies are choosing not to affiliate with their local ICACs to avoid the added work. Even worse, some agencies have "unaffiliated" themselves from their local ICAC because they just don't want to deal with it.

In some cases, there are even reports of law enforcement unaffiliating with an ICAC out of a fear of facing liability for not investigating an abused child quickly enough.

A former Task Force officer described the barriers to training more local Task Force affiliates. In some cases local law enforcement perceive that becoming a Task Force affiliate is expensive, but in fact the training is free. In other cases local law enforcement are hesitant to become a Task Force affiliate because they will be sent CyberTipline reports to investigate, and they may already feel like they have enough on their plate. Still other Task Force affiliates may choose to unaffiliate, perceiving that the CyberTipline reports they were previously investigating will still get investigated at the Task Force, which further burdens the Task Force. Unaffiliating may also reduce fear of liability for failing to promptly investigate a report that would have led to the discovery of a child actively being abused, but the alternative is that the report may never be investigated at all.

[….]

This liability fear stems from a case where six months lapsed between the regional Task Force receiving NCMEC’s report and the city’s police department arresting a suspect (the abused children’s foster parent). In the interim, neither of the law enforcement agencies notified child protective services about the abuse as required by state law. The resulting lawsuit against the two police departments and the state was settled for $10.5 million. Rather than face expensive liability for failing to prioritize CyberTipline reports ahead of all other open cases, even homicide or missing children, the agency might instead opt to unaffiliate from the ICAC Task Force.

This is… infuriating. Cops choosing not to affiliate (i.e., not to get the training that would help) or removing themselves from an ICAC task force because they're afraid they might get sued if they don't save kids from abuse quickly enough is ridiculous. It's yet another example of cops running away rather than doing the job they're supposed to be doing, but which they claim they have no obligation to do.

That's just one problem of many in the report, which we'll get into in the second post. But, on the whole, it seems pretty clear that with the incentives this far out of whack, something like KOSA or STOP CSAM isn't going to be of much help. Actually tackling the underlying issues (the funding, the technology, and, most of all, the incentive structures) is what's necessary.

Filed Under: csam, cybertipline, icac, incentives, kosa, law enforcement, liability, stop csam
Companies: ncmec

What Will Be The Impact Of The AI & Streaming Data Language In The New WGA Contract?

from the consequences dept

As you’ve likely heard, earlier this week the WGA worked out a tentative agreement with the Alliance of Motion Picture and Television Producers (AMPTP) on a new contract that ended their months-long strike. By all accounts, this looks like a big win for the WGA, which is fantastic and long overdue.

The AMPTP seemed to recognize it had no leg to stand on and hoped its best strategy was to "wait out" the writers. That doesn't appear to have worked very well. The new pay rates and guarantees look like a big win for writers. The WGA negotiating team appears to have done a fantastic job on those fundamental points, and it's a clear (well-deserved) win for the writers.

Throughout the strike, a lot of attention was paid to the AI demands (perhaps even more than to the underlying economic questions), and I'm not quite sure how I feel about where things came down on that front.

As some have pointed out, in the end, the AI agreement can be read as a near-complete capitulation to the writers, as it says that the producers can't use AI to write a basic script and then hand it off to a human writer at a lower pay scale to clean up. However, it does (and this is a really good thing) allow the writers themselves to figure out how to make use of AI as a productivity tool, which… makes sense. Empower the writers to figure out if it's a useful tool, rather than assuming the AI is going to produce anything worthwhile on its own.

One interpretation of all of this is that somewhere in the ~150-day strike, the producers had enough time to play around with AI and realize that it just isn't able to replace writers the way they originally seemed to hope it would. As we pointed out here, though, AI can be a super useful tool in writers' hands for dealing with the drudgery parts of the job, letting them spend more time on the actual creative act of writing. And so the framing of the agreement, at least, where it's about empowering the writers to use the tools where necessary, seems good.

But there was something that bugged me about the language of it, which writer/director/actor (and Techdirt podcast guest) Alex Winter points out in a new Wired piece: while the agreement is framed in a way that seems beneficial to the writers, it requires them to really trust the studios, as there appear to be lots of ways that they might get around what’s in the agreement. And the producers aren’t necessarily the most trustworthy folks out there. As Alex notes, the studios had been experimenting with AI prior to this and he’s not sure if they’ll just drop those initiatives. It might just be that they won’t tell writers what the AI did.

It’s hard to imagine that the studios will tell artists the truth when being asked to dismantle their AI initiatives, and attribution is all but impossible to prove with machine-learning outputs.

The other tidbit that a lot of people are celebrating is the agreement that streaming platforms will now share specific data on “the total number of hours streamed,” which has mostly been a secret. This was another big demand of the writers, mainly as part of their effort to get some sort of residuals setup going for streaming.

But, as a very interesting episode of the Search Engine podcast recently discussed, in the early days of streaming, the fact that streaming platforms didn’t share viewer data was seen as a benefit to many writers/actors/directors. It meant that they weren’t competing over numbers all the time, and weren’t focused on making something that would appeal to the widest possible audience.

That meant that a much wider variety of content was greenlit for some of those platforms, as they (especially Netflix, but also Amazon) wanted to have a really diverse set of creative shows and movies to entice people to pay the monthly subscription fee to see whatever they wanted. In that scenario, the specific numbers for any particular movie or show don’t matter as much, so long as there was enough diverse content available on the platform that it made users feel comfortable coughing up their monthly subscription fee. Indeed, that actually created incentives for more niche, quirky, diverse, wacky, experimental content, with no one ever needing to be concerned with “how is it performing?”

So, there is some reasonable fear that now that the platforms have to share viewer data (privately with the WGA, not publicly), that could change. The incentive structure gets messed up a bit. There will be more incentive to create mass-market content, and less room to just create cool, different content that might appeal to a niche audience enough to get people to sign up for the monthly payment.

Now, a (reasonable!) retort is that the "we need all this diverse content!" strategy made sense in the early land-grab days, but makes a lot less sense with the streaming market reaching some sort of saturation level, where users are beginning to bail and Wall St. is demanding more profits and less investment out of these platforms. If we're already seeing streaming platforms pulling shows for the tax breaks, perhaps the support for the weird and the wacky and the diverse was already going away no matter what.

Overall, though, it’s nice to see the writers get a strong contract that improves the underlying economics in ways that are important to their ability to make a living writing. I’m less sure that the AI language will be that impactful, though, and I’m curious to see how the incentives on the streaming side play out.

The one other bit I’m curious about: I’m kind of wondering if this experience will cause writers/actors/directors to increasingly look to route around the producers. Yes, for big productions they’re still necessary, but as tools for high quality moviemaking become increasingly cheaper and more widely accessible, I’m wondering if we’ll see a rise in more high quality self-produced works (or community produced works) that are then streamed not through the big subscription services, but elsewhere (YouTube, obviously, but it wouldn’t surprise me to see services like a “Substack-for-video” kind of thing pop up at some point).

After all, the producers can’t screw over the actual creative folks… if they’re not involved any more.

Filed Under: ai, data, incentives, movies, production, streaming, writers, writers strike
Companies: amptp, wga

from the copyrighting-out-loud dept

To hear the recording industry tell the story, copyright is the only thing protecting musicians from poverty and despair. Of course, that’s always been a myth. Copyright was designed to benefit the middlemen and gatekeepers, such as the record labels, over the artists themselves. That’s why the labels have a long history of never paying artists.

But over the last few years, Ed Sheeran has been highlighting the ways in which (beyond the “who gets paid” aspect of all of this) modern copyright is stifling rather than incentivizing music creation — directly in contrast to what we’re told it’s supposed to be doing.

We’ve talked about Sheeran before, as he’s been sued repeatedly by people claiming that his songs sound too much like other songs. Sheeran has always taken a much more open approach to copyright and music, noting that kids pirating his music is how he became famous in the first place. He’s also stood up for kids who had accounts shut down via copyright claims for playing his music.

But the lawsuits have been where he’s really highlighted the absurdity of modern copyright law. After winning one of the lawsuits a year ago, he put out a heartfelt statement on how ridiculous the whole thing was. A key part:

There’s only so many notes and very few chords used in pop music. Coincidence is bound to happen if 60,000 songs are being released every day on Spotify—that’s 22 million songs a year—and there’s only 12 notes that are available.

In the aftermath of this, Sheeran has said that he’s now filming all of his recent songwriting sessions, just in case he needs to provide evidence that he and his songwriting partners came up with a song on their own, which is depressing in its own right.

In the latest case, which just concluded last week, Sheeran said that if he lost he’d probably quit music altogether, as it’s just not worth it.

…when asked what he would do if the court ruled against him, Sheeran said, “If that happens, I’m done. I’m stopping… To have someone come in and say, ‘We don’t believe you, you must have stole it’… [I] find insulting…”

He went on, “I find it really insulting to work my whole life as a singer-songwriter and diminish it.”

Doesn't seem like copyright is helping to create incentives for new works, does it? It sure sounds like copyright stifling creativity and artistry. Elsewhere, he's noted similar things, talking about how songwriters know there are only so many notes, and certain songs are going to sound somewhat similar to one another. He notes that actual songwriters all seem to get this.

“I feel like in the songwriting community, everyone sort of knows that there’s four chords primarily that are used and there’s eight notes. And we work with what we’ve got, with doing that.”

[….]

“I had a song that I wrote for Keith Urban, and it sort of sounded like a Coldplay song,” Sheeran added, referring the country singer’s 2018 record “Parallel Line.” “So I emailed Chris Martin and I said, ‘This sounds like your tune. Can we clear it?’ And he went, ‘Don’t be ridiculous. No.’”

He added: “And on the song I made sure they put, ‘I think it sounds like “Everglow,” Coldplay.’ But he was just like, ‘Nah, I know how songs are written. And I know you didn’t go into the studio and go, I want to write this.’”

Of course, with this latest lawsuit it wasn’t actually a songwriter suing. It was a private equity firm that had purchased the rights from one of the songwriters (not Marvin Gaye) of Marvin Gaye’s hit song “Let’s Get it On.”

The claim over Thinking Out Loud was originally lodged in 2018, not by Gaye’s family but by investment banker David Pullman and a company called Structured Asset Sales, which has acquired a portion of the estate of Let’s Get It On co-writer Ed Townsend.

Thankfully, Sheeran won the case as the jury sided with him over Structured Asset Sales. Sheeran, once again, used the attention to highlight just how broken copyright is if these lawsuits are what’s coming out of it:

“I’m obviously very happy with the outcome of the case, and it looks like I’m not having to retire from my day job after all. But at the same time I’m unbelievably frustrated that baseless claims like this are able to go to court.

“We’ve spent the last eight years talking about two songs with dramatically different lyrics, melodies, and four chords which are also different, and used by songwriters every day all over the world. These chords are common building blocks used long before Let’s Get it On was written, and will be used to make music long after we’re all gone.

“They are in a songwriters’ alphabet, our toolkit, and should be there for all of us to use. No one owns them or the way that they are played, in the same way that no one owns the color blue.”

He concluded the speech by saying he would never allow himself to be a “piggybank for anyone to shake.”

Good for him, though one hopes he’ll also help push for better copyright laws that would stop this kind of nonsense, and help lead to a broader rethinking of copyright in our time.

And… apparently, right after winning, Sheeran released his latest album (Subtract), based on a bunch of other challenges and traumatic experiences he's gone through recently. It's unfortunate that bogus copyright trials, which led him to consider dropping out of the music world entirely, added to that trauma.

Filed Under: copyright, creativity, ed sheeran, ed townsend, incentives, marvin gaye, songwriting
Companies: structured asset sales

from the thought-provoking dept

To say that AI-generated art is controversial would be something of an understatement. The appearance last year of free tools like Stable Diffusion has not just thrown the world of art into turmoil, it has raised profound questions about the nature of human creativity. AI art also involves thorny issues of copyright that have piqued the interest of lawyers, who sense an opportunity to sue tech companies for large sums.

Most AI art programs draw on billions of existing images to formulate internal rules about shapes, colours and styles. Many, perhaps most, of those images will be under copyright. There are already several court cases that will help to decide the legality of this approach, including an important new one in the US brought by Getty Images against Stability AI, the company behind Stable Diffusion. But whatever the outcomes of these, it seems likely that AI-generated art will continue to exist in some form, given its huge potential, and the interest it has generated among the business world and general public.
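For readers who haven't touched these tools, the consumer end of "using Stable Diffusion" is strikingly simple, which is part of why the copyright questions feel so urgent. Here is a minimal sketch using the open-source Hugging Face diffusers library; the model ID, prompt, and GPU assumption are all illustrative:

import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained Stable Diffusion checkpoint (model ID is illustrative;
# the weights encode patterns learned from billions of captioned images).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # assumes a CUDA GPU is available

# At generation time no source images are consulted -- only the learned
# weights and a text prompt.
image = pipe("an oil painting of a lighthouse at dusk").images[0]
image.save("lighthouse.png")

The legal fights are about what happens upstream of a script like this: whether training on those billions of images, many of them under copyright, required permission in the first place.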

Similarly, the copyright status of the end result of using AI to produce new images is ill-defined. In February 2022, the US Copyright Office ruled that an AI can't copyright its art because the work didn't include an element of "human authorship". However, more recently, an artist has received US copyright registration on a graphic novel that features AI-generated artwork.

In this context, it is sometimes forgotten that copyright for the fine arts is relatively new. Modern copyright dates from the 1710 Statute of Anne, which applied to “books and other writings”. Although the special class of engravings received protection in 1735, it was not until 1862 that the fine arts were eligible for copyright in the UK; for the US, it was only in 1870.

Significantly, one category of copyrightable subject matter explicitly mentioned in the US law was “chromo” — color lithographs. Copyright became an issue for art once it was possible to make large numbers of high-quality color facsimiles of original works. Before such technology was cheaply available, it was only through artists’ copies of their own works, plus often highly popular engravings, that a painting or drawing could be shared more widely.

Since the nineteenth century, copyright has been strengthened in numerous ways. For example, the term of copyright is now typically the life of the creator plus 70 years. At the same time, technologies for making copies have progressed greatly. When analogue material is converted into digital form, it is possible to make perfect copies of these files for vanishingly small cost. The rise of the Internet allows any number of copies to be sent around the world, again for effectively no cost.

This has led to a fundamental clash between copyright and the Internet. Where for 300 years the former has revolved around preventing unauthorized copies being made, the latter technology is based on the constant generation and free flow of copies of digital files, and cannot function without them.

Although nobody ever talks about that deep mismatch, in legal terms the situation is clear: everybody online is breaking copyright law hundreds, perhaps thousands, of times a day. Back in 2007, John Tehranian, a professor at Southwestern Law School in Los Angeles, calculated that typical Internet users would be liable for $4.544 billion in potential damages each year as a result of the unavoidable copyright infringements that they committed online. A law that is routinely ignored by billions of people online every day is clearly a bad law.
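Tehranian's headline figure is the product of simple, if eye-popping, arithmetic. A rough reconstruction in Python: the $150,000 figure is the statutory maximum for willful infringement under 17 U.S.C. § 504(c), while the "83 qualifying acts a day" is an assumption chosen here because it reproduces the paper's number exactly.

statutory_max_per_work = 150_000   # max statutory damages per work, willful infringement
acts_per_day = 83                  # assumed infringing acts by a typical user each day

daily_liability = acts_per_day * statutory_max_per_work
annual_liability = daily_liability * 365
print(f"${daily_liability:,} per day")     # $12,450,000 per day
print(f"${annual_liability:,} per year")   # $4,544,250,000 per year -- ~$4.544 billion

However you tweak the assumptions, the point stands: statutory damages are so large, and everyday copying so routine, that nominal liability reaches absurd levels almost immediately.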

Unfortunately, the response of the copyright world to this problem has been to call for more stringent laws in the forlorn hope that this will somehow stop people making digital copies. The most recent example of this wishful thinking is the EU’s Copyright Directive. Of particular relevance to the world of visual arts is a requirement that major online sites must operate a filter to prevent unauthorized copies of copyright material being uploaded by users.

The volume of uploads today is so great — in 2020, 500 hours of video were uploaded to YouTube every minute — that such filters will need to be automated. However, it is impossible to encapsulate the complexity of copyright law in an algorithm. Even experts struggle to distinguish between copyright infringement and the transformative re-interpretation of an existing work, as the current case involving Andy Warhol’s use of a photograph for a series of images of the musician Prince demonstrates. Inevitably, the EU’s new automated filters will err on the side of caution, and over-block material. As a result, perfectly legal images that build on the work of others are likely to be blocked, with knock-on harm for artistic creativity and freedom of expression.
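The scale problem is easy to quantify. Taking the 500-hours-a-minute figure at face value, a quick illustrative calculation:

hours_uploaded_per_minute = 500
hours_uploaded_per_day = hours_uploaded_per_minute * 60 * 24   # 720,000 hours per day
years_of_footage_per_day = hours_uploaded_per_day / (24 * 365)
print(f"{hours_uploaded_per_day:,}")       # 720,000
print(round(years_of_footage_per_day, 1))  # ~82.2 years of footage, every single day

No team of human reviewers can watch 82 years of footage a day, so the filters must be automated, and their inevitable mistakes scale just as fast as the uploads do.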

If more stringent copyright laws are not only doomed to fail — policing the entire Internet is not possible — but produce serious collateral damage to basic human rights, perhaps the resolution of the incompatibility between copyright and the Internet is to row back or even abolish the former. That may be bold, but it wouldn’t be a huge problem for the fine arts world, where the core artistic output is often a physical object of some kind. Copyright is largely irrelevant for such analogue items, since they cannot be copied in any meaningful way. Although digital versions can often be made, they are not substitutes for the original.

There are, of course, many born-digital works of art, but it is precisely this class of creativity that is now under threat from AI-generated art. In the future, it is likely that many types of digital images produced today by humans will be replaced by the output of AI systems, particularly in a commercial setting, where economics, not aesthetics, are paramount.

Artists may argue that such algorithmic work is inferior to the human kind. That may be true at present, but such AI systems have already made huge advances in just a few years, as recent developments have shown. In the not-too-distant future, their work is likely to be indistinguishable from that of human practitioners for most everyday uses, not just in terms of quality and creativity, but even to the point of being able to mimic any artist’s style without copying any element directly.

However, there is a different approach to art that AI-generated works will be unable to match until AI itself possesses deeply human attributes such as empathy, and is able to nurture social relationships. It's exemplified by the artist Anne Rea. Her approach is based on establishing a rapport with people who commission and pay her in advance. She is quoted in Art Business News as saying:

I’d much rather cultivate a relationship with a patron. Get paid up front. Not allow any discounting. Keep all of the money. And through that relationship, get repeat purchases and referrals to their friends and family. That’s a smarter way to go.

Rea’s success harks back to an older model for supporting artists through patronage. Significantly, a large proportion of the world’s greatest artistic masterpieces come from this time, before copyright was invented. As AI art begins to encroach on digital creativity, and copyright threatens to shut down free expression online, perhaps it’s time to explore this older approach that is immune to both. More on this idea can be found in Walled Culture the book, available as a free ebook or in analogue form from leading online bookshops.

Follow me @glynmoody on Mastodon or Twitter. Originally posted to WalledCulture.

Filed Under: ai, copyright, creativity, generative art, incentives

People Are Lying To The Media About EARN IT; The Media Has To Stop Parroting Their False Claims

from the that's-not-how-any-of-this-works dept

Update: After this post went up, Tech Review appears to have done a major edit to that article, and added a correction about the completely false claim regarding Section 230 protecting CSAM. The article still has problems, but is no longer quite as egregiously wrong. The post below is about the original article.

MIT's Tech Review has an article this week, presented as a news story, claiming (questionably) that "the US now hosts more child sexual abuse material (CSAM) online than any other country," and claiming that unless we pass the EARN IT Act, "the problem will only grow." The problem is that the article is rife with false or misleading claims that the reporter apparently didn't fact check.

The biggest problem with the article is that it blames this turn of events on two things: a bunch of “prolific CSAM sites” moving their servers from the Netherlands to the US and then… Section 230.

The second is that internet platforms in the US are protected by Section 230 of the Communications Decency Act, which means they can’t be sued if a user uploads something illegal. While there are exceptions for copyright violations and material related to adult sex work, there is no exception for CSAM.

So, this is the claim that many people make, but a reporter in a respectable publication should not be making it, because it’s just flat out wrong. Incredibly, the reporter points out that there are “exceptions” for copyright violations, but she fails to note that the exception that she names, 230(e)(2), comes after another exception, 230(e)(1), which literally says:

(1) No effect on criminal law

Nothing in this section shall be construed to impair the enforcement of section 223 or 231 of this title, chapter 71 (relating to obscenity) or 110 (relating to sexual exploitation of children) of title 18, or any other Federal criminal statute.

It’s almost as if the reporter just accepted the claim that there was no exception for CSAM and didn’t bother to, you know, look at the actual law. Child sexual abuse material violates federal law. Section 230 directly exempts all federal law. The idea that 230 does not have an exception for CSAM is just flat out wrong. It’s not a question of interpretation. It’s a question of facts and MIT’s Tech Review is lying to you.

The article then gets worse.

This gives tech companies little legal incentive to invest time, money, and resources in keeping it off their platforms, says Hany Farid, a professor of computer science at the University of California, Berkeley, and the co-developer of PhotoDNA, a technology that turns images into unique digital signatures, known as hashes, to identify CSAM.

People keep saying that companies have "little legal incentive" to deal with CSAM as if 18 USC 2258A doesn't exist. But it does. And that law says pretty damn clearly that websites need to report CSAM on their platforms. If a website fails to do so, then it can be fined $150k for its first violation and up to $300k for each subsequent violation.

I’m not sure how anyone can look at that and say that there is no legal incentive to keep CSAM off their platform.

And, just to make an even clearer point, you will be hard pressed to find any legitimate internet service that wants that content on its website for fairly obvious reasons. One, it’s reprehensible content. Two, it’s a good way to have your entire service shut down when the DOJ goes after you. Three, it’s not good for any kind of regular business (especially ad-based) if you’re “the platform that allows” that kind of reprehensible content.

To claim that there is no incentive, legal or otherwise, is just flat out wrong.

Later in the article, the reporter does mention that companies must report the content, but then argues this is different because "they're not required to actively search for it." And this gets to the heart of the debate about EARN IT. The supporters of EARN IT insist that it's not a "surveillance" bill, but when you drill down into the details, they admit that what they're really mad about is that a few companies are refusing to install these kinds of filtering technologies. Except that, as we've detailed (and the article does not even bother to contend with this), if the US government passes a law mandating filters, that creates a massive 4th Amendment problem that will make it more difficult to actually go after CSAM purveyors legally (under the 4th Amendment the government can't mandate a general search like this, and if it does, those prosecuted will be able to suppress the evidence).
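For context, the "filtering technologies" at issue are mostly hash-matching systems along the lines of the PhotoDNA tool mentioned in the quote above: an image is reduced to a compact fingerprint and compared against a database of fingerprints of known, already-reported material. PhotoDNA itself is proprietary; the sketch below uses the open-source Python imagehash library purely to illustrate the general hash-and-compare shape, with a made-up database and threshold:

import imagehash
from PIL import Image

# Hypothetical database of perceptual hashes of known, previously reported images.
known_hashes = [imagehash.phash(Image.open(p)) for p in ["known_1.png", "known_2.png"]]

def matches_known_image(upload_path, max_distance=5):
    """Return True if the upload's perceptual hash is within a small Hamming
    distance of any known hash (the threshold here is illustrative)."""
    upload_hash = imagehash.phash(Image.open(upload_path))
    return any(upload_hash - known < max_distance for known in known_hashes)

print(matches_known_image("new_upload.png"))

The technical shape is not the controversy; production systems are far more robust to cropping and re-encoding than this sketch. The controversy is the one described above: who can be made to run something like this on everyone's uploads, and what that mandate does to the 4th Amendment analysis.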

Also, we've gone through this over and over again. If the real problem is the failure of companies to find and report CSAM, then the real issue is why the DOJ hasn't done anything about it. They already have the tools under both Section 230 (which exempts federal criminal law, including the CSAM statutes) and 2258A to bring a prosecution. But they have not. And EARN IT does nothing to better fund the DOJ, or even to ask why the DOJ never actually brings any of these prosecutions.

Incredibly, some of the "experts," all of whom are among the people who will benefit from EARN IT passing (the reporter apparently didn't bother to ask anyone else), kind of make this point clear, without even realizing it:

Besides “bad press” there isn’t much punishment for platforms that fail to remove CSAM quickly, says Lloyd Richardson, director of technology at the Canadian Centre for Child Protection. “I think you’d be hard pressed to find a country that’s levied a fine against an electronic service provider for slow or non-removal of CSAM,” he says.

Well, isn’t that the issue then? If the problem is that countries aren’t enforcing the law, shouldn’t we be asking why and how to get them to enforce the law? Instead, they want this new law, EARN IT, that doesn’t do anything to actually increase such enforcement, but rather will open up lots of websites to totally frivolous lawsuits if they dare do something like offer encrypted messaging to end users.

Incredibly, later in the article, the reporter admits that (as mentioned at the beginning of the article) the reason so many websites that host this kind of abusive material moved out of the Netherlands was… because the government finally got serious about enforcing the laws it had. But then the article immediately says that, since the content just moved to the US, that wasn't really effective, and "the solution, child protection experts argue, will come in the form of legislation."

But, again, this is already illegal. We already have laws. The issue is not legislation. The issue is enforcement.

Also, finally, at the end, the reporter mentions that "privacy and human rights advocates" don't like EARN IT, but misrepresents their actual arguments, presenting a false dichotomy in which tech companies are "prioritizing the privacy of those distributing CSAM on their platforms over the safety of those victimized by it." That's just rage-inducingly wrong.

Companies are rightly prioritizing encryption to protect the privacy of everyone, and encryption is especially important to marginalized and at risk people who need to be able to reach out for help in a way that is not compromised. And, again, any major internet company already takes this stuff extremely seriously, as they have to under existing law.

Also, as mentioned earlier, the article never once mentions the 4th Amendment — and with it the fact that by forcing websites to scan, it actually will make it much, much harder to stop CSAM. Experts have explained this. Why didn’t the reporter speak to any actual experts?

The whole article repeatedly conflates the sketchy, fly-by-night, dark web purveyors with the big internet companies. EARN IT isn’t going to be used against those dark web forums. Just like FOSTA, it’s going to be used against random third parties who were incidentally used by some of those sketchy companies. We know this. We’ve seen it. Mailchimp and Salesforce have both been sued under FOSTA because some people tangentially associated with sex trafficking also used those services.

And with EARN IT, anyone who offers encryption is going to get hit with those kinds of lawsuits as well.

An honest account of EARN IT and what it does would have (1) not lied about what Section 230 does and does not protect, (2) not misrepresented the state of the law for websites in the US today, (3) not quoted only people who are heavily involved in the fight for EARN IT, (4) not misrepresented the warnings of people highlighting EARN IT's many problems, (5) not left out that the real problem is the lack of will by the DOJ to actually enforce existing law, (6) been willing to discuss the actual threats of undermining encryption, (7) been willing to discuss the actual problems of demanding universal surveillance/upload filters, and (8) not let someone get away with a bogus quote falsely claiming that companies care more about the privacy of CSAM purveyors than about stopping CSAM. That last one is really infuriating, because there are many really good people trying to figure out how these companies can stop the spread of CSAM, and articles like this, full of lies and nonsense, demean all the work they've been putting in.

MIT’s Tech Review should know better, and it shouldn’t publish garbage like this.

Filed Under: csam, earn it, incentives, photodna, scanning, section 230, surveillance
Companies: tech review

Now It's Harvard Business Review Getting Section 230 Very, Very Wrong

from the c'mon-guys dept

It would be nice if we could have just a single week where some major "respected" publication could do the slightest bit of fact checking on its wacky articles about Section 230. It turns out that's not happening this week. Harvard Business Review has now posted an article titled "It's Time to Update Section 230," written by two professors: Michael Smith of Carnegie Mellon and Marshall Van Alstyne of Boston University. For what it's worth, I've actually been impressed with the work and research of both of these professors in the past. Even though Smith runs a program, funded by the MPAA, that publishes studies about the internet and piracy, his work has usually been careful and thorough. Van Alstyne, on the other hand, has published some great work on problems with intellectual property, and kindly came and spoke at an event we helped to run.

Unfortunately, this piece for HBR does not do either Smith or Van Alstyne any favors, mainly because it just gets so much wrong. It starts out, like so many of these pieces, with some mythmaking: that Section 230 was passed due to "naive" techno-optimism. This is simply wrong, even if it sounds like a good story. It then (at least) does highlight some of the good that social media has created (Arab Spring, #MeToo, #BlackLivesMatter, and the ice bucket challenge). But then, of course, it pivots to all the "bad" stuff on the internet, and says that "Section 230 didn't anticipate" how to deal with that.

So, let’s cut in and point out this is wrong. Section 230’s authors have made it abundantly clear over and over again that they absolutely did anticipate this very question. Indeed, the very history of Section 230 is the history of web platforms trying to figure out how to deal with the ever-changing, ever-evolving challenge of “bad” stuff online. And the way that 230 does that is by allowing websites to constantly experiment, innovate, and adapt without fear of liability. Without that, you create a much worse situation — one in which any “false” move by the website could lead to liability and ridiculously costly litigation. Section 230 has enabled a wide variety of experiments and innovations in content moderation to figure out how to keep platforms functioning for users, advertisers, and more. But, this article ignores all that and pretends otherwise. That’s doing a total disservice to readers, and presenting a false narrative.

The article goes through a basic recap of how Section 230 works — and concludes:

These provisions are good – except for the parts that are bad.

Amusingly, that argument applies to lots of content moderation questions as well. Keep all the stuff that's good, except for the parts that are bad. And it's that very point that highlights why Section 230 is actually so important. Figuring out what's "good" and what's "bad" is inherently subjective, and that's part of the genius of Section 230: it allows companies to experiment with different alternatives in figuring out how best to deal with things for their own community, rather than trying to comply with some impossible standard.

They then admit that there are other, non-legal, incentives that have helped keep websites moderating in a reasonable way, though they imply that this doesn’t work any more (they don’t explain why or how):

When you grant platforms complete legal immunity for the content that their users post, you also reduce their incentives to proactively remove content causing social harm. Back in 1996, that didn't seem to matter much: Even if social media platforms had minimal legal incentives to police their platform from harmful content, it seemed logical that they would do so out of economic self-interest, to protect their valuable brands.

Either way, from there, the article goes completely off the rails in ways that are kind of embarrassing for two widely known professors. For example, the following statement is entirely unsupported. It is disconnected from reality. Hilariously, it is the very "misinformation" that these two professors seem so upset about.

We've also learned that platforms don't have strong enough incentives to protect their brands by policing their platforms. Indeed, we've discovered that providing socially harmful content can be economically valuable to platform owners while posing relatively little economic harm to their public image or brand name.

I know that this is out there in the air as part of the common narrative, but it’s bullshit. Pretty much every company of any size lives in fear of stories of “bad” content getting through on their platform, and causing some real world harm. It’s why companies have invested so much in hiring thousands of moderators, and trying to find any kind of technological solution that will help in combination with the ever growing ranks of human moderators (many of whom end up being traumatized by having to view so much “bad” content). The idea that Facebook’s business isn’t harmed by its failures on this front or that the “socially harmful content” is “valuable” to Facebook is simply not supported by reality. There are huge teams of people within Facebook pushing back against that entire narrative. Facebook also didn’t set up the massive (and massively expensive) Oversight Board out of the goodness of its heart.

What Smith and Van Alstyne apparently fail to consider is that this is not a problem of Facebook not having the right incentives. It's a problem of it being impossible to do this well at scale, no matter what incentives are in place, combined with the fact that many of the "problems" they're upset about are actually societal problems that governments are blaming on social media to hide their own failings in fixing education, social safety nets, criminal justice reform, healthcare, and more.

This paragraph just kills me:

Today there is a growing consensus that we need to update Section 230. Facebook's Mark Zuckerberg even told Congress that it "may make sense for there to be liability for some of the content," and that Facebook "would benefit from clearer guidance from elected officials." Elected officials, on both sides of the aisle, seem to agree: As a candidate, Joe Biden told the New York Times that Section 230 should be "revoked, immediately," and Senator Lindsey Graham (R-SC) has said, "Section 230 as it exists today has got to give." In an interview with NPR, the former Congressman Christopher Cox (R-CA), a co-author of Section 230, has called for rewriting Section 230, because "the original purpose of this law was to help clean up the Internet, not to facilitate people doing bad things."

First off, Facebook is embracing reforms to Section 230 because it can deal with them and it knows the upstart competitors it faces cannot. This is not a reason to support 230 reform. It’s a reason to be very, very worried about it. And yes, there is bipartisan anger at 230, but they leave out that it’s for the exact opposite reasons. Democrats are mad that social media doesn’t take down more constitutionally protected speech. Republicans are mad that websites are removing constitutionally protected conspiracy theories and nonsense. The paragraph in HBR implies, incorrectly, that there’s some agreement.

As for the Cox quote, incredibly, it was taken from an interview a few years ago, in which Cox appeared to have a single reform suggestion: clarifying that the definition of an Information Content Provider covers companies that are actively involved in unlawful activity done by users. And, notably (again, skipped over by Smith and Van Alstyne), that interview occurred just after FOSTA was passed by Congress — and it's now widely recognized that FOSTA has been a complete disaster for the internet, and has put tons of people in harm's way. That seems kinda relevant if we're talking about how to update the law again.

But Smith and Van Alstyne don’t even mention it!

Instead, they fall back on tired, wrong, or debunked arguments.

Legal scholars have put forward a variety of proposals, almost all of which adopt a carrot-and-stick approach, by tying a platform's safe-harbor protections to its use of reasonable content-moderation policies. A representative example appeared in 2017, in a Fordham Law Review article by Danielle Citron and Benjamin Wittes, who argued that Section 230 should be revised with the following (highlighted) changes: "No provider or user of an interactive computer service that takes reasonable steps to address known unlawful uses of its services that create serious harm to others shall be treated as the publisher or speaker of any information provided by another information content provider in any action arising out of the publication of content provided by that information content provider."

Of course, as we’ve explained, this is a solution that only a law professor who has never had to run an actual website could love. The problems with the “takes reasonable steps” argument are myriad. For one, it would mean that websites would constantly need to go to court to defend their content moderation practices — a costly and ridiculous experience, especially when you have to defend it to people who don’t understand the intricacies and trade-offs of content moderation. I saw this first hand just a couple months ago, in watching a print-on-demand website lose a court fight, because the plaintiff insisted that any mistake in its content moderation practices proved its efforts weren’t “reasonable.”

At best such a setup would mean that all content moderation would become standardized, following exactly whatever plan was chosen by the first few companies to win such lawsuits. You’d wipe out pretty much any attempt at creating new, better, more innovative content moderation solutions, because the only way you could do that is if you were willing to spend a million dollars defending it in court. And that would mean that the biggest companies (once again) would control everything. Facebook could likely win such a case, screwing over tons of competitors, and then everyone else would have to adopt Facebook’s model (hell, I wouldn’t put it past Facebook to offer to “rent” its content moderation system out to others) in such a world. The rich get richer. The powerful get more powerful. And everyone else gets screwed.

The duty-of-care standard is a good one, and the courts are moving toward it by holding social media platforms responsible for how their sites are designed and implemented. Following any reasonable duty-of-care standard, Facebook should have known it needed to take stronger steps against user-generated content advocating the violent overthrow of the government.

This is also garbage and taken entirely out of context. It doesn't mention just how much content there is to moderate. Facebook has billions of users, posting tons of stuff every day. This supposes that Facebook can automatically determine what counts as "content advocating the violent overthrow of the government." But it does nothing whatsoever to help define what that content actually looks like, or how to find it, or how to explain those rules to every content moderator around the globe in a manner in which they'll treat content in a fair and equitable way. It doesn't take context into account. Is it "advocating the violent overthrow of the government" when someone tells a joke hoping President Trump dies? Is it failing a duty of care standard for someone to suggest that… an authoritarian dictatorship should be overthrown? There are so many variables and so many issues here that to casually declare it obvious that allowing "content advocating the violent overthrow of a government" fails a duty of care just shows how ridiculously naive and ignorant both Smith and Van Alstyne are about the actual issues, trade-offs, and challenges of content moderation.

They then try to address these kinds of arguments by setting up a very misleading strawman to knock down:

Not everybody believes in the need for reform. Some defenders of Section 230 argue that as currently written it enables innovation, because startups and other small businesses might not have sufficient resources to protect their sites with the same level of care that, say, Google can. But the duty-of-care standard would address this concern, because what is considered "reasonable" protection for a billion-dollar corporation will naturally be very different from what is considered reasonable for a small startup.

Yeah, but you only find that out after you’re dead, spending a million dollars defending it in court.

And then… things go from just bad and uninformed to actively spreading misinformation:

Another critique of Section 230 reform is that it will stifle free speech. But that's simply not true: All of the duty-of-care proposals on the table today address content that is not protected by the First Amendment. There are no First Amendment protections for speech that induces harm (yelling "fire" in a crowded theater), encourages illegal activity (advocating for the violent overthrow of the government), or that propagates certain types of obscenity (child sex-abuse material).

Yes, that's right. They trotted out the fire in a crowded theater trope, which is already wrong on its own, and then they apply it incorrectly. It's flat out wrong to say that there is no 1st Amendment protection for speech that induces harm. Much such content is absolutely protected under the 1st Amendment. The actual exceptions to the 1st Amendment (which, you know, maybe someone at HBR should have looked up) in this area are for "incitement to imminent violence" or "fighting words," both of which are very, very, very narrowly defined.

As for child sex-abuse material, that’s got nothing to do with Section 230. CSAM content already violates federal criminal law and Section 230 has always exempted federal criminal law.

In other words, this paragraph is straight up misinformation. The very kind of misinformation that Smith and Van Alstyne seem to think websites should be liable for hosting.

Technology firms should embrace this change. As social and commercial interaction increasingly move online, social-media platforms' low incentives to curb harm are reducing public trust, making it harder for society to benefit from these services, and harder for legitimate online businesses to profit from providing them.

This is, again, totally ignorant. They have embraced this change, because the incentives already exist. It’s why every major website has a “trust & safety” department that hires tons of people and does everything they can to properly moderate their websites. Because getting it wrong leads to tons of criticism from users, from the media, and from politicians — not to mention advertisers and customers.

Most legitimate platforms have little to fear from a restoration of the duty of care.

So long as you can afford the time, resources, and attention required to handle a massive trial to determine if you met the “duty of care.” So long as you can do that. And, I mean, it’s not like we don’t have examples of how this plays out in other arenas. I already talked about what I saw in court this summer in the trademark field (not covered by Section 230). And we have similar examples of what happens in the copyright space as well (not covered by Section 230). Perhaps Smith and Van Alstyne should go talk to the CEO of Veoh… oh wait, they can’t, because the company is dead, even though it won its lawsuit on this very issue a decade ago.

A duty of care standard only makes sense if you have no clue how any of this works in practice. It’s an academic solution that has no connection to reality.

Most online businesses also act responsibly, and so long as they exercise a reasonable duty of care, they are unlikely to face a risk of litigation.

I mean, this is just completely disconnected from reality as we’ve seen. That trial I witnessed in June is one of multiple cases brought by the same law firm against online marketplace providers, more or less trying to set up a business suing companies for failing to moderate trademark-related content to some arbitrary standard.

What good actors have to gain is a clearer delineation between their services and those of bad actors.

They already have that.

A duty of care standard will only hold accountable those who fail to meet the duty.

Except for all the companies it kills in litigation.

This article is embarrassingly bad. HBR, at the very least, should never have allowed the blatantly false information about how the 1st Amendment works, though all that really serves to do is discredit both Smith and Van Alstyne.

I don't understand what makes otherwise reasonable people, who clearly have zero experience with the complexities of social media content moderation, assume they've found the magic solution. There isn't a magic solution. And your solution will make things worse. Pretty much all of them do.

Filed Under: 1st amendment, content moderation, duty of care, fire in a crowded theater, incentives, marshall van alstyne, michael smith, section 230

Australian Court Ridiculously Says That AI Can Be An Inventor, Get Patents

from the i'm-sorry-dave,-you-shouldn't-do-that dept

There have been some questions raised about whether or not AI-created works deserve intellectual property protection. Indeed, while we (along with many others) laughed along at the trial about the monkey selfie, we had noted all along that the law firm pushing to give the monkey (and with it, PETA) the copyright on the photo was almost certainly trying to tee up a useful case to argue that AI can get copyright and patents as well. Thankfully, the courts (and later the US Copyright Office) determined that copyrights require a human author.

The question on patents, however, is still a little hazy (unfortunately). It should be the same as with copyright. The intent of both copyrights and patents is to create incentives (in the form of a "limited" monopoly) for the creation of the new creative work or invention. AI does not need such an incentive (nor do animals). Over the last few years, though, there has been a rush by some who control AI to try to patent AI creations. This is still somewhat up in the air. In the US, the USPTO has (ridiculously) suggested that AI-created inventions could be patentable — but then (rightfully) rejected a patent application from an AI. The EU has rejected AI-generated patents.

Unfortunately, it looks like Australia has gone down the opposite path from the EU, after a court ruled that an AI can be an inventor for a patent. The case was brought by the same folks who were denied patents in the EU and US, and who are still seeking AI patents around the globe. Australia's patent office had followed suit with its EU and US counterparts, but the judge has now sent the matter back, saying that there's nothing wrong with an AI being listed as an inventor.

University of Surrey professor Ryan Abbott has launched more than a dozen patent applications across the globe, including in the UK, US, New Zealand and Australia, on behalf of US-based Dr Stephen Thaler. They seek to have Thaler's artificial intelligence device known as Dabus (a device for the autonomous bootstrapping of unified sentience) listed as the inventor.

Honestly, I remain perplexed by this weird attempt to demand something that makes no sense, though it seems like yet another attempt to scam the system to make money by shaking others down. Once again, AI needs no such incentive to invent, and it makes no sense at all to grant it patents. An AI also cannot assign the patents to others, or properly license a patent. The whole thing is stupid.

It is, however, yet another point to show just how extreme the belief that every idea must be “owned” has become. And it’s incredibly dangerous. Those pushing for this — or the courts and patent offices agreeing with this — don’t seem to have any concept of how badly this will backfire.

And, of course, the reality underlying this, which only underscores how dumb it is, is that the AI isn't actually getting the patent. It would go to the guy who "owns" the AI.

Beach said a non-human inventor could not be the applicant of a patent, and as the owner of the system, Thaler would be the owner of any patents that would be granted on inventions by Dabus.

At least some people are recognizing what a total clusterfuck it would be if AI-generated patents were allowed. The Guardian quotes an Australian patent attorney, Mark Summerfield, who raises just one of many valid concerns:

"Allowing machine inventors could have numerous consequences, both foreseeable and unforeseeable. Allowing patents for inventions churned out by tireless machines with virtually unlimited capacity, without the further exercise of any human ingenuity, judgment, or intellectual effort, may simply incentivise large corporations to build 'patent thicket generators' that could only serve to stifle, rather than encourage, innovation overall."

Unfortunately, as the article notes, it’s not just Australia making this dangerous decision. South Africa just granted DABUS a patent last week as well.

Filed Under: ai, australia, dabus, incentives, monkey selfie, patent law, patents

10 Steps The Biden-Harris Administration Should Take To Bring Equity To Our Patent System

from the fixing-the-patent-system dept

This post is one of a series of posts we’re running this week in support of Patent Quality Week, exploring how better patent quality is key to stopping efforts that hinder innovation.

A couple weeks ago, President Biden signed an executive order focused on promoting competition in the interests of American businesses, workers, and consumers, emphasizing the need to tackle high prescription drug prices that harm 1 in 4 Americans. Earlier this year, the President also signed an executive order to increase racial equity across all federal agencies.

Few agencies are as ripe for this kind of transformation as the U.S. Patent and Trademark Office (PTO)—the federal agency that oversees patents, trademarks, and designs.

The PTO’s work, which has often escaped scrutiny, is directly linked to issues of equity and rising prescription drug costs. As health and economic justice attorneys, we’ve worked for nearly two decades to increase equity in drug development and access, and have spent countless hours learning from patients, patent offices, community leaders, public health professionals, policymakers, scientists, economists and more. Based on our learnings from those most directly affected by the patent system, we offer ten actions the federal government should take to answer the President’s calls to promote competition and advance equity across government that can transform the patent system in the public’s interest.

1. Amend the PTO’s mission to include equity

Equity doesn't currently factor into the PTO's decision making or operations, and that's no accident. Advancing equity is not part of the agency's mandate, so equity concerns are not considered relevant. The PTO's mandate relies instead on this theory of change: granting intellectual property rights will spur innovation and economic growth, and people will be better off. This assumption has gone virtually unchallenged in the last 40 years, with no critical look at whether the current model is producing its intended benefits for everyone. And as America grapples with a drug pricing crisis, and vaccine nationalism threatens the global Covid-19 recovery, the consequences of this framework are becoming increasingly clear. If President Biden is truly serious about embedding equity into every agency, we will see the new PTO director amend the agency's mission accordingly. The agency's ability (and willingness) to implement the recommendations that follow will depend in large part on equity officially becoming part of its mandate.

2. Collect demographic data

The patent system has a long history of denying Black people opportunities for economic mobility. Even today, research by economist Dr. Lisa Cook indicates that less than one percent of patent holders are Black. In addition to racial inequities, gender inequities are present at the PTO. Women represent only 18 percent of patent holders, and leading economists predict it will still take 118 years to achieve gender parity in the patent system. We know about these disparities from academic studies and not from the PTO, which doesn’t track demographic data. Earlier this year, the bipartisan IDEA Act passed out of the Senate’s Judiciary Committee and companion legislation is pending in the House. This legislation would require the PTO to collect demographic information about applicants. Previously introduced in 2019, the bill’s advancement in this Congress is promising, but doesn’t guarantee passage given the political gridlock plaguing DC. In the meantime — since it can’t fix what it doesn’t measure — the incoming PTO director could voluntarily start collecting this data.

3. Redefine the “customer”

The charter of the PTO’s Public Advisory Committee, which advises the director on patent and operational issues, states that the Committee must “represent the interests of diverse users of the USPTO.” But the PTO defines its users narrowly, as the entities or individuals using the system for patents and trademarks. As a result, the Committee is composed primarily of representatives from corporations. People with non-commercial perspectives—members of historically marginalized communities, public health experts, and patient advocates—who have a tremendous stake in how monopolies operate, for example, don’t traditionally have a voice in decision making. That naturally leads to a system in which the public interest is overshadowed by commercial concerns. The PTO should redefine its customer base to include not just those who are directly applying for patents and trademarks, but also those whose lives stand to be fundamentally altered by these decisions. The deadline recently passed on the PTO’s request for nominations for new Committee members; the time is especially ripe for it to bring in new voices to better represent the public’s perspectives.

4. Raise the bar for what gets patented

Over the last 30 years, more and more patents have been sought and granted for things that aren’t novel inventions. Recent controversies illustrate the point well: these patents are often sought and granted for products derived from ancestral knowledge from countries with predominantly Black and Brown populations—the Colombian sweetener panela, or baby wraps, for example. The PTO should not be granting patents for knowledge appropriated from beyond America’s borders.

The consequences of setting the bar too low have been dire. 13% of Americans report losing a loved one in the last five years due to high drug costs, and people of color are twice as likely to have lost someone. Patent monopolies, which are increasingly being used to block competition, are a root cause of this crisis, and between 2006 and 2016, the number of drug patents doubled. Our research demonstrates that the ten best-selling drugs in America each have on average 131 patent applications, and monopoly protection of up to 38 years. At the same time, nearly 8 out of 10 medicines associated with new drug patents are for existing medicines, like insulin or aspirin, rather than new ones. The longer the monopoly on a single medicine remains, the longer prices stay high or continue to rise.

It’s long past time to raise the bar so that only things that are truly inventive are rewarded with a patent. For example, combining existing drugs or switching dosages should not receive additional patent protection. The administration could recommend that Congress amend the patent law to prevent weak patents from being granted.

5. Change the PTO’s financial incentives

The majority of the PTO’s funding comes from fees paid only if a patent is granted, which means the agency’s revenue is directly linked to the number of patents it grants. This creates a financial incentive to grant as many patents as possible, even if claims to inventiveness are weak.

At least one study found that the PTO grants patents at higher rates when revenue is strained, suggesting that patent decisions are being influenced by factors other than inventiveness. Over the last decade, over 40 percent of patents challenged after having been granted are invalidated either in whole or in part. Research shows that the push to grant ever-more patents puts a strain on patent examiners, who have less and less time to conduct a thorough review (today, the average patent review time is just 19 hours). Over the last 27 years, the PTO has granted as many patents as it had in the previous 155 years. The proliferation of low-quality patents harms people in a range of different ways, including driving up prescription drug costs.

The administration could investigate the link between revenue shortfalls at the PTO and the volume of patents being granted, and evaluate alternative funding streams for the PTO so that the agency’s financial sustainability isn’t tied to the volume of patents that it grants.

6. Modernize laws that are not serving the greater public good

The Bayh-Dole Act, the Hatch-Waxman Act, and the Federal Courts Act were enacted to increase innovation and economic growth. But these laws have also enabled the corporatization of medical research in ways that are deeply harmful to the public. For example, publicly funded research in universities is regularly transferred to pharmaceutical companies with few, if any, conditions to assure access to the resulting medical products. The public ends up paying twice—with tax dollars used for publicly-funded research and development, and through the often exorbitant price paid at the pharmacy. These outdated laws and other legal rulings have resulted in everything from skyrocketing drug costs, to the non-consensual appropriation of tissue from Americans like Henrietta Lacks and John Moore. (Their stories, and the ethical questions they raise, have been explored in-depth by bioethicist Harriet Washington).

The administration should establish a White House task force to assess how societal harm has offset the desired gains from these 1980s-era laws. The task force should include dedicated staff with a mix of patent and equity expertise, including staff from the Federal Trade Commission, the White House Office of Science and Technology Policy, the National Economic Council, and the Council of Economic Advisors. Ultimately, the task force would produce a report that examines the underlying impacts of these laws, and provide recommendations for legislative and executive action that would reform the patent system to enhance benefits to society.

7. Reduce the cost of patent challenges

Challenging a patent can be prohibitively expensive. Filing fees alone cost upward of $41,500 per patent, compared to the significantly lower financial cost of filing patent challenges in Europe and elsewhere.

In a system heavily weighted in favor of commercial actors, legally challenging harmful patent monopolies that may have been incorrectly granted is one of the only avenues for creating equity in the market.

We know this from firsthand experience. Our organization has, in collaboration with patient advocacy groups around the world, successfully mounted legal challenges to unjust patents. These challenges have made the market more competitive, saved health systems billions of dollars, and made medicines more accessible to millions of people across Africa, Asia, and Latin America. Americans deserve the same opportunities to participate in a system that directly affects their health and wellbeing. The PTO should bring its practices in line with other patent offices worldwide and reduce the financial costs associated with challenging a patent.

8. Reverse “discretionary denial” policies

Bipartisan legislation passed in 2011 allowed any person to mount administrative challenges to patents after they were granted. Since then, opponents have repeatedly sought to weaken the authority of the Patent Trial and Appeal Board (PTAB), the body tasked with reviewing patent challenges. For example, the most recent PTO director, Andrei Iancu, restricted participation by expanding the circumstances in which the agency could unilaterally decline to review patent challenges. "Discretionary denials," as they are called, were rare in 2016 but have surged in recent years. Blocking access to one of the agency's already limited avenues for public participation will lead to more weak patents, undeserved monopoly power for corporations, and less access to medicines and other goods that benefit public wellbeing. The administration should reverse recent policies that effectively shut the door on public participation in the patent system, and accept more challenges to weak patents.

9. Support and invest in increasing access to COVID-19 medical products

Wealthy governments have swallowed up the vast majority of existing COVID-19 vaccine stock, leaving countries with predominantly Black or Brown populations virtually nothing. More than 85 lower-income countries will not have widespread access to coronavirus vaccines until 2023, which increases the risk that new vaccine-resistant variants will emerge. Indonesia and twenty African countries are the latest to feel the crushing blow of the pandemic as they face an overwhelming surge of cases without the resources and tools necessary to avoid preventable hospitalizations and deaths. These inequities are echoes of the early HIV/AIDS epidemic, a moral failure in which medicines existed to save people's lives but were inaccessible to the vast majority of high-burden countries in the Global South.

The World Trade Organization (WTO) is currently considering a proposal by South Africa and India to waive certain intellectual property provisions related to the “prevention, containment and treatment of COVID-19.” The U.S. has already voiced its support for the waiver, which if adopted would allow drugmakers in other countries to manufacture desperately needed vaccine supply and other medical products. While the waiver negotiations continue to unfold, the U.S. should remain a steadfast champion of global access to COVID-19 medical products—including vaccines—and press further. It should also compel U.S. pharmaceutical companies that used taxpayer funding to develop a vaccine to share that technology and know-how with manufacturers in other countries. These measures would set a precedent for global cooperation that would end the current pandemic sooner, and better prepare us for the next one.

10. Create a new Office of Technology Assessment

New technologies, like artificial intelligence and gene editing, are raising urgent questions about ownership, inventiveness, equity, and ethics. In Congressional testimony, Dr. Shobita Parthasarathy, a professor of public policy at the University of Michigan, outlined the need to incorporate equity considerations earlier and more robustly into the innovation pipeline. President Biden should request sufficient funding for a new Office of Technology Assessment in their annual budget request to Congress (an office of the same name was defunded in 1995). Rampant misinformation on Facebook, disparities in the use of facial recognition software, and other ramifications of emerging science and technology underscore the need for a body dedicated to preventing prospective future harm. This reimagined office would engage experts and members of the public to better understand the potential consequences of new technologies, and advise the administration and Congress on how to mitigate inequitable and other socially damaging outcomes.

Conclusion

As the Administration commits to increasing equity and lowering drug prices, it cannot do so without transforming our nation’s patent system. Centering equity within an agency that has historically lacked it is no small task. It requires a commitment to challenge the status quo in small ways and large, and a shared belief that all our political and economic systems are stronger when they are truly inclusive. These solutions do not stand alone—they must all be integrated into the PTO’s structure and ethos to truly effect meaningful advances. By acting on these recommendations, President Biden can improve the lives of millions of Americans, and show bold global leadership in creating an economy that works for all.

Priti Krishtel and Tahir Amin are the co-founders and co-executive directors of the Initiative for Medicines, Access & Knowledge (I-MAK), a nonprofit organization working to address structural inequities in how medicines are developed and distributed. They are participating in Patent Quality Week, with Engine Advocacy and others across the country, to encourage conversations on quality and balance in the patent system. Learn more here.

Filed Under: equity, incentives, ipr, joe biden, office of technology assessment, ota, patent challenges, patent quality, patents, public good, uspto

'But Without 230 Reform, Websites Have No Incentive To Change!' They Scream Into The Void As Every Large Company Pulls Ads From Facebook

from the oh,-look-at-that dept

One of the most frustrating lines that we hear from people criticizing internet website content moderation is the idea that thanks to Section 230 of the Communications Decency Act, websites have no incentive to do any moderation. This is a myth that I consider to be the flip side of the claims by aggrieved conservatives insisting that Section 230 requires “no bias” in moderation decisions. The “no incentive” people (often lawyers) are complaining about too little moderation. For reasons I cannot comprehend, they seem to think that the only motivation for doing anything is if the law requires you to do it. We’ve tried to debunk this notion multiple times, and yet it comes up again and again. Just a couple weeks ago in a panel about Section 230, a former top Hollywood lobbyist trotted it out.

I’ve been thinking about that line a bunch over the past few days as a huge number of large companies began pulling ads from Facebook as part of a “Stop Hate for Profit” campaign put together by a bunch of non-profits.

Over 200 companies have said they’ve joined the campaign and pulled their Facebook ads, including some big names, like Unilever, Verizon, Hershey, The North Face, Clorox, Starbucks, Reebok, Pfizer, Microsoft, Levi’s, HP, Honda, Ford, Coca-Cola and many, many more. Now, the cynical take on this is that with the current economic conditions and a global pandemic, many were looking to pull back on advertising anyway, and joining this campaign was a way to do so and get a bit of an earned media boost at the same time.

But many of the companies are putting out statements demanding that Facebook change its practices before they’ll bring back ads. Here’s an open letter from Levi’s:

As we near the U.S. election in November and double down on our own efforts to expand voter education and turnout, we are asking Facebook to commit to decisive change. Specifically, we want to see meaningful progress towards ending the amplification of misinformation and hate speech and better addressing of political advertisements and content that contributes to voter suppression. While we appreciate that Facebook announced some steps in this direction today – it’s simply not enough.

That’s why we are joining the #StopHateForProfit campaign, pausing all paid Facebook and Instagram advertising globally and across all our brands to “hit pause on hate.” We will suspend advertising at least through the end of July. When we re-engage will depend on Facebook’s response.

I’m not convinced this campaign is necessarily a good idea, but at the very least it should put an end to people — especially prominent experts — claiming that there is “no incentive” for sites to do a better job with their content moderation practices. There are always non-legal incentives, including keeping users happy — and also keeping advertisers happy.

Filed Under: advertising, incentives, section 230, stop hate for profit
Companies: facebook