private right of action – Techdirt
Any Privacy Law Is Going To Require Some Compromise: Is APRA The Right Set Of Tradeoffs?
from the a-federal-law-that-doesn't-totally-suck?!? dept
Privacy issues are at the root of so many concerns about the internet, yet most attempts to regulate privacy have been a total mess. There’s now a more thoughtful attempt to regulate privacy in the US that is (perhaps surprisingly!) not terrible.
For a while now, we’ve talked about how many of the claims from politicians and the media about the supposed harms of the internet (often exaggerated, but not wholly fictitious) are really concerns that could be dealt with by a comprehensive privacy bill that actually did the right things.
Concerns about TikTok, questionably targeted advertising, the sketchy selling of your driving records, and more… are really all issues related to data privacy. It’s something we’ve talked about for a while, but most efforts have been a mess, even as the issue has become more and more important.
Part of the problem is that we’re bad at regulating privacy because most people don’t understand privacy. I’ve said this multiple times in the past, but the instinct of many is that privacy should be regulated as if our data were our “property.” That only leads to bad results. When we treat data as property, we create new artificial property rights, a la copyright. And if you’re reading Techdirt, you should already understand what kind of awful mess that can create.
Artificial property rights are a problematic approach to just about anything, and (most seriously) frequently interfere with free speech rights and create all sorts of downstream problems. We’ve already seen this in the EU with the GDPR, which has many good characteristics, but also has created some real speech problems, while also making sure that only the biggest companies can exist, which isn’t a result anyone should want.
Over the last few weeks, there’s been a fair bit of buzz about APRA, the American Privacy Rights Act. It was created after long, bipartisan, bicameral negotiations between two elected officials with very different views on privacy regulation: Senator Maria Cantwell and Rep. Cathy McMorris Rodgers. The two had fought in the past on approaches to privacy laws, yet they were able to come to an agreement on this one.
The bill is massive, which is part of the reason why we’ve been slow to write about it. I wanted to read the whole thing and understand some of the nuances (and also to explore a lot of the commentary on it). If you want a shorter summary, the best and most comprehensive one I’ve seen came from Perla Khattar at Tech Policy Press, who broke down the key parts of the bill.
At its core, the bill takes a “data minimization” approach: covered companies need to make sure that the data they collect is “necessary” and “proportionate” to the service they provide. “Covered” here means organizations making over $40 million a year, processing data on over 200,000 consumers, and transferring covered data to third parties. If companies are determined to be collecting and/or sharing too much, they could face serious penalties.
Very big social media companies, dubbed “high impact social media companies” (those with over $3 billion in global revenue and 300 million or more global monthly active users), face additional rules.
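For concreteness, here’s a rough sketch of those two coverage tiers in code, assuming the thresholds exactly as summarized above. The function and parameter names are mine, not the bill’s, and the actual statutory definitions contain plenty of nuance this ignores:

```python
# Illustrative only: these tests mirror the thresholds as summarized
# above, not the bill's actual (much longer) legal definitions.

def is_covered_entity(annual_revenue: int, consumers: int,
                      transfers_to_third_parties: bool) -> bool:
    """Rough test for the general data minimization tier."""
    return (annual_revenue > 40_000_000
            and consumers > 200_000
            and transfers_to_third_parties)

def is_high_impact_social_media(global_revenue: int, global_mau: int) -> bool:
    """Rough test for the 'high impact social media company' tier."""
    return global_revenue > 3_000_000_000 and global_mau >= 300_000_000
```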
I also greatly appreciate that the law explicitly calls out data brokers (often left out of other privacy bills, even though data brokers are often the real privacy problem) and requires them to take clear steps to be more transparent to users. The law also requires data minimization for those brokers, while prohibiting certain egregious activities.
I always have some concerns about laws that have size thresholds. They create the risk of game playing and weird incentives. But of the bills in this area that I’ve seen, the thresholds in this one seem… mostly okay? Often the thresholds are set ridiculously low, sweeping in small companies in a way that would create massive compliance costs too early, or else they target only the very largest companies. This bill takes a more middle-ground approach.
There are also a bunch of rules to make sure companies do more to protect data security, following best practices that are reasonable for the size of the company. I’m always a little hesitant about things like that, because whether a company took reasonable steps is usually judged in hindsight, after some awful breach occurs and we realize how poorly someone actually secured their data, even if upfront it appeared secure. How this plays out in practice will matter.
The law is not perfect, but I’m actually coming around to the belief that it may be the best we’re going to get and has many good provisions. I know that many activist groups, including those I normally agree with, don’t like the bill for specific reasons, but I’m going to disagree with them on those reasons. We can look at EFF’s opposition as a representative example.
EFF doesn’t like the state pre-emption provisions, and also wishes that the private right of action (allowing individuals to sue) were stronger. I actually disagree on both points, though I think it’s important to explain why. These were two big sticking points in previous bills, and I think they were sticking points for very good reasons.
On state pre-emption: many people (and states!) want to be able to pass stricter privacy laws, and many activists support that. However, I think the only way a comprehensive federal privacy bill makes sense is if it pre-empts state privacy laws. Otherwise, companies have to comply with 50+ different state privacy laws, some of which are going to be (or already are) absolutely nutty. This would, again, play right into the hands of the biggest companies, which can afford to craft different policies for different states, or figure out ways to craft policies that comply with every state. But it would be deadly for many smaller companies.
Expecting state politicians to get this right is a big ask, given just how messed up attempts to regulate privacy have been over the last few years. Hell, just look at California, where we basically let some super rich dude with no experience in privacy law force the state into writing a truly ridiculously messed up privacy law (then make it worse before anything was even tested) and finally… give that same rich dude control over the enforcement of the law. That’s… not good.
It seems like the only workable way to do this without doing real harm to smaller companies is to have the federal government step in and say “here is the standard across the board.” I have seen some state officials upset about this, but the law still leaves states with enforcement powers over that national standard.
That said, I’m still a bit wary about state enforcement. State AGs (in a bipartisan manner) have quite a history of bringing enforcement actions for political purposes rather than for any legitimate reason. I do fear APRA handing state AGs another weapon to use disproportionately against organizations they simply dislike or have political disagreements with. We’ve seen it happen in other contexts, and we should be wary of it here.
As for the private right of action: again, I understand why folks like EFF would like to see a broader one. But we also know how this tends to work out in practice. Because attempts to stifle speech can be twisted and presented as “privacy rights” claims, we should be wary of handing people too broad a tool. Otherwise, we’ll start to see all sorts of vexatious lawsuits claiming privacy rights that are really attempts to suppress information, or simply to attack companies someone doesn’t like.
I think APRA sets an appropriate balance in that it doesn’t do away with the private right of action entirely, but does limit how broadly it can be used. Specifically, it limits which parts of the law are covered by the private right of action in a manner that hopefully would avoid the kind of egregious, vexatious litigation that I’ve feared under other laws.
Beyond the states and the private right of action, the bill also empowers the FTC to enforce the law, which will piss off some, but is probably better than leaving states and private actors as the only enforcers.
I do have some concerns about some of the definitions in the bill being a bit vague and open to problematic interpretations and abuse on the enforcement side, but hopefully that can be clarified before this becomes law.
In the end, the APRA is certainly not perfect, but it seems like one of the better attempts I’ve seen to date at a comprehensive federal privacy bill and is at least a productive attempt at getting such a law on the books.
The bill does seem to be on something of a fast track, though there remain some points of contention. But I’m hopeful that, given the starting point of the bill, maybe it can reach a consensus that no one particularly likes, but which actually gets the US to finally level up on basic privacy protections.
Regulating privacy is inherently difficult, as noted. In an ideal world, we wouldn’t need regulations because we’d have services where our data is separate from the services we use (as envisioned in the protocols not platforms world) and thus more in our own control. But seeing as we still have plenty of platforms out there, the approach presented in APRA seems like a surprisingly good start.
That said, seeing how this kind of sausage gets made, I recognize that bills like this can switch from acceptable to deeply, deeply problematic overnight with small changes. We’ll certainly be watching for that possibility.
Filed Under: apra, california, cathy mcmorris rodgers, ccpa, data minimization, lawsuits, maria cantwell, privacy, private right of action, state preemption
The Weird Legal Posture Of Bounty Laws Strikes Again: Porn Age Verification Lawsuit In Louisiana Dismissed
from the bounty-laws-suck dept
A federal district judge in Louisiana dismissed a lawsuit challenging the state’s statute mandating age verification to access adult content on the internet. The lawsuit was brought by the Free Speech Coalition and stakeholders in and adjacent to the adult entertainment industry.
Plaintiffs sought to block the age verification statute, which the state legislature passed last year and which entered into force on January 1, 2023. On technical grounds, U.S. District Judge Susie Morgan sided with the defendants (state officials, including Attorney General Jeff Landry) and granted their motion to dismiss for lack of jurisdiction. The age verification law is structured as a so-called “bounty” law, meaning that state officials are barred from enforcing it, but anyone else in the state can sue a website for failing to implement age verification. State courts are the venues responsible for hearing the private causes of action brought against adult platforms that don’t follow the law. In other words, the only “enforcement” comes in the form of a private civil action for damages, not action by a government official.
We’ve seen this before. A similar age verification law targeting adult content was implemented in Utah. The Free Speech Coalition and many of the same plaintiffs sued in a federal district court, but the case was dismissed on technical grounds, with that judge citing existing case law.
The U.S. Supreme Court ruled in Whole Woman’s Health v. Jackson (2021) that federal lawsuits against government officials, brought to challenge laws designed to be enforced only by private individuals, or “bounty hunters,” cannot advance. Mike Masnick wrote an insightful analysis on this in August.
Whole Woman’s Health v. Jackson challenged a controversial Texas law passed in 2021, Senate Bill 8 or the Texas Heartbeat Act, which empowers private parties to sue people suspected of “aiding and abetting” an abortion. The question was whether abortion rights activists could win an injunction against state officials blocking enforcement of the law.
The conservative high court sided with the states’ rights crowd and ruled that Texas state officials are protected by sovereign immunity. That is the standard the Free Speech Coalition and the other plaintiffs failed to meet in both the Utah and Louisiana lawsuits, according to both judges. The coalition appealed the Utah ruling to the Tenth Circuit Court of Appeals in Denver, and it appears they will do the same in response to this ruling in Louisiana. Mike Stabile, director of public affairs for the Free Speech Coalition, said that “while we disagree and will appeal, it’s not at all a ruling on the merits of the law, which are still clearly unconstitutional.”
But this is the fucked up part: if you know your federal judicial districts, the U.S. District Court for the Eastern District of Louisiana sits in the Fifth Circuit Court of Appeals (the appeals court equivalent of the short bus).
The Fifth Circuit is currently hearing oral arguments in the Free Speech Coalition’s case against Texas over its age verification law, which also requires public health warning labels. A panel of judges for the circuit issued an administrative stay of a preliminary injunction from a Texas federal district judge, who had found that the law violates the First Amendment rights of adult users and the sites. The stay essentially allowed the age verification law to go into effect despite the litigation.
Hopefully, the Fifth Circuit doesn’t keep “Fifth Circuit-ing.” I will spare you the rant on why age verification laws in their current format are violations of the First and Fourteenth Amendments. I will leave you with this, though: Porn is a human right, and blocking it in this format is wrong.
*mic drop*
Michael McGrady covers the tech side of the online porn business. He is the contributing editor for AVN.com
Filed Under: 5th circuit, adult content, age verification, bounty laws, jurisdiction, louisiana, private right of action, sovereign immunity, standing
Companies: free speech coalition
By Making Its Porn Age Verification Law A ‘Bounty’ Law, Utah Able To Deflect Challenge To The Law’s Validity
from the passing-the-buck dept
Over the last few years, as we’ve seen state legislatures and governors focusing on culture war legislating rather than sensible policy legislating, one thing that’s become popular (kicked off by Texas’s anti-abortion law, but gladly embraced by Democrats as well) is the idea of trying to avoid judicial scrutiny by taking enforcement out of the government’s hands and instead creating a “bounty” program with a private right of action.
Basically, unlike most laws, rather than saying that a state Attorney General or local prosecutor can take a violator to court, the laws give that right to citizens, and offer them a “reward” if they successfully do so. We’ve had some laws like this before, and historically it’s always resulted in problematic situations driven by warped incentives. Even when the underlying laws may sound sensible, giving anyone who takes anyone else to court the ability to profit from it leads to widespread abuse. We’ve seen this, most notably, with the rise of ADA trolls over the last few years.
But, these bounty laws not only create warped incentives and flood the courthouse with vexatious litigation, but they also make it harder to invalidate those laws. This is by design. Normally when a bad law is passed, if someone is trying to invalidate the law as unconstitutional, they file a “pre-enforcement action,” usually against the state Attorney General (or sometimes governor, depending on the details of the law), to enjoin them from ever enforcing the law.
The alternative is waiting until there’s an attempt to enforce the law, at which point you could also challenge it. But for laws that by their very existence create chilling effects, or cause companies to massively change their behavior to avoid the risk of liability, a pre-enforcement challenge is super important.
But… some courts are saying you can’t really do that when the enforcement is not by the state, but by private citizens.
And, it appears that’s now what’s happened with Utah’s ridiculous adult content age verification law. Back in May, right after the law went into effect and right after the largest adult content website operator Mindgeek blocked all Utah IP addresses from all its websites, the Free Speech Coalition, a trade group representing the adult content industry (including Mindgeek), sued to get the law declared unconstitutional.
As noted when they filed the lawsuit, the arguments here are quite strong. The law clearly violates the 1st Amendment, among other things.
However, this week the court dismissed the lawsuit… but for exactly the reasons I discussed above regarding these ridiculous and dangerous bounty laws: since the Attorney General doesn’t enforce the law, but state-enabled vigilantes do, the Free Speech Coalition can’t sue the AG:
The Eleventh Amendment states: “The Judicial power of the United States shall not be construed to extend to any suit in law or equity, commenced or prosecuted against one of the United States by Citizens of another State, or by Citizens or Subjects of any Foreign State.” “[T]he Eleventh Amendment has been interpreted to bar a suit by a citizen against the citizen’s own State in Federal Court.” It also extends to “suit[s] against a state official in his or her official capacity” because such suits are “no different from a suit against the State itself.” However, under the Ex parte Young exception to Eleventh Amendment immunity, a plaintiff may bring suit to prospectively enjoin state officials from violating federal law.
To invoke this exception, the named state official “must have some connection with the enforcement” of the challenged statute. Otherwise, the suit “is merely making him a party as a representative of the state, and thereby attempting to make the state a party.” The named official is “not required to have a ‘special connection’ to the unconstitutional act or conduct. Rather, state officials must have a particular duty to ‘enforce’ the statute in question and a demonstrated willingness to exercise that duty.”
Plaintiffs’ claims against the Utah Attorney General do not fall within the Ex parte Young exception to the Eleventh Amendment. As Plaintiffs’ Complaint acknowledges, “the Act creates a private right of action by which Utah residents—and not state actors—are empowered to do the State’s bidding.” Plaintiffs point to the Attorney General’s general duties to “prosecute or defend all causes to which the state or any officer, board, or commission of the state in an official capacity is a party, and take charge, as attorney, of all civil legal matters in which the state is interested” and to “give [their] opinion in writing and without fee.” However, the mere general duty to enforce the law is not sufficient to invoke Ex parte Young
So… basically, because of the way the law is written, the Free Speech Coalition can’t effectively bring a pre-enforcement challenge, and tons of speech gets stifled. This is, of course, by design. The drafters of these laws (especially the first few) knew they’d get the benefit of chilled speech while making it far more difficult for those who actually understand the constitutional infirmities of these laws to get them tossed out.
In sum, Plaintiffs point only to the Attorney General’s generalized responsibilities to enforce the laws of the state and provide written opinions to the legislature. Such general enforcement powers are not sufficient to establish the connection needed to invoke the Ex parte Young exception to Eleventh Amendment immunity. Plaintiffs have failed to demonstrate that the Attorney General has a particular duty to enforce S.B. 287 or that he has demonstrated a willingness to exercise that duty. Therefore, Plaintiffs’ claims against the Utah Attorney General must be dismissed
The Free Speech Coalition had tried to get around these issues with other arguments, but they all failed. This is unfortunate. One argument was that being able to bring a pre-enforcement challenge would prevent a flood of potentially (likely) frivolous litigation brought by those seeking the bounty. But the court says “nope.”
Plaintiffs also suggest that “[r]elief from this Court would likewise redress Plaintiffs’ injuries by discouraging putative litigants from wasting time suing under a statute promising illusory awards of unrecoverable damages.” The Supreme Court rejected a similar argument in Whole Woman’s Health. There, the petitioners argued that enjoining the attorney general from enforcing a statute “would also automatically bind any private party who might try to bring . . . suit against them.” The Court noted that this theory suffered “from some obvious problems.” The Court explained that even “[s]upposing the attorney general did have some enforcement authority . . . , the petitioners have identified nothing that might allow a federal court to parlay that authority, or any defendant’s enforcement authority, into an injunction against any and all unnamed private persons who might seek to bring their own . . . suits.” Therefore, the potential to ward off future suits is not sufficient.
I’m assuming this decision will be appealed, but who knows. It’s possible that this challenge will now need to wait until someone actually tries to enforce it, which raises a lot of risks, as whoever is targeted may not be in the best position, or have the best strategy or lawyers for challenging the law.
And, of course, there’s all the damage done in the meantime to the companies trying to comply with a law as ridiculous as this, and to various Utah residents who are harmed by the impact of the law itself.
I understand the legal rationale for not allowing the lawsuit to be brought against government actors who are not the enforcers of the law, but this seems like a massive and problematic loophole that more and more states are going to use. So long as you create a system of enforcement that is just a public bounty, it basically wipes out the possibility of a pre-enforcement challenge?
It’s a loophole that becomes a real problem at a time when lots of legislatures are passing obviously unconstitutional laws in pursuit of a culture war, with the government itself passing off enforcement to the vigilante public.
It seems like there should be a simpler, more straightforward method of bringing a pre-enforcement challenge to these laws, even when there’s no direct state enforcer to name as the defendant. The government gets to put inanimate objects like cash on trial directly (as in civil asset forfeiture cases). Why shouldn’t the public get to put these unconstitutional laws on trial?
Filed Under: 11th amendment, 1st amendment, adult content, age verification, bounty, challenging a law, private right of action, utah
Companies: free speech coalition, mindgeek, pornhub
New ‘Bipartisan’ Federal Privacy Bill Tries To Build Consensus Support, And Basically Succeeds In Annoying Everyone
from the not-much-help dept
There are so, so, so many different discussions going on concerning internet platform regulations, and so many of the different ideas conflict with one another. But there is a general agreement that the US really, really needs a federal privacy law. Without it, we just bounce back and forth between (1) EU and other nations’ privacy laws effectively defining how the internet should work (in a way that has had tons of negative consequences, and little proven benefit), (2) various states pushing half-baked and equally problematic laws, leading to a patchwork of nonsense that’s impossible to comply with and… (3) a never ending string of data breaches and privacy scandals.
Given all of that, it seems like having a comprehensive federal US privacy framework would be a good thing. And it would be. If that privacy framework was sensible, carefully nuanced, and well drafted. Unfortunately, this is the United States, and we’re not always really good at sensible, carefully nuanced, and well drafted laws. Alas, this appears to be the case with the new discussion draft of the “bipartisan” American Data Privacy and Protection Act that was released on Friday.
Again, part of the problem with any of these attempts at regulating privacy is that most people have very different conceptions of what privacy even means. And, all too often, the conception that people have of privacy is simply that they don’t want anything “icky” to happen with their data, and that’s not a particularly useful guideline. I still think that the number one way for people to understand privacy is that it’s not a “thing” that needs to be “protected,” but rather a set of trade-offs, where the two most important elements are (1) does the user have transparency into what they’re getting for their data, and what data is being used for what purpose and (2) does the user have any control over that data. Also, any kind of privacy regime has to take into account the fact that speech rights and data privacy sometimes conflict, and when they do, speech almost always should win out. Otherwise, you end up with privacy laws being used to suppress speech. At the very least, there also needs to be some recognition of the difference between “personal data” and “stuff I observed about you.”
Anyway, that takes us to the bill that was just released. Rather than building such a comprehensive rethinking of privacy… it seems to just kinda mix and match pieces in a manner designed to try to appease lots of interests, but in the process creates a huge mess for everyone. The “headline” around the bill seems to be about the “compromise” on two of the most controversial bits of every federal privacy approach: is there federal preemption of state laws, and is there a private right of action?
Federal preemption means that this bill would wipe out many of the state laws attempting to regulate privacy. For fairly dumb reasons, this has become a mostly partisan issue. The argument against preemption is that it makes a federal privacy law a “floor” that states can improve on. The argument for preemption is basically “have you seen how fucking crazy most state privacy law attempts are, and can you imagine how any website would deal with dozens of disjointed and contradictory privacy laws in different states?” When looked at that way, the real answer should be that there is federal preemption, but that it comes along with a truly comprehensive federal bill, so that you don’t even need the states to fill in the gaps.
That is… not what this bill does. It does have a kind of preemption, but it is done in a confusing way with a number of loopholes — it lists 16 different unclear “preservations” that are not exempted, and then also something about FCC laws. And that kind of wipes away any of the good parts of preemption, because it means that states will still try to write their own laws, and twist themselves into knots to try to squeeze through the loopholes… and then we’ll all spend a decade or so dealing with pointless and distracting litigation to figure out how the courts interpret what Congress actually meant, rather than Congress just making it clear in the first place.
The other big issue, the private right of action, is also a double-edged sword. This is basically the question of whether or not individuals get to sue if they feel their privacy rights are violated, or if it needs to be the government bringing a case on behalf of the public. In theory, a private right of action can make sense, because if your rights are violated you should be able to sue. In practice, private rights of action — especially on unclear and badly drafted laws — are a mess, because they create an industry of ambulance chasing lawyers and plaintiffs filing what often feel like nuisance suits just to shake down companies for cash. Again, this can be fixed with clear and decisive drafting. And again… that’s not what happened here.
This is the problem that we come to with regulating privacy. It’s super important, but because very few people want to understand the nuances and tradeoffs and draft a law accordingly, we get these kinds of compromise bills. Bills where you can tell the drafters tried to craft a kind of Frankenstein bill out of various pieces, trying to keep enough people happy to allow the bill to pass, but in the process building a kind of monster that does no one any good.
So much of the bill seems based on failed paradigms and debunked concepts — like relying on privacy policies, which have long been a failed concept. That’s not to say there aren’t some decent ideas in the bill, because there are. For example, it has one line about how nothing in the act can be construed to limit the 1st Amendment rights of journalists (which is something we’ve seen other privacy laws fail at), but again the details are left vague — meaning litigation. It also does make some handwavy efforts to force companies to be more transparent about what they collect. But the whole bill is kind of a mess.
Just as an example, it excludes “de-identified data” from coverage, except, as we’ve noted repeatedly, there is no such thing as truly de-identified data. There are lots of other ideas that, at first pass, may sound good, like a “duty of loyalty” including “data minimization” to not “collect, process, or transfer” data “beyond what is reasonably necessary.” But again we’re back in a world where this is going to get litigated, over and over and over again, leading to massive uncertainty.
There are also a lot of fill-in-the-blank aspects to the law, putting tremendous weight on the FTC to figure out what all of this actually means, meaning that there will be further confusion and uncertainty.
In the end, we need a federal framework for privacy protection. This is a federal framework for privacy protection. That doesn’t mean it’s a good one. It seems to be the only one that could get bipartisan support, however. Sometimes “compromise” gets you to an uncomfortable middle ground that no one really likes but it’s the best possible result. But sometimes “compromise” just creates an even bigger mess. This seems to be one of the latter kinds of compromise.
Filed Under: adppa, american data privacy and protection act, congress, preemption, privacy, private right of action
Marjorie Taylor Greene Has A Bill To Burden Elon Musk’s Twitter With An Avalanche Of Frivolous Lawsuits
from the a-gift-for-you-elon dept
You may have heard that Republican politicians have been celebrating Elon Musk’s announced plans to purchase Twitter, in the belief that his extraordinarily confused understanding of free speech and content moderation will allow them to ramp up the kinds of nonsense, abuse, and harassment they can spread on Twitter. I’m still not convinced that will actually be the result, but, in the meantime, it does seem weird that Republicans are now trying to burden their new friend with an avalanche of frivolous lawsuits. But, that’s exactly what they’re doing.
Republican Representative Marjorie Taylor Greene — not exactly known for understanding, well, anything — has introduced a bill to completely abolish Section 230. Also not known for being much of an original thinker, Greene’s bill is simply the House companion to Senator Bill Hagerty’s bill that was mocked almost exactly a year ago.
Of course, stripping Section 230 still doesn’t actually accomplish what most Republicans seem to think it would. Since it would increase liability on websites massively, it would actually make them much more interested in removing content to avoid those lawsuits. Indeed, Greene’s own press release about the bill seems to tout increased lawsuits as a feature of the bill.
Creating a Private Right of Action:
- Consumers can address violations of the previous two provisions via civil action.
So, it seems that Greene’s excited move to abolish Section 230… is also a plan to burden Elon Musk with a ton of frivolous lawsuits. Also, Trump and his Truth Social.
It’s almost as if none of them have thought through any of this.
Filed Under: bill hagerty, elon musk, marjorie taylor greene, private right of action, section 230
Companies: twitter
Bad Section 230 Bills Come From Both Sides Of The Aisle: Schakowsky/Castor Bill Would Be A Disaster For The Open Internet
from the that's-not-how-any-of-this-works dept
It truly is stunning how every single bill that attempts to reform Section 230 appears to be written without any intention of ever understanding how the internet or content moderation works in actual practice. We’ve highlighted tons of Republican-led bills that try to force websites to host more content, not realizing (1) how unconstitutional that is and (2) how it would turn the internet into a giant garbage fire. On the Democratic side, the focus seems to be much more on forcing companies to take down constitutionally protected speech, which similarly (1) raises serious constitutional issues and (2) will lead to massive over-censorship of perfectly legal speech just to avoid liability.
The latest bill of the latter kind comes from Reps. Jan Schakowsky and Kathy Castor. Schakowsky has been saying for a while now that she was going to introduce this kind of bill to browbeat internet companies into being a lot more proactive in taking down speech she dislikes. The bill, called the Online Consumer Protection Act, has now been introduced, and it seems clear that it was written without ever conferring with anyone with experience running a website. It’s the kind of thing one writes when you’ve just come across the problem but don’t think it’s worth talking to anyone to understand how things really work. It’s also very much a “something must be done, this is something, we should do this” kind of bill, of the sort that shows up way too often these days.
The premise of the bill is that websites “don’t have accountability to consumers” for the content posted by users, and that they need to be forced to have more accountability. Of course, this leaves out the basic fact that if “consumers” are treated badly, they will go elsewhere. Every website already has some accountability to consumers: if it’s bad at serving them, it loses users, advertisers, sellers, buyers, whatever. But that’s apparently not good enough for the “we must do something” crowd.
At best the Online Consumer Protection Act will create a massive amount of silly busywork and paperwork for basically any website. At worst, it will create a liability deathtrap for many sites. In some ways it’s modeled after the idiotic policy we have regarding privacy policies. Almost exactly a decade ago we explained why the entire idea of a privacy policy is dumb. Various laws require websites to post privacy policies, which no one reads, in part because it would be impossible to read them all. The only way a site gets in trouble is by not following its privacy policy. Thus, the incentives are to craft a very broad privacy policy that gives sites leeway — meaning they have less incentive to actually create more stringent privacy protections.
The OCPA basically takes the same approach, but… for “content moderation” policies. It requires basically every website to post one:
Each social media platform or online marketplace shall establish, maintain, and make publicly available at all times and in a machine-readable format, terms of service in a manner that is clear, easily understood, and written in plain and concise language.
Those terms of service must include a bunch of pointless things, including a “consumer protection policy” that has to contain the following:
FOR SOCIAL MEDIA PLATFORMS.—For social media platforms, the consumer protection policy required by subsection (a) shall include—
> (A) a description of the content and behavior permitted or prohibited on its service both by the platform and by users; > (B) whether content may be blocked, removed, or modified, or if service to users may be terminated and the grounds upon which such actions will be taken; > (C) whether a person can request that content be blocked, removed, or modified, or that a user’s service be terminated, and how to make such a request; > (D) a description of how a user will be notified of and can respond to a request that his or her content be blocked, removed, or modified, or service be terminated, if such actions are taken; > (E) how a person can appeal a decision to block, remove, or modify content, allow content to remain, or terminate or not terminate service to a user, if such actions are taken; and > (F) any other topic the Commission deems appropriate.
It’s difficult to look at that list and not laugh, and wonder if whoever came up with it has ever been anywhere near a content moderation or trust & safety team, because that’s not how any of this works. Trust & safety is an ongoing effort that must constantly adjust and change with the times, and no possible policy can cover all cases. Could whoever wrote this bill listen to the excellent Radiolab episode about content moderation and think through how that process would have played out under this bill? If every time you change your policies to cover a new case you have to publicly update your already ridiculously complex policies, while the law requires that those same policies remain “clear, easily understood, and written in plain and concise language,” you’ve created an impossible demand.
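Just to drive the point home, here’s a purely hypothetical sketch of what a minimal “machine-readable” consumer protection policy covering items (A) through (E) might look like. The bill mandates the format but specifies no schema, so every field name and value here is a guess:

```python
import json

# Hypothetical structure: the bill demands a "machine-readable" policy
# but defines no schema, so this is one guess among many.
consumer_protection_policy = {
    "permitted_content": "Anything legal that isn't spam, harassment, or abuse.",   # (A)
    "removal_grounds": ["spam", "harassment", "abuse", "illegal content"],          # (B)
    "removal_requests": "Email trust-safety@example.com with a link to the post.",  # (C)
    "user_notification": "Users are emailed when their content is actioned.",       # (D)
    "appeals": "Reply to the notification email within 30 days.",                   # (E)
}

# Publish the "machine-readable" artifact.
print(json.dumps(consumer_protection_policy, indent=2))
```

And the moment a new edge case forces a policy change (which, for any real trust & safety team, is constantly), this artifact has to be regenerated and republished, while somehow staying “clear, easily understood, and written in plain and concise language.”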
Hell, someone should turn this around and push it back on Congress first. Hey, Congress, can you restate the US civil and criminal code such that it is “clear, easily understood, and written in plain and concise language?” How about we try that first before demanding that private companies be forced to do the same for their ever changing policies as well?
Honestly, requiring all of this to be in a policy is just begging angry Trumpists to sue websites claiming they didn’t live up to the promises made in their policies. We see those lawsuits today, but they’re kicked out of court under Section 230… but Schakowsky’s bill exempts this part from 230. It’s bizarre to see a Democratic bill that will lead to more lawsuits from pissed-off Trumpists who have been removed, but that’s what this bill will do.
Also, what “problem” does this bill actually solve? From the way the bill is framed, it seems like Schakowsky wants to make it easier for people to complain about content and to get the site to review it. But every social media company already does that. How does this help, other than put the sites at risk of liability for slipping up somewhere?
The bill then has separate requirements for “online marketplaces” which again suggest literally zero knowledge or experience with that space:
FOR ONLINE MARKETPLACES.—For online marketplaces, the consumer protection policy required by subsection (a) shall include—
> (A) a description of the products, product descriptions, and marketing material, allowed or disallowed on the marketplace; > (B) whether a product, product descriptions, and marketing material may be blocked, removed, or modified, or if service to a user may be terminated and the grounds upon which such actions will be taken; > (C) whether users will be notified of products that have been recalled or are dangerous, and how they will be notified; > (D) for users— > > > (i) whether a user can report suspected fraud, deception, dangerous products, or violations of the online marketplace’s terms of service, and how to make such report; > > (ii) whether a user who submitted a report will be notified of whether action was taken as a result of the report, the action that was taken and the reason why action was taken or not taken, and how the user will be notified; > > (iii) how to appeal the result of a report; and > > (iv) under what circumstances a user is entitled to refund, repair, or other remedy and the remedy to which the user may be entitled, how the user will be notified of such entitlement, and how the user may claim such remedy; and
> (E) for sellers— > > > (i) how sellers are notified of a report by a user or a violation of the terms of service or consumer protection policy; > > (ii) how to contest a report by a user; > > (iii) how a seller who is the subject of a report will be notified of what action will be or must be taken as a result of the report and the justification for such action; > > (iv) how to appeal a decision of the online marketplace to take an action in response to a user report or for a violation of the terms of service or consumer protection policy; and > > (v) the policy regarding refunds, repairs, replacements, or other remedies as a result of a user report or a violation of the terms of service or consumer protection policy.
Honestly, this reminds me a lot of Josh Hawley’s bills, in that it seems that both Hawley and Schakowsky want to appoint themselves product manager for the internet. All of the things listed above are the kinds of things that most companies do already because you need to do it that way. But it’s also the kind of thing that has evolved over time as new and different challenges arise, and locking the specifics into law does not take into account that very basic reality. It also doesn’t take into account that different companies might not fit into this exact paradigm, but under this bill will be required to act like they do. I can’t see how that’s at all helpful.
And, it gets worse. It will create a kind of politburo for how all internet websites must be run:
Not later than 180 days after the date of the enactment of this Act, the Commission shall conduct a study to determine the most effective method of communicating common consumer protection practices in short-form consumer disclosure statements or graphic icons that disclose the consumer protection and content moderation practices of social media platforms and online marketplaces. The Commission shall submit a report to the Committee on Energy and Commerce of the House of Representatives and the Committee on Commerce, Science, and Transportation of the Senate with the results of the study. The report shall also be made publicly available on the website of the Commission.
Yeah, because nothing works so well as having a government commission jump in and determine the “best” way to do things in a rapidly evolving market.
Also, um, if the government needs to create a commission to tell it what the best practices are, why is it regulating how companies have to act before that commission has even done its job?
There are a bunch more requirements in the bill, but all of them are nitty gritty details about how companies create and implement policies, things that companies are constantly changing, because the world (and the threats and attacks!) is constantly changing as well. This bill is written by people who seem to think that the internet, and the bad actors on it, are a static phenomenon. And that’s just wrong.
Also, there’s a ton of paperwork for nearly every company with a website, including idiotic and pointless requirements that are busywork, with the threat of legal liability attached! Fun!
FILING REQUIREMENTS.—Each social media platform or online marketplace that either has annual revenue in excess of $250,000 in the prior year or that has more than 10,000 monthly active users on average in the prior year, shall be required to submit to the Commission, on an annual basis, a filing that includes—
> (A) a detailed and granular description of each of the requirements in section 2 and this section; > (B) the name and contact information of the consumer protection officer required under subsection (b)(4); and > (C) a description of any material changes in the consumer protection program or the terms of service since the most recent prior disclosure to the Commission
(2) OFFICER CERTIFICATION.—For each entity that submits an annual filing under paragraph (1), the entity’s principal executive officer and the consumer protection officer required under subsection (b)(4), shall be required to certify in each such annual filing that—
> (A) the signing officer has reviewed the filing; > (B) based on such officer’s knowledge, the filing does not contain any untrue statement of a material fact or omit to state a material fact necessary to make the statements, in light of the circumstances under which such statements were made, not misleading; > (C) based on such officer’s knowledge, the filing fairly presents in all material respects the consumer protection practices of the social media platform or online marketplace; and > (D) the signing consumer protection officer— > > > (i) is responsible for establishing and maintaining safeguards and controls to protect consumers and administer the consumer protection program; and > > (ii) has provided all material conclusions about the effectiveness of such safeguards and controls.
So… uh, I need to hire a “consumer protection officer” for Techdirt now? And spend a few thousand dollars every year to have lawyers (and, most likely, a new bunch of “compliance consultants”) review the totally pointless statement I’ll need to sign each year? For what purpose?
The bill also makes sure that our courts are flooded with bogus claims from “wronged” individuals thanks to its private right of action. On top of everything else, it also exempts various state consumer protection laws from Section 230. That’s buried in the bill, but it’s a huge fucking deal. We’ve talked about this for years, as various state Attorneys General have been demanding it. But that’s because those state AGs have a very long history of abusing state “consumer protection” laws to effectively shake down companies. A decade ago we covered a definitive example of this, watching dozens of state attorneys general attack Topix, with no legal basis, because they didn’t like how the company moderated its site. They were blocked from doing anything serious because of Section 230.
Under this bill, that will change.
And we’ve seen just how dangerous that can be. Remember how Mississippi Attorney General Jim Hood demanded all sorts of information from Google, claiming that the company was responsible for anything bad found online? It later came out (via the Sony Pictures hack) that the entire episode was actually funded by the MPAA, with Hood’s legal demands written by the MPAA’s lawyers, as part of Hollywood’s explicit plan to saddle Google with extra legal costs.
Schakowsky’s bill would make that kind of corruption an everyday occurrence.
And, again, the big companies can handle this. They already do almost everything listed anyway. All this really does is saddle tons of tiny companies (earning more than $250k a year?!?) with ridiculous and overly burdensome compliance costs, opening them up not just to the FTC, but to any state attorney general, or any individual who feels wronged by the rules.
The definitions in the bill are so broad that it would cover a ton of websites. Under my reading, it’s possible that Techdirt itself qualifies as a “social media platform” because we have comments. This is yet another garbage bill from someone who appears to have no knowledge of, or experience with, how any of this works in practice, but who is quite sure that if everyone just did things the way she wanted, magically good stuff would happen. It’s ridiculous.
Filed Under: busywork, consumer protection, content moderation, ftc, jan schakowsky, kathy castor, paperwork, private right of action, section 230, terms of service
Can You Build A Privacy Law That Doesn't Create Privacy Trolls?
from the finding-the-balance dept
It’s safe to assume that most people recognize patent trolling as a problem, one that arises from our uniquely inefficient legal system. If we were given the opportunity to redesign patent litigation rules from scratch, why would anyone intentionally create a system so costly and uncertain that bad faith actors have a financial incentive to file weak lawsuits against cash-strapped startups that are better off paying extortive settlements than actually winning the case? And yet, some privacy advocates are openly calling for policymakers to replicate this disastrous state of affairs through a new regime of private enforcement for proposed state and federal comprehensive consumer privacy laws.
In two recent posts on the Greenhouse, Joe Jerome and Ernesto Falcon outlined the debate over how best to enforce privacy laws, arguing that private rights of action in some form are necessary to make companies take consumer privacy seriously, and claiming that industry’s preference for public agency enforcement reflects a desire to make privacy rules functionally toothless. While I disagree with their ultimate conclusion, I understand the skepticism of leaving enforcement up to chronically underfunded regulatory agencies with a subpar track record of curtailing privacy violations. I also agree with Jerome’s conclusion that a strict private vs. public enforcement dichotomy has created a “ridiculous impasse.” As an advocate for startups, I’m just the sort of industry representative positioned to answer his call for a more nuanced conversation around privacy enforcement:
If industry is serious about working toward clear privacy rules, business interests have two obligations: (1) they should offer up some new ideas to boost enforcement and address legitimate concerns about regulatory limitations and capture; and (2) they need to explain why private rights of action should be a non-starter in areas where businesses already are misbehaving.
I’ll address the second request first.
Simply put, comprehensive consumer privacy laws of the kind passed in recent years at the state level and currently under consideration at the federal level are uniquely susceptible to abusive private litigation. These laws feature all the hallmarks of legal regimes with trolling problems: damages wholly unrelated to any actual harm suffered, ambiguous liability standards that preclude early dismissal of weak claims, and high discovery costs that fall disproportionately on defendants.
Privacy harms are under-enforced in large part because, generally speaking, the U.S. legal system allows plaintiffs to recover only to the extent they’ve suffered economically quantifiable injuries. Privacy injuries, on the other hand, tend to be intangible. It may be morally reprehensible for a company to disseminate a user’s browsing history, but it’s unlikely to have caused the user any direct economic injury. Since people are usually somewhat economically rational, they don’t often file lawsuits that are likely to cost significantly more than they can recover. Consequently, the efficacy of any private enforcement regime for intangible privacy harms hinges on the availability of statutory damages. Indeed, the California Consumer Privacy Act and other proposed state laws offer statutory damages to plaintiffs: up to $750 per aggrieved user in data breach cases under CCPA, regardless of whether those users suffered any economic harm at all. As anyone familiar with copyright law knows, high statutory damages awards make litigation incredibly lucrative for unscrupulous plaintiffs. Since privacy laws tend to support class litigation, private actions for privacy harms have the potential to be incredibly high-stakes, incentivizing plaintiffs to bring whatever claims they can, regardless of how substantively weak they may be.
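To make the arithmetic concrete (my own illustrative numbers, not figures from any actual case): a breach affecting a class of just 100,000 users, at the CCPA’s $750-per-user statutory ceiling, creates up to $750 × 100,000 = $75 million in potential exposure before a single class member shows a dollar of actual economic harm.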
Of course, the potential for massive judgments alone doesn’t create a trolling problem. If a plaintiff bringing a meritless claim faces possible sanctions or high litigation costs, the expected value of a weak lawsuit decreases alongside its likelihood of success. But under the American legal system, each party pays its own litigation costs, and sanctions for bad behavior are vanishingly rare. As such, a defendant is better off paying a settlement at any value lower than the cost of defense, even if the lawsuit is effectively meritless.
Fortunately for plaintiffs’ attorneys, privacy litigation is likely to be incredibly expensive for defendants, making extortive settlements a lucrative business model. New comprehensive consumer privacy laws are, as the name suggests, expansive. In order to cover as many business models, technologies, and unforeseen situations as possible, these laws are typically written with very general liability standards. Under the CCPA, for example, a defendant company is liable for a data breach if it failed to implement “reasonable security procedures.” What sorts of security practices are reasonable? The law doesn’t provide a definition, so the only way a company can ever know if its security practices are legally reasonable is to have a judge or jury so declare at the end of a lawsuit. Thus, even if a company has the best possible security measures in place but nevertheless suffers a data breach, it will have to spend massive sums of money to get rid of a lawsuit alleging that its security was insufficient. Vexatious plaintiffs will quickly figure this out and file lawsuits any time a company suffers a data breach, without regard as to whether users suffered injury or whether the company did anything wrong.
The same pattern plays out again and again whenever there are high litigation costs and ambiguous liability standards that prevent early dismissal of bad-faith lawsuits. Stock-drop lawsuits, patent trolling, TCPA abuse, FCRA litigation, the list goes on and on. Given the expansive scope of comprehensive privacy laws, a private right of action in this context could create a trolling problem that dwarfs anything we’ve seen before. Patent trolling is limited in some sense by the number of patents available, whereas virtually any activity involving consumer information could spark a privacy trolling lawsuit. In the first year of GDPR, European data regulators received more than 200,000 complaints, nearly 65,000 of which involved data breaches. Under U.S. rules, enterprising plaintiffs’ attorneys would have every incentive to turn all of these into lawsuits, regardless of their merits.
There are all sorts of theoretical reasons why centralizing enforcement power in a well-functioning expert regulatory agency is the optimal way to effectively enforce privacy law. For one, privacy law involves highly technical issues beyond the expertise of most judges. Similarly, since privacy harms like data breaches typically impact a large, geographically dispersed population without direct access to the underlying facts, a central governmental regulator is better positioned to gather the information necessary to bring successful enforcement actions. But, I take the point that leaving it solely up to federal and state regulators alone is likely to result in some under-enforcement, even with the budget increases that virtually everyone supports.
Thankfully, designing an optimal privacy enforcement regime doesn’t come down to a binary choice between relying exclusively on underfunded (and potentially captured) central regulators or creating a free-for-all of dubious private litigation, which brings me to my response to Jerome’s first demand to industry.
To expand enforcement capacity beyond federal and state agencies while preventing litigation abuse, we propose a multi-layered scheme of public and private enforcement that will empower non-financially motivated private attorneys general to bring class action lawsuits, allow injured individuals to obtain redress from companies, and create a direct private right of action for monetary damages against companies that flout the law.
First, to supplement FTC and state attorney general enforcement, Congress should take a page from the GDPR playbook and allow privacy-focused non-profits to bring class actions seeking injunctive relief on behalf of aggrieved users. Limiting class actions to non-profits seeking injunctive relief forecloses the possibility of financially motivated lawsuits and nuisance-value settlements while increasing the number of entities with privacy expertise available to enforce the law. We would also support giving the FTC the authority to levy fines in the first instance, allowing for financial penalties against companies subject to injunctions arising from non-profit lawsuits.
Second, we recognize that some privacy harms are too individualized and small for class enforcement, so we propose allowing individual users to bring direct claims against companies for violations where injunctive relief / specific performance largely rectifies the harm. For example, most comprehensive privacy proposals give users the right to request deletion of their personal information. If a company refuses to comply with such a request, it’s unlikely that the FTC or a non-profit will bring a lawsuit to force the company to comply with a single ignored deletion request. Without a private right of action in this circumstance, a user will have no recourse unless and until the company ignores enough user deletion requests to draw regulator scrutiny. In this case, the appropriate remedy would be an order forcing the company to comply with the deletion request. Simply responding to a lawsuit to enforce a user deletion request would cost a company far more than following through with the request, so these types of lawsuits are unlikely to be prevalent.
Third, anyone injured by a violation of a previously issued injunction mandating compliance with a comprehensive federal consumer privacy law should have the right to bring a lawsuit—individually or as part of a class—for monetary damages. Basically, if a company violates the law, gets hit with an injunction, and continues to commit the same violations, aggrieved users should be able to sue. Critically, while injured users should be able to seek actual economic damages, we also propose allowing users to obtain monetary damages for intangible injuries if the injunction at issue establishes a damages schedule for future violations. Giving the FTC or non-profit litigants the ability to seek injunctions that specify a form of liquidated damages for future violations deters those violations and creates far more flexibility in appropriately compensating users for intangible privacy harms than a fixed statutory damages calculation would afford. The FTC could determine that a narrow, technical violation is too minor to warrant a high damages award for future violations and tailor the consent decree to reflect the seriousness of the offense and the desired level of deterrence.
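To make the damages-schedule mechanics concrete, here is a minimal sketch in Python of how liquidated damages under such an injunction might be computed. Everything in it is invented for illustration: the violation categories, the dollar figures, and the liquidated_damages helper are hypothetical, not anything specified in the proposal.

```python
# Hypothetical damages schedule attached to a consent decree.
# Categories and dollar amounts are invented for illustration; the
# proposal would leave these to the FTC (or a court) to tailor to
# the seriousness of each kind of violation.
DAMAGES_SCHEDULE = {
    "ignored_deletion_request": 500,  # per ignored request
    "unauthorized_data_sale": 2500,   # per affected user
    "technical_notice_defect": 50,    # minor violation, low deterrence needed
}

def liquidated_damages(violations: dict[str, int]) -> int:
    """Total damages owed for post-injunction violations.

    `violations` maps a violation category to its number of
    occurrences. Unlike a one-size-fits-all statutory award, the
    schedule scales the penalty to the seriousness of each category.
    """
    return sum(DAMAGES_SCHEDULE[kind] * count
               for kind, count in violations.items())

# Example: after the injunction, the company ignores 1,000 deletion
# requests and commits one minor notice defect.
print(liquidated_damages({"ignored_deletion_request": 1000,
                          "technical_notice_defect": 1}))  # -> 500050
```

The flexibility is the point: the same decree can price a narrow technical defect at a nominal amount while putting a serious price on repeat refusals to honor deletion requests, which a single fixed statutory figure can’t do.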
This system would satisfy the concerns of privacy advocates and industry alike. It ensures that enforcement power isn’t vested solely in an overwhelmed agency, allows individuals to hold companies directly accountable, and prevents abusive litigation by building checkpoints into the system that remove the financial incentive to bring meritless claims.
In the end, there’s a lot of common ground between consumer advocates and tech companies on what a federal comprehensive consumer privacy law should look like, but the window for reaching agreement is closing fast. To move beyond the stalemate over how such a law should be enforced, we need to learn from the lessons of other areas of the law and avoid creating a wave of bad faith litigation that would disproportionately hurt smaller platforms and cement the power of the larger companies.
Filed Under: frivolous lawsuits, ftc, innovation, privacy, privacy trolls, private right of action, startups, torts
Senator Cantwell Releases Another Federal Privacy Law That Won't Go Anywhere And Doesn't Deal With Actual Issues
from the this-isn't-going-to-work dept
A few weeks ago we wrote about a privacy bill in the House that seemed unlikely to go anywhere, and now we have the same thing from the Senate: a new privacy bill from Senator Maria Cantwell, called COPRA for “Consumer Online Privacy Rights Act.” For months it had been said that Cantwell was working on a bipartisan effort to create a federal privacy law, so the fact that this bill only has Democratic co-sponsors (Senators Schatz, Klobuchar and Markey) doesn’t bode well for its likelihood of success.
The basic features of the bill are giving more power and resources to the FTC to enforce “digital privacy” and allowing state Attorneys General to enforce the law as well. And… as with the House bill, it includes a private right of action. This is something that many privacy organizations do favor, but it still seems likely to be a disaster in practice. Letting anyone sue for privacy violations when no one actually agrees what “privacy” means is a recipe for a ton of nuisance lawsuits. If this bill actually had a chance, it could lead to the rise of “privacy trolls.” Even in the most well-meaning sense of trying to protect privacy, the fact that so many people disagree over what should actually be private, and over what privacy even means, would create quite a legal clusterfuck.
One thing this bill does that the House bill does not is pre-empt “directly conflicting state laws.” That’s important for any federal bill; otherwise, companies have to figure out how to comply with many different (and sometimes conflicting) standards and rules from state to state. At least this bill would prevent that. As Consumer Reports notes in its write-up of the bill, it is good to have more alternatives out there, and the bill does have some useful ideas in terms of protecting privacy:
“It’s a good bill to have out there,” Brookman says. “It gets a lot of things right—companies shouldn’t be collecting cross-site profiles about me and sharing that information—and it’s pretty aggressive on those things.”
The proposed law expands the definition of sensitive information to include biometric details such as facial recognition data, as well as geolocation data that is collected as individuals go about their day-to-day lives.
But that “aggressiveness” is also what limits the bill’s chances. The bill seems more performative than anything else: it recognizes that people are (reasonably) worried about their privacy online, but it does little to deal with the actual issues regarding privacy.
Filed Under: copra, maria cantwell, privacy, private right of action
The Race Is On To Create A Federal Online Privacy Law: First Entry From Reps. Eshoo & Lofgren
from the lots-of-thought,-but-little-chance dept
There’s a race on to have Congress introduce a comprehensive federal privacy law. As you may (or may not?) know, the US really doesn’t have a law protecting our privacy. To date, any privacy protections have been a mixture of other laws, from the defanged 4th Amendment protecting (in theory more than in reality) against government intrusion into our private lives, to the FTC’s consumer protection mandates. However, many people recognize that this probably isn’t doing enough to protect privacy in this age — and with the EU taking the lead with the GDPR, it’s become clear that the US needs to put at least something in place. So far, Congress has failed to come up with much, and there’s a bit of a ticking time bomb in the form of California’s hugely problematic CCPA, which is set to go into effect on January 1st despite a long list of problems.
So much of the discussion has been around whether or not a new federal law will come into play that pre-empts the various state attempts to create their own privacy laws. Reps. Anna Eshoo and Zoe Lofgren have now announced their entry into the discussion with their Online Privacy Act. It is quite long and detailed, coming in at 132 pages, and I recommend reading the whole thing. They’ve also created a one-page summary of the bill.
The bill is ambitious, detailed, and thoughtful… but it also has some problems and is not likely to become law. There’s a lot in the bill, but at its core it would create a brand-new federal agency, staffed with 1,600 employees, to “enforce users’ privacy rights.” Along those lines, it establishes what those rights are — with much of it pulling from concepts currently found in the GDPR (i.e., rights to access, correct, delete, and download information companies hold about you). There are some opt-in requirements for using your data for things like machine learning (seemingly a response to the kerfuffle over IBM using Flickr images to train facial recognition AI).
The law would also put a bunch of obligations on companies regarding data minimization and force them to be more upfront about why they need particular data. It would also limit the sale or transfer of personal information. And it criminalizes “doxxing,” which it defines as disclosing “personal information with intent to cause harm.” If this became law, that section might run into some 1st Amendment problems.
Part of the “thoughtfulness” of the bill is that Eshoo and Lofgren have clearly heard some of the concerns that were laid out about the GDPR and other approaches to privacy. It includes an exemption for small businesses, along with a “ramp up” phase for companies that grow out of the small business realm. I’m always a bit concerned about small business exemptions, because they lead to weird incentives and not always great outcomes. As a general matter, if a law has to be written in a manner that exempts certain classes of companies, that tends to highlight problems with the overall law itself, though there are some exceptions to that rule.
Importantly, the bill also makes clear that it should have no impact on journalism, and that acts of journalism (reporting on people) should never be seen as violating the law. That could lead to some conflicting situations within the bill, but hopefully the blanket exemption for journalism would protect journalistic activity.
That said, there are still problems with the bill. The biggest one is that it does not appear to pre-empt state laws, which is kind of the whole reason for introducing a federal law in the first place. I know that some privacy activists have pushed back against state pre-emption, but that omission by itself makes the bill somewhat useless, because California’s law and other state privacy laws would more or less wipe this law off the books in terms of effectiveness. I understand the argument that letting states craft their own privacy laws encourages more experimentation and thoughtfulness, but it makes little sense on an internet that crosses all borders. Complying with every state’s privacy law is going to be a huge mess — and therefore it seems like a federal law must include pre-emption of state laws to be workable.
The bill also includes a private right of action, which many see as problematic — as it’s going to enable the rise of what are, in effect, privacy trolls. Again, there are reasonable concerns that if enforcement is left solely to the government, it will be lax or will suffer from regulatory capture, but leaving open a broad private right of action could have significant problematic consequences. The bill also seems clearly designed to set up certain non-profits to file a bunch of class action privacy lawsuits:
NONPROFIT COLLECTIVE REPRESENTATION.—An individual shall have the right to appoint a nonprofit body, organization, or association which has been properly constituted in accordance with the law, has statutory objectives which are in the public interest, and is active in the field of the protection of individual rights and freedoms with regard to the protection of their personal data to lodge the complaint on his or her behalf, to exercise the rights referred to in this Act on his or her behalf.
I worry a bit about the incentive structure there as well. I certainly have faith that groups like EFF would use this particular power wisely and in pursuit of actually protecting our privacy, but there are a number of non-profits out there that would likely take this to ridiculous extremes and immediately go after lots of companies for potentially dubious reasons.
Most reports on this acknowledge that this bill is unlikely to become law. It does not currently have bipartisan support, and the creation of an entirely new government agency, the lack of state pre-emption, and the private right of action have been seen as non-starters for many.
All that said, we’re likely to see a bunch of privacy bills showing up in Congress soon, so it’s worth exploring the details of this one. And, of course, it should be noted that both Lofgren and Eshoo represent parts of Silicon Valley, which might make you assume that the bill is “friendly” to tech companies. Look through the details, though, and you’ll see that would be a mistake. While I’m sure some will criticize the bill for not going far enough, it would create a pretty massive overhaul in how online privacy is handled in the US today and would, in effect, create an equivalent of the GDPR. That might still “benefit” large companies by making it more difficult for smaller firms and new entrants to compete (even with the small business exemption), but this bill doesn’t do any favors for internet companies.
I do still worry that most of our attempts to regulate privacy fail because we often misunderstand what privacy means, and I do worry that the approach in this bill, as with the GDPR and the CCPA, suggests a static, rather than dynamic internet world, in which the focus is on “limiting” things, rather than recognizing how they might be better enabled by putting more control in the hands of the end users. So much of the structure of this and other bills seems based on the idea that there are central entities “controlling” our data — which may be the case today, but need not necessarily be the case in the future.
Filed Under: anna eshoo, competition, doxxing, gdpr, online privacy act, privacy, private right of action, state pre-emption, states, zoe lofgren
Intellectual Property Maximalist Lobbying Group Proposes A New Trademark SOPA (Plus Girl Scout Badges…)
from the because-of-course-they-would dept
The Intellectual Property Owners Association (IPO) — which is a sort of “super group” of companies looking to perpetually ratchet up intellectual property laws — had a brief note on its front page on Monday pushing to bring back SOPA, with the promise that it’s for trademark law only (the story may disappear from the front page, and apparently “archives” are for “members only”):
On September 7 the IPO Board of Directors adopted a resolution supporting in principle legislation to attack online trademark counterfeiting. Such legislation would enable brand owners to file suit against domestic websites selling or offering for sale or distributing counterfeit products, and also as to “foreign counterfeiting websites,” in order to obtain a court order that would require (a) that financial service providers cease processing payment transactions to the defendant(s) and the foreign counterfeiting website, at least in the United States, (b) that internet advertising service providers cease providing such services to the defendants and the foreign counterfeiting website, at least in the United States, and (c) any other injunctive relief the court may determine as appropriate.
The legislation should focus on trademark counterfeiting only; provide for nationwide personal jurisdiction and venue over any foreign counterfeiting website, so long as such is consistent with due process; and permit e-mail service of process to a domestic or foreign counterfeiting website without requiring leave of court based on the e-mail address listed in domain registration for the administrative or ownership contact and to the e-mail address found on the website, if no real or actual address is available for providing notice to the potential defendant.
All of that sounds nearly identical to parts of SOPA — except the IPO seems to think that if it just focuses on trademarks, it will be able to sneak the idea through without a SOPA-like eruption from the public. But the basics here are the same. Giving companies a private right of action to block sites (both domestic and foreign) deemed “counterfeiting websites” is a dangerous plan. Note that, in the past, big brands have regularly branded perfectly legitimate resellers as counterfeiters, and have attacked and sued companies like eBay for not magically stopping people from selling counterfeit goods.
Of course, part of the problem is that these companies regularly exaggerate the “losses” due to trademark infringement and counterfeiting. The numbers are stretched beyond belief. Meanwhile, multiple studies that have looked at the actual size of the problem have found it to be quite small. In fact, several have found that most people buying counterfeit goods aren’t being fooled: they know they’re buying counterfeits, and are doing so only because they can’t afford the real version. And, the studies have noted, many of those same people later buy the real version when they can afford it. In other words, counterfeit purchases are often aspirational rather than substitutive. They’re not doing any harm.
And, of course, the real threat here is that if the IPO can sneak this kind of legislation through, it won’t be long until someone tries to slip in language extending the law to copyrights as well. It’ll be done quietly, perhaps with some talk about “harmonizing” the different regulations related to trademark and copyright law, in the hope that no one notices that, basically, the original version of SOPA is now the law.
The same IPO notice also talks up its new “Girl Scout” patch, which we had discussed back in March. This was a patch designed by the IPO, but with the support of the US Patent and Trademark Office, so you know it’s basically preaching maximalism:
Cookie selling teaches Girl Scouts valuable business practices. Now they have the opportunity to learn a few more in the form of IP. IPO Education Foundation recently partnered with the Girl Scout Council of the Nation’s Capital and the USPTO to develop the IP patch. The IP patch program teaches girls about the value of IP and the process for obtaining different rights. You can help by telling your friends about the patch or volunteering to talk to a troop about what you do. Click here for more information.
It’s the same basic story we noted back in March. The plan is all about why intellectual property is valuable — not a balanced view of where it creates more harm than good, and where other alternatives might be better. It’s especially troubling that the patch is focused on girls entering science, technology, engineering, and math studies, since those are the areas where over-aggressive use of intellectual property has been most damaging, locking up knowledge rather than increasing the kind of knowledge sharing that drives innovation forward.
Filed Under: counterfeiting, girl scouts, intellectual property, merit badge, private right of action, rogue sites, sopa, trademark