The Fake Government ‘Efficiency’ Agency Known As DOGE Already Faces Multiple Lawsuits
from the this-is-why-we-can't-have-nice-things dept
One of the many new executive orders signed by President Donald Trump on Monday was the long-hyped creation of the Department of Government Efficiency (DOGE). DOGE is portrayed as a sort of government efficiency and innovation office, but it’s primarily flimsy cover for the extraction class as they eliminate corporate oversight, consumer protection, labor rights, and the social safety net.
The program was supposed to be spearheaded by two of the country’s biggest bloviating weirdos, Elon Musk and Vivek Ramaswamy. Ramaswamy is already leaving the agency because he purportedly wants to take a shot at becoming the Governor of Ohio (though other reports suggest he somehow managed to annoy most of the people at a fake government agency already filled with annoying people).
DOGE has other issues already as well. While it’s not a real government agency, it does appear to qualify as a federal advisory committee under FACA (the Federal Advisory Committee Act). And federal advisory committees do have documentation, transparency, and other rules they have to follow, including producing meeting minutes, filing a charter with Congress, having “fairly balanced” ideological representation, and maintaining some semblance of public open access.
Not surprisingly, Musk’s fake government efficiency agency has allegedly done none of those things, prompting several new lawsuits that may or may not result in any reform of note.
One of the lawsuits was filed by the American Public Health Association, the American Federation of Teachers, Minority Veterans of America, VoteVets Action Fund, Center for Auto Safety, and CREW. It calls DOGE a “shadow operation led by unelected billionaires who stand to reap huge financial rewards from this influence and access.”
“Plaintiffs and those they represent believe that the government should work for the American people and be transparent, efficient, and effective – and that the government can and should do better,” the complaint states.
Another lawsuit, filed by Public Citizen in conjunction with the American Federation of Government Employees, also alleges the fake government agency is playing fast and loose with government rules.
Yet another lawsuit, filed by National Security Counselors, also points out how the setup of DOGE seems wholly disconnected from how the government is supposed to work.
It’s clear DOGE supporters (including lots of corporate backed deregulatory “innovation” think tanks) want to have their cake and eat it too. They want DOGE to be respected as a serious thing, while simultaneously having to do none of the serious things adults have to do to be taken seriously in the world of government policy:
“Sam Hammond, senior economist at the Foundation for American Innovation, who has been supportive of DOGE’s efforts, said the initiative will primarily implement ideas within the executive branch and White House, which he said would exempt it from FACA requirements. If Trump does treat DOGE as a FACA, then it should follow the required reporting rules. But for now, he said, “DOGE isn’t a federal advisory committee because DOGE doesn’t really exist. DOGE is a branding exercise, a shorthand for Trump’s government reform efforts.”
When announced, the press went out of its way to frame DOGE as a very serious thing. Of course it’s mostly a vehicle for access (read: corruption). And a way to put a lazy shine on what will be a brutal and very harmful dismantling of federal consumer protection, labor rights, environmental law, and social safety programs, which will result in very real suffering at unprecedented scale.
Musk himself admits this suffering is coming, but hopes he can bedazzle a lazy press with enough bullshit that they softsell and downplay the broad, percussive looming harms to the American public. Meanwhile fake government official Musk is already walking back claims that his fake government efficiency agency would drive some two trillion in overall government savings.
You’re supposed to ignore the fact that this is because the stuff usually most in need of cutting — fat and purposeless corporate subsidies (see: the Starlink kerfuffle) and the bottomless well of military and intelligence overbilling — is precisely the sort of stuff billionaire extraction class parasites enjoy glomming on to. The stuff deemed “inefficient” is the stuff that doesn’t benefit them personally.
Filed Under: department of government efficiency, doge, efficiency, elon musk, government, lawsuits, subsidies, transparency
Elon Musk Doesn’t Like Some Headlines. But That Doesn’t Make Them Defamatory
from the what-free-speech? dept
Elon Musk is once again threatening to sue over speech he dislikes — this time, over factual headlines about a deadly explosion involving a Tesla Cybertruck. But not liking how a story is framed doesn’t make it defamatory. For a statement to be defamatory, it must be false, damaging, and (for a public figure) published with “actual malice,” meaning the publisher knew it was false or had reckless disregard for whether it was true. None of that applies here.
An unflattering portrayal, or a factual framing that some feel is “misleading,” is not defamation.
Musk’s legal threats over these headlines are not just baseless, but dangerous. They show a disregard for free speech and an attempt to intimidate the press. And unfortunately, he’s not alone in pushing this censorial theory.
Back in 2020, you may recall that we criticized Larry Lessig for trying to make what he called “Clickbait Defamation” into a thing. His argument was that a fully truthful headline that is framed to imply something he felt was unfair should be considered defamatory. That, of course, is not how defamation actually works. Lessig eventually dropped his lawsuit after the NY Times changed the headline he disliked, but it appears that others are now picking up on this theory, with Elon Musk leading the charge.
As you have likely heard, yesterday, a US Army special forces operations sergeant allegedly drove a rented Tesla Cybertruck with explosives/fireworks in the bed, and parked it in front of the Trump Hotel in Las Vegas, where the explosives were then detonated, killing the driver and injuring a few people nearby.
As with many breaking news stories, the details of the story were not known at first, with the salient facts at the beginning being (1) Trump Hotel in Vegas, (2) Tesla Cybertruck, and (3) explosion.
Given that there have been multiple stories in the past year of Cybertrucks catching fire, including one from just a few days ago, many people initially wondered if this was just another case of that happening. Others, noting the close relationship between Donald Trump and Elon Musk, suggested that the imagery of a burning Cybertruck in front of the Trump Hotel worked as a metaphor, but might also point to something more deliberate. Investigators are still working out the details.
But, either way, including Cybertruck and explosion in a headline is totally factual. Yet, Elon Musk is suggesting that he might sue over such headlines:
But, for there to be actual defamation, there needs to be a false statement of fact (and, likely, published knowing or deeply suspecting it was false). Nothing in the headline: “Tesla Cybertruck explosion in front of Trump hotel in Las Vegas leaves 1 dead, 7 injured” is false. It’s all factual.
Senator Mike Lee, who once presented himself as a supporter of free speech and the First Amendment, also jumped into the fray, suggesting that NYT v. Sullivan’s “actual malice” standard should fall, allowing Musk to sue over similar headlines:
I mean, first of all, Elon Musk isn’t even mentioned, so it’s difficult to say that this would be defamation against Elon. Second, that was the original AP headline, right after the event occurred, when that was basically all that was known: a Cybertruck did, indeed, catch fire outside of the Trump Hotel. At that moment it wasn’t even known that the bed was full of explosive materials.
But also, everything in there is factual.
And, yes, you can argue that the eventual framing is misleading or even unfair. But that’s how free speech works. There are tons of headlines that people feel are misleading or unfair. I call them out, and I also get accused of misleading headlines. That’s how free speech works. People sometimes don’t like the way other people frame things or title things.
But none of that is defamatory.
Indeed, if Mike Lee is so concerned about the use of the passive voice in headlines, when will we see him claiming that the traditional passive voice of “police-involved shooting” is defamatory as well?
Some could argue (and a few people did yell at me on Bluesky about this!) that coverage of other incidents involving cars, including the attack in New Orleans the same day, didn’t focus on the model of the car involved (a Ford F-150 Lightning, if you’re wondering).
But that’s understandable. Again, before anyone knew the details of what happened in Vegas, all that was known were the three simple facts that were reported in those headlines. Furthermore, the make and model of the car actually was perfectly newsworthy in this story because of Musk’s close association with Trump, which certainly suggested there may have been a connection worth mentioning.
That wasn’t true in the New Orleans case (though certainly some news stories talked about the Ford truck and how heavy it was, likely contributing to the damage caused).
Either way, this is yet another case where the self-described “free speech absolutist” Elon Musk seems to be threatening legal action over speech he dislikes, which isn’t even in the same zip code as defamation.
Whether or not he actually sues, it suggests an intimidation stance: if you don’t cover stories in a way that makes me look good, I may sue you and drag you into a costly and resource-intensive lawsuit, no matter how preposterous the claims may be.
Actual free speech means that public figures, like Elon Musk, need to have a thicker skin. They need to recognize that not everyone will publish things that are flattering, and sometimes you just have to suck it up and take it. Or use the fact that you have one of the world’s largest megaphones to… use your own voice to respond. Rather than threatening legal recourse. That’s how free speech works.
This is also why we need stronger anti-SLAPP laws in every state and a federal anti-SLAPP law. Because we know that the rich and powerful have no problem abusing the judicial system to burden the media with vexatious SLAPP suits as a method of intimidation.
Filed Under: 1st amendment, clickbait defamation, defamation, donald trump, elon musk, framing, free speech, las vegas, lawsuits, mike lee
Companies: tesla
OCLC Says ‘What Is Known Must Be Shared,’ But Is Suing Anna’s Archive For Sharing Knowledge
from the live-up-to-your-principles dept
Back in March, Walled Culture wrote about the terrible job that academic publishers are doing in terms of creating backups of the articles they publish. We also mentioned there two large-scale archives that are trying to help, Sci-Hub and Anna’s Archive. Legal action by publishers against the former seems to have led to a halt to new items being added to its collection. This has resulted in the rise of Anna’s Archive as the main large-scale archive of academic papers and other material. It has also led to a lawsuit against the site, as TorrentFreak reports. The legal move is by the non-profit OCLC, which was originally the Ohio College Library Center, then became the Online Computer Library Center, and is now simply OCLC. It describes itself as follows:
OCLC is a global library organization that provides shared technology services, original research, and community programs for its membership and the library community at large. We are librarians, technologists, researchers, pioneers, leaders, and learners. With thousands of library members in more than 100 countries, we come together as OCLC to make information more accessible and more useful.
OCLC and thousands of its member libraries cooperatively produce and maintain WorldCat, “the world’s most comprehensive database of information about library collections”. The OCLC says:
WorldCat helps you share what makes your library great to make all libraries better.
As these quotations emphasize, sharing is central to what OCLC does, and this is encapsulated by OCLC’s slogan: “Because what is known must be shared”. Despite that laudable commitment to sharing, it is suing Anna’s Archive for downloading the WorldCat database and sharing it. This seems odd. OCLC is a non-profit organization, and one that believes “what is known must be shared”. Providing the WorldCat data on Anna’s Archive helps what is known to be shared, and therefore aligns with the OCLC’s goals.
The people at OCLC clearly want to do good by making “information more accessible and more useful”, but are being hampered by a misguided belief that limiting access to its WorldCat database is more important than promoting the widest access to knowledge. According to TorrentFreak, OCLC claims that it spent $5 million, including the salaries of 34 full-time employees, in a forlorn attempt to stop Anna’s Archive from downloading the database information. It could have avoided these costs by simply giving the database to Anna’s Archive – or to anyone else – so that people can help the OCLC in its important mission to share what is known.
The current lawsuit will probably be the first of many, just as happened with Sci-Hub. How Anna’s Archive will respond is not yet clear. But an interesting post on the latter site points out that the continuing rapid fall in storage costs means that in a few years’ time it will be possible to mirror the entirety of even expanded versions of Anna’s Archive for a few thousand dollars. When that happens, there won’t be one or two backups of the site – and hence most human knowledge – but thousands, possibly millions of copies:
We have a critical window of about 5-10 years during which it’s still fairly expensive to operate a shadow library and create many mirrors around the world, and during which access has not been completely shut down yet.
If we can bridge this window, then we’ll indeed have preserved humanity’s knowledge and culture in perpetuity.
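To get a feel for why that window might close in the archive’s favor, here is a rough back-of-envelope sketch in Python. Every number in it is an illustrative assumption (the archive size, current bulk-drive prices, and the rate at which prices fall), not a figure from Anna’s Archive or from the post quoted above:

```python
# Rough cost of mirroring a large shadow library on consumer hard drives.
# All three constants are illustrative assumptions, not real figures.
ARCHIVE_SIZE_TB = 1_000        # assume the archive is on the order of 1 PB
COST_PER_TB_USD = 15.0         # assume ~$15/TB for bulk hard drives today
ANNUAL_PRICE_DECLINE = 0.20    # assume drive prices fall ~20% per year

for years in (0, 5, 10):
    cost = ARCHIVE_SIZE_TB * COST_PER_TB_USD * (1 - ANNUAL_PRICE_DECLINE) ** years
    print(f"in {years:2d} years: ~${cost:,.0f} in raw storage")
```

Under those assumptions, a full mirror drops from roughly $15,000 in raw storage today to under $2,000 in a decade, which is exactly the dynamic the post is describing.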
If the OCLC truly believes “what is known must be shared” it should celebrate the fact that Anna’s Archive could soon make humanity’s knowledge universally and freely available – not try to fight it with costly and pointless legal actions.
Featured image by Anna’s Archive via Archive.org. Originally published to Walled Culture.
Filed Under: academic publishing, academic research, archives, copyright, knowledge, lawsuits, sharing, worldcat
Companies: anna's archive, oclc, sci-hub
Any Privacy Law Is Going To Require Some Compromise: Is APRA The Right Set Of Tradeoffs?
from the a-federal-law-that-doesn't-totally-suck?!? dept
Privacy issues have been at the root of so many concerns about the internet, yet so many attempts to regulate privacy have been a total mess. There’s now a more thoughtful attempt to regulate privacy in the US that is (perhaps surprisingly!) not terrible.
For a while now, we’ve talked about how many of the claims from politicians and the media about the supposed (and often exaggerated, but not wholly fictitious) concerns about the internet are really the kinds of concerns that could be dealt with by a comprehensive privacy bill that actually did the right things.
Concerns about TikTok, questionably targeted advertising, the sketchy selling of your driving records, and more… are really all issues related to data privacy. It’s something we’ve talked about for a while, but most efforts have been a mess, even as the issue has become more and more important.
Part of the problem is that we’re bad at regulating privacy because most people don’t understand privacy. I’ve said this multiple times in the past, but the instinct of many is that privacy should be regulated as if our data were our “property.” But that only leads to bad results. When we treat data as property, we create new artificial property rights laws, a la copyright. And if you’re reading Techdirt, you should already understand what kind of awful mess that can create.
Artificial property rights are a problematic approach to just about anything, and (most seriously) frequently interfere with free speech rights and create all sorts of downstream problems. We’ve already seen this in the EU with the GDPR, which has many good characteristics, but also has created some real speech problems, while also making sure that only the biggest companies can exist, which isn’t a result anyone should want.
Over the last few weeks, there’s been a fair bit of buzz about APRA, the American Privacy Rights Act. It was created after long, bipartisan, bicameral negotiations between two elected officials with very different views on privacy regulation: Senator Maria Cantwell and Rep. Cathy McMorris Rodgers. The two had fought in the past on approaches to privacy laws, yet they were able to come to an agreement on this one.
The bill is massive, which is part of the reason why we’ve been slow to write about it. I wanted to be able to read the whole thing and understand some of the nuances (and also to explore a lot of the commentary on it). If you want a shorter summary, the best, most comprehensive one I’ve seen came from Perla Khattar at Tech Policy Press, who broke down the key parts of the bill.
The key part of the bill is that it takes a “data minimization” approach. Covered companies need to make sure that the data they’re collecting is “necessary” and “proportionate” to what the service is providing. “Covered” here means organizations making over $40 million a year, processing data on more than 200,000 consumers, and transferring covered data to third parties. If it’s determined that companies are collecting and/or sharing too much, they could face serious penalties.
Very big social media companies, dubbed “high impact social media companies,” that have over $3 billion in global revenue and 300 million or more global monthly active users, have additional rules.
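Purely as an illustration of how those thresholds stack together, here’s a minimal sketch in Python. The names and structure are mine, and APRA’s actual statutory definitions are far more detailed than this summary:

```python
from dataclasses import dataclass

@dataclass
class Org:
    annual_revenue_usd: float
    consumers_with_data: int
    transfers_covered_data: bool
    global_revenue_usd: float
    monthly_active_users: int

def is_covered(org: Org) -> bool:
    # Per the summary above: over $40M/year in revenue, data on more than
    # 200,000 consumers, and transfers of covered data to third parties.
    return (org.annual_revenue_usd > 40_000_000
            and org.consumers_with_data > 200_000
            and org.transfers_covered_data)

def is_high_impact_social_media(org: Org) -> bool:
    # Over $3B in global revenue and 300M+ global monthly active users.
    return (org.global_revenue_usd > 3_000_000_000
            and org.monthly_active_users >= 300_000_000)
```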
I also greatly appreciate that the law explicitly calls out data brokers (often left out of other privacy bills, even though data brokers are often the real privacy problem) and requires them to take clear steps to be more transparent to users. The law also requires data minimization for those brokers, while prohibiting certain egregious activities.
I always have some concerns about laws that have size thresholds. They create the risk of game playing and weird incentives. But compared to most bills in this area that I’ve seen, the thresholds in this one seem… mostly okay? Often thresholds are set ridiculously low, covering small companies too readily in a way that creates massive compliance costs too early, or they target only the very largest companies. This bill takes a more middle ground approach.
There are also a bunch of rules to make sure companies are doing more to protect data security, following best practices that are reasonable based on the size of the company. I’m always a little hesitant on things like that, because whether or not a company took reasonable steps is often judged in hindsight, after some awful breach occurs and we realize how poorly someone actually secured their data, even if upfront it appeared secure. How this plays out in practice will matter.
The law is not perfect, but I’m actually coming around to the belief that it may be the best we’re going to get and has many good provisions. I know that many activist groups, including those I normally agree with, don’t like the bill for specific reasons, but I’m going to disagree with them on those reasons. We can look at EFF’s opposition as a representative example.
EFF does not like the state pre-emption provisions, and also wishes that the private right of action (allowing individuals to sue) were stronger. I actually disagree on both points, though I think it’s important to explain why. These were two big sticking points over previous bills, but I think they were sticking points for a very good reason.
On state pre-emption: many people (and states!) want to be able to pass stricter privacy laws, and many activists support that. However, I think the only way a comprehensive federal privacy bill makes sense is if it pre-empts state privacy laws. Otherwise, companies have to comply with 50+ different state privacy laws, some of which are going to be (or already are) absolutely nutty. This would, again, play right into the hands of the biggest companies, that can afford to craft different policies for different states, or that can figure out ways to craft policies that comply with every state. But it would be deathly for many smaller companies.
Expecting state politicians to get this right is a big ask, given just how messed up attempts to regulate privacy have been over the last few years. Hell, just look at California, where we basically let some super rich dude with no experience in privacy law force the state into writing a truly ridiculously messed up privacy law (then make it worse before anything was even tested) and finally… give that same rich dude control over the enforcement of the law. That’s… not good.
It seems like the only workable way to do this without doing real harm to smaller companies is to have the federal government step in and say “here is the standard across the board.” I have seen some state officials upset about this, but the law still leaves states with the power to enforce that national standard.
That said, I’m still a bit wary about state enforcement. State AGs (in a bipartisan manner) have quite a history of doing enforcement actions for political purposes more than any legitimate reason. I do fear APRA giving state AGs another weapon to use disproportionately against organizations they simply dislike or have political disagreements with. We’ve seen it happen in other contexts, and we should be wary of it here.
As for the private right of action, again, I understand where folks like the EFF would like to see a broader private right of action. But we also know how this tends to work out in practice. Because of the ways in which attempts to stifle speech can be twisted and presented as “privacy rights” claims, we should be wary about handing too broad a tool for people to use, as we’ll start to see all sorts of vexatious lawsuits, claiming privacy rights, when they’re really an attempt to suppress information, or to simply attack companies someone doesn’t like.
I think APRA sets an appropriate balance in that it doesn’t do away with the private right of action entirely, but does limit how broadly it can be used. Specifically, it limits which parts of the law are covered by the private right of action in a manner that hopefully would avoid the kind of egregious, vexatious litigation that I’ve feared under other laws.
Beyond the states and the private right of action, the bill also sets up the FTC to be able to enforce the law, which will piss off some, but is probably better than just allowing states and private actors to be the enforcers.
I do have some concerns about some of the definitions in the bill being a bit vague and open to problematic interpretations and abuse on the enforcement side, but hopefully that can be clarified before this becomes law.
In the end, the APRA is certainly not perfect, but it seems like one of the better attempts I’ve seen to date at a comprehensive federal privacy bill and is at least a productive attempt at getting such a law on the books.
The bill does seem to be on something of a fast track, though there remain some points of contention. But I’m hopeful that, given the starting point of the bill, maybe it can reach a consensus that no one particularly likes, but which actually gets the US to finally level up on basic privacy protections.
Regulating privacy is inherently difficult, as noted. In an ideal world, we wouldn’t need regulations because we’d have services where our data is separate from the services we use (as envisioned in the protocols not platforms world) and thus more in our own control. But seeing as we still have plenty of platforms out there, the approach presented in APRA seems like a surprisingly good start.
That said, seeing how this kind of sausage gets made, I recognize that bills like this can switch from acceptable to deeply, deeply problematic overnight with small changes. We’ll certainly be watching for that possibility.
Filed Under: apra, california, cathy mcmorris rodgers, ccpr, data minimization, lawsuits, maria cantwell, privacy, private right of action, state preemption
Filing A Badly Drafted, Mistargeted, Bullshit SLAPP Suit Is No Way To Convince Women You’re Not An Asshole
from the yeah,-that'll-work,-sure dept
Dating can be difficult, but there are certain things you can do to not make things worse on yourself. Don’t be a creep. Be kind. Take no for an answer. Actually listen to the people you date. I mean, that’s kinda the standard stuff.
But also, if things go bad and they complain about you online, don’t file the single dumbest lawsuit on the planet in retaliation.
Nikko D’Ambrosio was apparently unable to follow at least one (and possibly more!) of those simple rules. Nikko, a 32-year-old Chicago man (old enough to know better), apparently dated around a bit, then lost his shit when he discovered that some of the women he dated went to the Facebook group “Are We Dating the Same Guy” to offer what were mostly pretty mild complaints about him.
“Very clingy very fast,” the woman commented. “Flaunted money very awkwardly and kept talking about how I don’t want to see his bad side.”
More screenshots showed the woman — who commented as an anonymous member — claimed that after she blocked D’Ambrosio’s number, he used a different number to send her a text in which it appears he attacked her appearance.
Nikko didn’t much like this. And the guy once described as “very clingy very fast” who allegedly told someone you “don’t want to see his bad side” showed off his bad side by filing this very obvious SLAPP suit against basically anyone he could think of. There are 56 total defendants, including 29 women (some of whom are just relatives of the people he’s actually mad at). There are also 22 variations on Meta/Facebook. While the company has multiple corporate entities, you do not need to sue them all. For good measure, he also sued Patreon and GoFundMe, because why not?
It’s not at all clear why he sued all of those defendants. Most of the individual defendants are not clearly connected to this case. The case only names one woman who he says made defamatory comments about him (they’re not, but we’ll get to that). The rest are just… thrown in there and never explained. Did they like or share the original comments? Who knows. It does appear he sued family members of the main woman he’s mad at, again, for what?
There are so, so, so many problems with the lawsuit I’ve literally restarted this paragraph about six times as I change my mind on which to cover first. But let’s start here: Section 230. As far as I can tell, D’Ambrosio’s lawyers have never heard of it. The complaint doesn’t address it. But it easily bars the lawsuit against all of the many Facebook defendants, as well as Patreon and GoFundMe. He also sues AWDTSG Inc., which is apparently a company that helps to run a series of local “Are We Dating the Same Guy” groups on Facebook, which is what Nikko is particularly pissed at.
Section 230 says that for things like defamation, you get to sue the party who said the actual defamatory thing, not the website that hosts the speech. Should the case even get that far (and it’s not clear that it will), all the Facebook/Meta parties, GoFundme, Patreon, and AWDTSG will easily get their cases tossed on 230 grounds. Having a lawyer file a lawsuit like this without understanding (or even attempting to address) Section 230 seems like malpractice.
Indeed, the lawyers who filed this lawsuit, Marc Trent and Dan Nikolic, kind of parade their ignorance. In the lawsuit they claim that because of “Defendants content moderation responsibilities” they would have had to “review” the posts, and that makes them liable for the alleged defamation. But, um, Section 230 was passed directly to deal with exactly that scenario, and to say that, no, reviewing posts doesn’t make you liable.
And Section 230 protects not just “interactive computer services” but “users” who pass along third party content. So even if he’s suing people for sharing or liking the comments he’s mad about, all of those defendants are protected by Section 230 as well.
It’s stunning that the lawyers in question seem wholly unaware of this.
Next up, defamation. Nothing in the suit appears even remotely close to defamation. The statements all appear to be opinions about what kind of creepy jerk Nikko is, and nearly all of them are clearly framed as opinion. Sorry, Nikko, people are allowed to have opinions of you. That’s not defamation. And, no, it may not feel great, but opinions that you’re “very clingy, very fast” are not defamatory.
Also, in a defamation suit, you plead which statements were defamatory, including why they are false and defamatory. This complaint does not do that.
Next, they’re trying to use Illinois’ brand new (just went into effect this year!) “doxxing” law, claiming that talking about him and posting his picture violates the law. Now, I think there are some potential 1st Amendment issues with that law, and they’re really driven home by using it here. But to try to make sure that this law is on the correct side of the 1st Amendment, it says that the law is not violated when the speech in question is “activity protected under the United States Constitution,” and boy, lemme tell ya, calling a dude “very clingy” sure qualifies.
There are a bunch of other pretty big legal problems with the lawsuit that are just embarrassing. Ken “Popehat” White covered many of them in his post on this subject. The big one, suggesting that the lawyers have little (if any) familiarity with federal court, is that to file in federal court over state law claims, you have to show “diversity,” meaning that the parties in the case are all in different states. And White notes how badly they fucked that up:
D’Ambrosio’s lawyers assert diversity jurisdiction but make an utter dumpster fire out of it. They admit that both D’Ambrosio and at least one of the defendants come from Illinois, which defeats diversity jurisdiction. They admit they don’t know what state a bunch of the defendants come from. They identify a bunch of the defendants as limited liability companies, but don’t plead the facts necessary to identify those entities’ citizenship for purposes of diversity. This is the kind of thing that makes federal judges issue orders of their own accord saying, in judicial terms, “what the fuck is this shit?”
Also, the lawyers claim it’s a “class action” lawsuit, and are actively seeking to recruit more plaintiffs on Reddit, naturally (where — hilariously — the person who originally posted the topic asked the lawyers if they wanted him to start a GoFundMe, apparently not realizing GoFundMe was one of the defendants in the case). Class action defamation lawsuits aren’t really a thing, because for it to be defamation it has to be a statement about a specific person, and the specifics matter. But even beyond that, if you’re filing a class action lawsuit, you have to take some steps, and as White points out, these lawyers didn’t do that:
The caption of the lawsuit proclaims that it’s a class action, and D’Ambrosio’s lawyers have made comments suggesting that they see themselves as suing on behalf of “victims” other than D’Ambrosio. But other than the caption, the lawsuit contains not a single relevant allegation about being a class action. It doesn’t plead any of the factors necessary to qualify as a class action. It’s also obviously unsuited to be a class action: a class action requires a pool of plaintiffs with factually and legally similar claims, but defamation claims are by their nature very individual and context-specific, and each aggrieved man’s case would be very different depending on what was said about them.
White notes that the lawsuit is so badly drafted that he expects it may get dismissed just on the jurisdictional problems without defendants even having to file anything. He also suggests it’s so bad that it could lead to sanctions from the judge.
But, also, this is exactly the kind of case for which I coined the term Streisand Effect nearly twenty years ago. Doing this kind of shit won’t protect your reputation, it will destroy your reputation. And, again as White points out, a good lawyer would warn you of that before filing this sort of lawsuit. Whether or not they warned him about it, the lawsuit has been filed and now the allegedly “very clingy, very fast” guy who might be “very awkward” is, well, having his reputation spread pretty far and wide.
So rather than just the types of people who hang out on the “Are We Dating the Same Guy” Facebook groups, now many, many, many more people — some of whom I’d assume are in the dating pool in the Chicago area — are aware of Nikko D’Ambrosio and his reputation. And not just his reputation for being very clingy, very fast, but his reputation for filing bullshit SLAPP suits to try to silence women for expressing their opinion of him.
Hopefully the judge does dump the case. While Illinois does have a decent anti-SLAPP law (which would clearly apply here), the 7th Circuit has suggested it does not apply in federal court (of course, given the jurisdictional problems, this case doesn’t belong in federal court either, but… whatever).
More importantly, this is a case that demonstrates yet again why Section 230 is so important to protect people against harassment like this very lawsuit. Without Section 230, it becomes way easier to abuse the legal system to try to silence women who point out that you’re a creep. Section 230 protects that kind of information sharing.
The whole case is a mess of epic proportions. It’s a lawsuit that never should have been filed, but now that it has, congrats to Nikko D’Ambrosio for making sure every dating-eligible woman in Chicago knows to avoid you.
Filed Under: clingy, dating, defamation, doxxing, illinois, lawsuits, nikko d'ambrosio, opinion, section 230, slapp suit, very clingy
Companies: awdtsg, facebook, meta, trent law firm
California Court, Ridiculously, Allows School Lawsuits Against Social Media To Move Forward
from the this-is-not-how-any-of-this-works dept
Over the last year, we’ve covered a whole bunch of truly ridiculous, vexatious, bullshit lawsuits filed by school districts against social media companies, blaming them for the fact that the school boards don’t know how to teach students (the one thing they’re supposed to specialize in!) how to use the internet properly. Instead of realizing the school board ought to fire themselves, some greedy ambulance-chasing lawyers have convinced them that if courts force social media companies to pay up, they’ll never have a budget shortfall again. And school boards desperate for cash, and unwilling to admit their own failings as educators, have bought into the still unproven moral panic that social media is harming kids. This is despite widespread evidence that it’s just not true.
While there are a bunch of these lawsuits, some in federal court and some in state courts, some of the California state court ones were rolled up into a single case, and on Friday, California state Judge Carolyn Kuhl (ridiculously) said that the case can move forward, and that the social media companies’ 1st Amendment and Section 230 defenses don’t apply (first reported by Bloomberg Law).
There is so much wrong with this decision, it’s hard to know where to start, other than to note one hopes that a higher court takes some time to explain to Judge Kuhl how the 1st Amendment and Section 230 actually work. Because this is not it.
The court determines that Defendants’ social media platforms are not “products” for purpose of product liability claims, but that Plaintiffs have adequately pled a cause of action for negligence that is not barred by federal immunity or by the First Amendment. Plaintiffs also have adequately pled a claim of fraudulent concealment against Defendant Meta.
As noted in that paragraph, the product liability claims fail, as the court at least finds that social media apps don’t fit the classification of a “product” for product liability purposes.
Product liability doctrine is inappropriate for analyzing Defendants’ responsibility for Plaintiffs’ injuries for three reasons. First, Defendants’ platforms are not tangible products and are not analogous to tangible products within the framework of product liability. Second, the “risk-benefit” analysis at the heart of determining whether liability for a product defect can be imposed is illusive in the context of a social media site because the necessary functionality of the product is not easily defined. Third, the interaction between Defendants and their customers is better conceptualized as a course of conduct implemented by Defendants through computer algorithms.
However, it does say that the negligence claims can move forward and are not barred by 230 or the 1st Amendment. A number of cases have been brought using this theory over the last few years, and nearly all of them have failed. Just recently we wrote about one such case against Amazon that failed on Section 230 grounds (though the court also makes clear that even without 230 it would have failed).
But… the negligence argument the judge adopts is… crazy. It starts out by saying that the lack of age verification can show negligence:
In addition to maintaining “unreasonably dangerous features and algorithms”, Defendants are alleged to have facilitated use of their platforms by youth under the age of 13 by adopting protocols that do not verify the age of users, and “facilitat[ed] unsupervised and/or hidden use of their respective platforms by youth” by allowing “youth users to create multiple and private accounts and by offering features that allow youth users to delete, hide, or mask their usage.”
This seems kinda crazy to say when it comes less than a month after a federal court in California literally said that requiring age verification is a clear 1st Amendment violation.
The court invents, pretty much out of thin air, a “duty of care” for internet services. There have been many laws that have tried to create such a duty of care, but as we’ve explained at great length over the years, a duty of care regarding speech on social media is unconstitutional, as it will easily lead to over-blocking out of fear of liability. And even though the court recognized that internet services are not a product in the product liability sense, because that would make no sense, for the negligence analysis it cited a case involving… electric scooters? Yup. Electric scooters.
In Hacala, the Court of Appeal held that defendant had a duty to use care when it made its products available for public use and one of those products harmed the plaintiff. The defendant provided electric motorized scooters that could be rented through a “downloadable app.” (Id. at p. 311.) The app allowed the defendant “to monitor and locate its scooters and to determine if its scooters were properly parked and out of the pedestrian right-of-way.” (Id., internal quotation marks and brackets omitted.) The defendant failed to locate and remove scooters that were parked in violation of the requirements set forth in the defendant’s city permit, including those parked within 25 feet of a single pedestrian ramp. (Id.) The defendant also knew that, because the defendant had failed to place proper lighting on the scooters, the scooters would not be visible to pedestrians at night. (Id. at p. 312.) The court found that these allegations were a sufficient basis on which to find that the defendant owed a duty to members of the public like the plaintiff, who tripped on the back wheel of one of the defendant’s scooters when walking “just after twilight.” (Id. at p. 300.)
Here, Plaintiffs seek to hold Defendants liable for the way that Defendants manage their property, that is, for the way in which Defendants designed and operated their platforms for users like Plaintiffs. Plaintiffs allege that they were directly injured by Defendants’ conduct in providing Plaintiffs with the use of Defendants’ platforms. Because all persons are required to use ordinary care to prevent others from being injured as the result of their conduct, Defendants had a duty not to harm the users of Defendants’ platforms through the design and/or operation of those platforms.
But, again, scooters are not speech. It is bizarre that the court refused to recognize that.
The social media companies also pointed out that the claims made by the school districts about kids saying they ended up suffering from depression, anxiety, eating disorders, and more from social media, can’t be directly traced back to the social media companies. As the social media companies point out, if a student goes to a school and suffers from depression, she can’t sue the schools for causing depression. But, no, the judge says that there’s a “close connection” between social media and the suffering (based on WHAT?!? she does not say).
Here, as previously discussed, there is a close connection between Defendants’ management of their platforms and Plaintiffs’ injuries. The Master Complaint is clear in stating that the use of each of Defendants’ platforms leads to minors’ addiction to those products, which, in turn, leads to mental and physical harms. (See, e.g., Mast. Compl., ¶¶ 80-95.) These design features themselves are alleged to “cause or contribute to (and, with respect to Plaintiffs, have caused and contributed to) [specified] injuries in young people….” (Mast. Compl., ¶ 96, internal footnotes omitted; see also Mast. Compl., ¶ 102 [alleging that Defendants’ platforms “can have a detrimental effect on the psychological health of their users, including compulsive use, addiction, body dissatisfaction, anxiety, depression, and self-harming behaviors such as eating disorders”], internal quotation marks, brackets, and footnotes omitted.) Plaintiffs allege that the design features of each of the platforms at issue here cause these types of harms. (See, e.g., Mast. Compl., ¶¶ 268-337 (Meta); ¶¶ 484-487, 489-490 (Snap); ¶¶ 589-598 (ByteDance); ¶¶ 713-773, 803 (Google).) These allegations are sufficient under California’s liberal pleading standard to adequately plead causation.
The court also says that if the platforms dispute the level to which they caused these harms, that’s a matter of fact, to be dealt with by a jury.
Then we get to the Section 230 bit. The court bases much of its reasoning on Lemmon v. Snap. This is why we were yelling about the problems that Lemmon v. Snap would cause, even as we heard from many (including EFF?) who thought that the case was decided correctly. It’s now become a vector for abuse, and we’re seeing that here. If you just claim negligence, some courts, like this one, will let you get around Section 230.
As in Lemmon, Plaintiffs’ claims based on the interactive operational features of Defendants’ platforms do not seek to require that Defendants publish or de-publish third-party content that is posted on those platforms. The features themselves allegedly operate to addict and harm minor users of the platforms regardless of the particular third-party content viewed by the minor user. (See, e.g., Mast. Compl., ¶¶ 81, 84.) For example, the Master Complaint alleges that TikTok is designed with “continuous scrolling,” a feature of the platform that “makes it hard for users to disengage from the app,” (Mast. Compl., ¶ 567) and that minor users cannot disable the “auto-play function” so that a “flow-state” is induced in the minds of the minor users (Mast. Compl., ¶ 590). The Master Complaint also alleges that some Plaintiffs suffer sleep disturbances because “Defendants’ products, driven by IVR algorithms, deprive users of sleep by sending push notifications and emails at night, prompting children to re-engage with the apps when they should be sleeping.” (Mast. Compl., ¶ 107 [also noting that disturbed sleep increases the risk of major depression and is associated with “future suicidal behavior in adolescents”].)
Also similar to the allegations in Lemmon, the Master Complaint alleges harm from “filters” and “rewards” offered by Defendants. Plaintiffs allege, for example, that Defendants encourage minor users to create and post their own content using appearance-altering tools provided by Defendants that promote unhealthy “body image issues.” (Mast. Compl., ¶ 94.) The Master Complaint alleges that some minors spend hours editing photographs they have taken of themselves using Defendants’ tools. (See, e.g., Mast. Compl., ¶ 318.) The Master Complaint also alleges that Defendants use “rewards” to keep users checking the social media sites in ways that contribute to feelings of social pressure and anxiety. (See, e.g., Mast. Compl., ¶ 257 [social pressure not to lose or break a “Snap Streak”].)
There’s also the fact that kids “secretly” used these apps without their parents knowing, but… it’s not at all clear how that’s the social media companies’ fault. But the judge rolls with it.
Another aspect of Defendants’ alleged lack of due care in the operation of their platforms is their facilitation of unsupervised or secret use by allowing minor users to create multiple and private accounts and allowing minor users to mask their usage. (Mast. Compl., ¶ 929(d), (e), (f).) Plaintiffs J.S. and D.S., the parents of minor Plaintiff L.J.S., allege that L.J.S. was able to secretly use Facebook and Instagram, that they would not have allowed use of those sites, and that L.J.S. developed an addiction to those social media sites which led to “a steady decline in his mental health, including sleep deprivation, anxiety, depression, and related mental and physical health harms.” (J.S. SFC ¶¶ 7-8.)
Then, there’s a really weird discussion about how Section 230 was designed to enable users to have more control over their online experiences, and therefore, the fact that users felt out of control means 230 doesn’t apply? Along similar lines, the court notes that since the intent of 230 was “to remove disincentives” for creating tools for parents to filter the internet for their kids, the fact that parents couldn’t control their kids online somehow goes against 230?
Similarly, Congress made no secret of its intent regarding parental supervision of minors’ social media use. By enacting Section 230, Congress expressly sought “to remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict children’s access to objectionable or inappropriate online material.” (47 U.S.C. § 230, subd. (b)(4).) While in some instances there may be an “apparent tension between Congress’s goals of promoting free speech while at the same time giving parents the tools to limit the material their children can access over the Internet” (Barrett, supra, 40 Cal.4th at p. 56), where a plaintiff seeks to impose liability for a provider’s acts that diminish the effectiveness of parental supervision, and where the plaintiff does not challenge any act of the provider in publishing particular content, there is no tension between Congress’s goals.
But that’s wholly misunderstanding both the nature of Section 230 and what’s going on here. Services shouldn’t lose 230 protections just because kids are using services behind their parents’ backs. That makes no sense. But, here, the judge seems to think it’s compelling.
The judge also claims (totally incorrectly based on nearly all of the case law) that if, as the social media companies claim, any harms from social media are due to third party content (which would mean Section 230 protections apply), that’s a matter for the jury.
Although Defendants argue they cannot be liable for their design features’ ability to addict minor users and cause near constant engagement with Defendants’ platforms because Defendants create such “engagement” “with user-generated content” (Defs’ Dem., at p. 42, internal italics omitted), this argument is best understood as taking issue with the facts as pleaded in the Master Complaint. It may very well be that a jury would find that Plaintiffs were addicted to Defendants’ platforms because of the third-party content posted thereon. But the Master Complaint nonetheless can be read to state the contrary — that is, that it was the design of Defendants’ platforms themselves that caused minor users to become addicted. To take another example, even though L.J.S. was viewing content of some kind on Facebook and Instagram, if he became addicted and lost sleep due to constant unsupervised use of the social media sites, and if Defendants facilitated L.J.S.’s addictive behavior and unsupervised use of their social media platforms (i.e., acted so as to maximize engagement to the point of addiction and to deter parental supervision), the negligence cause of action does not seek to impose liability for Defendants’ publication decisions, but rather for their conduct that was intended to achieve this frequency of use and deter parental supervision. Section 230 does not shield Defendants from liability for the way in which their platforms actually operated.
But if that’s the case, it completely wipes out the entire point of Section 230, which is to get these kinds of silly, vexatious cases dismissed early on, such that companies aren’t constantly under threat of liability if they don’t magically solve large societal problems.
From there, the court also rejects the 1st Amendment arguments. To get around those arguments, the court repeatedly keeps arguing that the issue is the way that social media designed its services, and not the content on those services. But that’s tap dancing around reality. When you dig into any of these claims, they’re all, at their heart, entirely about the content.
It’s not the “infinite scroll” that is keeping people up at night. It’s the content people see. It’s not the lack of age verification that is making someone depressed. Assuming it’s even related to the social media site, it’s from the content. Ditto for eating disorders. When you look at the supposed harm, it always comes back to the content, but the judge dismisses all of that and says that the users are addicted to the platform, not the content on the platform.
Because the allegations in the Master Complaint can be read to state that Defendants’ liability grows from the way their platforms functioned, the Demurrer cannot be sustained pursuant to the protections of the First Amendment. As Plaintiffs argue in their Opposition, the allegations can be read to state that Plaintiffs’ harms were caused by their addiction to Defendants’ platforms themselves, not simply to exposure to any particular content visible on those platforms. Therefore, Defendants here cannot be analogized to mere publishers of information. To put it another way, the design features of Defendants’ platforms can best be analogized to the physical material of a book containing Shakespeare’s sonnets, rather than to the sonnets themselves.
Defendants fail to demonstrate that the design features of Defendants’ applications must be understood at the pleadings stage to be protected speech or expression. Indeed, throughout their Demurrer, Defendants make clear their position that Plaintiffs’ claims are based on content created by third parties that was merely posted on Defendants’ platforms. (See, e.g., Defs’ Dem., at p. 49.) As discussed above, a trier of fact might find that Plaintiffs’ harms resulted from the content to which they were exposed, but Plaintiffs’ allegations to the contrary control at the pleading stage.
There are some other oddities in the ruling as well, including dismissing the citation to the NetChoice/CCIA victory in the 11th Circuit regarding Florida’s social media moderation law, because the judge says that ruling doesn’t apply here, since the lawsuit isn’t about content moderation. She seems to falsely think that the features on social media have nothing to do with content moderation, but that’s just factually wrong.
There are a few more issues in the ruling, but those are basically the big parts of it. Now, it’s true that this is just based on the initial complaints, and at this stage of the procedure, the judge has to rule assuming that everything pleaded by the plaintiffs is true, but the way it was done here almost entirely wipes out the entire point of Section 230 (not to mention the 1st Amendment).
Letting these cases move forward enables exactly what Section 230 was designed to prevent: creating massive liability and expensive litigation over choices regarding how a website publishes and presents content. The end result, if this is not overturned, is likely to be a large number of similar (and similarly vexatious) lawsuits that overwhelm websites with potential liability. If each one has to go to a jury before it’s decided, it’s going to be a total mess.
The whole point of Section 230 was to have judges dismiss these cases early on. And here, the judge has gotten almost all of it backwards.
Filed Under: 1st amendment, addiction, california, carolyn kuhl, depression, lawsuits, moral panic, schools, section 230, social media
Companies: facebook, instagram, meta, snap, tiktok, youtube
Elon Musk’s ‘War’ On Possibly Imaginary Scrapers Now A Lawsuit, Which Might Actually Work
from the killing-the-open-web dept
Elon Musk seems infatuated with bots and scrapers as the root of all his problems at Twitter. Given his propensity to fire engineers who tell him things he doesn’t want to hear, it’s not difficult to believe that engineers afraid to tell Musk the truth are conveniently blaming “scraping” for the variety of problems Twitter has had since Musk’s YOLO leadership style knocked out some of the fundamental tools that kept the site reliable in the before times.
He tried to blame bots for spam (which he’s repeatedly claimed to have dealt with, before going back to blaming them for other things, because he hasn’t actually stopped automated spam). His attempts to “stop the bots” have resulted in a series of horrifically stupid decisions, including believing that his non-verification Twitter Blue system would solve it (it didn’t), believing that cutting off free API access would drive away the spam bots (it drove away the good bots), and then believing that rate limiting post views would somehow magically stop scraping bots (which might only be scraping because of his earlier dumb decision to kill off the API).
The latest, though, is that last week Twitter went to court to sue ‘John Doe’ scrapers in Texas. And while I’ve long argued that scraping should be entirely legal, court precedents may be on Twitter’s side here.
Scraping is part of how the internet works and has always worked. The war on scraping is problematic for all sorts of reasons, and is an attack on the formerly open web. Unfortunately, though, courts are repeatedly coming out against scraping.
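For what it’s worth, the open web has long had its own conventions governing scrapers. Here’s a minimal sketch of a “polite” scraper using only Python’s standard library; the bot name and URLs are hypothetical:

```python
from urllib import request, robotparser

# Check robots.txt first: the web's decades-old convention for telling
# scrapers what a site does and doesn't want fetched.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

url = "https://example.com/some/public/page"
if rp.can_fetch("ExampleBot", url):
    with request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    print(f"fetched {len(html)} characters of public HTML")
else:
    print("robots.txt asks bots to stay away from this page")
```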
So, while I’d argue that this, from the complaint, is utter nonsense, multiple courts seem to disagree and find the argument perfectly plausible:
Scraping is a form of unauthorized data collection that uses automation and other processes to harvest data from a website or a mobile application.
Scraping interferes with the legitimate operation of websites and mobile applications, including Twitter, by placing millions of requests that tax the capacity of servers and impair the experience of actual users.
This is not how any of this should work, and is basically just an attack on the open web. Yes, scraping bots can overwhelm a site, but it’s on the site itself to block it, not the courts.
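And sites have plenty of self-help tools for exactly that. Here’s a minimal sketch of one common approach, per-IP token-bucket rate limiting; the rates and the example IP are arbitrary illustrations:

```python
import time
from collections import defaultdict

class TokenBucket:
    """Allow each client IP `rate` requests/second, with bursts up to
    `capacity`; anything beyond that gets refused (e.g., with HTTP 429)."""

    def __init__(self, rate: float = 5.0, capacity: float = 20.0):
        self.rate = rate
        self.capacity = capacity
        self.tokens = defaultdict(lambda: capacity)   # tokens left, per IP
        self.last_seen = defaultdict(time.monotonic)  # last request time, per IP

    def allow(self, ip: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last_seen[ip]
        self.last_seen[ip] = now
        # Refill tokens for the time that has passed, capped at capacity.
        self.tokens[ip] = min(self.capacity, self.tokens[ip] + elapsed * self.rate)
        if self.tokens[ip] >= 1.0:
            self.tokens[ip] -= 1.0
            return True
        return False

limiter = TokenBucket()
if not limiter.allow("203.0.113.7"):  # documentation-range example IP
    print("429 Too Many Requests")    # block the scraper; no lawsuit required
```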
Twitter users have no control over how data-scraping companies repackage and sell their personal information.
This sounds scary, but again is nonsense. Scraping only has access to public information. If you post information publicly, then of course users don’t have control over that information any more. That’s how information works.
The complaint says that Twitter (I’m not fucking calling it ‘X Corp.’) has discovered IP addresses engaged in “flooding Twitter’s sign-up page with automated requests”:
The volume of these requests far exceeded what any single individual could send to a server in a given period and clearly indicated that these automated requests were aimed at scraping data from Twitter.
This also feels like a stretch. The more likely reason for flooding a sign-up page is to create spam accounts. That’s also bad, of course, but it’s not clear how this automatically suggests scraping.
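To be fair, the volume inference itself is easy to make. A hypothetical log analysis like the sketch below (the log format and the threshold are my assumptions, not anything in the complaint) would flag inhumanly fast request rates. But notice what it actually proves: automation, full stop. Spam signups would look exactly the same:

```python
# Hypothetical log analysis: count sign-up page hits per IP per minute and
# flag rates no human could produce. Log format and threshold are assumptions.
from collections import Counter

HUMAN_MAX_PER_MINUTE = 30  # a very generous ceiling for one person (assumption)

def flag_automated_ips(log_lines):
    """log_lines: iterable of 'minute ip endpoint' strings (assumed format)."""
    counts = Counter()
    for line in log_lines:
        minute, ip, endpoint = line.split()
        if endpoint == "/signup":
            counts[(minute, ip)] += 1
    # Automation is all this proves; it says nothing about *why* the requests
    # were sent, which is exactly the gap in the complaint's reasoning.
    return sorted({ip for (minute, ip), n in counts.items()
                   if n > HUMAN_MAX_PER_MINUTE})

sample = ["10:01 203.0.113.7 /signup"] * 500 + ["10:01 198.51.100.2 /signup"] * 3
print(flag_automated_ips(sample))  # ['203.0.113.7']
```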
Of course, there have been a bunch of scraping cases in the past, and there are some somewhat mixed precedents here. There was the infamous Power.com case, which said it could be a CFAA (Computer Fraud and Abuse Act) violation to scrape content from behind a registration wall (even if the user gave permission). And last year there was the April ruling from the 9th Circuit in the hiQ/LinkedIn case, which notably said that scraping from a public website, rather than a registration-walled one, could not be a CFAA violation.
Indeed, much of the reporting on Twitter’s new lawsuit points to that decision. But, unfortunately, that’s the wrong decision to look at. Months later, the same court ruled again in that case (in a ruling that got way less attention) that even if the scraping wasn’t a CFAA violation, it was still a violation of LinkedIn’s terms of service, and granted an injunction against the scrapers.
Given the framing in the complaint, Twitter seems to be arguing the same thing: that this is a terms of service violation rather than a CFAA violation. On top of that, this case is filed in Texas state court, and at least in federal court in Texas, the 5th Circuit has found that scraping data can be considered “unjust enrichment.”
In other words, as silly as this is, and as important as scraping is to the open web, courts are buying the logic of this kind of lawsuit, meaning that Twitter’s case is probably stronger than it should be.
Of course, Twitter still needs to figure out who is actually behind these apparent scraping IP addresses, and then show that they actually were scraping. And who knows if the company will be able to do that. In the meantime, though, this is yet another company, following the unfortunate pattern of Facebook, LinkedIn, and even Craigslist, spitting on the open web it was built on.
Filed Under: lawsuits, scraping, terms of service, texas
Companies: twitter
Madison Square Garden’s Facial Recognition-Enabled Bans Now Being Targeted By Legislators, State AG
from the yet-another-problematic-use-of-the-tech dept
Late last year, it was revealed that MSG Entertainment (the owner of several New York entertainment venues, including the titular Madison Square Garden) was using its facial recognition tech to, in essence, blacklist its owner’s enemies.
Those targeted included lawyers working for firms currently engaged in litigation against MSG Entertainment. Owner James Dolan, through his company’s PR wing, claimed these bans were meant to prevent adversarial litigants from performing ad hoc discovery by snooping around arenas under the guise of attending events.
That might have made sense if it only targeted lawyers actually involved in this litigation. But these blanket bans appeared to deny access to any lawyer employed by these firms, something that resulted in a woman being unable to attend a Rockettes performance with her daughter and her Girl Scout troop, and another lawyer being ejected from a Knicks game.
These facial recognition-assisted bans immediately became the subject of new litigation against MSG Entertainment. Some litigants were able to secure a temporary injunction by reaching deep into the past to invoke a 1941 law enacted to prevent entertainment venues from banning entertainment critics from attending events.
The restraining orders were of limited utility, though. Some affected lawyers still found themselves prevented from entering despite carrying copies of the injunction with them. And the law itself has a pretty significant loophole: it does not cover sporting events, which are a major part of MSG Entertainment’s offerings.
While possibly legal (given that private companies can refuse service to anyone [for the most part]), it was also stupid. It looked more vindictive than useful, with owner James Dolan punishing anyone who had the temerity to be employed by law firms that dared to sue his company. It’s robber baron type stuff and it never plays well anywhere, which you’d think someone involved in the entertainment business would have realized.
Now, the government is coming for Dolan and his facial recognition tech-based bans. As Jon Brodkin reports for Ars Technica, the state attorney general’s office is starting to ask some serious questions.
[AG Letitia] James’ office sent a letter Tuesday to MSG Entertainment, noting reports that it “used facial recognition software to forbid all lawyers in all law firms representing clients engaged in any litigation against the Company from entering the Company’s venues in New York, including the use of any season tickets.”
“We write to raise concerns that the Policy may violate the New York Civil Rights Law and other city, state, and federal laws prohibiting discrimination and retaliation for engaging in protected activity,” Assistant AG Kyle Rapiñan of the Civil Rights Bureau wrote in the letter. “Such practices certainly run counter to the spirit and purpose of such laws, and laws promoting equal access to the courts: forbidding entry to lawyers representing clients who have engaged in litigation against the Company may dissuade such lawyers from taking on legitimate cases, including sexual harassment or employment discrimination claims.”
The AG’s office also expressed its concerns about facial recognition tech in general, noting it’s often “plagued with biases and false positives.” It’s a legitimate concern, but perhaps AG James should cc the NYPD, which has been using this “plagued with bias” tech for more than a decade.
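The false positive worry is just base-rate arithmetic. A quick sketch with purely illustrative numbers (assumptions, not MSG’s actual figures or error rates):

```python
# Purely illustrative numbers; not MSG's actual figures or error rates.
attendees_per_event = 19_000   # roughly a sold-out arena crowd (assumption)
false_match_rate = 0.001       # 0.1% chance an attendee falsely matches the ban list

expected_false_flags = attendees_per_event * false_match_rate
print(f"~{expected_false_flags:.0f} innocent attendees flagged per event")
# Around 19 people per event wrongly matched, and documented demographic bias
# in face matchers means those errors don't fall evenly across the crowd.
```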
Dolan/MSG’s plan to keep booting lawyers from venues is now facing another potential obstacle. City legislators are prepping an amendment that would pretty much force MSG to end this practice.
“New Yorkers are outraged by Madison Square Garden booting fans from their venue simply because they’re perceived as corporate enemies of James Dolan,” the bill’s sponsor, state Sen. Brad Hoylman-Sigal, told the Daily News.
“This is a straightforward, simple fix to the New York State civil rights law that would hopefully prevent fans from being denied entry simply because they work for a law firm that may have a client in litigation against Madison Square Garden Entertainment,” he added.
It is a simple fix. The bill would take the existing 1941 law — the one forbidding entertainment venues from banning critics — and close the sporting event loophole. That would pretty much cover everything hosted by MSG, which would make Dolan’s ban plan unenforceable.
Of course, none of this had to happen. If Dolan and MSG were having problems with adversarial lawyers snooping around their venues, they could bring this to the attention of the courts handling these cases. A blanket ban of entire law firms did little more than anger the sort of people you generally don’t want to piss off when there’s litigation afoot: lawyers. What looked like a cheap and punitive win now looks like a PR black eye coupled with a brand new litigation headache.
Filed Under: facial recognition, james dolan, lawsuits, lawyers, letitia james, new york
Companies: msg entertainment
Utah Promises That It’s Going To Sue Social Media For Being Bad For Kids
from the good-luck-with-that dept
Utah, as a state, has a pretty long history of terrible internet policy proposals. And now it’s getting dumber. On Monday, the state’s Attorney General Sean Reyes and Governor Spencer Cox hosted a very weird press conference. They billed it as an announcement that Utah is suing all the social media companies for not “protecting kids.” Which is already pretty ridiculous. More ridiculous: the governor’s office eagerly announced that people should watch the livestream… on social media.
Even more ridiculous: I kept expecting them to announce the details of the actual lawsuit, but it turns out they haven’t even hired lawyers, let alone planned out the case. The official announcement notes that they’re putting out a request for proposal to find the most ridiculous law firm possible to file the suit.
Specifics of any legal action are not being released at this time. A Request for Proposal (RFP) document will be submitted this week to prepare for hiring outside counsel to assist with any litigation that could soon occur.
Can I reply to the RFP with a document that just says: “this is not how any of this works, and it makes Utah look like a clueless, anti-tech, anti-innovation backwater?” Cox has actually been surprisingly good on internet issues in the past, and seemed like he understood this stuff, but this kind of nonsense grandstanding makes him look really bad.
Again, the actual evidence regarding social media and children is at best inconclusive, and more likely shows that most kids get real value out of it, both as a way to keep in touch with more people and as access to useful information and communities. A big look at basically all of the research on the “harm” of social media to kids found… no evidence to support the narrative.
And looking at the actual research, we see the same thing again and again. Oxford did a massive study, looking at over 12,000 kids, and found that social media had effectively zero impact on the health and well-being of children. A few years ago, another review (again, looking across multiple studies) noted that the emerging consensus view was that social media didn’t harm kids.
Just recently, we covered a pretty massive Pew Research Center study that surveyed over 1,300 teenagers and found that not only was social media not causing harm, it appeared to be providing real value to many of them.
And, whether or not you trust Facebook’s own internal research, the leaked studies the company ran on whether Facebook and Instagram made kids feel worse about themselves found that, on nearly all issues, the apps actually made kids feel better about themselves.
So, just starting out, the entire premise of this lawsuit rests on a moral panic myth that is not supported by any actual evidence, which seems like a pretty dumb reason to file a lawsuit.
The reasons given in the announcement in Utah are the usual moral panic list of things that basically all teenagers face, and faced before the internet existed as well:
“Depression, eating disorders, suicide ideation, cutting, addictions, mass violence, cyberbullying, and other dangers among young people may be initiated or amplified by negative influences and traumatic experiences online through social media.”
Except, it’s one thing to say that people using social media experience these things, because basically everyone is on social media these days. The real question is whether or not social media is somehow causing these things, and again, pretty much all of the actual studies say the answer is “no.” And, expecting anyone to be able to sort out which harms are caused by social media, let alone in a way that has legal liability, is ridiculous.
Also, many of these topics are way more complex than the simple analyses suggest. We’ve talked before about the studies on eating disorders, for example. Multiple studies have shown that when social media platforms tried to crack down on online discussions about eating disorders, it actually made the problem worse, not better. That’s because the eating disorders aren’t caused by social media. The kids are dealing with them no matter what. So when the content is banned, kids find ways around the bans. They always do. And, in doing so, it becomes more difficult for others to monitor those discussions, and crackdowns have often destroyed the more open communities where people were helping those with eating disorders get the help they needed. So demands that websites “crack down” on such content actually make things worse, doing more harm to the kids than the websites were doing in the first place.
There’s evidence to suggest the same is true of suicide discussions as well.
All that is to say, this is complicated stuff, and a bunch of grandstanding politicians ignoring what the actual research says in order to generate misleading headlines for themselves are not helping. At all.
And that’s not even getting into what any possible lawsuit could claim. What legal violation is there here? The answer is that there’s none. It doesn’t mean that AG Reyes can’t hassle and annoy companies. But, there’s no actual legal, factual, or moral reason to do any of this. There are only bad reasons, based around Reyes and Cox wanting headlines playing off the moral panics of today.
Filed Under: for the children, lawsuits, protect the children, sean reyes, social media, spencer cox, studies, utah
Hundreds Of Authors Ask Publishers To Stop Attacking Libraries
from the save-the-libraries dept
We keep pointing out that publishers hate libraries. Oh, they’ll pretend otherwise, and make broad platitudes about libraries and the good of society. But, it’s clear in how they act that they think of libraries as dens of piracy. They’re now using the ebook revolution as a chance to harm, or even wipe out, libraries. The biggest battle on this front is the big publishers’ lawsuit against the Internet Archive.
Now, hundreds of authors have signed onto a letter put together by Fight for the Future in support of libraries and asking the publishers to back off.
Libraries are a fundamental collective good. We, the undersigned authors, are disheartened by the recent attacks against libraries being made in our name by trade associations such as the Association of American Publishers and the Publishers Association: undermining the traditional rights of libraries to own and preserve books, intimidating libraries with lawsuits, and smearing librarians.
We urge all who are engaged in the work of getting books into the hands of readers to act in the interests of all authors, including the long-marginalized, midlist, and emerging authors whom librarians have championed for decades.
This is important because, much like the RIAA claiming to represent musicians (rather than the labels it actually represents), the publishers always frame their attacks on libraries as if they’re about protecting authors’ interests. And here are tons of authors, including some very big names like Neil Gaiman, saying that the publishers need not just to stop going after libraries, but especially to stop doing so in the name of authors.
The letter has three asks, all of which I think are important and which I’ll quote fully here:
- Enshrine the right of libraries to permanently own and preserve books, and to purchase these permanent copies on reasonable terms, regardless of format. Many libraries would prefer to own and preserve digital editions, as they have always done with print books, but these days publishers rarely offer them the option. Instead, when libraries have access to ebooks at all, the prices libraries pay to rent ebooks are often likened to extortion.
Digital editions are more affordable to produce and often more accessible, but libraries are already relying on emergency funds and may only be able to license a small selection of mainstream works in the future. In turn, readers will have fewer opportunities to discover the more diverse potential bestsellers of tomorrow.
It is past time to determine a path forward that is fair to both libraries and authors—including a perpetual model for digital ownership based on the cost to maintain a print edition.
- End lawsuits aimed at intimidating libraries and diminishing their role in society. The interests of libraries are the interests of the public, and of any author concerned with equity and longevity for themselves and their fellow writers. We are all on the same side. Yet a unanimously passed Maryland state law ensuring libraries pay “reasonable fees” for digital editions died after the AAP sued. And after a previous suit failed, several publishers are currently suing the Internet Archive Library in an attempt to prohibit all libraries from lending out scanned copies of books they own. While undermining libraries may financially benefit the wealthiest and most privileged authors and corporations in the short term, this behavior is utterly opposed to the interests of authors as a whole.
- End smear campaigns against librarians. Recent comments likening library advocates to “mouthpieces” for Big Tech are as tasteless as they are inaccurate. Also concerning are the awards recently given to legislators who have advocated in favor of the dangerous surveillance of library patrons, and of laws that criminalize librarians. As a last bastion of truth, privacy, and access to diverse voices, libraries’ digital operations grow ever more essential to our society—and their work should be celebrated, not censured.
Many of the authors were so vocal about this issue that they didn’t just sign the letter, but provided further quotes of support as well.
The real question, then, is why the publishers keep up this never-ending attack on libraries. One hopes that journalists will ask the heads of the big publishers, as well as the boss of the Association of American Publishers (AAP), (former Copyright Office boss) Maria Pallante, why they continue to drive forward with these anti-author, anti-library attacks.
Filed Under: authors, ebooks, lawsuits, libraries
Companies: aap