
Vote Yes On Locking Artists’ Voices In Contractual Seashells Like The Little Mermaid

from the all-ours dept

Thu, Oct 17th 2024 12:06pm

We are living under a sea of AI-generated slop, where AI deepfakes and non-consensual intimate content abound. Congress, a self-interested creature, naturally wants to create protections for themselves, their favorite celebrities, and their wealthy donors against online impersonation. But until now, visions of so-called AI protections have been limited. From my lair, I’ve seen how Big Content might use congressional panic about AI abuse to make a many-tentacled power grab. With the NO FAKES and No AI FRAUD Acts, it’s delicious to report that we have done exactly that.

Inspired by my seashell-prisons, in which I trap the sweet voices of mermaids looking to rise, these bills would let corporations and trade associations like mine control not only the tongues of young musicians, actors, or authors—but their whole face and body. It has been incredibly lucrative for Big Content to monopolize other intellectual property rights, so that we could prevent Prince from singing his pesky “art” under his own name and block Taylor Swift from buying back her early recordings from powerful enemies. It is far past time that new and more invasive rights are created, ones that allow us to make AI-generated deepfakes of artists singing the songs that we like, dressing in the way we desire, promoting the causes we approve, and endorsing the presidential candidates that we want to endorse.

Since teenagers, abuse survivors, and artists started suffering from AI deepfakes, our leaps toward victory have been enlivened by the sirens we’ve convinced to testify on behalf of concepts like consent, the struggle of artists for respect and dignity, and the importance of human art. They have unwittingly obscured our true aims with the beauty of their voices, and the results are glorious, netting legislation that would lure not only artists, but anyone at all, into crashing on the rocks.

If these bills pass, the vulnerable and desperate will also be lured into trading rights to their voices and faces for almost nothing—a month’s rent or a week’s groceries. A paid electricity bill. And for that we will amass vast libraries of captive voices and faces that we can license out to whomever will pay, to use as broadly and vaguely as we desire. AI-generated intimate content, political advertising, hate speech—sources of vast wealth currently being tapped by small-time influencers and foreign regimes. Many will pay richly to AI-generate another to deliver their message. This sea witch fully intends to insert herself in such a growing market.

And oh, the markets! The No AI FRAUD Act is particularly clever in its moves to kill alternative markets and competition for us, the biggest players in Content. With copious lawsuits, we will be able to smite any who dare attempt reenactments and parody, who depict a historical figure in a movie or sketch comedy, who make memes of a celebrity. After all, how dare they? Did they think the First Amendment was written for their drivel?

Even better, we will be able to sue social media platforms, too, for hosting such content. And social media companies have historically moved aggressively to filter or shut down content they could be sued over. Ultimately, they may proactively smite our competition on our behalf—becoming an even more honed instrument for our supremacy. Either way, we win.

Censorship, you say? Perhaps. But if most of the human faces displayed online are ones we own or sell licenses to, the dollar signs would fill a sea. And these laws would let us own each person’s face not only during their life, but for 70 years after their death.

NO FAKES in its turn is an eloquent symphony of conformity. It allows us to claim that any video, photo, or recording we do not like is an AI deepfake and have it removed from the Internet forever. The bill offers no recourse to anyone we might—oopsie—censor with our richly programmed armies of bots and filters. There is no mechanism to put content back online or punish a big content company for lying about a takedown request—well, unless you want to face down our armies of lawyers in federal court, that is. This one is all about who has the most money and power, darlings.

With these bills, we will tighten our many-tentacled stranglehold over arts and culture, ensuring that only those we profit from succeed—and that these choice humans need act only minimally once we have secured their AI likeness. No more pesky frailties or artistic preferences to contend with. No more divas unless we deepfake them. This is why we must make our utmost effort to pass NO FAKES and No AI FRAUD—before creators and the public catch on and discover that these bills don’t fight deepfakes, they solidify control of them amongst the most powerful players while obliterating consent.

We must act swiftly to purchase politicians and parade our most convincing messengers—the artists themselves—to demand Little Mermaid laws. These poor unfortunate souls are already falling into the grips of NDAs, brand protection agreements, other assignable rights, noncompetes, existing IP law, and everything else our lawyers can brew up. We just need one final, strong brew to cement control, and then artists’ ability to speak and appear publicly or online will be safe in our contractual seashells. There will be a new era of peace and harmony, as artists and creators won’t be able to agitate and contribute to conflict as pesky “activists”. They will be quiet and only sing when told to. And, our pretties will be able to sing their hearts out even if they become sick, ugly, impoverished, or die—because we hold their AI replicas.

After all, a star need not be human to shine, and if the human artist cannot speak without our permission, no one will know the difference anyway!

Ursula the Sea Witch, best known for cutting one of the hottest ever sub-marine deals with Mermaid Ariel to trap her voice in a seashell along with other poor unfortunate souls, was recently promoted to the C-suite of the Under-The-Sea Content Trade Association. There, her leadership focuses on expanding her pioneering work with Ariel, aiming to lock voices away without any true love’s kiss to set them free by 2026—and for complete, non-consensual-yet-legal AI impersonation of all artists under contract by 2027. Ursula the Sea Witch is also the evil(er) alter-ego of Lia Holland, Campaigns and Communications Director at digital rights organization Fight for the Future.

Filed Under: ai, control, no ai fraud, no fakes, ownership, the little mermaid, ursula, voices

from the control-matters dept

Taylor Swift is in the news, and not just because she has become the most decorated solo artist of all time. The fact that Taylor Swift has already been mentioned multiple times on Walled Culture underlines that she is also an important – if surprising – figure in the world of copyright. That’s because Swift has been re-recording her albums in order to gain full control over them. She lost control because of the way that copyright works in the music industry, where it is split between the written song and its performance (the “master recording”). Record label contracts typically contain a clause in which the artist grants the label an exclusive and total license to the master. By re-recording her albums, Swift can add control of the master to her control of the written songs.

Swift’s long battle is well-known in the industry. But an article on the Harvard Law Today site from a few months back adds an important detail to this story that I have not seen reported anywhere else. It draws on comments made by Gary R. Greenstein, a “technology transactions partner” at Wilson Sonsini, one of the top US law firms. It concerns a common legal requirement in contracts to wait a certain number of years before artists are allowed to re-record an album:

It’s significant, Greenstein said, that the first Taylor’s Version wasn’t released until she’d been off [record label] Big Machine for three years. Until then, she was legally bound not to re-record any of the material, and this time frame was typical of record deals in the past. But this is the part of the equation that Swift likely changed for good.

According to Greenstein, the major record labels used to be fairly reasonable in terms of the length of the prohibition they imposed on re-recording. But he says that’s no longer the case as a result of Swift’s successful project to regain full control of her own works:

record companies are now trying to prohibit re-recordings for 20 or 30 years, not just two or three. And this has become a key part of contract negotiations. “Will they get 30 years? Probably not, if the lawyer is competent. But they want to make sure that the artist’s vocal cords are not in good shape by the time they get around to re-recording.”

In other words, as soon as a creator finds a way to take back control from intermediaries that have routinely derived excessive profits from the labor of others, the copyright world fights back with new legal straitjackets to stop other artists daring to do the same. That’s yet another reason for creators to retain full control of their works, and to shun traditional intermediaries that try to impose one-sided and unfair contracts.

Follow me @glynmoody on Mastodon and on Bluesky. Originally published to Walled Culture.

Filed Under: control, copyright, gatekeepers, middlemen, re-recording, taylor swift

Adults Are Losing Their Shit About Teen Mental Health On Social Media, While Desperate Teens Are Using AI For Mental Help

from the a-surgeon-general's-warning-isn't-going-to-help dept

It’s just like adults to be constantly diagnosing the wrong thing in trying to “save the children.” Over the last couple of years there’s been a mostly nonsense moral panic claiming that the teen mental health crisis must be due to social media. Of course, as we’ve detailed repeatedly, the actual research on this does not support that claim at all.

Instead, the evidence suggests that there is a ton of complexity happening here and no one factor. That said, two potentially big factors contributing to the teen mental health crisis are (1) the mental health challenges that their parents are facing, and (2) the lack of available help and resources for both kids and parents to deal with mental health issues.

When you combine that, it should be of little surprise that desperate teens are turning to AI for mental health support. That’s discussed in an excellent new article in The Mercury News’ Mosaic Journalism Program, which helps high school students learn how to do professional-level journalism.

For many teenagers, digital tools such as programs that use artificial intelligence, or AI, have become a go-to option for emotional support. As they learn to navigate and cope in a world where mental health care demands are high, AI is an easy and inexpensive choice.

Now, I know that some people’s immediate response is to be horrified by this, and it’s right to be concerned. But, given the situation teens find themselves in, this is not all that surprising.

Teens don’t have access to real mental health help. On our most recent podcast, we spoke to an expert in raising kids in a digital age, Devorah Heitner, who mentioned that making real, professional mental health support available in every high school would be so much more helpful than something silly like a “Surgeon General’s Warning” on social media.

Indeed, as another recent podcast guest, Candice Odgers, has noted, the evidence actually suggests that the reason kids with mental health issues spend so much time on social media might be because they are already having mental health issues, and the lack of resources to actually help them makes them turn to social media instead.

And now, it appears it may also make them turn to AI systems.

The details in the article aren’t as horrifying as they might otherwise be. It does note that there are ways that using AI can be helpful to some kids, which I’m sure is true:

Some students, like Brooke Joly, who will be a junior at Moreau Catholic High School in Hayward in the fall, say they value the bluntness of AI when seeking advice or mental health tips.

“I’ve asked AI for advice a few times because I just wanted an accurate answer rather than someone I know sugar-coating,” she said by text in an interview.

The privacy and consistency that AI promises its young users does make a compelling case for choosing mental health care delivered via app.

Venkatesh, who said she has struggled with depression, said she appreciates that ChatGPT has no judgmental bias. “I think the symptoms of depression are very stigmatized, and if you were to tell people what the reality of depression is like — skipping meals or skipping showers, for instance — people would judge you for that. I think in those instances, it’s easier to talk to someone who is not human because AI would never judge you for that.”

AI can provide a safe space for teens to be vulnerable at a point when the adults in their lives may not be supportive of mental health care.

That said, this is another area that is simply not well-studied at all (unlike social media and mental health, which now have tons of studies).

Hopefully, we can see some actual studies on whether or not AI can actually be helpful here. The article does note that there are some specialized apps focused on this market, but one would hope those would have some data to back up their approach. Relying on a general LLM like ChatGPT seems like… a much riskier proposition.

As one youth director in the article notes, one thing that using AI does for kids is that it puts them in control, at a time when they often feel they have control over so little. This brings us to yet another study that we’ve talked about in the past: one that suggests that another leading factor in mental health struggles for kids has been the lack of spaces where parents aren’t hovering over them and making all the decisions.

Given that, you can understand why kids might seek their own solutions. The lack of viable options that don’t involve, once again, having parents or other authority figures hovering over them certainly makes such solutions more appealing.

None of this is great, and (again) it would appear that any real solution should involve making mental health professionals more accessible to teens, such as in schools. But absent that, it’s understandable why they might turn to other types of tools. So, hopefully, there’s going to be a lot more research on how helpful (or unhelpful!) those tools actually are, or at least how to properly integrate them into a larger, more comprehensive approach to improving mental health.

Filed Under: control, mental health, moral panic, social media, teens

A Swiftian Solution To Some Of Copyright’s Problems

from the getting-past-the-gatekeepers dept

Copyright is generally understood to be for the benefit of two groups of people: creators and their audience. Given that modern copyright often acts against the interests of the general public – forbidding even the most innocuous sharing of copyright material online – copyright intermediaries such as publishers, recording companies and film studios typically place great emphasis on how copyright helps artists. As Walled Culture the book spells out in detail (digital versions available free), the facts show otherwise. It is extremely hard for creators in any field to make a decent living from their profession. Mostly, artists are obliged to supplement their income in other ways. In fact, copyright doesn’t even work well for the top artists, particularly in the music world. That’s shown by the experience of one of the biggest stars in the world of music, Taylor Swift, reported here by The Guardian:

Swift is nearing the end of her project to re-record her first six albums – the ones originally made for Big Machine Records – as a putsch to highlight her claim that the originals had been sold out from under her: creative and commercial revenge served up album by album. Her public fight for ownership carried over to her 2018 deal with Republic Records, part of Universal Music Group (UMG), where an immovable condition was her owning her future master recordings and licensing them to the label.

It seems incredible that an artist as successful as Swift should be forced to re-record some of her albums in order to regain full control over them – control she lost because of the way that copyright works, splitting copyright between the written song and its performance (the “master recording”). A Walled Culture post back in 2021 explained that record label contracts typically contain a clause in which the artist grants the label an exclusive and total license to the master.

Swift’s need to re-record her albums through a massive but ultimately rather pointless project is unfortunate. However, some good seems to be coming of Swift’s determination to control both aspects of her songs – the score and the performance – as other musicians, notably female artists, follow her example:

Olivia Rodrigo made ownership of her own masters a precondition of signing with Geffen Records (also part of UMG) in 2020, citing Swift as a direct inspiration. In 2022, Zara Larsson bought back her recorded music catalogue and set up her own label, Sommer House. And in November 2023, Dua Lipa acquired her publishing from TaP Music Publishing, a division of the management company she left in early 2022.

It’s a trend that has been gaining in importance in recent years, as more musicians realize that they have been exploited by recording companies through the use of copyright, and that they have the power to change that. The Guardian article points out an interesting reason why musicians have an option today that was not available to them in the past:

This recalibration of the rules of engagement between artists and labels is also a result of the democratisation of information about the byzantine world of music contract law. At the turn of the 2000s, music industry information was highly esoteric and typically confined to the pages of trade publications such as Billboard, Music Week and Music & Copyright, or the books of Donald S Passman. Today, industry issues are debated in mainstream media outlets and artists can use social media to air grievances or call out heinous deal terms.

Pervasive use of the Internet means that artists’ fans are more aware of how the recording industry works, and thus better able to adjust their purchasing habits to punish the bad behavior, and reward the good. One factor driving this is that musicians can communicate directly to their fans through social media and other platforms. They no longer need the marketing departments of big recording companies to do that, which means that the messages to fans are no longer sanitized or censored.

This is another great example of how today’s digital world makes the old business models of the copyright industry redundant and vulnerable. That’s great news, because it is a step on the path to realizing that creators – whatever their field – don’t need copyright to thrive, despite today’s dogma that they do. What they require is precisely what innovative artists like Taylor Swift have achieved – full control over all aspects of their own creations – coupled with the Internet’s direct channels to their fans that let them turn that into fair recompense for their hard work.

Follow me @glynmoody on Mastodon and on Bluesky. Originally published on Walled Culture.

Filed Under: control, dua lipa, music, olivia rodrigo, recording contracts, taylor swift, zara larsson

Publishing A Book Means No Longer Having Control Over How Others Feel About It, Or How They’re Inspired By It. And That Includes AI.

from the we-need-to-learn-to-let-go dept

There’s no way to write this article without some people yelling angrily at me, so I’m just going to highlight that point up front: many, many people are going to disagree with this article, and I’m going to get called all sorts of names. I actually avoided commenting on this topic because I wasn’t sure it was worth the hassle. But I do think it’s important to discuss and I’ve now had two separate conversations with authors saying they agree with my stance on this, but are afraid of saying so publicly.

I completely understand why some authors are extremely upset about finding out that their works were used to train AI. It feels wrong. It feels exploitive. (I do not understand their lawsuits, because I think they’re very much confused about how copyright law works.)

But, to me, many of the complaints about this amount to a similar discussion to ones we’ve had in the past, regarding what would happen if works were released without copyright and someone “bad” reused them. This sort of thought experiment is silly, because once a work is released and enters the messy real world, it’s entirely possible for things to happen that the original creator disagrees with or hates. Someone can interpret the work in ridiculous ways. Or it can inspire bad people to do bad things. Or any of a long list of other possibilities.

The original author has the right to speak up about the bad things, or to denounce the bad people, but the simple fact is that once you’ve released a work into the world, the original author no longer has control over how that work is used and interpreted by the world. Releasing a work into the world is an act of losing control over that work and what others can do in response to it. Or how or why others are inspired by it.

But, when it comes to the AI fights, many are insisting that they want to do exactly that around AI, and much of this came to a head recently when The Atlantic released a tool that allowed anyone to search to see which authors were included in the Books3 dataset (one of multiple collections of books that have been used to train AI). This led to a lot of people (both authors and non-authors) screaming about the evils of AI, and about how wrong it was that such books were included.

But, again, that’s the nature of releasing a work to the public. People read it. Machines might also read it. And they might use what they learn in that work to do something else. And you might like that and you might not, but it’s not really your call.

That’s why I was happy to see Ian Bogost publish an article explaining why he’s happy that his books were found in Books3, saying what those two other authors I spoke to wouldn’t say publicly. Ian is getting screamed at all over social media for this article, with most of it apparently based on the title and not on the substance. But it’s worth reading.

Whether or not Meta’s behavior amounts to infringement is a matter for the courts to decide. Permission is a different matter. One of the facts (and pleasures) of authorship is that one’s work will be used in unpredictable ways. The philosopher Jacques Derrida liked to talk about “dissemination,” which I take to mean that, like a plant releasing its seed, an author separates from their published work. Their readers (or viewers, or listeners) not only can but must make sense of that work in different contexts. A retiree cracks a Haruki Murakami novel recommended by a grandchild. A high-school kid skims Shakespeare for a class. My mother’s tree trimmer reads my book on play at her suggestion. A lack of permission underlies all of these uses, as it underlies influence in general: When successful, art exceeds its creator’s plans.

But internet culture recasts permission as a moral right. Many authors are online, and they can tell you if and when you’re wrong about their work. Also online are swarms of fans who will evangelize their received ideas of what a book, a movie, or an album really means and snuff out the “wrong” accounts. The Books3 imbroglio reflects the same impulse to believe that some interpretations of a work are out of bounds.

Perhaps Meta is an unappealing reader. Perhaps chopping prose into tokens is not how I would like to be read. But then, who am I to say what my work is good for, how it might benefit someone—even a near-trillion-dollar company? To bemoan this one unexpected use for my writing is to undermine all of the other unexpected uses for it. Speaking as a writer, that makes me feel bad.

More importantly, Bogost notes that the entire point of Books3 originally was to make sure that AI wasn’t just controlled by corporate juggernauts:

The Books3 database was itself uploaded in resistance to the corporate juggernauts. The person who first posted the repository has described it as the only way for open-source, grassroots AI projects to compete with huge commercial enterprises. He was trying to return some control of the future to ordinary people, including book authors. In the meantime, Meta contends that the next generation of its AI model—which may or may not still include Books3 in its training data—is “free for research and commercial use,” a statement that demands scrutiny but also complicates this saga. So does the fact that hours after The Atlantic published a search tool for Books3, one writer distributed a link that allows you to access the feature without subscribing to this magazine. In other words: a free way for people to be outraged about people getting writers’ work for free.

I’m not sure what I make of all this, as a citizen of the future no less than as a book author. Theft is an original sin of the internet. Sometimes we call it piracy (when software is uploaded to USENET, or books to Books3); other times it’s seen as innovation (when Google processed and indexed the entire internet without permission) or even liberation. AI merely iterates this ambiguity. I’m having trouble drawing any novel or definitive conclusions about the Books3 story based on the day-old knowledge that some of my writing, along with trillions more chunks of words from, perhaps, Amazon reviews and Reddit grouses, have made their way into an AI training set.

I get that it feels bad that your works are being used in ways you disapprove of, but that is the nature of releasing something into the world. And the underlying point of the Books3 database is to spread access to information to everyone. And that’s a good thing that should be supported, in the nature of folks like Aaron Swartz.

It’s the same reason why, even as lots of news sites are proactively blocking AI scanning bots, I’m actually hoping that more of them will scan and use Techdirt’s words to do more and to be better. The more information shared, the more we can do with it, and that’s a good thing.

I understand the underlying concerns, but that’s just part of what happens when you release a work to the world. Part of releasing something into the world is coming to terms with the fact that you no longer own how people will read it or be inspired by it, or what lessons they will take from it.

Filed Under: ai, authors, control, copyright, ian bogost, moral rights

What State Action Doctrine? Biden Administration Renews Push For Deal With TikTok, Where US Government Would Oversee Content Moderation On TikTok

from the that's-not-how-any-of-this-works dept

So, for all of the nonsense about what level of coercive power governments have over social media companies, it’s bizarre how little attention has been paid to the fact that TikTok is apparently proposing to give the US government control over its content moderation setup, and the US government is looking at it seriously.

As you likely know, there’s been an ongoing moral panic about TikTok in particular. The exceptionally popular social media app (that became popular long after we were assured that Facebook had such a monopoly on social media no new social media app could possibly gain traction) happens to be owned by a Chinese company, ByteDance, which has resulted in a series of concerns about the privacy risks of using the app. Some of those concerns are absolutely legitimate. But many of them are nonsense.

And, for basically all of the legitimate concerns, the proper response would be to pass a comprehensive federal data privacy law. But no one seems to have the appetite for that. You get more headlines and silly people on social media cheering you on by claiming you want to ban TikTok (this is a bipartisan moral panic).

Instead of recognizing all of this and doing the right thing after Trump’s failed attempt at banning TikTok, the Biden administration has… simply kept on trying to ban TikTok or force ByteDance to divest. That’s another repeat of a bad Trump idea, which ended not in divestiture, but in Trump getting his buddy Larry Ellison’s company, Oracle, a hosting deal for TikTok. And, of course, TikTok and Oracle now insist that Oracle is reviewing TikTok’s algorithms and content moderation practices.

But, moral panics are not about facts, but panics. So, the Biden administration did the same damn thing Trump did three years earlier, demanding that TikTok be fully separated from ByteDance, or else the company would be banned in the US. Apparently negotiations fell apart in the spring, hopefully because TikTok folks know full well that the government can’t just ban TikTok.

However, the Washington Post says that they’re back to negotiating (now that the Biden administration is mostly convinced a ban would be unconstitutional), and the focus is on a TikTok-proffered plan to… wait for it… outsource content moderation questions to the US government. This plan was first revealed in Forbes by one of the best reporters on this beat: Emily Baker-White (whom TikTok surveilled to try to find out where she got her stories from…). And it’s insane:

The draft agreement, as it was being negotiated at the time, would give government agencies like the DOJ or the DOD sweeping authority over the company:

The draft agreement would make TikTok’s U.S. operations subject to extensive supervision by an array of independent investigative bodies, including a third-party monitor, a third-party auditor, a cybersecurity auditor and a source code inspector. It would also force TikTok U.S. to exclude ByteDance leaders from certain security-related decision making, and instead rely on an executive security committee that would operate in secrecy from ByteDance. Members of this committee would be responsible first for protecting the national security of the United States, as defined by the Executive Branch, and only then for making the company money.

For all the (mostly misleading) talk of the US government having too much say in content moderation decisions, this move would literally put US government officials effectively in control of content moderation decisions for TikTok. Apparently the thinking is “welp, it’s better than the Chinese government.” But… that doesn’t mean it’s good. Or constitutional.

“If this agreement would give the U.S. government the power to dictate what content TikTok can or cannot carry, or how it makes those decisions, that would raise serious concerns about the government’s ability to censor or distort what people are saying or watching on TikTok,” Patrick Toomey, deputy director of the ACLU’s National Security Project, told Forbes.

The Washington Post has even more details, which don’t make it sound any better:

A subsidiary called TikTok U.S. Data Security, which would handle all of the app’s critical functions in the United States, including user data, engineering, security and content moderation, would be run by the CFIUS-approved board that would report solely to the federal government, not ByteDance.

CFIUS monitoring agencies, including the departments of Justice, Treasury and Defense, would have the right to access TikTok facilities at any time and overrule its policies or contracting decisions. CFIUS would also set the rules for all new company hires, including that they must be U.S. citizens, must consent to additional background checks and could be denied the job at any time.

All of the company’s internal changes to its source code and content-moderation playbook would be reported to the agencies on a routine basis, the proposal states, and the agencies could demand ByteDance “promptly alter” its source code to “ensure compliance” at any time. Source code sets the rules for a computer’s operation.

Honestly, what this reads as is the moral panic over China and TikTok so eating the brains of US officials that rather than saying “hey, we should have privacy laws that block this,” they thought instead “hey, that would be cool if we could just do all the things we accuse China of doing, but where we pull the strings.”

Now, yes, it’s true that an individual or private company can voluntarily choose to give up its constitutionally protected rights, but there is no indication that any of this is even remotely close to voluntary. If the 5th Circuit found that simply explaining what is misinformation about COVID was too coercive for social media companies to make moderation decisions over, then how is “take this deal or we’ll ban your app from the entire country” not similarly coercive?

Furthermore, it’s not just the rights of TikTok to consider here, but the millions of users on the platform, who have not agreed to give up their own 1st Amendment rights.

Indeed, I would think there’s a very, very high probability that if this deal were to be put in place, it would backfire spectacularly, because anyone who was moderated on TikTok and didn’t like it would actually have a totally legitimate 1st Amendment complaint that it was driven by the US government, and that TikTok was a state actor (because it totally would be under those conditions).

In other words, if the administration and TikTok actually consummated such a deal, the actual end result would be that TikTok would effectively no longer be able to do much content moderation at all, because it would only be able to take down content that was not 1st Amendment protected.

So, look, if we’re going to talk about US government influence over content moderation choices, why aren’t we talking much more about this?

Filed Under: 1st amendment, biden administration, cfius, china, content moderation, control, doj
Companies: bytedance, tiktok

Beyond Netflix And Chill: Gaining Control Of Our Digital Lives Via Data Portability

from the control-through-portability dept

Sometimes, the best ideas for blog topics (or anything, really) come over a good meal with an amiable companion, and a few glasses of wine. As one does after a few glasses, my husband and I randomly ended up on the topic of data privacy — specifically, an aspect of data privacy and rights that is frequently overlooked: data portability.

It all started with a rant.

My husband was relaying how our Netflix account had been suspended after I canceled the credit card it was funded by, and neither of us remembered to add a new card. He received a warning email, but hadn’t gotten around to correcting it — after all, it’s summer and a time to enjoy the long days — not binge-watch content. Then he got another email, informing him that if we didn’t pay up soon, our account data would be deleted in ten months.

That led to the following discussion:

Husband: I mean, ten months isn’t very long. What if I fell into a coma tomorrow and then woke up 11 months later? No more Netflix! No more recommendations.

Me: You could always download your data, of course.

Husband: But can I upload it back to Netflix? Or do I start from scratch?

At the time, I didn’t know, but it turns out that you can migrate or transfer your watch history, likes, etc. to a new account, likely a result of the company’s crackdown on sharing account information.

A Glaring Port(ability) Hole

Of course, the question of how we meaningfully reconstitute our digital lives is broader than Netflix recommendations. And while the “What happens if I fall into a coma/get trapped on an island/lose access for months” question is probably a bit far-fetched for most, it does raise a related (and more likely) question: how do we control and curate our digital identities online?

The law has contemplated this, at least in theory. For example, the EU General Data Protection Regulation (GDPR) and Digital Markets Act (DMA) both enshrine the concepts of ‘data portability’ and ‘interoperability’. These rights give people in the EU the ability to move or port their data from one service to another. The GDPR’s version applies a bit more broadly than the DMA (which is restricted to large, market-dominating tech companies, so-called ‘gatekeepers’), but the emphasis of both is to give individuals the power to move their identities and information freely between services without having to start over again.

Under Article 20 of the GDPR, the concept of data portability is pretty simple:

a) if your data lives on a computer or in a database somewhere; and

b) you provided the data directly (or through some automated means) to a controller (which can be an individual, company, or organization) who is doing stuff with (aka, ‘processing’) your data; and

c) the controller who’s doing stuff with your data is relying on your consent, or has a contract with you; then

you should be able to get your data back out again and move it to somewhere else (like a competing service).

Oh, and the output needs to be in a “structured, commonly used and machine-readable format,” which is law-speak for saying it should be easy to import into a database somewhere. Interoperability under the DMA is similar-ish, but is more focused on making it easy to, say, send a chat message or image using Signal to a friend who uses WhatsApp, or to migrate your friends and chats across those services.
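
To make “structured, commonly used and machine-readable” concrete, here is a minimal sketch of what a portable export could look like in practice. It is purely illustrative: the field names and schema are invented for this example, not taken from any real service’s export format.

```python
import json

# Hypothetical portable export. The schema is invented for illustration;
# real services (Netflix, Google Takeout, etc.) define their own formats.
export = {
    "subject": {"user_id": "u-12345", "email": "jane@example.com"},
    "watch_history": [
        {"title": "Some Documentary", "watched_at": "2024-03-01T20:15:00Z"},
        {"title": "A Baking Show", "watched_at": "2023-07-12T21:40:00Z"},
    ],
    "likes": ["Some Documentary"],
}

# JSON is structured, commonly used, and machine-readable, which is why
# it (along with CSV and XML) is the usual answer to Article 20's format rule.
with open("takeout.json", "w") as f:
    json.dump(export, f, indent=2)

# A competing service (or the user) can then re-import it without manual
# re-entry, which is the whole point of portability.
with open("takeout.json") as f:
    imported = json.load(f)
print(f"Imported {len(imported['watch_history'])} watch-history records")
```

Any format with those three properties would do; JSON is just the most common choice.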

Clearly, the drafters of the GDPR and DMA contemplated moving data between systems and services, but they overlooked another valuable opportunity: using data portability rights to migrate or curate data within systems.

On Castaways, Comas, and Curation

It would be nice to know that if I disappeared off the internet, got trapped on a desert island, or fell into a coma for a few years, that I could still recreate the digital life I had before. It would certainly be better from a privacy, security & compliance perspective to build in functionality that would allow me to easily re-import saved data, versus the default – losing it all after some fixed point in time, or storing everything forever. It’s certainly technically possible to build in importability – Netflix does it, and both Apple and Google do this easily enough every time we migrate phones or laptops.

But data portability can also be an important tool for curating our lives online. Few of us are static creatures. What we like and dislike may change over time. Just like hairstyles, careers, relationship statuses, and flirtations with fringe political movements, our personalities online change as we grow older, experience more of life, and evolve as people. Some of us go further, and even change our names, sexual orientations or genders. It makes a great deal of sense then to give people a mechanism to easily and selectively delete or modify records and account details that they feel no longer reflect who they are as people. The power to directly control our digital stories and lives online means that social media and publishing sites won’t continue to deadname people. It means that we can keep our online identities, favorite email addresses and social media profiles, without necessarily keeping every single awkward, painful, or regrettable memory exposed in a database somewhere. It means adults who made questionable life choices online as teenagers don’t need to live that down decades later.

Now, some of you might be thinking, “Uh, Carey, can’t you already do this via other data subject rights (like rectification/correction and deletion)?” And the answer is, of course you can. But rectification and deletion rights usually require the requester to first file an access request (to discover what information the controller has), or to already know what information they want to correct or delete ahead of time. I can’t remember what I ate for breakfast this morning. I certainly can’t remember enough details to inform Google about what restaurants I favorited in 2015 that I no longer care about, embarrassing tweets I posted on the Hellsite that I’d prefer to delete in batch, or that one time I ordered a sex toy on Amazon in 2005 (theoretically).
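
Portability could do that remembering for us. Building on the hypothetical export sketched above, a curation pass becomes a local filter over the exported file, with no access request or support ticket required. A minimal sketch, with an invented cutoff rule:

```python
import json
from datetime import datetime, timezone

# Load the hypothetical export from the earlier sketch.
with open("takeout.json") as f:
    data = json.load(f)

# Illustrative curation rule: drop watch-history records from before 2024,
# i.e., entries that no longer reflect who the user is today.
cutoff = datetime(2024, 1, 1, tzinfo=timezone.utc)
data["watch_history"] = [
    rec for rec in data["watch_history"]
    # fromisoformat() in older Pythons rejects a trailing "Z",
    # so normalize it to an explicit UTC offset first.
    if datetime.fromisoformat(rec["watched_at"].replace("Z", "+00:00")) >= cutoff
]

# Write the curated version back out, ready for selective re-import.
with open("takeout_curated.json", "w") as f:
    json.dump(data, f, indent=2)
```

The filter itself is trivial; the point is that the user never had to remember which records existed before deciding which ones to keep.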

Deletion and rectification are also heavily time-consuming, cross-organizational, manual affairs, usually done by the controllers directly. When we make a data subject request, we’re taking it on faith that the companies doing stuff with our data will actually and correctly do what we ask them to. Judging by the sheer number of fines, reprimands, and enforcement actions levied against those who don’t by various regulatory authorities, that faith and trust might be a bit misplaced.

Moving Towards Trust and Meaningful Control

Data curation-through-portability also means that the companies and organizations who benefit from our data will have accurate, meaningful, quality information about us. If I can tell Google that I really love cats, data protection, beer, and coffee, and have zero interest in that one ‘Love Island’ link I clicked on 3 years ago while drunk, it’s better for everyone. It means that Google only blasts ads and content that I find relevant and am more likely to click on. It means they gain insights about me as an evolving person, insights I don’t feel bad about them having because I’ve decided what they have a right to know. Or, as the Article 29 Working Party stated in 2017, “data portability […] represents an opportunity to ‘re-balance’ the relationship between data subjects and data controllers.”

Building data curation features into data portability/interoperability obligations also means that we as individuals move away from the binary position of either resigning ourselves to the fact that privacy is dead, or being hypervigilant and having no digital life whatsoever. With a ‘curation right’ incorporated into data portability, we might instead get to engage in conversations about trust, mutually-beneficial insights, & the value of selective and easily revocable sharing.

I honestly think that most companies would prefer to have accurate, relevant, and timely data about us, not everything about us. Based on my experiences as a data protection officer and consultant, I discovered after engaging with engineers, lawyers, and product teams that the problem usually isn’t one of greed – it’s one of fractal complexity. That is, it’s simply too difficult, time-consuming, costly, and error-prone to build systems and processes that sift out meaningful, relevant, useful insights about us while also purging the noise, so most companies default to keeping it all. As humans, we are pretty garbage at intuiting value from abstract data-at-scale, and loss-aversion is a thing. By comparison, storage is cheap, and it’s still probably less costly to hoard compared to accidentally deleting something of value.

The data-curation-through-portability approach also meaningfully gives control of data to individuals in a way that the traditional data ownership or self-sovereign identity models usually don’t. For those not familiar, these models presume that we should be able to monetize our personal data, by selectively selling access to companies willing to pay us for the privilege. The favored approach is for users to create an identity or identities, store those identities with one or multiple trusted third parties (or on the blockchain), and then consent to specific uses. So, instead of having a Facebook or Google profile, you’d have an identity managed and verified by someone else, that you effectively lease to Google or Facebook. You’d control at any given time what details you share, and have the ability to revoke those details at any time.

The problem with most data ownership models I’ve seen in practice is that they’re either highly theoretical, or don’t scale. Data ownership is difficult to execute and most of us can’t be bothered to put in the work for what at best would be a few cents here and there.

In short, data portability and interoperability shouldn’t only be about sharing between platforms — it can also be a powerful tool for individuals to take meaningful control over their data, build trust with the platforms we use daily, and to move away from an all-or-nothing approach to data. To make this real, we’ll need collaboration, legislative and cultural shifts, and probably new discussions about privacy and intellectual property. We’ll also need a lot of technologists in the loop – something that regulators and legislators often miss when drafting new laws and guidance. That last bit is important: None of this will work if we don’t talk to the implementers and folks who understand how these complex systems work.

A version of this article first appeared in July on my Substack.

Carey Lening is a writer and consultant based in Ireland. Follow her on Bluesky or Mastodon.

Filed Under: control, data, data portability, gdpr, interoperability, user control

Try Fedi Friday: Just One Day A Week, Experiment With Alternative Social Media

from the or-maybe-it's-fuck-off-friday dept

It’s not at all surprising that tons of people, including journalists, are sticking around Twitter even if they shouldn’t. Part of it is inertia. People were settled into what worked before, and change is difficult. Partly because of that, people are loath to switch. Even those who have switched over to alternatives like Mastodon in the Fediverse find it difficult to do so. There’s a bit of a chicken-and-egg problem in which, when people first sign up, it feels “empty” because there’s no algorithm pumping their feed full of content (though I’ve found Mastodon to be quite engaging, to an almost overwhelming degree; I can barely keep up). You have to do a little bit of work, and that can feel like a lot.

But still.

There are so, so, so many reasons to not think this is a good state of affairs. The events of the last few years should demonstrate why relying on any centralized social media is inherently risky. This goes beyond just Twitter, but Elon has been turning that site into a ridiculous plaything in which he makes decisions based on which of his dumbest (but most loyal) fans he thinks will get the biggest kick out of them, rather than any sense of what’s best for the site’s users.

Last week, the pseudonymous Chance from the Chancery Daily publication suggested that we start embracing a concept of “Fedi Friday,” in which even people who feel that they’re going to stick around on Twitter for a while at least just spend one day a week exploring alternative social media, just so they have a general knowledge of it, and experience with it, in case they’re targeted in the next “look at me, I’m in charge now” purge from an insecure, whiny billionaire.

Seeing how Elon has handled the whole NPR situation should be instructive. His pettiness in the whole thing, including yesterday tweeting “defund NPR” should highlight why relying on Twitter is dangerous.

And, even if you think you support and agree with Musk, he’s shown little to no problem with stabbing his supporters in the back the second they push back even the slightest bit. He’ll even publish their private communications just to win a slap fight. So even if you think that Musk is magically “saving” Twitter, it still makes sense to find a space that isn’t controlled by him.

You don’t have to commit to leaving Twitter. You just need to spend a little time each week testing out the alternatives, of which there are many. The ActivityPub-based “fediverse” is much vaster than people realize, going beyond just Mastodon (though they all interact in some ways). Larger companies such as Medium, Mozilla, and Flipboard are all embracing ActivityPub in one way or another, and others are poking around the edges as well.

There are, of course, a variety of other, centralized platforms, and you can test them out as well, but all of those run the same risk of what’s happened with Twitter: they can be run by a thin-skinned, whiny, out-of-touch billionaire with the maturity of a 15-year-old and the vindictiveness of a pre-school child who has had his ball taken away.

There are some other decentralized platforms worth checking out as well. Nostr is an incredibly simple and lightweight decentralized protocol that keeps improving. Bluesky, which was initially funded by Jack Dorsey to create an independent decentralized protocol that Twitter could adopt, is now in beta with its own AT Protocol. Both are decentralized and worth exploring, though not as widely adopted as the larger Fediverse.

If some of the specifics of Mastodon trouble you, you can look at some various ActivityPub-compatible forks like Calckey or Qoto that include many of the features that people sometimes feel are lacking from vanilla Mastodon (like quote tweets).

There is no one right way to do things. The point is that rather than settling for continuing to feed into a system you know is bad and problematic, at least spend some time on just one day a week (why not Friday?) to explore the alternatives. Spend a bit of time finding more active accounts to follow, interacting with some of the many people who use these services, and just preparing yourself for the future, rather than pretending there’s nothing to do but be the plaything for a childish billionaire who delights in making you suffer, so long as it pleases his fans.

Filed Under: activitypub, at protocol, centralized, control, decentralized, experiment, fedi friday, fuck off friday, social media
Companies: twitter

Journalists (And Others) Should Leave Twitter. Here’s How They Can Get Started

from the time-to-go dept

Summary: Elon Musk has demonstrated contempt for free speech in general, and journalism in particular, with his behavior at Twitter. He is also demonstrating why it is foolhardy for anyone to rely on centralized platforms to create and distribute vital information. Journalists — among many information providers and users — should move to decentralized systems where they have control of what they say and how they distribute it. And philanthropic organizations have a major role to play. Here is a way forward.


Near the end of 2022, Elon Musk issued an edict to the journalism community. Obey me, he said, or you will be banned from posting on Twitter.

This should have been a pivotal moment in media history — an inflection point when journalists realized how dangerous it is to put their fates in the hands of people who claim to revere free speech but use their power to control it. It should have been the moment when media companies decided to take back control of their social media presence.

A few journalists — principally the ones whose Twitter accounts were suspended or otherwise restricted — understood the threat. And several of the Big Journalism news organizations issued (feeble) protests.

Beyond that, thanks to a combination of journalistic cowardice, inertia and calculation, business as usual prevailed. The journalists whose accounts were fully restored are back to tweeting, though some remain banned and/or restricted. Their organizations never stopped using the platform even when their employees were being restricted.

Based on current evidence, then, Musk has won this battle: except for a few individuals, Big Journalism has acceded to his edicts.

Many journalism organizations and public entities, such as local governments, believe Twitter is essential because it’s a place people know they can turn to when there’s big news — and find information from “verified accounts” that (barring a hack) ensure the source is who it’s claiming to be. So, they tell themselves, they have to stick around. This isn’t just short-sighted. It’s foolish.

Musk’s antics could easily lead to the worst of all worlds for anyone who’s come to rely on Twitter distribution. If you have the slightest concern for the future of freedom of expression, he’s already shown his hypocrisy, such as his capricious decision (since rescinded, at least for now) to block even links to some competing social media services. And advertisers have appropriately fled a service where right-wing extremism has been given a major boost; where mass firings of key employees threaten the site’s technical stability; and where at least some formerly avid users (like me) have moved on.

It will be ironic, to put it mildly, if Twitter disintegrates despite journalists’ refusal to exercise their own free expression rights — forcing a mass, chaotic migration rather than the obviously better answer: Develop a Plan B, and use it as an escape hatch sooner than later.

All of which is why I implore the journalists and journalism organizations, above all at this crucial point, to rethink what they’re doing — and move starting today to reclaim independence. I also ask well-resourced outsiders to help make this happen, especially when it comes to the many journalists and news organizations that lack the bandwidth or money to do this themselves.

Even a “Good” Twitter is Risky

Suppose, against all odds, that Twitter somehow survives Musk’s predations and becomes a clean, well-lit place for respectful discourse. The risks don’t disappear. They’ll only grow. And they’ve been apparent for years.

The risks are endemic to the mega-corporate, scalable-or-nothing, highly centralized version of the Internet that has emerged in recent years, and we need to keep them in the forefront. At the top of the list: Any centralized platform is subject to the whims of the person or people who control it. This isn’t news to those who’ve been paying attention, and some of them have been warning about the dangers for years. In a way, Musk has done us a favor by making it crystal clear.

Mike Masnick, who’s been on the case for a long time now, recently spelled out in chapter and verse why it’s crazy to rely on centralized platforms. He looked at the current alternatives, with a major focus on “federated” systems like Mastodon, where many people and organizations can run servers that talk with each other — and, this is key, where users can’t be locked in. In the “fediverse,” we users can’t be controlled because we can move, anytime we wish, to a different server — and take our relationships with us.

I joined a Mastodon server (called an “instance” in Mastodon jargon) called “mastodon.social” — you can find me there at this URL: https://mastodon.social/@dangillmor — and my full username is @dangillmor@mastodon.social if you’re already in the broader community. Others in the journalism world have signed onto instances such as “journa.host” and “newsie.social” — and there are many, many more.

It’s way too early to know whether Mastodon and its underpinning, a protocol (technical rules of the road) called ActivityPub, are the ultimate way forward. There are risks with any online system, and the Mastodon community will face their share. But I’ve been impressed with the resilience Mastodon has demonstrated already, as Mike Masnick’s thorough analysis highlights.

But at least two things are clear. The risks of giving up your autonomy to billionaire sociopaths are in your face at this point. And it is not remotely too early for those who rely on Twitter to find alternatives they control.

How Journalism’s Migration Should Proceed

Habits are tough to break. Inertia is one of the most powerful barriers to progress. But we can, and we should, realize that making this transition is well worth the time and effort. The rewards will be so great that we’ll wonder one day how we could have gotten ourselves into a situation that required such a shift.

Fear of the unfamiliar feeds inertia. And Mastodon is — emphatically — not a clone of Twitter. It has some flaws, from my perspective, that I trust will be addressed sooner rather than later; and in part because it’s based on open-source software, the pace of improvement already looks spectacular to me.

Journalists have to overcome their own trepidation. If they give it some time, not a lot by any means, they’ll be more than comfortable enough that the somewhat understandable early-days urge to retreat back into the Twitter comfort zone will go away.

In other words, please just get on with it. You’ll be fine. Here’s a basic plan of action:

First: Organizations with sufficient financial and technical resources should create their own Mastodon instances. At the same time, smaller journalism organizations — especially the rapidly expanding collection of non-profit sites — should set up a co-operative network. (More on this below).

Second: Verify the identities and bona fides of their journalists. One of Musk's more ill-considered interventions at Twitter, turning the verified-user system into a giant swamp, made Mastodon an even more obvious refuge. I won't get into details here, and maybe I'm missing something important, but it appears to be trivially easy to verify Mastodon accounts by connecting accounts on Mastodon servers to the news organization's existing web presence; a sketch of how that works follows this list.

Third: For the time being, keep posting to Twitter (and the rest of the organization's social media accounts). But journalists should be actively using those accounts to let their audiences know that the best places to find rapid-response posts also include their Mastodon accounts and, of course, their own websites, and that, someday in the near future, the Twitter feed will be replaced by Mastodon.

Fourth: Set a date for the cut-over, ideally in collaboration with peers, no longer than six months from now. And when the day arrives, do it.
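As promised, here is what the verification step can look like in practice. Mastodon supports link verification: if a profile's metadata links to a web page, and that page links back to the profile with a rel="me" attribute, the profile link gets marked as verified. The sketch below is illustrative rather than Mastodon's actual code; the URLs in the example are hypothetical, and it assumes the third-party requests library.

    import re
    import requests  # third-party: pip install requests

    def has_me_backlink(page_url, profile_url):
        """Return True if page_url contains a rel="me" link back to
        profile_url. This mirrors the idea behind Mastodon's profile
        link verification: the news organization's own site vouches
        for the journalist's account."""
        html = requests.get(page_url, timeout=10).text
        # Crude scan for <a> or <link> tags; a real implementation
        # would use a proper HTML parser.
        for tag in re.findall(r"<(?:a|link)\b[^>]*>", html, flags=re.I):
            if re.search(r'rel\s*=\s*["\'][^"\']*\bme\b', tag) and profile_url in tag:
                return True
        return False

    # Hypothetical example: a staff bio page vouching for a reporter.
    print(has_me_backlink(
        "https://example-news.org/staff/jane-reporter",
        "https://newsie.social/@janereporter",
    ))

For an organization running its own instance, the loop gets even tighter: an account living at the newsroom's own domain vouches for itself in a way no purchased blue checkmark ever did.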

Completing those four steps isn't the end of the process, of course. Journalists will need to help their audiences use Mastodon, just as they themselves learned. Again, there is a learning curve, and Mastodon isn't the same as Twitter, but it won't take long for people to adjust. (Let's be real: if people can use their computers after a massive operating system "upgrade," Mastodon will be a snap.)

The word “collaboration” is key here. This is a job for the entire journalism craft/industry, which created critical mass on Twitter over the years by just showing up randomly and, at a certain point, turning the site into something resembling a central nervous system of news.

De-emphasizing Twitter, and ultimately leaving it, needs more organization. Musk is surely counting on media companies to stick around on the principle that, well, there’s no other game in town with the same critical mass. This is no longer true, if it ever was. (I’ve been told by news people that Facebook, in the days when it actively promoted news in users’ feeds, drove vastly more traffic to their sites than Twitter ever has.)

Collaboration in journalism is growing, but it should become second nature. If ever there was a time to get together and take back control of the craft's work, it is now. Not collaborating will give Musk and people like him leverage to divide and conquer. Even if making this transition were difficult, and it isn't, the alternative is ceding control to sociopaths.

A Major Opportunity for Philanthropic Investment

I shouldn't have to say this, but the leadership for a migration off Twitter to Mastodon should come from the organizations whose editors complained about Musk's treatment of their journalists. It's pathetic that most of them offered words but never followed through with action.

So who will? The obvious candidates are major philanthropic foundations and civic-minded wealthy individuals.

Last month, I sent a note to people I know at several philanthropies. I wrote, in part:

This would be the perfect time to fund what could easily be a self-sustaining cooperative that sets up and operates Mastodon “instances” (servers) on behalf of journalism organizations that could verify their own journalists. That would solve a lot of problems, and restore (some) genuine independence to the craft at a time when capricious media owners like Musk are challenging it.

We can debate whether the co-op business model is best-suited for a project like this, though I believe it’s ideal. What we shouldn’t debate is whether journalism and its defenders need to move, right away, to deal with an immediate problem in a way that would have major long-term benefits. Helping journalism regain the control it misguidedly gave away — and do it in a way that increases the supply of easy-to-find information that benefits the public — is plainly beneficial for everyone but media monopolists and misinformation purveyors.

Foundations, please step up now, while people still understand the need — and before journalists, whose attention spans are notoriously short, settle back into their short-sighted patterns.

Critical Masses

As noted earlier in this piece, it isn't just journalists who've come to rely on Twitter. Birds of a feather on various social and professional topics have flocked together there. We all need to help ensure that "Black Twitter," "Science Twitter," and so many more have a way forward, too. They have become a vital source of information, both within their own ranks and for the wider public (or at least the relatively small part of the public that uses Twitter). As Bloomberg's Lisa Jarvis wrote recently, "Science Twitter needs a new home."

Meanwhile, countless government agencies also use the birdsite as a vehicle for messaging of all kinds. In situations where people need vital news, such as forest fires and storms, Twitter has become one of the default places to check.

They, too, can and should migrate to services like Mastodon. They should plan collaboratively to cut over to their own verified instances, in an orderly way that gives their constituents notice and time to get adjusted to the new system.

The federal government could lead, given its greater resources, but it will take a lot of work by a lot of people to get smaller governments and agencies to give up the "free" services they now rely on and migrate to platforms that have setup costs, however modest, and maintenance requirements.

Which makes the need for collaboration just as great, if not more so, when it comes to public agencies. Happily, governments at all levels have associations which they sometimes call “conferences” — such as the National Conference of State Legislatures — that might be appropriate organizers and hosts of collaborative Mastodon instances.

Philanthropies, especially community foundations, could play a vital role in re-creating critical masses beyond journalism. It would be a dazzling display of civic spirit, for example, if the Silicon Valley Community Foundation funded Mastodon installations for local governments and agencies. And what if the American Association for the Advancement of Science offered support for moving that community, and its burgeoning audience, from the risky, centralized Twitter to more decentralized environs?

Get Started, Soon

The best time for journalists and others to have recognized the threat of centralized systems run by unreliable, untrustworthy dictators would have been years ago. The next best time is tomorrow.

Filed Under: control, elon musk, free speech, journalists, mastodon, social media
Companies: twitter

Politicians Whining About Censorship Are All Just Trying To Dictate The Terms Of Debate

from the just-knock-it-off-already dept

So, we just had a post mocking the Democrats for whining about Hulu refusing their issue ads and falsely calling it "censorship." And now we have Republicans issuing a blustery bullshit threat letter warning Google not to limit searches for sketchy fake abortion centers.

If you're unaware, malicious anti-abortion folks have set up fake abortion centers, which they call "crisis pregnancy centers." These masquerade as actual abortion providers but exist only to lie to vulnerable patients about their options and push them to give birth. Last month, Democrats (again, deeply questionably) told Google that it should demote search results pointing to these misleading centers when people search for abortions. As I've argued for years, politicians have no business trying to dictate anything about search results or content moderation. Coming from politicians, there is always an implied threat: if the search results don't come out the way the politicians want, they may take action in the form of legislation.

And now a bunch of Republican Attorneys General have sent this ridiculous threat letter to Google with the opposite type of threat, saying that they will take action if Google does limit the search results pointing to these centers. The letter is hilarious in that it whines about politicians seeking "to wield Google's immense market power by pressuring the company to discriminate against pro-life crisis pregnancy centers in Google search results…" when these Republicans are doing the exact same thing, just in the other direction.

Unfortunately, several national politicians now seek to wield Google’s immense market power by pressuring the company to discriminate against pro-life crisis pregnancy centers in Google search results, in online advertising, and in its other products, such as Google Maps. As the chief legal officers of our respective States, we the undersigned Attorneys General are extremely troubled by this gallingly un-American political pressure. We wish to make this very clear to Google and the other market participants that it dwarfs: If you fail to resist this political pressure, we will act swiftly to protect American consumers from this dangerous axis of corporate and government power.

Note that the Republicans' letter is much more explicit in its threat (and it comes from Attorneys General, actual law enforcement officials, rather than elected legislators, who have much less power to act on their own).

The letter is chock full of nonsense.

Complying with these demands would constitute a grave assault on the principle of free speech. “Unbiased access to information,” while no longer a component of Google’s corporate creed, is still what Americans expect from your company.

That's bullshit. It's a search engine. The entire point is bias. It is literally ranking search results to bring up the most relevant ones, and that, inherently, means bias. The attacks on free speech come not from Google trying to serve up more relevant search results, but from politicians of both parties sending competing threat letters to pressure Google into showing their own preferred results.
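To see why, consider what any search engine must do at minimum. The toy Python scorer below is a sketch of the general idea, not anything resembling Google's actual system; it ranks pages by how often query terms appear, and even at this trivial scale, every choice about tokenizing, weighting, and tie-breaking is a judgment call, which is to say, a bias.

    from collections import Counter

    def score(query, doc):
        """Count how often the query's terms appear in the document.
        Tokenization, weighting, tie-breaking: every line here encodes
        a judgment about what "relevant" means."""
        counts = Counter(doc.lower().split())
        return sum(counts[term] for term in query.lower().split())

    docs = [
        "licensed clinic providing abortion services and counseling",
        "pregnancy support center offering pregnancy counseling",
    ]
    query = "abortion services"

    # Ranking means ordering: some result has to come first.
    for doc in sorted(docs, key=lambda d: score(query, d), reverse=True):
        print(score(query, doc), doc)

Demoting a misleading result is just a change to that scoring function; so is refusing to demote it. Either way, someone decides what comes first, and both of these letters are demands to be that someone.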

This is what people are talking about when they say that all this politician jawboning and grandstanding is “working the refs.” As we noted last year, the bipartisan attacks on the internet are really all about trying to control the flow of information in their favor, and leaning on powerful companies to try to get their own side more prominence.

And, of course, Google itself has contributed to this somewhat. For years it took a completely hands-off approach to directly modifying its search results, insisting that the algorithm returned what the algorithm returned. Yet almost exactly a decade ago, we noted that, for the first time, Google was caving to outside pressure to modify its search results when it promised the MPAA that it would start demoting websites based on DMCA notices.

We warned that this would open the floodgates to others pressuring Google to demote sites they disliked, and now it's reaching ever more ridiculous levels, with politicians of both major parties screaming "take it down" from one side and "leave it up" from the other, each threatening some form of punitive action if they're not obeyed.

All of this is dangerous. All of this is government interfering with the 1st Amendment rights of sites to display information, content, and expression as they see fit. Both the Democrats and Republicans need to stop this ridiculousness. Stop trying to dictate how sites operate.

Filed Under: 1st amendment, content moderation, control, crisis pregnancy centers, democrats, jawboning, republicans, search, search rankings, working the refs
Companies: google