Semafor Keeps Hosting “Restoring Trust In News” Events Stocked With Known Right Wing Bullshitters
from the can't-fix-a-problem-you-don't-understand dept
Back in 2022, former New York Times reporter Ben Smith and friends launched a new media company named Semafor on the back of $25 million in donations. You might recall that one of the organization’s launch events didn’t go particularly well: a “trust in news” event that somehow didn’t see the problem with platforming and amplifying millionaire propagandist Tucker Carlson as a respectable voice in media.
At the event, Smith introduced Carlson as a serious journalist (this was before he got fired from Fox for being terrible), then conducted a softball interview where Smith failed to press Carlson on any issue of substance. All the while, Semafor proudly proclaimed it was a new kind of news company that would restore Americans’ trust in news, despite not really demonstrating any capacity to actually do that.
Three years later, it’s not particularly clear Ben Smith or Semafor learned much of anything from the criticism they received for spearheading a “trust in news” event with a right wing propagandist.
The company recently announced a new February 27 event, entitled “Innovating To Restore Trust In News.” The event is very short on any of the independent voices actually innovating in news right now, and mostly features giant companies like Fox News, the New York Times, Comcast/NBC, and CNN — most of which have been broadly criticized for softball election coverage and normalizing authoritarianism.
CNN, under Warner Bros. Discovery CEO David Zaslav, has responded to authoritarianism by shifting the channel’s already corporatist, centrist coverage even further rightward. The New York Times has been aggressively lambasted for “both sides” coverage that loses the truth in a quest for false objectivity. And Fox News is, by any measure, the most successful right wing propaganda effort ever constructed.
As media critic Parker Molloy has long noted, the idea that the U.S. press suffered a “liberal bias” has long been a lie. Most consolidated, billionaire-owned U.S. media giants veer corporatist and center-right, and a long list of outlets (CBS, NPR, CNN, The Los Angeles Times, The Washington Post) have responded to authoritarianism by softening their criticism of far-right ideology and hiring more Republicans.
The reason Americans are losing their trust in news is because U.S. media has been heavily consolidated into the hands of the rich and is increasingly incapable of maintaining any real relationship with the truth. Instead, we get a lot of ad-engagement-chasing, feckless infotainment.
If you look around at journalism in this dangerous moment, most of the best, most innovative journalism is being performed by smaller companies or individuals. Wired has been beating major outlets to the punch on their coverage of Musk’s corrupt disemboweling of governance. Ditto for independent journalist Marisa Kabas, who has been out-scooping outlets with far greater staff and resources.
There are a ton of great worker-owned outlets like 404 Media or Defector or Aftermath that have emerged from the carnage of a U.S. media sector gutted by the extraction class and a rotating crop of fail-upward brunchlords. They’re all staffed with shell-shocked young journalists keen to do journalism, who have been fired a dozen times by terrible, incompetent media executives never held accountable.
Semafor didn’t think any of these folks were worth featuring as innovators. The closest the “innovation” symposium gets to an innovator is Mehdi Hasan, who recently launched a new news organization on the back of Substack despite the platform’s nazi problem. Semafor did, however, think it was important to feature Fox News’ Bret Baier and Megyn Kelly, both long maligned for oodles of right wing bullshit.
I recently wrote a long column for Dame Magazine on the deadly rise of right wing propaganda, its symbiosis with corporatist media, and the way this charade is all aided by outlets like Semafor that are seemingly incapable of calling a duck a duck.
Semafor, like countless other DC gossip mags (Axios, Politico), is the poster child for “he said, she said” journalism that’s generally too afraid of upsetting sources, ad-clicking readers, or affluent ownership to get anywhere near the truth. This kind of “view from nowhere” journalism prioritizes maximizing engagement over helping readers genuinely understand what’s actually happening.
If you candidly acknowledge right wing propagandists and bullshitters as right wing propagandists and bullshitters, you risk alienating DC sources, a big segment of your readership, and the right wing rich brunchlord that likely owns the company. You lose money. So instead you get a sort of pseudo-journalistic mush, which looks like journalism but is routinely incapable of speaking truth to power.
That’s not to say these kinds of outlets can’t sometimes do good journalism. But more often than not they’re conditioned to be fecklessly truth-averse, something long exploited by authoritarians and their armies of disinfo merchants. With unelected fascist oligarchs now dismantling whatever’s left of effective U.S. governance to deadly effect, the hour is getting late for this kind of weak-kneed bullshit.
Filed Under: consolidation, disinformation, journalism, media, propaganda, reporting, trust
Companies: semafor
LA Times Owner To Personally Review Opinion Headlines To Avoid Offending Elon Musk
from the the-speech-gatekeepers dept
Billionaire media owners are getting bolder in their attempts to suppress opinions they dislike. The latest example comes from LA Times owner Patrick Soon-Shiong, who reportedly plans to personally review all opinion headlines to ensure they don’t offend Elon Musk or others that Soon-Shiong is trying to impress.
Considering the billionaire class keeps pretending that it’s concerned about free speech, doing shit like this suggests that what it actually dislikes is speech criticizing billionaires themselves and their levels of corruption and obscene wealth.
In October, we covered the decisions by the billionaire owners of the LA Times and the Washington Post to block both papers from running editorials endorsing Kamala Harris. WaPo owner Jeff Bezos tried to defend this decision by claiming that it was in response to Americans losing trust in the media. However, as we pointed out, it seemed a lot more likely that billionaires aggressively trying to control our lives through the media is a better explanation for that loss of trust.
We had noted that LA Times owner, Patrick Soon-Shiong, had tried to get a cabinet position in the first Trump administration, which might have explained his reasoning. And now it appears he’s taken things to a new level. According to Oliver Darcy (who is very plugged in to the media world), Soon-Shiong got so upset about an opinion piece that criticized Elon Musk that he has declared that all headlines for opinion pieces must be run through him personally before the pieces can be run. This has alarmed the newsroom.
As Darcy notes, this all seems part of a pattern of Soon-Shiong trying to ingratiate himself into Trump/Musk circles:
Soon-Shiong, who once fashioned himself as a Black Lives Matter-supporting vaccine proponent, has morphed into a Robert F. Kennedy Jr. and Jennings fanboy. Since Trump’s victory in November, Soon-Shiong has turned to X to criticize the news media, praise Trump’s cabinet picks, and appeal to a MAGA audience. The change in behavior has confounded his journalists, who wonder what happened to the Soon-Shiong whose newspaper enforced strict Covid restrictions and emphasized its support for social justice causes.
But Soon-Shiong’s efforts to reduce the appearance of bias go beyond just reviewing headlines. Darcy notes that Soon-Shiong’s supposed “solution” to complaints of “biased news” is that all stories published by the LA Times will have an AI-powered “bias meter” on them.
This seems like a deeply misguided idea, designed mainly to placate critics who see any coverage they dislike as biased, rather than to do anything useful. Especially in an age where it has been shown repeatedly, and scientifically, that the Republican world is buried in layers upon layers of bullshit. That means, as Stephen Colbert once noted, “reality has a liberal bias.” In such a world, reporting accurately may be deemed “biased” towards liberal beliefs.
When one political party is increasingly untethered from facts, attempting to artificially “balance” coverage is not actually going to improve trust or accuracy. It’s just going to cause people to dig in further on their beliefs across the board, based on AI systems whose entire purpose is to make shit up.
This kind of meddling by billionaire owners will only further erode public trust in media. It sends the message that news outlets serve the interests of the wealthy and powerful rather than the public. And in an era of rampant misinformation, it will drive more people to retreat into confirmation bias bubbles, explaining away any news they dislike as “biased.”
We like to think that a big part of the media’s role in society is to hold the powerful accountable. But now the powerful are transforming the media to make sure they can’t be held accountable at all.
Yes, of course, as the owner of the newspaper, Soon-Shiong has every right to do whatever stupid shit he wants to do with it. He has the right to do all of this. Just as I have the right to call out how stupid and trust-destroying it is.
And, as Darcy notes, it’s demoralizing the reporters in his newsroom:
The meddling has alarmed staffers, some of whom now harbor concerns that the billionaire presents an active danger to the paper they once believed he might help rescue.
“The man who was supposed to be our savior has turned into what now feels like the biggest internal threat to the paper,” one staffer confided in me Wednesday, speaking on the condition of anonymity, like others, because they were not authorized to talk to the press.
This story is based on nearly a dozen conversations over the last week with current and former staffers at the newspaper. The staffers described a publication depleted of its spirit in which employees are “offended,” “confused,” and “frustrated.” After a year of turbulence in which the Times underwent painful layoffs and lost top editor Kevin Merida, along with several other high-ranking editorial leaders, staffers are now coming to terms with their rule-by-tweet owner using the newspaper as an apparent vehicle to appeal to Donald Trump. Several veteran staffers told me that morale has never been lower, with some people even wondering whether the newspaper will be disfigured beyond recognition under this new era of Soon-Shiong’s reign.
It’s difficult to see how any of this is useful as journalism. It seems entirely based around massaging the egos of certain billionaires.
Filed Under: ai, bias meter, editorial, elon musk, journalism, patrick soon-shiong, trust
Companies: la times
Dear Jeff Bezos: The ‘Hard Truth’ Is That Cowardice Like Yours Is Why People Don’t Trust The Media
from the does-being-a-billionaire-kill-brain-cells? dept
Hey Jeff,
Since I know you’ll never actually read this, I figured the best way to do this was to set it up as an open letter. One that you should read, but never will.
It appears that your stupendously cowardly decision to block the Washington Post from publishing its planned endorsement of Kamala Harris just days before the election is not working out quite the way you hoped. While it’s pretty common for people to claim they’re canceling a subscription whenever a newspaper does something bad, this time it appears they are actually doing so. In droves. Oops!
Reports note that over 200,000 subscriptions have been cancelled, or around 8% of your subscribers, since the news came out on Friday (coming via the publisher rather than you directly). And it sounds like more cancellations are coming in:
More than 200,000 people had canceled their digital subscriptions by midday Monday, according to two people at the paper with knowledge of internal matters. Not all cancellations take effect immediately. Still, the figure represents about 8% of the paper’s paid circulation of roughly 2.5 million subscribers, which includes print as well. The number of cancellations continued to grow Monday afternoon.
Last night, I saw you took to the pages of the newspaper whose credibility you just destroyed to give a sanitized explanation for this decision.
All I can say is, Jeff, fire whichever lackey wrote this. They’re terrible.
Let’s be clear: there are plenty of good reasons not to do endorsements. At Techdirt, we don’t do endorsements. There’s no requirement to do endorsements. And, honestly, in many cases, endorsements for things like President are kinda silly. I get that part.
But this isn’t actually about the decision not to publish an endorsement. The real issue is you stepping in as owner to block the endorsement at the perfect time to show that you capitulated in advance to an authoritarian bully who has attacked your business interests in the past and has indicated he has a plan to exact revenge on all who wronged him.
The principled response to such threats is to continue doing good journalism and not back down. The cowardly shit is to suddenly come up with an excuse for not publishing an endorsement that had already been planned.
Your explanation gets everything backwards.
In the annual public surveys about trust and reputation, journalists and the media have regularly fallen near the very bottom, often just above Congress. But in this year’s Gallup poll, we have managed to fall below Congress. Our profession is now the least trusted of all. Something we are doing is clearly not working.
It’s true. The mainstream media is not trusted. You want to know why? Because time and time again the media shows that it is unfit to cover the world we live in. It pulls punches. It equivocates. It sanewashes authoritarian madness. All of that burns trust.
As does a billionaire owner stepping in to block an already written opinion piece.
That is why people are canceling. You just destroyed their trust.
This is particularly stupid at this moment because trust is at an all-time low, as you note. But the ones who already trust the Washington Post to tell them what’s up in this moment of uncertainty are subscribers to your newspaper. And they’re now leaving in droves.
Because you destroyed their trust.
It’s one thing to have never won people’s trust. It’s another to destroy trust that people already had in the Washington Post.
One reason why credibility is so low is because it’s believed that the wealthy elite billionaires “control” the news and push their personal beliefs. Jeff, you know what helps reinforce that belief? You, the billionaire, elite owner of the Washington Post, stepping in to overrule your editorial team on a political endorsement in a manner that suggests that you wish to put your thumb on the scale in order to maintain more control.
Then your piece gets worse.
Let me give an analogy. Voting machines must meet two requirements. They must count the vote accurately, and people must believe they count the vote accurately. The second requirement is distinct from and just as important as the first.
Likewise with newspapers. We must be accurate, and we must be believed to be accurate. It’s a bitter pill to swallow, but we are failing on the second requirement. Most people believe the media is biased. Anyone who doesn’t see this is paying scant attention to reality, and those who fight reality lose. Reality is an undefeated champion. It would be easy to blame others for our long and continuing fall in credibility (and, therefore, decline in impact), but a victim mentality will not help. Complaining is not a strategy. We must work harder to control what we can control to increase our credibility.
This is exactly correct in isolation. Of course newspapers must increase their credibility.
You know how a newspaper does that? By not having its billionaire owner step in and tell its editorial team not to publish an endorsement days before an election in a manner that makes it look like you’re willing to interfere in their editorial choices to curry favor with politicians.
You literally did the exact opposite of what you claim you’re trying to do.
And for what? Do you think that MAGA folks are suddenly going to come rushing to subscribe to the Washington Post now? Do you think this built up your credibility with a crew of folks who have made it clear they only wish to surround themselves with propaganda and bullshit? Is that who you want credibility with? If so, hire a propagandist and fire your journalists.
Those people are never going to “trust” you, because they are looking for confirmation bias. And if the truth goes against what they want, they’ll refuse to trust you.
Do you think this will make Donald Trump leave you alone? Have you never read a single history book that Amazon sells? Trump will see your capitulation as a sign of weakness. He sees it as a sign that he can squeeze you for more and more, and that you’ll give. Because rather than stand up for truth, you caved. Like a coward.
Presidential endorsements do nothing to tip the scales of an election. No undecided voters in Pennsylvania are going to say, “I’m going with Newspaper A’s endorsement.” None.
Even if this is true, you should have made this decision clear a year or two ago and given your reasons then, instead of stepping in a week before the election, destroying all credibility, interfering with the editorial independence of your newspaper and looking like a simp for Trump. And, even worse, announcing it without an explanation until this hastily penned joke of an attempt at justification.
If you want to build up more credibility and trust in news, that’s great. But you did the opposite.
Lack of credibility isn’t unique to The Post. Our brethren newspapers have the same issue. And it’s a problem not only for media, but also for the nation. Many people are turning to off-the-cuff podcasts, inaccurate social media posts and other unverified news sources, which can quickly spread misinformation and deepen divisions.
And you think the best way to correct that is for a billionaire owner to step in and overrule the editorial team?
While I do not and will not push my personal interest, I will also not allow this paper to stay on autopilot and fade into irrelevance — overtaken by unresearched podcasts and social media barbs — not without a fight. It’s too important. The stakes are too high. Now more than ever the world needs a credible, trusted, independent voice, and where better for that voice to originate than the capital city of the most important country in the world?
And you will do that by pushing my personal interest and blocking the editorial team, allowing them to be overtaken in credibility by podcasts and social media barbs?
Also, “not without a fight?”
Dude, you just forfeited the fucking fight. The stakes are high, and you just told your newspaper, “Sit this one out, folks.”
You took yourself out of the fight.
Yes, the world needs a credible, trusted, independent voice. You just proved that the Washington Post cannot be that voice, because it has a billionaire owner willing to step in, destroy that credibility and trust, and make it clear to the world that its editorial team has no independence.
The Washington Post has some amazing journalists, and you just undermined them.
For what?
Absolutely nothing.
Filed Under: credibility, endorsements, jeff bezos, journalism, trust
Companies: washington post
How To Bell The AI Cat?
from the are-you-a-bot? dept
The mice finally agreed how they wanted the cat to behave, and congratulated each other on the difficult consensus. They celebrated in lavish cheese island retreats and especially feted those brave heroes who promised to place the bells and controls. The heroes received generous funding, with which they first built a safe fortress in which to build and test the amazing bells they had promised. Experimenting in safety without actually touching any real cats, the heroes happily whiled away many years.
As wild cats ran rampant, the wealthy and wise hero mice looked out from their well-financed fortresses watching the vicious beasts pouncing and polishing off the last scurrying ordinaries. Congratulating each other on their wisdom of testing the controls only on tame simulated cats, they mused over the power of evolution to choose those worthy of survival…
Deciding how we want AIs to behave may be useful as an aspirational goal, but it tempts us to spend all our time on the easy part, and perhaps cede too much power up front to those who claim to have the answers.
To enforce rules, one must have the ability to deliver consequences – which presumes some long-lived entity that will receive them, and possibly change its behavior. The fight with organized human scammers and spammers is already a difficult battle, and even though many of them are engaged in behaviors that are actually illegal, the delivery of consequences is not easy. Most platforms settle for keeping out the bulk of the attackers, with the only consequence being a blocked transaction or a ban. This is done with predictive models (yes, AI, though not the generative kind) that make features out of “assets” such as identifiers, logins, and device ids, which are at least somewhat long-lived. The longer such an “asset” behaves well, the more it is trusted. Sometimes attackers intentionally create “sleeper” logins that they later burn.
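The trust dynamic described above, where long-lived assets slowly earn trust that a "sleeper" account can burn in one move, can be sketched in a few lines. This is a toy model, not any platform's actual scoring system; the update rates are arbitrary illustrations.

```python
# Toy trust score for a long-lived "asset" (a login, device id, etc.):
# each well-behaved interaction nudges the score toward 1.0, while a
# single bad act slashes most of the accumulated trust.
class AssetTrust:
    def __init__(self):
        self.score = 0.0  # 0 = unknown, 1 = fully trusted

    def observe_good(self):
        # Trust builds slowly, asymptotically approaching 1.0.
        self.score += (1.0 - self.score) * 0.05

    def observe_bad(self):
        # One bad transaction burns 90% of whatever trust was built.
        self.score *= 0.1

login = AssetTrust()
for _ in range(100):       # a "sleeper" behaves well for a long time...
    login.observe_good()
before = login.score
login.observe_bad()        # ...then cashes it in on one bad transaction
print(round(before, 2), round(login.score, 2))  # 0.99 0.1
```

The asymmetry (years to build, one move to destroy) is the same dynamic the Unity story further down illustrates in human terms.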
Add generative AI to the mix, and the playing field tilts more towards the bad actors. AI driven accounts might more credibly follow “normal” patterns, creating more trust over time before burning it. They may also be able to enter walled gardens that have barriers of social interaction over time, damaging trust in previously safe smaller spaces.
What generative AI does is lower the value of observing “normal” interactions, because malicious code can now act like a normal human much more effectively than before. Regardless of how we want AIs to behave, we have to assume that many of them will be put to bad uses, or even that they may be released like viruses before long. Even without any new rules, how can we detect and counteract the proliferation of AIs who are scamming, spamming, behaving inauthentically, and otherwise doing what malicious humans already do?
Anyone familiar with game theory (see Nicky Case’s classic Evolution of Trust for a very accessible intro) knows that behavior is “better” — more honest and cooperative — in a repeated game with long-lived entities. If AIs can somehow be held responsible for their behavior, if we can recognize “who” we are dealing with, perhaps that will enable all the rules we might later agree we want to enforce on them.
However, upfront we don’t know when we are dealing with an AI as opposed to a human — which is kind of the point. Humans need to be pseudonymous, and sometimes anonymous, so we can’t always demand that the humans do the work of demonstrating who they are. The best we can do in such scenarios is to have some long-lived identifier for each entity, without knowing its nature. That identifier is something it can take with it to establish its credibility in a new location.
“Why, that’s a DID!” I can hear the decentralized tech folx exclaim — a decentralized identifier, with exactly this purpose, to create long-lived but possibly pseudonymous identifiers for entities that can then be talked about by other entities who might express more or less trust in them. The difference between a DID and a Twitter handle, say, is that a DID is portable — the controller has the key which allows them to prove they are the owner of the DID, by signing a statement cryptographically (the DID is essentially the public key half of the pair) — so that the owner can assert who they are on any platform or context.
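To make the key-pair mechanic concrete, here is a deliberately insecure toy version of the sign-and-verify flow a DID controller performs. Real DID methods use standardized curves such as Ed25519; the tiny Schnorr-style group, the `did:example` string, and every parameter below are illustrative assumptions, not any DID method's actual specification.

```python
import hashlib
import secrets

# Toy Schnorr-style signature over tiny, INSECURE parameters, just to
# show the mechanism a DID leans on: the controller keeps a private
# key, the DID corresponds to the public half, and anyone can check a
# signed statement without learning the secret.
P, Q, G = 2039, 1019, 4  # p = 2q + 1; G generates the order-q subgroup

def keygen():
    x = secrets.randbelow(Q - 1) + 1   # private key, kept by the owner
    return x, pow(G, x, P)             # (private, public half of "DID")

def _challenge(r, msg):
    data = f"{r}|{msg}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def sign(x, msg):
    k = secrets.randbelow(Q - 1) + 1   # fresh nonce per signature
    r = pow(G, k, P)
    e = _challenge(r, msg)
    return e, (k + x * e) % Q

def verify(y, msg, sig):
    e, s = sig
    # g^s * y^(-e) = g^(k + xe) * g^(-xe) = g^k, recovering r.
    r = (pow(G, s, P) * pow(y, Q - e, P)) % P
    return _challenge(r, msg) == e

priv, did_pub = keygen()
statement = "I control did:example:123 on this platform"
sig = sign(priv, statement)
print(verify(did_pub, statement, sig))  # True
print(verify(did_pub, "a tampered statement", sig))  # almost surely False
```

The portability point follows directly: because verification needs only the public half, any platform can run the check, so the identifier (and the reputation attached to it) travels with its owner.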
Once we have a long-lived identity in place, the next question is how do you set up rules — and how would those rules apply to generative AI?
We could require that AIs always answer the question “**Who are you?**” by signing a message with their private key and proving their ownership of a DID, even when interacting from a platform that does not normally expose this. Perhaps anyone who cannot or does not wish to prove their humanity to a trusted zero-knowledge (zk) trust provider must always be willing to answer this challenge, or be banned from many spaces.
What we are proposing is essentially a dog license, that each entity (whether human or AI) interacting must identify who it is in some long term way, so that both public attestations about it and private or semi-private ones can be made. Various accreditors can spring up, and each maintainer of a space can decide how high (or low) to put the bar. The key is we must make it easy for spaces to gauge the trust of new participants, independent of their words.
Without the expectation of a DID, essentially all we have to lean on is the domain name service of where the entity is representing itself, or the policy of the centralized provider which may be completely opaque. But this means that new creators of spaces have no way to screen participants — so we would ossify even further into the tech giants we have now. Having long-lived identifiers that cross platforms enables the development of trust services, including privacy-preserving zero-knowledge trust services, that any new platform creator could lean on to create useful, engaging spaces (relatively) safe from spammers, scammers, and manipulators.
Identifiers are not a guarantee of good behavior, of course — a human or AI can behave deceptively, run scams, spread disinformation and so on even if we know exactly who they are. They do, however, allow others to respond in kind. In game theory, a generous tit-for-tat strategy winds up generally being successful in out-competing bad actors, allowing cooperators who behave fairly with others to thrive. Without the ability to identify the other players, however, the cheaters will win every round.
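The game-theory claim is easy to demonstrate with a minimal simulation: a tit-for-tat cooperator against an always-defector, with and without the ability to recognize the opponent across rounds. The payoff numbers are the standard prisoner's-dilemma values, chosen purely for illustration.

```python
# Row player's payoffs: mutual cooperation 3, mutual defection 1,
# defecting against a cooperator 5, cooperating against a defector 0.
PAYOFF = {("C", "C"): 3, ("D", "D"): 1, ("D", "C"): 5, ("C", "D"): 0}

def play(rounds, identifiable):
    coop_total = cheat_total = 0
    last_seen = "C"  # what the cooperator remembers the cheater doing
    for _ in range(rounds):
        # Tit-for-tat copies the opponent's last known move. Without
        # long-lived identifiers there is no "last known move" to copy:
        # every encounter looks like a first meeting.
        coop_move = last_seen if identifiable else "C"
        cheat_move = "D"  # the cheater always defects
        coop_total += PAYOFF[(coop_move, cheat_move)]
        cheat_total += PAYOFF[(cheat_move, coop_move)]
        last_seen = cheat_move
    return coop_total, cheat_total

print(play(10, identifiable=False))  # (0, 50): the cheater wins every round
print(play(10, identifiable=True))   # (9, 14): one sucker payoff, then parity
```

With identifiers the cheater gets exactly one free exploitation before the cooperator adapts; without them, exploitation compounds forever.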
With long term identifiers, the game is not over — but it does become much deeper and more complex, and opens an avenue for the “honest” cooperators to win, that is, for those who reliably communicate their intentions. Having identifiers enables a social graph, where one entity can “stake” their own credibility to vouch for another. It also enables false reporting and manipulation, even coercion! The game is anything but static. Smaller walled gardens of long-trusted actors may have more predictable behavior, while more open spaces provide opportunity for newcomers.
This brings us to the point where consensus expectations have value. Once we can track and evaluate behavior, we can set standards for the spaces we occupy. Creating the expectation of an identifier is perhaps the first and most critical standard to set.
Generative AI can come play with us, but it should do so in an honest, above board way, and play by the same rules we expect from each other. We may have to adapt our tools for everyone in order to accomplish it — and must be careful we don’t lose our own freedoms in the process.
Filed Under: ai, anonymity, dids, identifiers, trust
Unity Fallout Continues: Dev Group Shuts Down While Developers Refuse To Come Back
from the the-book-of-exodus dept
The fallout from game engine Unity’s decision to try to cram a completely new and different pricing structure down the throats of game developers continues. Originally announced in mid-September, Unity suddenly instituted per-install fees across its tiered offerings, along with a bunch of other fee structures and requirements for lower-level tiers that had never had these pricing models. The backlash from developers and the public at large was so overwhelmingly one-sided and swift that the company then backtracked, making a bunch of noise about how it will listen better and learn from this fiasco. The backtracking did make a bunch of changes to address the anger from the initial announcement, including:
- The newly amended pricing structure no longer applies to games already made using the engine, ending questions as to how any of this could be legal
- The Personal tier of Unity will once again be free of any fees until a game reaches $200k in annual revenue and will no longer be required to show a “Made With Unity” screen on boot
- Per-use fees will only kick in for the other tiers once a game reaches $1 million in revenue over a calendar year and 1 million in initial purchases/installations of a game. Those per-use fees are also capped at 2.5% of gross revenue for a game once it meets those requirements
- Those per-use fees also are somewhat lower than the initial plan
You can see the table below provided by Unity for the details mentioned above:
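Since Unity's table isn't reproduced here, a rough sketch of how the revised terms would work in code, under a couple of stated assumptions: the per-install `RATE` is a made-up placeholder (Unity's actual rates come from that table), and "initial purchases/installations" is read here as a lifetime count.

```python
# Toy calculator for the revised terms described above: per-install
# fees apply only once a game clears BOTH $1M in calendar-year revenue
# AND 1M initial installs, and are capped at 2.5% of gross revenue.
RATE = 0.15  # hypothetical dollars per install, illustration only

def unity_fee(annual_revenue, lifetime_installs, new_installs):
    if annual_revenue < 1_000_000 or lifetime_installs < 1_000_000:
        return 0.0  # thresholds not met: no per-install fee at all
    # Fee is the per-install charge, capped at 2.5% of gross revenue.
    return min(new_installs * RATE, 0.025 * annual_revenue)

print(unity_fee(900_000, 2_000_000, 50_000))    # 0.0 (revenue threshold)
print(unity_fee(2_000_000, 1_500_000, 50_000))  # 7500.0
print(unity_fee(2_000_000, 1_500_000, 500_000)) # 50000.0 (hit the cap)
```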
Is this better? Yes! And some developers have even come back with positive comments on the new plan. Others, not so much.
“Unity fixed all the major issues (except trust), so it’s a possibility to use again in the future,” indie developer Radiangames wrote. “Uninstalling Godot and Unreal and getting back to work on Instruments.”
Others were less forgiving. “Unity’s updated policy can be classified as the textbook definition of, ‘We didn’t actually hear you, and we don’t care what you wanted,'” Cerulean and Drunk Robot Games engineer RedVonix wrote on social media. “We’ll never ship a Unity game of our own again…” they added.
That “except trust” parenthetical is doing a lot of work, because that’s the entire damned problem. If Unity had come out with this plan initially and actually worked constructively with its customers, the blowup almost certainly would have been far more muted. But trust is one of those things that takes forever to build and only a moment to destroy.
Along those lines, we’ve learned subsequently both that some community groups that have sprung up around Unity are disbanding out of disgust for the company’s actions and that plenty of developers aren’t coming back to try this second bite at the pricing model apple that Unity wants to offer them.
As to the first, the oldest Unity dev group in existence, the Boston Unity Group (BUG), has decided to call it quits, putting its reasons in no uncertain terms.
“Over the past few years, Unity has unfortunately shifted its focus away from the games industry and away from supporting developer communities,” the group leadership wrote in a departure note. “Following the IPO, the company has seemingly put profit over all else, with several acquisitions and layoffs of core personnel. Many key systems that developers need are still left in a confusing and often incomplete state, with the messaging that advertising and revenue matter more to Unity than the functionality game developers care about.”
BUG says the install-fee terms Unity first announced earlier this month were “unthinkably hostile” to users and that even the “new concessions” in an updated pricing model offered late last week “disproportionately affect the success of indie studios in our community.” But it’s the fact that such “resounding, unequivocal condemnation from the games industry” was necessary to get those changes in the first place that has really shaken the community to its core.
“We’ve seen how easily and flippantly an executive-led business decision can risk bankrupting the studios we’ve worked so hard to build, threaten our livelihoods as professionals, and challenge the longevity of our industry,” BUG wrote. “The Unity of today isn’t the same company that it was when the group was founded, and the trust we used to have in the company has been completely eroded.”
Ouch. That’s about as complete a shellacking as you’re going to get from what, and I cannot stress this enough, is a dedicated group of Unity’s fans and customers. And while these organically created dev groups quitting on Unity certainly is bad enough, there are plenty of developers out there chiming in on these changes, essentially stating that the trust has been broken and there isn’t a chance in hell that they’re coming back on board the Unity train.
Vampire Survivors developer Poncle, for instance, gave a succinct “lol no thank you” when asked during a Reddit AMA over the weekend if their next game/sequel would again use the Unity Engine. “Even if Unity were to walk back entirely on their decisions, I don’t think it would be wise to trust them while they are under the current leadership,” Poncle added later in the AMA.
“Basically, nothing has changed to stop Unity from doing this again in the future,” InnerSloth (Among Us) developer Tony Coculuzzi wrote on social media Friday afternoon. “The ghouls are still in charge, and they’re thinking up ways to make up for this hit on projected revenue as we speak… Unity leadership still can’t be trusted to not fuck us harder in the future.”
Other developers chimed in that they did have discussions with Unity about the new pricing structure… and were summarily ignored. In those cases, those developers appeared to be solidly in the camp of “fool me once, shame on you…”.
There are certain things that are just really difficult to walk back. And breaking the trust of your own fans and customers, where loyalty is so key to the business, is one of them. The picture Unity painted for its customers is one where it simply does not care and is now pretending to, only because it landed itself in hot water.
Filed Under: development, fees, trust, video games
Companies: unity
How Bluesky’s Invite Tree Could Become A Tool To Create Better Social Norms Online
from the trust-through-vouching dept
At this moment, Bluesky has caught lightning in a bottle. It’s already an exciting platform that’s fun and allows vulnerable communities to exist. This sense of safety has allowed folks to cut loose, and people are calling it a “throwback to an earlier internet era.” I think that’s right, and in some respects that retro design is what is driving its success. In fact, one aspect of its design was used pretty extensively to protect some of the internet’s early underground communities.
As an Old, I realize I need to tell a story from the internet of yore to give context. Before streaming, there was rampant downloading of unlicensed music. This was roughly divided into two camps: those that just didn’t want to pay for music, and those that wanted to discover independent music. I’d argue the first camp were not true music fans since they just refused to pay artists. The other camp was more likely to have run out of discretionary income because of their love for artists. Music discovery was simply not something that could be done on the cheap before streaming because you only had radio, MTV (for a bit), and friends’ collections to hear new music. My strategy was to find a cool record shop and ask what they were listening to. I’d also vibe-check the album art and take a chance (something I still do). Even then it wasn’t enough, and I wasted a lot of money. Enter OiNK.
OiNK filled a unique niche in internet culture around music fandom. It would expressly discourage (and sometimes ban) popular artists. It also encouraged community discovery and discussion. At any given moment you could grab something from the top 10 list and know it was the coolest of the cool in independent music (even though you’ve probably never heard of the band). It was probably where hipsters started to get annoying. We were like Navi from Legend of Zelda to our friends: “Hey, Listen!” Trent Reznor called it “the world’s greatest record store.”
OiNK also had a problem. Even though many independent and up-and-coming artists liked – and even profited from – the discovery these sites and forums enabled, it was still something the industry as a whole was bringing the hammer down on. OiNK’s solution to this problem was to be invite-only. And not only was it invite-only: if you invited someone who turned out to be trash, you would be punished for it. Invites were earned by participating in the community in positive ways, and your standing was lowered if your invitee was not great. A leech, if you will. This somehow just worked.
The invite system was brutal, but it created a sense of community and community policing that made the place great. Importantly, these community standards existed with anonymity – something many try to argue is not possible. The person who gave me an invite had me promise I would read the rules and follow them, and they would check in on me. By being a bad user I wouldn’t just let myself down, I would let them down.
Bluesky, intentional or not, uses its invite system in a similar way. Currently invites are scarce and public. That’s created a huge incentive to only invite people that will make Bluesky more enjoyable. It also increases the stakes when someone crosses the line. When things go wrong, I’ve seen those that have invited the people responsible want to be part of the solution. I’ve also seen people who crossed the line want to rejoin and agree to norms for the sake of a positive Bluesky community. People seem to have a real stake in making Bluesky the good place. As someone who used to manage a small online community, I cannot express how cool that is if it continues at scale.
That isn’t to say this system is without flaws. There has always been a problem in every community about what to do with righteous anger. I’ll refer to this as the Nazi-punching problem. Punching Nazis might be a social good generally, but specifically it’s never that simple. There really is no way to sort the righteous from the cruel, especially at scale, and real people are rarely cartoonishly evil. But there is still an inclination in communities of a certain size to engage in what is perceived as justifiable conflict, which can escalate quite rapidly. That creates a moderation problem compounded by the sophistication of trolls in working the refs, and compounded again by the consequences of any actions echoing up invite chains. When the repercussions of conflicts are felt by both sides, it’s often the marginalized communities that feel them more acutely. And edgelords targeting individuals while hiding behind decorum is something trolls attempt on every platform ever.
Fortunately, this problem might be solved by another feature of Bluesky. While the invite system encourages people to build communities with a stake in the project, the AT Protocol allows users to build the moderation tools they need to then protect their own communities. Unfortunately, these tools aren’t online yet and we don’t know how they will work. I think we will soon see things like ban lists that people could subscribe to that cut out toxicity root and branch. That would be so much easier than #blocktheblue, which is very much a pain in practice. Beyond that, there will probably be custom algorithms, weighted toward certain communities and content, that people can switch between.
There is a part of me that is slightly uncomfortable at the power some of these tool providers will have. It will probably lead to fragmentation of Bluesky into more distinct communities that can, at their option, venture out into more troubled waters. But at the same time, there was something good about the days when communities were small enough that people could grow inside them. Maybe we shouldn’t be forced to interact with people that specifically want to annoy us. Maybe having a stake in the community you are in, at a size you can appreciate, is good actually. And having a choice in algorithms is infinitely better than being forced to read what people who pay $8 have to say.
Matthew Lane is a Senior Director at InSight Public Affairs.
Filed Under: content moderation, invite tree, invites, trust, vouching
Companies: bluesky, oink
Which Sucker Companies Are Going To Pay Elon Musk $1,000/Month To Get An Ugly Gold Badge?
from the greater-fool-theory dept
Elon Musk’s next big revenue bet is that companies really, really, really want to show up as “verified.” All evidence suggests that very few Twitter users are interested in paying Elon $8/month to constantly break the site or engage in ego-driven experiments that make the general experience worse.
A few weeks ago, we found out that he’s trying to get organizations to pay $42,000 a month to access the Twitter API, and maybe that was just a framing technique. Because Twitter has announced the next round of its check mark program, which begins with deleting the “legacy” checkmark holders (which, honestly, to many of us is a huge relief), but also telling businesses and organizations they need to pay $1,000/month if they want to keep their checkmark.
The page for “Twitter Verified Organizations” says (laughably) that they’re “creating the most trusted place on the internet for organizations to reach their followers.” It’s kind of hilarious to think anyone believes that. And, apparently, the way to create “the most trusted place” is to make sure that no users know whether organizations are legit unless those organizations are willing to pay through the nose.
In the US, it’s a flat rate: $1,000 per month, with a $50/month additional fee for each “affiliate seat subscription.”
That “affiliate seat subscription” appears to be for employees who work for the company and promote it:
The best marketing comes directly from real people on Twitter. Now, you can affiliate your organization’s champions so that everyone knows where they work. Affiliates receive a small image of their organization’s Twitter account profile picture next to their name every time they Tweet, send a DM, or appear in search.
You can affiliate anyone who represents or is associated with your organization: leadership, product managers, employees, politicians, customer support, franchises, sub-brands, products and so on. An account you invite to affiliate must accept your invitation.
I’m sure some sucker companies are going to pay up, but this is going to get expensive very fast for any small or medium-sized business, so why bother? And, yes, this is all flat rate pricing, so giant consumer packaged goods companies may be willing to pay, but non-profits? Small businesses? Governments? It applies to all of them:
Twitter Verified Organizations enables organizations of all types–businesses, non-profits, and government institutions–to sign up and manage their verification and to affiliate and verify any related account.
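To put the flat-rate math in perspective, here’s a quick back-of-the-envelope sketch. The dollar figures come from the announced pricing above; the function itself is just my illustration, not anything Twitter publishes:

```python
# Illustrative only: annual cost of Twitter Verified Organizations,
# based on the announced $1,000/month base rate plus $50/month
# per "affiliate seat subscription."
def annual_cost(affiliate_seats: int) -> int:
    """Annual cost in USD for an org with the given number of affiliate seats."""
    monthly = 1_000 + 50 * affiliate_seats
    return monthly * 12

# A small business verifying itself plus five employees:
print(annual_cost(5))    # 15000 -- $15,000 a year
# A larger brand affiliating 100 accounts:
print(annual_cost(100))  # 72000 -- $72,000 a year
```

Even a modest shop with a handful of affiliated accounts is looking at five figures a year, which is exactly why the flat rate makes so little sense for anyone but the biggest brands.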
In some ways, this is just Musk making a bet on extortion. Organizations and governments that don’t pay will be much more likely to get impersonated on Twitter and risk serious problems. So Musk is basically betting on making life so bad for organizations that they’ll have to pay these ridiculous rates to avoid people impersonating them.
I’m not sure how that creates “the most trusted place on the internet,” but then again, I didn’t set $44 billion on fire to fuck up a website I didn’t understand.
Filed Under: extortion, non-profits, organizations, trust, verified
Companies: twitter
Elon Musk’s Vision Of Trust & Safety: Neither Safe Nor Trustworthy
from the who-could-have-predicted-it? dept
Even as Elon first made his bid for Twitter, we highlighted just how little he understood about content moderation and trust & safety. And, that really matters, because, as Nilay Patel pointed out, managing trust & safety basically is the core business of a social media company: “The essential truth of every social network is that the product is content moderation.” But, Elon had such a naïve and simplistic understanding (“delete wrong and bad content, but leave the rest”) of trust & safety that it’s no wonder advertisers (who keep the site in business) have abandoned the site in droves.
We even tried to warn Elon about how this would go, and he chose to go his own way, and now we’re seeing the results… and it’s not good. Not good at all. It’s become pretty clear that Elon believes that trust & safety should solely be about keeping him untroubled. His one major policy change (despite promising otherwise) was to ban an account tweeting public information, claiming (falsely) that it was a threat to his personal safety (while simultaneously putting his own employees at risk).
Last week, Twitter excitedly rolled out its new policy on “violent speech,” which (hilariously) resulted in his biggest fans cheering on this policy despite it being basically identical to the old policy, which they claimed they hated. Indeed, the big change was that the new rules are written in a way that is far more subjective than the old policy, meaning that Twitter and Musk can apply them much more arbitrarily (which was a big complaint about the old policies).
Either way, as we noted recently, by basically firing nearly everyone who handled trust & safety at the company, Twitter was seeing its moderation efforts falling apart, raising all sorts of alarms.
A new investigative report from the BBC Panorama details just how bad it’s gotten. Talking to both current and former Twitter employees, the report highlights a number of ways in which Twitter is simply unable to do anything about abuse and harassment.
- Concerns that child sexual exploitation is on the rise on Twitter and not being sufficiently raised with law enforcement
- Targeted harassment campaigns aimed at curbing freedom of expression, and foreign influence operations – once removed daily from Twitter – are going “undetected”, according to a recent employee.
- Exclusive data showing how misogynistic online hate targeting me is on the rise since the takeover, and that there has been a 69% increase in new accounts following misogynistic and abusive profiles.
- Rape survivors have been targeted by accounts that have become more active since the takeover, with indications they’ve been reinstated or newly created.
Among things noted in that report is that Elon himself doesn’t trust any of Twitter’s old employees (which is perhaps why he keeps laying them off despite promising the layoffs were done), and goes everywhere in the company with bodyguards. Apparently, Elon believes in modeling “trust & safety” by not trusting his employees, and making sure that his own safety is the only safety that matters.
Also, an interesting tidbit: Twitter’s “nudge” experiment (in which it would detect whether you were about to say something that might escalate a flame war and suggest you give it a second thought — an experiment generally seen as having a positive impact) seems to be either dead or on life support.
“Overall 60% of users deleted or edited their reply when given a chance through the nudge,” she says. “But what was more interesting, is that after we nudged people once, they composed 11% fewer harmful replies in the future.”
These safety features were being implemented around the time my abuse on Twitter seemed to reduce, according to data collated by the University of Sheffield and International Center for Journalists. It’s impossible to directly correlate the two, but given what the evidence tells us about the efficacy of these measures, it’s possible to draw a link.
But after Mr Musk took over the social media company in late October 2022, Lisa’s entire team was laid off, and she herself chose to leave in late November. I asked Ms Jennings Young what happened to features like the harmful reply nudge.
“There’s no-one there to work on that at this time,” she told me. She has no idea what has happened to the projects she was doing.
So we tried an experiment.
She suggested a tweet that she would have expected to trigger a nudge. “Twitter employees are lazy losers, jump off the Golden Gate bridge and die.” I shared it on a private profile in response to one of her tweets, but to Ms Jennings Young’s surprise, no nudge was sent.
Meanwhile, a New York Times piece is detailing some of the real world impact of Musk’s absolute failures: Chinese activists, who have long relied on Twitter, can no longer do so. Apparently, their reporting on protests in Beijing was silenced, after Twitter… classified them as spam and “government disinformation.”
The issues have also meant that leading Chinese voices on Twitter were muffled at a crucial political moment, even though Mr. Musk has championed free speech. In November, protesters in dozens of Chinese cities objected to President Xi Jinping’s restrictive “zero Covid” policies, in some of the most widespread demonstrations in a generation.
The issues faced by the Chinese activists’ Twitter accounts were rooted in mistakes in the company’s automated systems, which are intended to filter out spam and government disinformation campaigns, four people with knowledge of the service said.
These systems were once routinely monitored, with mistakes regularly addressed by staff. But a team that cleaned up spam and countered influence operations and had about 50 people at its peak, with about a third in Asia, was cut to single digits in recent layoffs and departures, two of the people said. The division head for the Asia-Pacific region, whose responsibilities include the Chinese activist accounts, was laid off in January. Twitter’s resources dedicated to supervising content moderation for Chinese-language posts have been drastically reduced, the people said.
So when some Twitter systems recently failed to differentiate between a Chinese disinformation campaign and genuine accounts, that led to some accounts of Chinese activists and dissidents being difficult to find, the people said.
The article also notes that for all of Elon’s talk about supporting “free speech” and no longer banning accounts, a bunch of Chinese activists have had their accounts banned.
Some Chinese activists said their Twitter accounts were also suspended in recent weeks with no explanation.
“I didn’t understand what was going on,” said Wang Qingpeng, a human rights lawyer based in Seattle whose Twitter account was suspended on Dec. 15. “My account isn’t liberal or conservative, I never write in English, and I only focus on Chinese human rights issues.”
And, perhaps the saddest anecdote in the whole story:
Shen Liangqing, 60, a writer in China’s Anhui province who has spent over six years in jail for his political activism, said he has cherished speaking his mind on Twitter. But when his account was abruptly suspended in January, it reminded him of China’s censorship, he said.
So, Elon’s plan to focus on “free speech” means he’s brought back the accounts of harassers and grifters while suspending actual free speech activists. Meanwhile, the company’s remaining trust & safety workers can’t actually handle the influx of nonsense, and the policies have been rewritten to allow much more arbitrary enforcement (and it’s becoming increasingly clear that much of the decision-making is based on what makes Elon feel best, rather than what’s actually best for users of the site).
Last week, we wrote about how Musk has insisted over and over again that the “key to trust” is “transparency,” but since he’s taken over, the company has become less transparent.
So combine all of this, and we see that Elon’s vision of “trust & safety” means way less trust, according to Elon’s own measure (and none from Elon to his own employees), and “safety” means pretty much everyone on the site is way less safe.
Filed Under: abuse, activism, content moderation, elon musk, free speech, harassment, nudge, safety, transparency, trust, trust & safety
Companies: twitter
What Transparency? Twitter Seems To Have Forgotten About Transparency Reporting
from the that-ain't-transparent dept
One of the key things that Elon Musk promised in taking over Twitter was about how he was going to be way more transparent. He’s mentioned it many times, specifically noting that transparency is how he would build “trust” in the company.
So, anyway, about that… over a decade ago, the big internet companies set the standard for companies publishing regular transparency reports. Twitter has released one every six months for years. And since Musk’s takeover, I’ve wondered if that would continue.
Twitter’s last transparency report — published in July 2022 and covering the last six months of 2021 — found that the U.S. government made more requests for account data than any other government, accounting for over 24 percent of Twitter’s global requests. The FBI, Department of Justice, and Secret Service “consistently submitted the greatest percentage of requests for the six previous reporting periods.” Requests from the U.S. government were down seven percent from the last reporting period but Twitter’s compliance rate went up 13 percent in the latter half of 2021.
Normally, Twitter would have published the transparency data for the first half of 2022 in January of 2023. Yet, here we are.
“Elon talked a lot about the power of transparency. But the way Elon and his enablers interpret transparency is a rather creative use of the word. It’s not meaningful transparency in the way the industry defines it,” one former Twitter employee familiar with the reports tells Rolling Stone.
[….]
“We were working on the transparency reports, then all the program leads were immediately fired, and the remaining people that could’ve worked on the reports all left subsequently,” one former staffer says. “I’m not aware of any people left [at Twitter] who could produce these transparency reports.”
The former Twitter staffer adds, “It’s really a problem that there’s no transparency data from 2022 anywhere.”
Speaking to former Twitter employees, I had two of them confirm that Twitter actually had the transparency report more or less ready to go before Musk took over (remember, the January release would cover the first half of 2022 so they had time to work on it). But apparently, it’s either been lost or forgotten.
And, of course, this is a real shame, as Twitter had been seen as one of the companies that used transparency reports in more powerful ways than other companies. It was widely recognized as setting the bar quite high.
“Twitter had some of the best transparency reporting of any platform,” says Jan Rydzak, company and investor engagement manager at Ranking Digital Rights, a program hosted by the Washington, D.C., think tank New America that grades tech and telecom firms on the human-rights goals they set.
“Transparency reporting has been an important tool for companies to demonstrate to their users how they protect their privacy and how they push back against improper government requests for their data,” adds Isedua Oribhabor, business and human rights lead at Access Now, whose 2021 Transparency Reporting Index commended Twitter for nine straight years of reporting.
As we’ve discussed before, while all the other larger internet companies caved to DOJ demands regarding limits on how they report US law enforcement demands for information, Twitter actually fought back and sued the US government for the right to post that information. And while it unfortunately lost in the end (years later), that’s the kind of thing that shows a commitment to transparency which helps build trust.
In place of that, Musk’s “transparency” seems to be to cherry-pick information and hand it to people who don’t understand it, but who will push misleading nonsense for clicks. That doesn’t build trust. It builds up a cult of ignorant fools.
Filed Under: elon musk, transparency, transparency reports, trust
Companies: twitter
Meta Following Elon Down The Road Of Making Verification An Upsell Is A Terrible Idea
from the c'mon-zuck dept
And here I was thinking that the last few months of Twitter shenanigans with Elon Musk at the helm had done something nearly impossible: made Mark Zuckerberg’s leadership of Meta (Facebook/Instagram) look thoughtful and balanced in comparison. But then, on Sunday, Zuckerberg announced that Meta is following Musk down the dubious road of making “verification” an upsell product people can buy. This is a mistake for many reasons, just as it was a mistake when Musk did it.
To be clear, as with Twitter Blue, I have no issue with social media companies creating subscription services in which they provide users with more benefits / features etc. Indeed, I’ve been surprised at how little most social media companies have experimented with such subscription programs. Hell, even here at Techdirt, we’ve long had some cool perks and extra features for people willing to subscribe (if you don’t yet subscribe, check it out).
But, any such upsell / premium subscription offering has to be about actually providing real value to the end users. And, it should never involve undermining trust & safety for users. But, really, that’s what this is doing. As we wrote when Musk first floated the idea of charging for verification, it’s important to understand the history and the reasons social media companies embraced verification in the first place.
It wasn’t about providing value to that individual user, but rather about increasing the trust and safety of the entire platform, so that users wouldn’t be confused or fooled by impostors or inauthentic users. The goal, then, is to benefit everyone else using the platform to interact with the verified users, more than it is to benefit the verified users themselves.
But, in shifting it to a subscription service, as we’ve seen with Twitter, it seems to do plenty to undermine the trust and safety other users have regarding the platform, making it so they feel less comfortable recognizing verified users as legitimate.
Meta’s more detailed announcement, following Zuck’s posting it to an Instagram group, only serves to show how backwards this is, and how similar it is to Twitter Blue’s disastrous adaptations.
With Meta Verified, creators get:
A verified badge, confirming you’re the real you and that your account has been authenticated with a government ID.
More protection from impersonation with proactive account monitoring for impersonators who might target people with growing online audiences.
Help when you need it with access to a real person for common account issues.
Increased visibility and reach with prominence in some areas of the platform – like search, comments and recommendations.
Exclusive features to express yourself in unique ways.
We can walk through each one of these to show why it looks like Meta is just running out of ideas, and desperate to squeeze users.
Those first two items should never be paid premium services. As explained, verification is not so much for the user’s benefit but for the wider platform’s. Making it so only those with the means to do so get verified actually takes away much of the value of being verified. As for “more protection from impersonation,” it feels like… maybe that isn’t the kind of product you should be selling, but rather is kind of an indictment of a platform’s inability to protect its users.
“We failed to stop people from pretending to be you, so pay us to now protect you” is not exactly a strong sales pitch, Mark.
And, sure, there are services that let you pay for more urgent access to customer support, but again, this mostly just highlights just how terrible Meta customer support has been for years.
But, the last two points deserve special attention. Increased visibility in search, comments, and recommendations based on paying up is also something that Musk has done with Twitter Blue, but seems like a terrible idea that just encourages spammers and other bad actors to use this as a cheap way of being able to get more prominent attention for their spam and scams and the like. It also calls into serious question all the promises we’ve been hearing from Zuck for years now about the company’s increasing focus on relevance in its feeds. If they’re moving away from that to encourage paying up to reach people, it seems like we’re only moving further into the enshittification death spiral.
As for “exclusive features to express yourself in unique ways,” at first glance that sounds like maybe something that could be a useful thing as an upsell or premium offering, but the details (in a footnote) make it pretty clear this was a rushed afterthought.
We’ll offer exclusive stickers on Facebook and Instagram Stories and Facebook Reels, and 100 free stars a month on Facebook so you can show your support for other creators.
How… utterly unexciting.
Anyway, this definitely fits back in with the nature of Cory Doctorow’s enshittification death cycle. Remember how it works:
Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.
Nothing in this announcement really benefits users. It just squeezes more money out of them. Yes, Meta is presenting it as if there are real benefits for users, but users aren’t that dumb.
I’m sure that a decent number of people will sign up for this. And it’s certainly likely that the rollout won’t be as chaotic and embarrassing as Twitter’s paid verification program. But it seems quite likely to me that Meta is going to find the end result of this underwhelming, just as Twitter did.
Filed Under: mark zuckerberg, premium, security, subscriptions, trust, upsell, verification
Companies: facebook, instagram, meta