trust – Techdirt

How To Bell The AI Cat?

from the are-you-a-bot? dept

The mice finally agreed how they wanted the cat to behave, and congratulated each other on the difficult consensus. They celebrated in lavish cheese island retreats and especially feted those brave heroes who promised to place the bells and controls. The heroes received generous funding, with which they first built a safe fortress in which to build and test the amazing bells they had promised. Experimenting in safety without actually touching any real cats, the heroes happily whiled away many years.

As wild cats ran rampant, the wealthy and wise hero mice looked out from their well-financed fortresses watching the vicious beasts pouncing and polishing off the last scurrying ordinaries. Congratulating each other on their wisdom of testing the controls only on tame simulated cats, they mused over the power of evolution to choose those worthy of survival…

Deciding how we want AIs to behave may be useful as an aspirational goal, but it tempts us to spend all our time on the easy part, and perhaps cede too much power up front to those who claim to have the answers.

To enforce rules, one must have the ability to deliver consequences – which presumes some long-lived entity that will receive them, and possibly change its behavior. The fight with organized human scammers and spammers is already a difficult battle, and even though many of them are engaged in behaviors that are actually illegal, the delivery of consequences is not easy. Most platforms settle for keeping out the bulk of the attackers, with the only consequence being a blocked transaction or a ban. This is done with predictive models (yes, AI, though not the generative kind) that make features out of “assets” such as identifiers, logins, and device IDs, which are at least somewhat long-lived. The longer such an “asset” behaves well, the more it is trusted. Sometimes attackers intentionally create “sleeper” logins that they later burn.
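To make that concrete, here is a rough sketch of the kind of asset-based trust scoring described above. The feature names, weights, and threshold are invented for illustration; real fraud models are far larger and learned from data rather than hand-tuned.

```python
# Illustrative only: a toy trust score built from long-lived "assets".
# The features, weights, and threshold are made up for this example.
from dataclasses import dataclass

@dataclass
class Account:
    account_age_days: int      # how long the login has existed
    device_age_days: int       # how long this device id has been seen
    good_transactions: int     # transactions completed without incident
    recent_reports: int        # abuse reports in the last 30 days

def trust_score(a: Account) -> float:
    # Older, well-behaved assets earn trust; recent reports burn it.
    score = 0.0
    score += min(a.account_age_days, 365) / 365 * 0.4
    score += min(a.device_age_days, 180) / 180 * 0.2
    score += min(a.good_transactions, 50) / 50 * 0.4
    score -= a.recent_reports * 0.15
    return max(0.0, min(1.0, score))

def allow_transaction(a: Account, threshold: float = 0.5) -> bool:
    return trust_score(a) >= threshold

if __name__ == "__main__":
    # An aged "sleeper" login that just got burned by a wave of reports.
    sleeper = Account(400, 200, 50, 6)
    print(trust_score(sleeper), allow_transaction(sleeper))
```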

Add generative AI to the mix, and the playing field tilts more towards the bad actors. AI driven accounts might more credibly follow “normal” patterns, creating more trust over time before burning it. They may also be able to enter walled gardens that have barriers of social interaction over time, damaging trust in previously safe smaller spaces.

What generative AI does is lower the value of observing “normal” interactions, because malicious code can now act like a normal human much more effectively than before. Regardless of how we want AIs to behave, we have to assume that many of them will be put to bad uses, or even that they may be released like viruses before long. Even without any new rules, how can we detect and counteract the proliferation of AIs who are scamming, spamming, behaving inauthentically, and otherwise doing what malicious humans already do?

Anyone familiar with game theory (see Nicky Case’s classic Evolution of Trust for a very accessible intro) knows that behavior is “better” — more honest and cooperative — in a repeated game with long-lived entities. If AIs can somehow be held responsible for their behavior, if we can recognize “who” we are dealing with, perhaps that will enable all the rules we might later agree we want to enforce on them.

However, upfront we don’t know when we are dealing with an AI as opposed to a human — which is kind of the point. Humans need to be pseudonymous, and sometimes anonymous, so we can’t always demand that the humans do the work of demonstrating who they are. The best we can do in such scenarios is to have some long-lived identifier for each entity, without knowing its nature. That identifier is something it can take with it to establish its credibility in a new location.

“Why, that’s a DID!” I can hear the decentralized tech folx exclaim — a decentralized identifier, with exactly this purpose, to create long-lived but possibly pseudonymous identifiers for entities that can then be talked about by other entities who might express more or less trust in them. The difference between a DID and a Twitter handle, say, is that a DID is portable — the controller has the key which allows them to prove they are the owner of the DID, by signing a statement cryptographically (the DID is essentially the public key half of the pair) — so that the owner can assert who they are on any platform or context.
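For those who want to see the mechanics, here is a minimal sketch of that sign-and-verify core using an Ed25519 keypair via the Python `cryptography` package. Real DID methods add DID documents, resolution, and key rotation; the `did:example:` identifier below is just a stand-in.

```python
# A minimal sketch of the sign-and-verify core behind a DID. The controller
# holds the private key; anyone can check a signed statement against the
# public half. "did:example:" is a placeholder, not a real DID method.
# Requires the 'cryptography' package.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

private_key = Ed25519PrivateKey.generate()            # stays with the controller
public_key_bytes = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)
did = "did:example:" + public_key_bytes.hex()         # portable, pseudonymous identifier

# The controller can prove ownership of the DID on any platform by signing.
statement = f"I control {did} and I am posting on platform X".encode()
signature = private_key.sign(statement)

# Any platform can verify the claim from the public half alone
# (verify() raises InvalidSignature if the check fails).
Ed25519PublicKey.from_public_bytes(public_key_bytes).verify(signature, statement)
print("signature valid; this account controls", did)
```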

Once we have a long-lived identity in place, the next question is how do you set up rules — and how would those rules apply to generative AI?

We could require that AIs always answer the question “**Who are you?**” by signing a message with their private key and proving their ownership of a DID, even when interacting from a platform that does not normally expose this. Perhaps anyone who cannot or does not wish to prove their humanity to a trusted zero-knowledge (“zktrust”) provider must always be willing to answer this challenge, or be banned from many spaces.
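A hypothetical version of that challenge could look like the exchange below: the space issues a random nonce, the entity signs it with the key behind its claimed DID, and the space verifies the signature. None of this is an existing standard; it is just the shape of the idea.

```python
# Hypothetical "Who are you?" challenge-response sketch, not a real standard.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def issue_challenge() -> bytes:
    return os.urandom(32)                     # unpredictable, single-use nonce

def respond(private_key: Ed25519PrivateKey, nonce: bytes) -> bytes:
    return private_key.sign(nonce)            # entity proves control of its key

def verify_response(public_key_bytes: bytes, nonce: bytes, sig: bytes) -> bool:
    try:
        Ed25519PublicKey.from_public_bytes(public_key_bytes).verify(sig, nonce)
        return True
    except InvalidSignature:
        return False

entity_key = Ed25519PrivateKey.generate()
claimed_pub = entity_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw, format=serialization.PublicFormat.Raw
)
nonce = issue_challenge()
print(verify_response(claimed_pub, nonce, respond(entity_key, nonce)))   # True
```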

What we are proposing is essentially a dog license: each entity (whether human or AI) that interacts must identify who it is in some long-term way, so that both public attestations about it and private or semi-private ones can be made. Various accreditors can spring up, and each maintainer of a space can decide how high (or low) to put the bar. The key is that we must make it easy for spaces to gauge the trust of new participants, independent of their words.
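As a toy illustration of that “set the bar” idea, a space operator could combine weighted attestations from accreditors it chooses to trust. Everything here (accreditor names, weights, the threshold) is made up for the example.

```python
# Illustrative sketch of the "dog license" idea: attestations about a DID
# from various accreditors, and a space operator deciding where to set the bar.
from dataclasses import dataclass

@dataclass
class Attestation:
    subject_did: str        # who the claim is about
    accreditor: str         # who is vouching (or warning)
    weight: float           # how much this space trusts that accreditor
    positive: bool          # vouch vs. warning

def admit(attestations: list[Attestation], subject_did: str, bar: float) -> bool:
    # Sum weighted vouches and warnings about the newcomer, independent of
    # anything the newcomer itself says.
    score = sum(
        (a.weight if a.positive else -a.weight)
        for a in attestations
        if a.subject_did == subject_did
    )
    return score >= bar

records = [
    Attestation("did:example:abc", "zk-humanity-service", 2.0, True),
    Attestation("did:example:abc", "spam-watch", 1.0, False),
]
print(admit(records, "did:example:abc", bar=0.5))   # True for this lenient space
```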

Without the expectation of a DID, essentially all we have to lean on is the domain name service of where the entity is representing itself, or the policy of the centralized provider which may be completely opaque. But this means that new creators of spaces have no way to screen participants — so we would ossify even further into the tech giants we have now. Having long-lived identifiers that cross platforms enables the development of trust services, including privacy-preserving zero-knowledge trust services, that any new platform creator could lean on to create useful, engaging spaces (relatively) safe from spammers, scammers, and manipulators.

Identifiers are not a guarantee of good behavior, of course — a human or AI can behave deceptively, run scams, spread disinformation and so on even if we know exactly who they are. They do, however, allow others to respond in kind. In game theory, a generous tit-for-tat strategy winds up generally being successful in out-competing bad actors, allowing cooperators who behave fairly with others to thrive. Without the ability to identify the other players, however, the cheaters will win every round.
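For anyone who wants to poke at that game-theory claim, here is a small simulation of a repeated game pitting generous tit-for-tat against an always-defecting cheater. The payoffs and forgiveness rate are the usual textbook values, not anything specific to this argument.

```python
# Repeated prisoner's dilemma: generous tit-for-tat vs. an always-defector.
import random

PAYOFF = {  # (my move, their move) -> my points; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def generous_tit_for_tat(history, forgiveness=0.1):
    if not history:
        return "C"                      # start nice
    if history[-1] == "D" and random.random() > forgiveness:
        return "D"                      # usually retaliate...
    return "C"                          # ...but occasionally forgive

def always_defect(history, **_):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)   # each sees the other's past
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Two identifiable cooperators do far better with each other than either does
# against the cheater; without identity, every round is effectively a first round.
print(play(generous_tit_for_tat, generous_tit_for_tat))
print(play(generous_tit_for_tat, always_defect))
```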

With long term identifiers, the game is not over — but it does become much deeper and more complex, and opens an avenue for the “honest” cooperators to win, that is, for those who reliably communicate their intentions. Having identifiers enables a social graph, where one entity can “stake” their own credibility to vouch for another. It also enables false reporting and manipulation, even coercion! The game is anything but static. Smaller walled gardens of long-trusted actors may have more predictable behavior, while more open spaces provide opportunity for newcomers.

This brings us to the point where consensus expectations have value. Once we can track and evaluate behavior, we can set standards for the spaces we occupy. Creating the expectation of an identifier is perhaps the first and most critical standard to set.

Generative AI can come play with us, but it should do so in an honest, above board way, and play by the same rules we expect from each other. We may have to adapt our tools for everyone in order to accomplish it — and must be careful we don’t lose our own freedoms in the process.

Filed Under: ai, anonymity, dids, identifiers, trust

Unity Fallout Continues: Dev Group Shuts Down While Developers Refuse To Come Back

from the the-book-of-exodus dept

The fallout from game engine Unity’s decision to try to cram a completely new and different pricing structure down the throats of game developers continues. Originally announced in mid-September, the change suddenly instituted per-install fees across Unity’s tiered offerings, along with a bunch of other fee structures and requirements for lower-level tiers that never had these pricing models. The backlash from developers and the public at large was so overwhelmingly one-sided and swift that the company then backtracked, making a bunch of noise about how it will listen better and learn from this fiasco. The backtracking did make a bunch of changes to address the anger from its initial announcement, including:

You can see the table below provided by Unity for the details mentioned above:

Is this better? Yes! And some developers have even come back with positive comments on the new plan. Others, not so much.

“Unity fixed all the major issues (except trust), so it’s a possibility to use again in the future,” indie developer Radiangames wrote. “Uninstalling Godot and Unreal and getting back to work on Instruments.”

Others were less forgiving. “Unity’s updated policy can be classified as the textbook definition of, ‘We didn’t actually hear you, and we don’t care what you wanted,'” Cerulean and Drunk Robot Games engineer RedVonix wrote on social media. “We’ll never ship a Unity game of our own again…” they added.

That “except trust” parenthetical is doing a lot of work, because that’s the entire damned problem. If Unity had come out with this plan initially, and had actually worked constructively with its customers, the blowup about this almost certainly would have been far more muffled. But trust is one of those things that takes forever to build and only a moment to destroy.

Along those lines, we’ve learned subsequently both that some community groups that have sprung up around Unity are disbanding out of disgust for the company’s actions and that plenty of developers aren’t coming back to try this second bite at the pricing model apple that Unity wants to offer them.

As to the first, the oldest Unity dev group in existence, Boston Unity Group (BUG), has decided to call it quits, laying out its reasons in no uncertain terms.

“Over the past few years, Unity has unfortunately shifted its focus away from the games industry and away from supporting developer communities,” the group leadership wrote in a departure note. “Following the IPO, the company has seemingly put profit over all else, with several acquisitions and layoffs of core personnel. Many key systems that developers need are still left in a confusing and often incomplete state, with the messaging that advertising and revenue matter more to Unity than the functionality game developers care about.”

BUG says the install-fee terms Unity first announced earlier this month were “unthinkably hostile” to users and that even the “new concessions” in an updated pricing model offered late last week “disproportionately affect the success of indie studios in our community.” But it’s the fact that such “resounding, unequivocal condemnation from the games industry” was necessary to get those changes in the first place that has really shaken the community to its core.

“We’ve seen how easily and flippantly an executive-led business decision can risk bankrupting the studios we’ve worked so hard to build, threaten our livelihoods as professionals, and challenge the longevity of our industry,” BUG wrote. “The Unity of today isn’t the same company that it was when the group was founded, and the trust we used to have in the company has been completely eroded.”

Ouch. That’s about as complete a shellacking as you’re going to get from what, and I cannot stress this enough, is a dedicated group of Unity’s fans and customers. And while these organically created dev groups quitting on Unity certainly is bad enough, there are plenty of developers out there chiming in on these changes, essentially stating that the trust has been broken and there isn’t a chance in hell that they’re coming back on board the Unity train.

Vampire Survivors developer Poncle, for instance, gave a succinct “lol no thank you” when asked during a Reddit AMA over the weekend if their next game/sequel would again use the Unity Engine. “Even if Unity were to walk back entirely on their decisions, I don’t think it would be wise to trust them while they are under the current leadership,” Poncle added later in the AMA.

“Basically, nothing has changed to stop Unity from doing this again in the future,” InnerSloth (Among Us) developer Tony Coculuzzi wrote on social media Friday afternoon. “The ghouls are still in charge, and they’re thinking up ways to make up for this hit on projected revenue as we speak… Unity leadership still can’t be trusted to not fuck us harder in the future.”

Other developers chimed in that they did have discussions with Unity about the new pricing structure… and were summarily ignored. In those cases, those developers appeared to be solidly in the camp of “Fool me once shame on you…”.

There are certain things that are just really difficult to walk back. And breaking the trust of your own fans and customers, where loyalty is so key to the business, is one of them. The picture Unity painted for its customers is one where it simply does not care and is now pretending to, only because it landed itself in hot water.

Filed Under: development, fees, trust, video games
Companies: unity

How Bluesky’s Invite Tree Could Become A Tool To Create Better Social Norms Online

from the trust-through-vouching dept

At this moment, Bluesky has caught lightning in a bottle. It’s already an exciting platform that’s fun and allows vulnerable communities to exist. This sense of safety has allowed folks to cut loose, and people are calling it a “throwback to an earlier internet era.” I think that’s right, and in some respects that retro design is what is driving its success. In fact, one aspect of its design was used pretty extensively to protect some of the internet’s early underground communities.

As an Old, I realize I need to tell a story from the internet of yore to give context. Before streaming, there was rampant downloading of unlicensed music. This was roughly divided into two camps: those that just didn’t want to pay for music, and those that wanted to discover independent music. I’d argue the first camp were not true music fans since they just refused to pay artists. The other camp was more likely to have run out of discretionary income because of their love for artists. Music discovery was simply not something that could be done on the cheap before streaming because you only had radio, MTV (for a bit), and friends’ collections to hear new music. My strategy was to find a cool record shop and ask what they were listening to. I’d also vibe-check the album art and take a chance (something I still do). Even then it wasn’t enough, and I wasted a lot of money. Enter OiNK.

OiNK filled a unique niche in internet culture around music fandom. It would expressly discourage (and sometimes ban) popular artists. It also encouraged community discovery and discussion. At any given moment you could grab something from the top 10 list and know it was the coolest of the cool in independent music (even though you’ve probably never heard of the band). It was probably where hipsters started to get annoying. We were like Navi from Legend of Zelda to our friends: “Hey, Listen!” Trent Reznor called it “the world’s greatest record store.”

OiNK also had a problem. Even though many independent and up-and-coming artists liked – and even profited from – the discovery these sites and forums enabled, it was still something the industry as a whole was bringing the hammer down on. OiNK’s solution to this problem was to be invite only. And not only was it invite only: if you invited someone who turned out to be trash, you would be punished for it. Invites were earned by participating in the community in positive ways, and your standing was lowered if your invitee was not great. A leech, if you will. This somehow just worked.

The invite system was brutal, but it created a sense of community and community policing that made the place great. Importantly, these community standards existed with anonymity – something many try to argue is not possible. The person who gave me an invite had me promise I would read the rules and follow them, and they would check in on me. By being a bad user I wouldn’t just let myself down, I would let them down.

Bluesky, intentional or not, uses its invite system in a similar way. Currently invites are scarce and public. That’s created a huge incentive to only invite people that will make Bluesky more enjoyable. It also increases the stakes when someone crosses the line. When things go wrong, I’ve seen those that have invited the people responsible want to be part of the solution. I’ve also seen people who crossed the line want to rejoin and agree to norms for the sake of a positive Bluesky community. People seem to have a real stake in making Bluesky the good place. As someone who used to manage a small online community, I cannot express how cool that is if it continues at scale.

That isn’t to say this system is without flaws. There has always been a problem in every community about what to do with righteous anger. I’ll refer to this as the Nazi-punching problem. Punching Nazis might be a social good generally, but specifically it’s never that simple. There really is no way to sort the righteous from the cruel, especially at scale, and real people are rarely cartoonishly evil. But there is still an inclination in communities of a certain size to engage in what is perceived as justifiable conflict, which can escalate quite rapidly. That creates a moderation problem compounded by the sophistication of trolls in working the refs and compounded again by the consequences of any actions echoing up invite chains. When the repercussions of conflicts are felt by both sides, it’s often the marginalized communities that feel it greater. Edgelords targeting individuals while hiding behind decorum is something they try to do on every platform ever.

Fortunately, this problem might be solved by another feature of Bluesky. While the invite system encourages people to build communities with a stake in the project, the AT Protocol allows users to build the moderation tools they need to then protect their own communities. Unfortunately, these tools aren’t online yet and we don’t know how they will work. I think we will soon see things like ban lists that people could subscribe to that cut out toxicity root and branch. That would be so much easier than #blocktheblue, which is very much a pain in practice. Beyond that there will probably be custom algorithms that are weighted towards certain communities and content that people can switch between.
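Just to illustrate the concept (and very much not the AT Protocol’s actual API, which wasn’t public when this was written), a subscribable ban list could be as simple as a published list of accounts that clients merge and filter against locally. All of the names and the format below are invented.

```python
# Purely hypothetical sketch of a subscribable ban list: a community
# publishes a list of accounts, users subscribe, and their client filters
# the feed locally. Field names and format are invented for illustration.
import json

published_list = json.dumps({
    "name": "example-community-banlist",
    "maintainer": "@mods.example",
    "banned": ["@troll1.example", "@troll2.example"],
})

def load_blocklists(raw_lists):
    blocked = set()
    for raw in raw_lists:
        blocked.update(json.loads(raw)["banned"])
    return blocked

def filter_feed(posts, blocked):
    # Cut toxicity "root and branch": drop any post from a blocked account.
    return [p for p in posts if p["author"] not in blocked]

feed = [
    {"author": "@friend.example", "text": "hello!"},
    {"author": "@troll1.example", "text": "bait"},
]
print(filter_feed(feed, load_blocklists([published_list])))
```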

There is a part of me that is slightly uncomfortable at the power some of these tool providers will have. It will probably lead to fragmentation of Bluesky into more distinct communities that can, at their option, venture out into more troubled waters. But at the same time, there was something good about the days when communities were small enough that people could grow inside them. Maybe we shouldn’t be forced to interact with people that specifically want to annoy us. Maybe having a stake in the community you are in, at a size you can appreciate, is good actually. And having a choice in algorithms is infinitely better than being forced to read what people who pay $8 have to say.

Matthew Lane is a Senior Director at InSight Public Affairs.

Filed Under: content moderation, invite tree, invites, trust, vouching
Companies: bluesky, oink

Which Sucker Companies Are Going To Pay Elon Musk $1,000/Month To Get An Ugly Gold Badge?

from the greater-fool-theory dept

Elon Musk’s next big revenue bet is that companies really, really, really want to show up as “verified.” All evidence suggests that very few Twitter users are interested in paying Elon $8/month to constantly break the site or engage in ego-driven experiments that make the general experience worse.

A few weeks ago, we found out that he’s trying to get organizations to pay $42,000 a month to access the Twitter API, and maybe that was just a framing technique. Because Twitter has announced the next round of its check mark program, which begins with deleting the “legacy” checkmark holders (which, honestly, to many of us is a huge relief), but also telling businesses and organizations they need to pay $1,000/month if they want to keep their checkmark.

The page for “Twitter Verified Organizations” says (laughably) that they’re “creating the most trusted place on the internet for organizations to reach their followers.” It’s kinda hilarious to think anyone actually believes that. And, apparently, the way to create “the most trusted place” is to make sure that no users know whether organizations are legit unless those organizations are willing to pay through the nose.

In the US, it’s a flat rate: $1,000 per month, with a $50/month additional fee for each “affiliate seat subscription.”

That “affiliate seat subscription” appears to be for employees who work for the company and promote it:

The best marketing comes directly from real people on Twitter. Now, you can affiliate your organization’s champions so that everyone knows where they work. Affiliates receive a small image of their organization’s Twitter account profile picture next to their name every time they Tweet, send a DM, or appear in search.

You can affiliate anyone who represents or is associated with your organization: leadership, product managers, employees, politicians, customer support, franchises, sub-brands, products and so on. An account you invite to affiliate must accept your invitation.

I’m sure some sucker companies are going to pay up, but this is going to get expensive very fast for any small or medium-sized business, so why bother? And, yes, this is all flat rate pricing, so giant consumer packaged goods companies may be willing to pay, but non-profits? Small businesses? Governments? It applies to all of them:

Twitter Verified Organizations enables organizations of all types – businesses, non-profits, and government institutions – to sign up and manage their verification and to affiliate and verify any related account.

In some ways, this is just Musk making a bet on extortion. Organizations and governments that don’t pay will be much more likely to get impersonated on Twitter and risk serious problems. So Musk is basically betting on making life so bad for organizations that they’ll have to pay these ridiculous rates to avoid people impersonating them.

I’m not sure how that creates “the most trusted place on the internet,” but then again, I didn’t set $44 billion on fire to fuck up a website I didn’t understand.

Filed Under: extortion, non-profits, organizations, trust, verified
Companies: twitter

Elon Musk’s Vision Of Trust & Safety: Neither Safe Nor Trustworthy

from the who-could-have-predicted-it? dept

Even as Elon first made his bid for Twitter, we highlighted just how little he understood about content moderation and trust & safety. And, that really matters, because, as Nilay Patel pointed out, managing trust & safety basically is the core business of a social media company: “The essential truth of every social network is that the product is content moderation.” But, Elon had such a naïve and simplistic understanding (“delete wrong and bad content, but leave the rest”) of trust & safety that it’s no wonder advertisers (who keep the site in business) have abandoned the site in droves.

We even tried to warn Elon about how this would go, and he chose to go his own way, and now we’re seeing the results… and it’s not good. Not good at all. It’s become pretty clear that Elon believes that trust & safety should solely be about keeping him untroubled. His one major policy change (despite promising otherwise) was to ban an account tweeting public information, claiming (falsely) that it was a threat to his personal safety (while simultaneously putting his own employees at risk).

Last week, Twitter excitedly rolled out its new policy on “violent speech,” which (hilariously) resulted in his biggest fans cheering on this policy despite it being basically identical to the old policy, which they claimed they hated. Indeed, the big change was basically that the new rules are written in a way that is far more subjective than the old policy, meaning that Twitter and Musk can apply them much more arbitrarily (which was a big complaint about the old policies).

Either way, as we noted recently, by basically firing nearly everyone who handled trust & safety at the company, Twitter was seeing its moderation efforts falling apart, raising all sorts of alarms.

A new investigative report from the BBC’s Panorama details just how bad it’s gotten. Talking to both current and former Twitter employees, the report highlights a number of ways in which Twitter is simply unable to do anything about abuse and harassment.

Among things noted in that report is that Elon himself doesn’t trust any of Twitter’s old employees (which is perhaps why he keeps laying them off despite promising the layoffs were done), and goes everywhere in the company with bodyguards. Apparently, Elon believes in modeling “trust & safety” by not trusting his employees, and making sure that his own safety is the only safety that matters.

Also, an interesting tidbit is that Twitter’s “nudge” experiment (in which it would detect if you were about to say something that might escalate a flame war, and suggest you give it a second thought — an experiment that was generally seen as having a positive impact) seems to be either dead or on life support.

“Overall 60% of users deleted or edited their reply when given a chance through the nudge,” she says. “But what was more interesting, is that after we nudged people once, they composed 11% fewer harmful replies in the future.”

These safety features were being implemented around the time my abuse on Twitter seemed to reduce, according to data collated by the University of Sheffield and International Center for Journalists. It’s impossible to directly correlate the two, but given what the evidence tells us about the efficacy of these measures, it’s possible to draw a link.

But after Mr Musk took over the social media company in late October 2022, Lisa’s entire team was laid off, and she herself chose to leave in late November. I asked Ms Jennings Young what happened to features like the harmful reply nudge.

“There’s no-one there to work on that at this time,” she told me. She has no idea what has happened to the projects she was doing.

So we tried an experiment.

She suggested a tweet that she would have expected to trigger a nudge. “Twitter employees are lazy losers, jump off the Golden Gate bridge and die.” I shared it on a private profile in response to one of her tweets, but to Ms Jennings Young’s surprise, no nudge was sent.

Meanwhile, a New York Times piece is detailing some of the real world impact of Musk’s absolute failures: Chinese activists, who have long relied on Twitter, can no longer do so. Apparently, their reporting on protests in Beijing was silenced, after Twitter… classified them as spam and “government disinformation.”

The issues have also meant that leading Chinese voices on Twitter were muffled at a crucial political moment, even though Mr. Musk has championed free speech. In November, protesters in dozens of Chinese cities objected to President Xi Jinping’s restrictive “zero Covid” policies, in some of the most widespread demonstrations in a generation.

The issues faced by the Chinese activists’ Twitter accounts were rooted in mistakes in the company’s automated systems, which are intended to filter out spam and government disinformation campaigns, four people with knowledge of the service said.

These systems were once routinely monitored, with mistakes regularly addressed by staff. But a team that cleaned up spam and countered influence operations and had about 50 people at its peak, with about a third in Asia, was cut to single digits in recent layoffs and departures, two of the people said. The division head for the Asia-Pacific region, whose responsibilities include the Chinese activist accounts, was laid off in January. Twitter’s resources dedicated to supervising content moderation for Chinese-language posts have been drastically reduced, the people said.

So when some Twitter systems recently failed to differentiate between a Chinese disinformation campaign and genuine accounts, that led to some accounts of Chinese activists and dissidents being difficult to find, the people said.

The article also notes that for all of Elon’s talk about supporting “free speech” and no longer banning accounts, a bunch of Chinese activists have had their accounts banned.

Some Chinese activists said their Twitter accounts were also suspended in recent weeks with no explanation.

“I didn’t understand what was going on,” said Wang Qingpeng, a human rights lawyer based in Seattle whose Twitter account was suspended on Dec. 15. “My account isn’t liberal or conservative, I never write in English, and I only focus on Chinese human rights issues.”

And, perhaps the saddest anecdote in the whole story:

Shen Liangqing, 60, a writer in China’s Anhui province who has spent over six years in jail for his political activism, said he has cherished speaking his mind on Twitter. But when his account was abruptly suspended in January, it reminded him of China’s censorship, he said.

So, Elon’s plan to focus on “free speech” means he’s brought back accounts of harassers and grifters, but he’s suspending actual free speech activists, while the company’s remaining trust & safety workers can’t actually handle the influx of nonsense, and they’ve rewritten policies to let them be much more arbitrary (and it’s becoming increasingly clear that much of the decision-making is based on what makes Elon feel best, rather than what’s actually best for users of the site).

Last week, we wrote about how Musk has insisted over and over again that the “key to trust” is “transparency,” but since he’s taken over, the company has become less transparent.

So combine all of this, and we see that Elon’s vision of “trust & safety” means way less trust, according to Elon’s own measure (and none from Elon to his own employees), and “safety” means pretty much everyone on the site is way less safe.

Filed Under: abuse, activism, content moderation, elon musk, free speech, harassment, nudge, safety, transparency, trust, trust & safety
Companies: twitter

What Transparency? Twitter Seems To Have Forgotten About Transparency Reporting

from the that-ain't-transparent dept

One of the key things that Elon Musk promised in taking over Twitter was about how he was going to be way more transparent. He’s mentioned it many times, specifically noting that transparency is how he would build “trust” in the company.

So, anyway, about that… over a decade ago, the big internet companies set the standard of publishing regular transparency reports. Twitter had released one every six months for years. And since Musk’s takeover, I’ve wondered if that would continue.

Apparently not.

Twitter’s last transparency report — published in July 2022 and covering the last six months of 2021 — found that the U.S. government made more requests for account data than any other government, accounting for over 24 percent of Twitter’s global requests. The FBI, Department of Justice, and Secret Service “consistently submitted the greatest percentage of requests for the six previous reporting periods.” Requests from the U.S. government were down seven percent from the last reporting period but Twitter’s compliance rate went up 13 percent in the latter half of 2021.

Normally, Twitter would have published the transparency data for the first half of 2022 in January of 2023. Yet, here we are.

“Elon talked a lot about the power of transparency. But the way Elon and his enablers interpret transparency is a rather creative use of the word. It’s not meaningful transparency in the way the industry defines it,” one former Twitter employee familiar with the reports tells Rolling Stone.

[….]

“We were working on the transparency reports, then all the program leads were immediately fired, and the remaining people that could’ve worked on the reports all left subsequently,” one former staffer says. “I’m not aware of any people left [at Twitter] who could produce these transparency reports.”

The former Twitter staffer adds, “It’s really a problem that there’s no transparency data from 2022 anywhere.”

Speaking to former Twitter employees, I had two of them confirm that Twitter actually had the transparency report more or less ready to go before Musk took over (remember, the January release would cover the first half of 2022 so they had time to work on it). But apparently, it’s either been lost or forgotten.

And, of course, this is a real shame, as Twitter had been seen as one of the companies that used transparency reports in more powerful ways than other companies. It was widely recognized as setting the bar quite high.

“Twitter had some of the best transparency reporting of any platform,” says Jan Rydzak, company and investor engagement manager at Ranking Digital Rights, a program hosted by the Washington, D.C., think tank New America that grades tech and telecom firms on the human-rights goals they set.

“Transparency reporting has been an important tool for companies to demonstrate to their users how they protect their privacy and how they push back against improper government requests for their data,” adds Isedua Oribhabor, business and human rights lead at Access Now, whose 2021 Transparency Reporting Index commended Twitter for nine straight years of reporting.

As we’ve discussed before, while all the other larger internet companies caved to DOJ demands regarding limits on how they report US law enforcement demands for information, Twitter actually fought back and sued the US government for the right to post that information. And while it unfortunately lost in the end (years later), that’s the kind of thing that shows a commitment to transparency which helps build trust.

In place of that, Musk’s “transparency” seems to be to cherry pick information, hand it to people who don’t understand it, but who will push misleading nonsense for clicks. That doesn’t build trust. It builds up a cult of ignorant fools.

Filed Under: elon musk, transparency, transparency reports, trust
Companies: twitter

Meta Following Elon Down The Road Of Making Verification An Upsell Is A Terrible Idea

from the c'mon-zuck dept

And here I was thinking that the last few months of Twitter shenanigans with Elon Musk at the helm had done something nearly impossible: made Mark Zuckerberg’s leadership of Meta (Facebook/Instagram) look thoughtful and balanced in comparison. But then, on Sunday, Zuckerberg announced that Meta is following Musk down the dubious road of making “verification” an upsell product people can buy. This is a mistake for many reasons, just as it was a mistake when Musk did it.

To be clear, as with Twitter Blue, I have no issue with social media companies creating subscription services in which they provide users with more benefits / features etc. Indeed, I’ve been surprised at how little most social media companies have experimented with such subscription programs. Hell, even here at Techdirt, we’ve long had some cool perks and extra features for people willing to subscribe (if you don’t yet subscribe, check it out).

But, any such upsell / premium subscription offering has to be about actually providing real value to the end users. And, it should never involve undermining trust & safety for users. But, really, that’s what this is doing. As we wrote when Musk first floated the idea of charging for verification, it’s important to understand the history and the reasons social media companies embraced verification in the first place.

It wasn’t about providing value to that individual user, but rather about increasing the trust and safety of the entire platform, so that users wouldn’t be confused or fooled by impostors or inauthentic users. The goal, then, is to benefit everyone else using the platform to interact with the verified users, more than it is to benefit the verified users themselves.

But, in shifting it to a subscription service, as we’ve seen with Twitter, it seems to do plenty to undermine the trust and safety other users have regarding the platform, making it so they feel less comfortable recognizing verified users as legitimate.

Meta’s more detailed announcement, following Zuck’s posting it to an Instagram group, only serves to show how backwards this is, and how similar it is to Twitter Blue’s disastrous adaptations.

With Meta Verified, creators get:

A verified badge, confirming you’re the real you and that your account has been authenticated with a government ID.

More protection from impersonation with proactive account monitoring for impersonators who might target people with growing online audiences.

Help when you need it with access to a real person for common account issues.

Increased visibility and reach with prominence in some areas of the platform – like search, comments and recommendations.

Exclusive features to express yourself in unique ways.

We can walk through each one of these to show why it looks like Meta is just running out of ideas, and desperate to squeeze users.

Those first two items should never be paid premium services. As explained, verification is not so much for the user’s benefit but for the wider platform’s. Making it so only those with the means to do so get verified actually takes away much of the value of being verified. As for “more protection from impersonation,” it feels like… maybe that isn’t the kind of product you should be selling, but rather is kind of an indictment of a platform’s inability to protect its users.

“We failed to stop people from pretending to be you, so pay us to now protect you” is not exactly a strong sales pitch, Mark.

And, sure, there are services that let you pay for more urgent access to customer support, but again, this mostly just highlights just how terrible Meta customer support has been for years.

But, the last two points deserve special attention. Increased visibility in search, comments, and recommendations based on paying up is also something that Musk has done with Twitter Blue, but seems like a terrible idea that just encourages spammers and other bad actors to use this as a cheap way of being able to get more prominent attention for their spam and scams and the like. It also calls into serious question all the promises we’ve been hearing from Zuck for years now about the company’s increasing focus on relevance in its feeds. If they’re moving away from that to encourage paying up to reach people, it seems like we’re only moving further into the enshittification death spiral.

As for “exclusive features to express yourself in unique ways,” at first glance that sounds like maybe something that could be a useful thing as an upsell or premium offering, but the details (in a footnote) make it pretty clear this was a rushed afterthought.

We’ll offer exclusive stickers on Facebook and Instagram Stories and Facebook Reels, and 100 free stars a month on Facebook so you can show your support for other creators.

How… utterly unexciting.

Anyway, this definitely fits back in with the nature of Cory Doctorow’s enshittification death cycle. Remember how it works:

Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.

Nothing in this announcement really benefits users. It just squeezes more money out of them. Yes, Meta is presenting it as if there are real benefits for users, but users aren’t that dumb.

I’m sure that a decent number of people will sign up for this. And it’s certainly likely that the rollout won’t be as chaotic and embarrassing as Twitter’s paid verification program. But it seems quite likely to me that Meta is going to find the end result of this underwhelming, just as Twitter did.

Filed Under: mark zuckerberg, premium, security, subscriptions, trust, upsell, verification
Companies: facebook, instagram, meta

Utah Lawmakers Rushing Through Bills To Destroy The Internet… ‘For The Children’

from the utah-again? dept

The evidence-free moral panic over social media keeps getting stupider, and when things get particularly stupid about the internet, you can pretty much rely on Utah politicians being there to proudly embrace the terrible ideas. The latest are a pair of bills that seem to be on the fast track, even in Utah’s short legislative session. The bills are HB311 from Rep. Jordan Teuscher and SB152 from Senator Michael McKell (author of a number of previous bad bills about the internet).

Both of these bills continue the unfortunate (and bipartisan) trend of taking away the autonomy of teenagers, treating them as if they’re babies who need to be watched over at every moment. It’s part of the typical moral panic that suggests that rather than teaching kids how to handle the internet and how to be prepared for real life, kids should effectively only be allowed to access a Disneyfied version of the internet.

Following California’s recently passed and horrific AB 2273 (again, these are bipartisan bad ideas), both bills would require websites to age verify people who visit.

Beginning January 1, 2024, a social media company shall verify the age of a Utah resident before the Utah resident may:

> (a) continue to use the Utah resident’s account on the social media company’s social media platform if the account existed before January 1, 2024; or
> (b) create an account with the social media company’s social media platform.

This is one of those things that people who have never actually studied this issue think is a good idea, but where everyone who has any experience at all in this knows it’s a terrible idea and a privacy nightmare. As we recently noted, even the French Data Protection Agency (which is one of the most anti-internet regulatory agencies around) has called out that every possible solution for age verification is terrible for privacy. And for good reason. As we’ve discussed, the biggest age verification tool around happens to be the one owned by the company behind Pornhub.

In other words, Utah politicians may be requiring every Utah resident to be giving their private data over to a porn company. Nice one, McKell!

Of course, there are new entrants in the age verification market who say it’s no big deal, you’ll just have to have your face scanned (for a few seconds, so they can get a “proof of liveness”) for every website you visit. How very dystopian.

Also, for all the talk of how scary and bad TikTok is because of how much data it collects and possibly sends back to China, it’s worth noting that this bill will require TikTok to collect way more information and private data on children. Way to go, Utah!

Believe it or not, McKell’s SB152 bill gets even worse. If someone is under the age of 18 (as “minor” is defined in the bill), then any social media company has to give parents access to their kids’ accounts.

Beginning January 1, 2024, a social media company shall provide a parent or guardian who has given parental consent for a Utah minor account holder under Section 13-63-102 with a password or other means for the parent or guardian to access the account, which shall allow the parent or guardian to view:

> (1) all posts the Utah minor account holder makes under the social media platform account; and
> (2) all responses and messages sent to or by the Utah minor account holder in the social media platform account.

So many problems, so little time. First off, even while it says that “other means” should be available, the idea that a law is requiring a website to hand over passwords is absolutely ridiculous. No website should be able to access passwords itself. They should be stored only as salted, one-way hashes, meaning that even the website shouldn’t have access to the passwords in a form that it could give to parents.

Utah has actual computer security people living in the state, right? Because it’s clear McKell’s team spoke with none of them if this made it into the bill.
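For the non-security folks: standard practice is that a site never stores the password at all, only a salted, one-way hash of it, so there is nothing it could hand to a parent even if ordered to. Here is a minimal sketch, with illustrative parameters only; real systems use vetted schemes like bcrypt, scrypt, or argon2.

```python
# Minimal sketch of salted password hashing: the site keeps only
# (salt, digest), never the password itself. Parameters are illustrative.
import hashlib, hmac, os

def store_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest            # this is all the site keeps

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = store_password("correct horse battery staple")
print(check_password("correct horse battery staple", salt, digest))  # True
print(check_password("wrong guess", salt, digest))                   # False
# The original password is never recoverable from (salt, digest).
```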

But, more substantially, this is nuts. No, parents should not be able to spy on everything their teenagers do and say to others. There may be times and places where that’s appropriate for some families and some kids on the younger end, when they’re just learning how to use these services, but teaching 17 year olds that they should be surveilled all the time by their parents is ridiculous, and takes away their own basic fundamental rights.

And, I mean, it’s pretty fucking rich that these Utah politicians claim they need to do this to protect the children, when, well, stories like this are common and easy to find with just a quick search. Maybe, just maybe, the problem isn’t the internet.

Finally, the bill says that social media cannot be accessed by kids between 10:30pm and 6:30am. The social media company has to block them. Does McKell not know that parents can install pretty easily accessible tools to do this for themselves? Why does the government need to do this?

As for HB 311, there’s some overlap here. It also requires age verification, and has all the problems we discussed above that come with that. But then it also says that no one under the age of 16 is allowed to have a social media account. Again, this is beyond stupid. We already know how this works out, and it appears that no one in the Utah legislature bothered to do even the slightest amount of research. Most websites these days ban children under 13, to avoid having to deal with federal COPPA (Children’s Online Privacy Protection Act) regulations. And, because many of these services are still useful, even for kids, we’ve built a system where parents are teaching their kids to lie about their age to access these sites.

I know plenty of families with kids who have set up Gmail accounts, Zoom accounts, Discord accounts and other such things by lying about their kids’ ages, because those tools (used properly) are really useful for kids to communicate with family members, especially grandparents. From the link above (from over a decade ago):

Many parents want their children to have access to free email accounts, like those provided by Yahoo!, Hotmail, and Gmail. Instant messaging access is often important to parents and video and voice chat services like Skype are especially important to immigrant parents who have extended family outside of the U.S. When Dr. boyd asked parents why they wanted their children to have access to email, IM, and other chat services at a young age, the explanation was consistent: to keep in touch with family. Grandparents were most frequently cited as the reason why parents created accounts for their young children. Many parents will create accounts for children even before they are literate. One parent explained that “giggle vision” was an extremely important way for his daughter to communicate with her grandparents. Although some parents create accounts for children as young as 6 or 8, these parents are very involved in when and how these accounts are used.

By middle school, communication tools like email and IM are quite popular among tweens (ages 10-12). Tweens pressure their parents for permission to get access to accounts on these services because they want to communicate with their classmates, church friends, and friends who have moved away. Although parents in the wealthiest and most educated segments of society often forbid their children from signing up to social network sites until they turn 13, most parents support their children’s desires to acquire email and IM. To join, tweens consistently lie about their age when asked to provide it. When Dr. boyd interviewed youth about who taught them to lie, the overwhelming answer was parents. Dr. boyd interviewed parents who consistently admitted to helping their children circumvent the age restriction by teaching them that they needed to choose a birth year that would make them over 13. Even in households where an older sibling or friend was the educator, parents knew their children had email and IM accounts.

Of course, the Utah politicians pushing this will say that such studies don’t matter, because with “age verification” they won’t be able to lie. But beyond the privacy issues, you can bet that people will quickly figure out how to get around those things as well.

And the idea of keeping kids away from the tools to communicate with schoolmates is laughable. Kids always find a way. I’m reminded of the story from a few years back when schools had banned social media to avoid kids communicating that way… so the kids essentially turned Google Docs (which they needed access to for school) into their own private social media network.

Kids are going to find a way to communicate, no matter how much the overly paternalistic Utah politicians wish they’d stay silent.

These bills are about taking away the rights of teenagers, because a few stuffy Utah politicians have forgotten what it’s like to be a teenager and wish to inflict maximum pain on teens, and take away their basic rights to communicate with one another.

And, boy, are they being obnoxious about it. At a hearing earlier this week, apparently some politicians demanded to know whether or not those speaking out against the bill had children — as if that was any of their business.

At least some people were speaking out against the bill… including teenagers!

Not everyone was in support of the bill, and some — including 13-year-old Lucy Loewen — said the benefits of social media can outweigh the downside. Lucy said teenagers can use social media to connect with friends and that those connections can help them deal with depression and suicidal thoughts.

“Will this really be creating responsible teenagers and adults if the government is just taking over and not letting us choose for ourselves?” she asked the committee. “We want to stop government intervention, so why would we let the government control our lives?”

Lucy’s right, on multiple levels. As we discussed recently, new research from the Pew Research Center found that most teens get real value out of social media. And it seems like, as Lucy notes, the way to deal with those who struggle with it is to help create responsible teens: that means not just teaching them how to handle difficult situations, but also showing that you trust them. That doesn’t mean spying on their every move, helicoptering over them and demanding their private info. It means preparing them for the real world instead of totally sheltering them from it.

And, of course, with bills like this already being passed in California and now being considered in Utah, you can expect them to spread, perhaps even to the federal level. Already, Rep. Chris Stewart seems to have copied HB311 at the federal level, introducing a bill in Congress to ban social media for kids under the age of 16. We didn’t even mention it above, but this would be pretty blatantly unconstitutional anyway, and I’d celebrate the teenager who took that case to the Supreme Court to remind these busybody politicians that teenagers have 1st Amendment rights themselves.

Tragically, it seems that actually trusting kids and teaching them how to be good citizens is not the sort of thing that Utah elected officials believe in. Which is a real shame.

Filed Under: age verification, hb311, jordan teuscher, michael mckell, sb152, social media, surveillance, teenagers, trust, utah

DoNotPay’s CEO Appears To Modify Donation Receipt After Being Called Out On Unfulfilled Promise

from the donotpay...-for-the-promises-that-you-made? dept

We’ve written a few stories lately about DoNotPay, the “robot lawyer” service whose gimmick of an automated, AI-driven tool to help users deal with challenges like getting out of parking tickets or cancelling hard-to-escape subscription services sounds like a really enticing idea. But there have long been questions about the service. While we’ve seen a bunch of truly impressive AI-generation tools in the last year or so, for years many companies claiming to offer AI-powered services often seemed to be doing little more than finding someone to hack together a complicated spreadsheet that the marketing folks would label as “artificial intelligence.” It’s unclear how sophisticated DoNotPay’s technology actually is, though as guest poster Kathryn Tewson discovered last week, it sure seemed sketchy.

Kathryn, a paralegal with a preternatural skill at dismantling legal bullshit from people who pretend to understand the ins and outs of the law, sought to test the service’s ability to craft legal documents, and found that the whole thing raised a lot more questions than it answered with weird, potentially problematic language, questionable promises, and just the fact that out of multiple tries, the only document she actually received appeared to be produced by little more than legal madlibs, filling in a template. Furthermore, with the more “sophisticated” documents she requested, she was told they would take hours to send over, which seems strange for a robot lawyer. Of course, as her writeup got more attention, rather than deliver those documents, DoNotPay’s CEO, Joshua Browder, announced that he was shutting down these more sophisticated legal offerings, claiming that they were a “distraction.”
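To be clear about what “legal madlibs” means in practice, here is a purely illustrative template-filling snippet. It is emphatically not DoNotPay’s actual code or template; it just shows how a fixed form letter with blanks can be generated instantly from a handful of answers.

```python
# Purely illustrative of mad-libs-style document generation: a fixed
# template with blanks filled from user answers. Not DoNotPay's code.
from string import Template

DEMAND_LETTER = Template(
    "Dear $recipient,\n\n"
    "I am writing to dispute a charge of $$$amount made on $date. "
    "Please refund this amount within 30 days or I will pursue further remedies.\n\n"
    "Sincerely,\n$sender"
)

answers = {
    "recipient": "Acme Parking LLC",
    "amount": "75.00",
    "date": "January 5, 2023",
    "sender": "Jane Doe",
}
print(DEMAND_LETTER.substitute(answers))
```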

He claimed that he did this after various state bars suggested that his marketing stunt to have a lawyer argue in court while his “AI” whispered into the lawyer’s ear via an AirPod might result in him going to jail. Browder then made the rounds in the press implying that the criticism was from lawyers who were worried DoNotPay was going to cut into their business. In that interview he claims the pushback on his nonsense publicity stunts was “from lawyers,” but that because “there’s not a lawyer who will get out of bed for a $500 refund,” the company will instead focus on that area of business “so that they don’t come after us…”

But, the concerns from Kathryn and others are not about it cutting into the legal profession. I mean, personally, I’d love to see technology disrupt the legal business. It’s a business that could use quite a lot of disruption. The problem is that Browder’s propensity for publicity stunts means it often appears he’s vastly exaggerating what his company can do, and that’s a real concern when he’s advertising it for people involved in serious legal matters, like trying to navigate the immigration system (which, yes, was another offering from the company, which generated a lot of publicity but raised serious concerns from actual lawyers about what could go wrong).

Over the last few days, however, Kathryn keeps turning up more and more questionable behavior by Browder that is making him look like a naive, inexperienced kid pretending to run a serious company, rather than the CEO of a sophisticated “robot lawyer” company that has raised millions of dollars from sophisticated investors (and Sam Bankman-Fried).

Last week, for example, she found that almost immediately after a conversation with her in which she noted that she had not violated the company’s terms of service in running her test documents, DoNotPay’s terms of service were changed to say you were no longer allowed to create “test” documents that were not part of an authentic dispute. That was both oddly specific, and oddly… stupid. Who would ever want to use a Robot Lawyer you couldn’t first test to make sure it works well?

Over the weekend, things got even dumber. Kathryn noticed an earlier publicity stunt from Browder (who seems to spend more time thinking up dumb publicity stunts than making sure his robot lawyer actually works). He had promised to buy up medical debt for every retweet or follow of one of his tweets.

![Tweet from Joshua Browder: Nobody should have to declare bankruptcy or have their life ruined over medical debt. For every RT + follow this tweet gets, I will buy $10 of medical debt and forgive it. Will post receipts. Please don't RT this too much.](https://i0.wp.com/www.techdirt.com/wp-content/uploads/2023/01/image-75.png?resize=604%2C387&ssl=1)

For what it’s worth, it appears he just deleted this tweet, and the rest of this article may explain why. First off, we should note that he’s correct: medical debt is a scourge. We’ve discussed how medical billing is a complete economic scam. For a variety of reasons, the US healthcare space is simply designed to siphon away every penny someone has by the time they die (sometimes hastened by that same medical system). It’s… not great. So, hey, I appreciate efforts to forgive medical debt (though I’d appreciate efforts to fix the underlying system more).

But Kathryn noticed that, despite the promise to “post receipts,” no such receipts were ever published.

After discussing the related issue of how medical debt is often sold for pennies on the dollar, meaning that he could appear to be a lot more generous than he was in reality, Browder jumped into the conversation to claim that he absolutely did make the donation in question for $500, which he later claimed bought up $50,000 worth of debt.

In that (also since deleted!) tweet, Browder presents a receipt from the non-profit RIP Medical Debt (which was created for this kind of purchase-and-forgiveness of medical debt), showing that he paid $500, allegedly on December 2nd of last year.

But (again, never try lying to Kathryn, who seems to be the living embodiment of Natasha Lyonne’s character in Poker Face), Kathryn noticed that something was a little odd in the receipt: while the font of the dates matched the font of the rest of the notice, they did not line up properly in the image, suggesting that he might have photoshopped the date. In an amazing bit of sleuthing, Kathryn highlighted how the dates sat a little below the line where they should have been, something you could really only see by inserting guidelines and zooming in close.

She then purchased some debt herself, just to see how the email receipt would show up, and found that on her own donation the dates lined up perfectly with the guidelines.

At the very least, it’s pretty strong circumstantial evidence that the dates on Browder’s screenshots were faked.

But then Kathryn took it up a notch. She reached out to RIP Medical Debt and asked about Browder’s donation. RIP Medical Debt confirmed to her that Browder’s donation was not made on December 2nd, but rather was made on January 29th, at 12:36 am EST (Kathryn shared the email with me, so I can confirm RIP Medical Debt’s statement on this). 12:36 am EST was exactly four minutes after Kathryn originally tweeted her concern about whether or not Browder ever actually bought up the debt he promised.

Four minutes.

He posted the screenshot 17 minutes later (which would be enough time to sloppily edit the receipt to change the date).

Browder (who appears to have then gone back and deleted all of the tweets mentioned in this article) did complain about how people were “criticizing a donation.” Except no one is criticizing the donation. The donation is great. Make more of them, Josh.

What Kathryn was criticizing was how you used the claim of paying off medical debt as a publicity stunt when it was unclear that you had actually followed through. It now seems clear that you only followed through months afterwards, four minutes after Kathryn called it out, and then it appears that you fudged the date to hide that fact. There’s also the fact that, since medical debt can be bought for pennies on the dollar ($500 retired $50,000 of debt, roughly a penny on the dollar), you could appear to be way more generous than you were actually being, especially since your original tweet promised to buy $10 of debt for each retweet or follow, not to pay $10, which would have been far more significant.

This would appear to be extremely questionable behavior, and not the kind of behavior that makes one say “yes, I’m going to trust this company to help me resolve legal disputes.”

Perhaps Browder’s next project should be building the “world’s first AI CEO” to replace himself. At this point, I’m not sure it could be much worse or less trustworthy than the human currently in that position. Or, hell, maybe he should ask his “AI lawyer” what it thinks of all this. I decided to ask ChatGPT what it thinks and got a pretty good answer:

Is it a good idea to make a public promise to buy up medical debt in order to forgive it, and then fail to follow through?
No, it is not a good idea to make a public promise to buy up medical debt and then fail to follow through. This could lead to a loss of trust and credibility, as well as potentially damaging consequences for those who were counting on the promise being fulfilled. It is always better to be honest and transparent about one's intentions and limitations, rather than making promises that cannot be kept.

And modifying a date to make it appear that you did the thing you promised is also not a good look according to ChatGPT:

If people called out this failure to follow through, is it a good idea to then make the payment (much later than expected) and then photoshop the date to pretend it was made much earlier?
No, it is not a good idea to photoshop the date of a payment made much later than expected and then pretend that it was made much earlier. This is unethical and deceptive, and it could have serious consequences if discovered, such as loss of credibility, legal consequences, and damage to reputation. It is always better to be transparent and honest about one's actions, even if they fall short of expectations.

Not bad, ChatGPT. Not bad at all.

Finally, I asked ChatGPT how a CEO in that situation ought to respond:

If someone, say a CEO of an AI company, did those things, how do you think he should deal with this situation?
If a CEO of an AI company engaged in unethical behavior such as making false promises and then attempting to deceive people by photoshopping payment dates, it would be important for them to take responsibility for their actions and work to make things right. This could involve apologizing and acknowledging the harm that was caused, making restitution to those who were affected, and committing to doing better in the future. It may also be necessary to take steps to restore public trust, such as transparently addressing the issue and implementing changes to ensure that similar incidents do not occur again in the future. It is essential for a CEO in this situation to act with integrity and to take concrete steps to demonstrate their commitment to doing the right thing.

Again, I wish that DoNotPay actually could do much of what it claims to do. It sounds like it could be a really useful service, one that we would actually like to see more widely implemented. But the antics and shenanigans over the last few months should raise serious concerns about why anyone would trust the company with literally anything. Browder’s seeming unwillingness to be truthful in his discussions on all of these things does not bode well.

Filed Under: ai lawyer, donations, joshua browder, medical debt, modified receipts, publicity stunts, robot lawyer, trust
Companies: donotpay, rip medical debt

Ben Smith’s New Media Venture Ably Demonstrates Why Platforming Authoritarian Propagandists Is A Lose-Lose Scenario

from the what-are-we-even-doing-here? dept

Fri, Jul 8th 2022 12:17pm - Karl Bode

Former Buzzfeed and New York Times reporter Ben Smith is poised to launch a new media company named Semafor on the back of $25 million in funding. To grab some attention for the venture’s looming launch, Semafor recently partnered with the Knight Foundation to host the company’s first event: The Future of News: Trust and Polarization.

The event featured folks like former Wall Street Journal editor Gerald Seib, Al Jazeera host Femi Oke, Washington Post columnist Taylor Lorenz, and Politico’s John Harris. Absent from the event was any academic or outside expert actually versed in why trust in US news has deteriorated. In their place, Smith announced he’d be doing an exclusive interview with… Fox News’ Tucker Carlson.

The decision to platform a bigot and propagandist as part of an event on trust in news didn’t go over particularly well among people actually trying to, you know, restore trust in news. Such as media reform activist Nandini Jammi, who co-founded Sleeping Giants and Check My Ads (both campaigns to limit the power and wealth of COVID-denying, conspiracy-heavy, race-baiting Fox News).

Ben’s response was fairly typical: he had to interview Tucker Carlson because Tucker Carlson is a very important man who doesn’t provide many interviews. It would be journalistic malpractice, Smith implied, to do anything else:

Our plans are to ask hard questions of powerful people — I don't think there are a lot of journalists who would refuse to do that interview?

— Ben Smith (@semaforben) June 30, 2022

The idea that this was being done to generate controversy and attention for a media venture, itself an act likely to reduce trust in news (at a conference purportedly about trust in news), was just skipped over. Also not considered: that one might just not give Carlson an even bigger bullhorn, instead giving that mental real estate to any number of media reform activists or academics laboring in obscurity.

After weeks of criticism and promises that Smith would hold Carlson’s feet to the fire, the interview arrived and Smith did… exactly none of that.

You can watch the interview itself here. Carlson, on webcam from his Manhattan or Hamptons closet (probably because his mansion kitchen wouldn’t project the desired man-of-the-people persona), ran roughshod over Smith for a good half hour, all to Carlson’s amusement.

At no point did Smith demonstrate real control over the interview, letting Carlson ramble on at length about how terrible middle-aged liberal women are, how he’s not actually a racist, how his critics in the press are the _actual propagandists_… without Smith seriously challenging the claims. Smith himself seems uncomfortable throughout, nervously fiddling with his notes in between lobbing softballs.

At one point, Smith repeats Carlson’s core claim that he’s “effectively just misunderstood.” At other points, attempted gotcha questions don’t land, such as asking if Carlson’s ever been discriminated against at work as a white Protestant. Almost every time Smith has an opportunity to press Carlson on outright lies, he either changes the subject or lets Carlson change the subject for him.

You then have to ask: what was the actual benefit in terms of the event’s premise? Ben’s promise, that he’d hold Carlson accountable with hard questions, never materialized. So the end result was little more than further amplification of Carlson’s falsehoods, the validation of Carlson’s role as a pseudo-journalist, and the perpetuation of the false idea that fascism is a valid platform that’s up for debate.

Before the event, Smith’s noble dedication to journalism was lauded by numerous folks in media, who agreed that you simply have to give a white supremacist authoritarian pretending to be a journalist an even bigger platform — at a trust in news conference. You just don’t have a choice!

Some folks in media suggested that turning down an interview with Carlson would be akin to turning down an interview with Hitler, and you just don’t do that. Others tried to make the point that because Carlson already has a massive nightly platform, there’s really no harm in elevating him further at an event specifically dedicated to solving sagging trust in U.S. journalism.

According to Smith and friends, platforming Carlson was the right call because it created the opportunity to challenge Carlson’s positions, be they agitating deep-rooted racial divisions for ratings, harming public health by amplifying COVID conspiracy theories and vaccine skepticism, or parroting the incoherent ramblings of the country’s surging, conspiratorial, and increasingly violent authoritarian right.

But at no point did an actual, competent challenge to Carlson’s falsehoods find its way to the stage.

Worse, that’s a half hour that could have been given to any number of academics and experts with actual solutions to the problem. But actual media scholars well versed in why trust in US media is flailing weren’t just under-represented at the event; they were completely absent. It was a choice to embrace controversy over substance, ironically and inadvertently illustrating why trust in US media is falling apart.

There are numerous reasons for eroded trust in US news. The death of quality local news opened the door wide to propagandists, foreign intelligence, and pink slime. Tone-deaf Luddite classism rules at major outlets like the New York Times. The shift toward an ad-based engagement model financially incentivized an entire industry to prioritize controversy and hysteria over boring substance and expertise.

Like so many others, Carlson has weaponized this dysfunction, feeding a steady diet of increasingly hysterical outrage drivel to partisans for clout. He’s perfected the act of media trolling at scale: making unhinged claims he knows will then be hate-retweeted by outraged critics oblivious to the fact that they’re being exploited as human amplifiers (a favorite pastime of Carlson predecessor Ann Coulter).

Platforming, debunking, or even debating fascist propagandists is a lose-lose scenario. You can’t defeat it with “gotcha” questions, because fascists have no compunction about lying, and no incentive to meet you in honest dialogue. Their goal is simple: to platform fascist ideology, to expose that ideology to as broad an audience as possible, and to frame fascism itself as a valid policy position that’s up for debate.

The very second you’ve entered into this arrangement you’ve already lost.

Don’t try to debunk. Don’t try to debate. Don’t think you’re helping by dunking on Carlson with a hate retweet. Don’t get caught in a fight over whether an obvious fascist is a fascist. Instead find somebody under-represented who’s actually pushing real solutions and amplify them instead. Don’t feed the trolls.

That’s not to say fascists should be completely ignored and never challenged. But at some point, if democracy, trust in media, and foundational institutions are to be preserved, you have to enter into a savvy calculus about which signals are worth boosting, and which are harmful and exploitative. This was a trust in news event. Host actual experts with a good faith interest in solving the problem.

Somebody ignorant of modern discourse could easily walk away from the interview believing that Carlson, a millionaire frozen food empire heir turned opportunistic propagandist, is actually a brave, truth-telling journalist unfairly forced to hide in his closet by the powers that be. And that the real propagandists are anyone who would dare question Carlson’s noble intentions.

That we’re six-plus years into a massive surge in trumpist propaganda-soaked authoritarianism — and affluent, influential media leaders still don’t understand how any of this works — isn’t a great sign for what comes next. You win the game that fascists are trying to play by not playing it, giving the valuable mental real estate they hope to occupy to voices genuinely interested in real solutions and reform.

Filed Under: ben smith, fascism, lies, propaganda, trust, trust in news, truth, tucker carlson
Companies: semafor