
Protocol-Based Social Media Is Having A Moment As Meta, Medium, Flipboard, And Mozilla All Get On Board

from the make-it-so dept

Over the last couple of weeks there have been a number of interesting developments regarding protocol-based, decentralized social media, and each time I've plotted out an article about it, something else has popped up to add to the story, including Thursday evening, when, just as I finally started writing this, news broke that Meta (parent company of Facebook and Instagram) is at least in the early stages of creating an ActivityPub-compatible social media protocol and app, one that it considers something of a Twitter competitor.

Meta, the parent firm of Facebook and Instagram, is hashing out a plan to build a standalone text-based content app that will support ActivityPub, the decentralised social networking protocol powering Twitter rival Mastodon and other federated apps, people familiar with the matter told Moneycontrol.

The app will be Instagram-branded and will allow users to register/login to the app through their Instagram credentials, they said. Moneycontrol has seen a copy of an internal product brief that elaborates on the functioning and various product features of the app.

The program is apparently codenamed P92, and conceptually it makes sense. Platformer got the company on the record confirming the effort:

“We’re exploring a standalone decentralized social network for sharing text updates,” the company told Platformer exclusively in an email. “We believe there’s an opportunity for a separate space where creators and public figures can share timely updates about their interests.”

I’m at least a little amused, because I’ve had multiple conversations with Meta/Facebook execs over the years regarding my “Protocols, Not Platforms” paper, explaining to them why it would make sense for the company to explore the space, and was told repeatedly why they didn’t think it would ever make sense for a company like Meta.

How times change.

Back in December, we predicted this sort of thing, asking when ActivityPub might have its “Gmail moment” and discussing how Google single-handedly changed email when it entered the market with Gmail on April 1, 2004.

And in the last couple of weeks there have been a bunch of really interesting moves from companies with long internet histories. It started last week when news aggregator Flipboard announced not just a tepid ActivityPub integration, but a plan to fully embrace it. Flipboard founder Mike McCue stopped by my office the day before the announcement to talk about the company's plans, and this isn't just a random side project. McCue recognizes that betting on protocols is the way to bring back the promise of the early internet and to move us away from being solely reliant on internet giants. While it's still early, the company has already launched its own instance for Flipboard users to sign up on (if they're not already on another instance), and has deeply integrated Mastodon into the app in ways that feel completely organic and natural (to the point that I, as a lapsed Flipboard user, have begun exploring the app again).

Days later, the ever popular site for hosting long-form writing, Medium (which was founded by Twitter and Blogger co-founder Ev Williams) announced that it, too, had launched its own Mastodon instance at me.dm for members of its $5/month premium subscription.

And, just around the time that the Meta news became public, Mozilla (which had previously announced such plans) turned on its own instance, mozilla.social.

All of these are important moves, and all of them happening within a two-week period suggests that momentum is building toward a protocol-based world over a centralized, siloed one.

Also, having these larger companies embrace the space will do a bunch of important things to drive a protocol-driven world forward. For starters, they will hopefully help with the onboarding process — one of the major things that new users complain about in trying to get set up with Mastodon. The dreaded “but what server should I use?” question seems to stump many — but with more recognized and trusted brands entering the space, that question becomes less of an issue.

With these companies entering the fediverse, we're also likely to see much greater improvement in other areas, including new efforts to improve features and UI. We've already seen a bunch of mobile and web app developers creating more beautiful front ends for Mastodon, and I'm expecting a lot more of that.

I also expect that this will filter down into the core code and protocol. With more companies working to join the fediverse, it creates something of a virtuous cycle that should benefit the wider space. It also should allow for much greater experimentation with new ideas and features (and that might lead to busting some old myths that resulted in poor initial design choices).

There are also lots of important features — especially tools for admins — that really haven't received nearly enough attention and development, and having these bigger companies involved, companies that understand the space and the need, will hopefully spur more of it.

Of course, as noted, as I started to plan out this article, I was mostly focused on the companies like Flipboard, Medium, and Mozilla and their efforts. All three have been extremely respectful in how they’ve been exploring and entering the fediverse. All three seemed to focus on participating and listening as they figured out their plans, and doing so in a way that fits with the fediverse, rather than trying to bend it to their will (and even so, they did upset some people).

Meta, somewhat obviously, is a bit of a different beast. And certainly some on Mastodon and other ActivityPub platforms are worried. I’d argue, however, that Meta embracing ActivityPub is a phenomenal thing. First: it’s validation. It shows that Meta recognizes that something is happening. Second, everything I noted above about spurring needed improvements also applies here and Meta could provide a lot of help. Third, even as there are some who want to keep Mastodon smaller, if it’s really going to thrive, it needs to continue to grow and be introduced to more people. The nice thing about the fediverse is that you can craft it to meet your own needs, so if you really want to keep it small, there are ways for you to do that yourself, and create a smaller community.

But the biggest reason why I think it's so important that Meta is now even willing to explore the fediverse is that it shows (as my paper suggested) that the largest, most siloed companies can absolutely benefit from moving away from that model and towards a more open, distributed, protocol-based world. The old Twitter had suggested that could be the case when it embraced protocols and set up the independent Bluesky project, which Jack Dorsey and Parag Agrawal intended would eventually replace Twitter's infrastructure. But seeing Meta explore it as well is obviously even bigger. And, honestly, I'd be shocked if Google weren't similarly playing around with something.

Of course, this is Meta we're talking about. There's just as much likelihood that P92 never amounts to anything. There's also the possibility that Meta tries the old Microsoft "embrace, extend, extinguish" playbook. However, one of the nice things about ActivityPub is that it should be somewhat resistant to such efforts, and that resistance creates its own check on companies like Meta: if Meta starts acting "evil," the fact that users can easily move elsewhere (without losing contact with everyone) acts as a natural pressure valve.
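To make that "easy to move elsewhere" point concrete: in the fediverse, your identity is a handle on whichever server you choose, and every other server can find it through the same standard WebFinger lookup (RFC 7033). Here is a minimal sketch of that discovery step in Python; the handle and domain are hypothetical, and real servers vary in exactly which links they return:

```python
# Minimal sketch of fediverse account discovery via WebFinger (RFC 7033).
# The handle below is hypothetical; any ActivityPub server answers the
# same way, which is what keeps identity portable across instances.
import json
import urllib.parse
import urllib.request

def resolve_actor(handle: str) -> str:
    """Resolve 'user@domain' to the URL of its ActivityPub actor document."""
    user, domain = handle.lstrip("@").split("@")
    query = urllib.parse.urlencode({"resource": f"acct:{user}@{domain}"})
    url = f"https://{domain}/.well-known/webfinger?{query}"
    with urllib.request.urlopen(url) as resp:
        jrd = json.load(resp)
    # The "self" link of type application/activity+json points at the actor.
    for link in jrd.get("links", []):
        if link.get("rel") == "self" and link.get("type") == "application/activity+json":
            return link["href"]
    raise LookupError(f"no ActivityPub actor found for {handle}")

print(resolve_actor("alice@example.social"))  # hypothetical account
```

Because discovery works the same against every compliant server, an account on a Meta-run instance is no more locked in than one anywhere else: if the host misbehaves, users can set up on another instance and the rest of the network resolves the new address the same way.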

And, speaking of Bluesky, last week it also opened the (currently invite-only) doors to the beta version of its app. While I'm excited about ActivityPub and Mastodon, I'm also excited about Bluesky. As I've discussed, the folks working on it are incredibly thoughtful in how they've been approaching this, and I think that the underlying AT Protocol they've created actually solves many of the protocol-level limitations found in ActivityPub that have frustrated some folks in the fediverse. I believe that the Bluesky team explored ActivityPub and recognized its limitations, and that was the reason it chose to work on the AT Protocol instead.

I do wonder, however, if Bluesky is going to end up deciding that it needs to embrace ActivityPub in some form or another as well, especially as ActivityPub has been building a larger and more entrenched userbase (which may continue to grow as more companies move in). I'm still optimistic about Bluesky, because I think its approach is even better than ActivityPub's, but in the end, having a critical mass of users is the most important thing.

All that said, this much activity in the last few weeks shows that protocol-based social media is having a moment. I’m not saying that it’s the moment that inevitably leads to a bigger shift in how we view the internet, because it could still all come crashing down. But, something’s happening, and it’s pretty exciting.

And it brings me back to a question I asked a few months ago: why would anyone spend time embracing/using another centralized social media service after this? This is your opportunity to contribute to a better future internet. For all the complaining about “big tech” and the lack of competition, here’s the chance to make a difference, to embrace an internet that is more about the users than the companies, where power and control are moved to the ends of the network (the users) rather than the owners of the walled gardens.

There’s a real opportunity now to help make that better future. I recognize that there’s a decent contingent of cynical people out there who keep telling me it will never work, and we’re all locked into this world of big awful companies. And, who knows, perhaps things will go that way. But, why give in to that when there’s at least a real chance for something better? Something that more approximates the end-to-end internet we were promised?

Something is happening right now, and its success or failure is dependent on what people do next. So why would we not join in and try and build something better? Join a fediverse instance, encourage others to join, or even create your own. Participate in the myriad discussions about how to make things better for everyone. Generate ideas of how the technology can be put to use for good, and then put those ideas into action.

Filed Under: activitypub, at protocol, decentralized, fediverse, platforms, protocols, protocols not platforms, social media
Companies: bluesky, flipboard, instagram, medium, meta, mozilla, twitter

Man Sues Multiple Social Media Services, Claims Banning His Accounts Violates The Civil Rights Act

from the new-twist,-but-not-a-smarter-twist dept

Everybody wants to sue social media platforms for (allegedly) violating the First Amendment by removing content that most platforms don’t feel compelled to host. Most of what’s sued over is a mixture of abusive trolling, misinformation, bigoted rhetoric, and harassment. Plaintiffs ignore the fact that private companies can’t violate the First Amendment. The First Amendment does not guarantee anyone the right to an audience or the continued use of someone’s services.

Then there’s Section 230 immunity, which shields platforms from lawsuits filed over content posted by users as well as their own moderation decisions. This immunity has angered everyone from the lowliest troll to the lowliest President of the United States of America. No number of complete losses appears capable of deterring the next hopeful plaintiff from lobbing a sueball into court with the hope that the presiding judge will be as batshit crazy as the allegations and arguments contained in the lawsuit.

Some litigants (and some of our stupider legislators) continue to insist platforms like Twitter are indistinguishable from phone companies. Ignoring the transient nature of "carrying" fleeting communications, these hopefuls insist Big Tech is just Ma Bell and must be compelled to "carry" their content… forever. No court has agreed with this argument, the occasional word dump by the usually silent Justice Clarence Thomas notwithstanding.

Maybe the solution is to short-circuit this determination by simply declaring social media companies to be common carriers. That's the approach of the plaintiff in this lawsuit, who's angry that a number of online services deleted his Zionist conspiracy theory content. This suit [PDF], filed in Massachusetts, kicks things off by declaring platforms to be common carriers, using boldface type to drive the point home.

The defendants in this case are Twitter (drink!), Facebook (drink!)… um… LinkedIn (drink?), Medium (you have reached your limit of free drinks for this month), The Stanford Daily Publishing Corp. (please create an account to drink), and The Harvard Crimson, Inc. (I graduated drunk, he casually dropped into the unrelated conversation). Plaintiff Joachim Martillo insists at least the first three are common carriers. His legal arguments for this theory are mostly the subheads.

Defendant Twitter Inc (A Common Carrier, Defendant 1)

Twitter Inc (Twitter) operates as a platform for public self-expression and conversation in real time. The company offers various products and services, including the Twitter platform that allows users to consume, create, distribute, and discover content. Twitter provides common carriage for a fee and in exchange for work.

And so it goes for both Facebook and LinkedIn. The remaining defendants are not declared to be common carriers. Martillo also notes he has filed similar lawsuits (one for each defendant in this lawsuit) in [checks filing] Dorchester Municipal Court.

After quoting Justice Clarence Thomas’ recent ramblings about how much “power” he feels these private companies have over public discourse, Martillo moves on to claim Section 230 of the CDA allows platforms to avoid their obligations under other federal anti-discrimination laws like the Civil Rights Act of 1964, the Americans with Disabilities Act, and… the Fair Housing Act (go home lawsuit, you’re drunk).

Martillo actually makes the argument that a social media platform is a physical entity that should be accessible to everyone, using verbiage apparently cribbed from the Time Cube website.

It is not necessary to consider the public accommodation that Facebook provides to be virtual. Computer scientists use virtualization to describe complex electronic structures including transient gate state structures created by a logic device like a microprocessor. These structures are completely material…

Achievement unlocked: red pill consumed.

How does this all connect?

The plain text of the CDA (Communications Decency Act) provides no indication that the CDA is meant to override civil rights law.

Martillo is correct, but not in the way he thinks. The CDA does not allow platforms to engage in discriminatory hiring practices or discriminate against certain users because of their race or other immutable characteristics. (It also does not protect them from being prosecuted or sued for federal law violations.) This does not mean they cannot moderate content, even if some users might perceive their moderation efforts to be discriminatory. And that’s the crux of Martillo’s arguments. He feels he’s been discriminated against because he is, shall we say, “anti-Zionist.”

The Title II violation by Facebook seems to be directed primarily at Palestinians, Arabs, Muslims, and Diaspora Jews that reject Zionism. No other groups protected under the CRA seems to be subject to harassment by organized persecutors attempting to establish or to maintain a cultural hegemony.

[…]

The response [of Facebook] is more akin to the behavior of a restaurateur that bans blacks from his restaurant because the KKK has threatened him or his restaurant.

What follows from this is Martillo attempting to make the case that his pro-Palestinian content was taken down by the Zionist collectives that supposedly handle Big Tech social media moderation. That includes non-Big Tech defendants like the sites run by Stanford and Harvard, where his comments suggesting (in somewhat circular fashion) that Zionists are evil were removed, he claims, by alleged Zionists staffing those student sites.

Martillo also apparently startled LinkedIn by sharing content on its platform, forcing it to wake up its on-call moderators to do some moderating.

According to Martillo, these moderation efforts violate the Civil Rights Act, although he is unable to explain how he’s being discriminated against. Nor does he specify which protected group he’s a member of. There’s a “denial of common carriage” claim in there (because of course there is) that Martillo feels is worth at least $3.65 million at the time of this filing.

Needless to say, this lawsuit won't go anywhere, even if the plaintiff feels Clarence Thomas's off-hand remarks on the power of social media companies mean something. Social media services aren't common carriers. Section 230 will immunize all of the defendants. And the First Amendment ensures they can't be forced to carry Martillo's content, no matter how fervently he believes he's being discriminated against by a Zionist cabal.

Filed Under: 1st amendment, common carriers, content moderation, discrimination, joachim martillo, section 230, zionism
Companies: facebook, harvard, linkedin, medium, stanford, twitter

Content Moderation Case Study: Dealing With Demands From Foreign Governments (January 2016)

from the gets-tricky-quickly dept

Summary: US companies obviously need to obey US laws, but dealing with demands from foreign governments can present challenging dilemmas. The Sarawak Report, a London-based investigative journalism operation that reports on issues and corruption in Malaysia, was banned by the Malaysian government in the summer of 2015. The publication chose to republish its articles on the US-based Medium.com (in addition to its own website) in an effort to get around the Malaysian ban.

In January of 2016, the Sarawak Report published an article about Najib Razak, then prime minister of Malaysia, entitled "Najib Negotiates His Exit BUT He Wants Safe Passage AND All The Money!", relating to allegations of corruption first published in the Wall Street Journal regarding money flows from the state-owned 1MDB investment firm.

The Malaysian government sent Medium a letter demanding that the article be taken down. The letter claimed that the article contained false information and that it violated Section 233 of the Communications and Multimedia Act, a 1998 law that prohibits the sharing of offensive and menacing content. In response, Medium requested further evidence of what was false in the article.

Rather than responding to Medium's request for the full "content assessment" from the Malaysian Communications and Multimedia Commission (MCMC), the MCMC instructed all Malaysian ISPs to block all of Medium throughout Malaysia.

Decisions to be made by Medium:

Questions and policy implications to consider:

Resolution: The entire Medium.com domain remained blocked in Malaysia for over two years. In May of 2018, Najib Razak was replaced as Prime Minister by Mahathir Mohamad, who had previously been Prime Minister from 1981 to 2003 as part of the Barisan Nasional coalition. This time, however, Mahathir represented the Pakatan Harapan coalition, the first opposition coalition to beat Barisan Nasional in a Malaysian election since independence. Part of Pakatan Harapan's platform was to allow for more press freedom.

Later that month, people noticed that Medium.com was no longer blocked in Malaysia. Soon after, the MCMC put out a statement saying that Medium no longer needed to be blocked because an audit of 1MDB had been declassified days earlier: "In the case of Sarawak Report and Medium, there is no need to restrict when the 1MDB report has been made public."

Originally published on the Trust & Safety Foundation website.

Filed Under: blocking, content moderation, foreign governments, malaysia, takedown orders
Companies: medium, sarawak report

Content Moderation Case Study: Dealing With Misinformation During A Pandemic (2020)

from the misinfo-is-hard dept

This series of case studies is published in partnership with the Trust & Safety Foundation to examine the difficult choices and tradeoffs involved in content moderation.

Summary: In early 2020, with the world trying to figure out how to deal with the COVID-19 pandemic, one of the big questions faced by internet platforms was how to combat mis- or disinformation regarding the pandemic. This was especially complex, given that everyone — including global health experts — was trying to figure out what was accurate, and as more information came in, the understanding of the disease — how it spread, how to treat it, the level of risk, and much, much more — kept changing.

Given the fact that no one fully understood what was going on, plenty of people rushed in to try to fill the void with information. Most social media firms put in place a variety of policies and tactics to try to limit or take down misinformation and disinformation. But determining what is misinformation, as opposed to legitimate truth-seeking, can be very tricky in the midst of a pandemic.

In late March, as the pandemic was hitting full swing, an article by Aaron Ginn, a self-described Silicon Valley "growth hacker," appeared on Medium, arguing that the response to COVID-19 was overblown and driven by "hysteria." The piece cited plenty of credible data and reports, but also included a few claims significantly downplaying the risk of COVID-19, including that its "transmission rates are very similar to seasonal flu."

The story started to spread widely, mainly after a number of Fox News hosts started tweeting it. As the story got more and more attention, Carl Bergstrom, a professor of biology at the University of Washington, decided to critique Ginn's article in great detail via an extended Twitter thread. Bergstrom made a fairly compelling case that Ginn's lack of expertise in epidemiology led him to make a number of mistakes in his analysis — in particular, not understanding how viruses spread and how that information is tracked. He also argued that Ginn cherry-picked certain data to support a thesis. Bergstrom and others started arguing that Ginn's Medium piece was (perhaps not intentionally) misinformation.

Decisions to be made by Medium:

Questions and policy implications to consider:

Resolution: Medium chose to take down Ginn's piece quickly — about a day after it went up and 13 hours after it went "viral." In fact, the article was taken down while Bergstrom was still writing his tweet thread critiquing it; Bergstrom ended the thread early upon learning of the takedown.

That was not the end of things, however. The article was reposted to the site ZeroHedge, and copies were stored and reposted in other places as well. It also created a short-lived uproar among those who felt that Medium's moderation decision was unfair. The Wall Street Journal celebrated Ginn, saying that being "targeted for censorship" only made him more influential. Other publications, including the NY Times, the Washington Post, and Slate, all wrote about the dangers of amateurish analysis in the midst of a pandemic.

Ginn, at one point, appeared to be fine with Medium's decision, saying that internet platforms "are free to associate with whom they want" (though he has since removed the tweet saying that). He has continued to use other social media to argue that the claims about COVID-19 were overblown.

Filed Under: aaron ginn, carl bergstrom, case study, content moderation, covid-19, misinformation
Companies: medium

Activism & Doxing: Stephen Miller, ICE And How Internet Platforms Have No Good Options

from the and-for-fun,-the-cfaa-and-scraping dept

Last month, at the COMO Content Moderation Summit in Washington DC, I co-ran a “You Make the Call” session with Emma Llanso from CDT. The idea was to turn the audience into a content moderation/trust & safety team of a fictionalized social media platform. We showed numerous examples of content or accounts that were “flagged” and then showed the associated terms of service, and had the entire audience vote on what to do. One of the fictional examples involved someone posting a link to a third-party website “contactinfo.com” claiming to have the personal phone and email contact info of Harvey Weinstein and urging people “you know what to do!” with a hashtag. The relevant terms of service included this: “You may not post personal information about others without their consent.”

The audience voting was pretty mixed on this. 47% of the audience punted on the question, choosing to escalate it to a supervisor as they felt they couldn’t decide whether to leave the content up or take it down. 32% felt it should just be taken down. 10% said to just leave it up and another 10% said to put a content warning flag on the content. We joked a bit during the session that some of these examples were “ripped from the headlines” but apparently we predicted the headlines in this case, because there are two stories this week that touch on exactly this kind of thing.

Example one is the story that came out yesterday, in which Twitter chose to start locking the accounts of users who were either tweeting Trump senior advisor Stephen Miller’s cell phone number, or merely linking to a Splinternews article that published his cell phone number (which I’m guessing has since been changed…).

Splinternews decided to publish Miller’s phone number after multiple news reports attributed the inhumane* decision to separate children of asylum seekers from their parents to Miller, who has defended the plan. Other reports noted that Miller is enjoying all of the controversy over this policy. Splinternews, citing Donald Trump’s own history of giving out the phone numbers of people who anger him, thought it was only fair that people be able to reach out to Miller.

This is — for fairly obvious reasons — a controversial decision. I think most news organizations would never do such a thing. Not surprisingly, the number spread rapidly on Twitter, and Twitter started locking all of those accounts until the tweets were removed. That seems at least well within reason under Twitter’s rules that explicitly state:

You may not publish or post other people’s private information without their express authorization and permission.

But, that question gets a lot sketchier when it comes to locking the accounts of people who merely linked to the Splinternews article. A la our fictionalized example, those people are not actually publishing or posting anyone’s private info. They are posting a link to a third party that purports to have that information. And, of course, in this case, the situation is complicated even more than our fictionalized example because Splinternews is a news organization (owned by Univision), and Twitter also has said that it has a “newsworthy” exception to its rules.

Personally, I think it's the wrong call to lock the accounts of those linking to the news story, but… as we discovered in our own sample version, it's not an easy call, and lots of people have strong opinions one way or the other. Indeed, part of the reason why Twitter may have decided to do this was that supporters of Trump/Miller started calling out the article as an example of doxxing and claiming that leaving it up showed that Twitter was unfairly biased against them. It is a no-win situation.

And, of course, it wouldn't take long before people started coming up with clever workarounds, such as Parker Higgins (citing the infamous 09F9 controversy in which the MPAA tried to censor the revelation of a cryptographic key that broke the MPAA's preferred DRM, and people responded by posting variations on the code, including a color chart in which the hex codes of the colors were the code), who posted just that: a simple two-color image.

Would Twitter lock his account for posting a two-color image? At some point, the whole thing gets… crazy. That's not to argue that revealing someone's private cell phone number is a good thing — no matter how you feel about Miller or the border policy. But just on the content moderation side, it puts Twitter in a no-win situation in which people are going to be pissed off no matter what it does. Oh, and of course, it also helped create something of a Streisand Effect. I certainly hadn't heard about the Splinternews article or that people were passing around Miller's phone number until the story broke about Twitter whac'ing at moles on its site.
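To see why policing this turns into whack-a-mole, consider how trivially a ten-digit number round-trips through a pair of 24-bit RGB colors. This is just a toy sketch of the 09F9-style trick described above; the number here is a placeholder, not anyone's actual phone number:

```python
# Toy sketch of the 09F9-style trick: hide a 10-digit number in the hex
# codes of two colors. The number below is a placeholder, not a real one.
def number_to_colors(number: int) -> list[str]:
    hex_digits = f"{number:012x}"  # 12 hex digits = two 6-digit color codes
    return [f"#{hex_digits[i:i + 6]}" for i in (0, 6)]

def colors_to_number(colors: list[str]) -> int:
    return int("".join(c.lstrip("#") for c in colors), 16)

colors = number_to_colors(2025551234)  # placeholder "phone number"
print(colors)                          # ['#000078', '#bb7582']
assert colors_to_number(colors) == 2025551234
```

A moderator now has to decide whether two color swatches count as "publishing private information," and every takedown just invites the next re-encoding.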

And that takes us to the second example, which happened a day earlier — and was also in response to people’s quite reasonable* anger about the border policy. Sam Lavigne decided to make something of a public statement about how he felt about ICE by scraping** LinkedIn for profile information on everyone who works at ICE (and who has a LinkedIn public profile). His database included 1595 ICE employees. He wrote a Medium blog post about this, posted the repository to Github and another user, Russel Neiss, created a Twitter account (@iceHRgov) that tweeted out info about each of those employees from that database. Notice that none of those are linked. That’s because all three companies took them down (though you can still find archives of the Medium post). There was also an archive of the Github repository, but it has since been memory-holed as well.

Again… this raises a lot of questions. Github claimed that it removed the page for "violating community guidelines" — specifically around "doxxing and harassment, and violating a third party's privacy." Medium claimed that the post violated rules against "doxxing" and specifically the "aggregation of publicly available information to target, shame, blackmail, harass, intimidate, threaten or endanger." Twitter, in Twitter's usual way, is not commenting. LinkedIn put out a statement saying: "We do not support or condone what immigration authorities are doing at the border, but we can't allow the illegal use of our member data. We will take appropriate action to ensure our members' data is protected and used properly."

Many people point out that all of this feels kind of ridiculous, seeing as this is all public info that the individuals chose to reveal about themselves on a public website. While Medium's expansive definition of doxxing makes things interesting by including an intent standard in releasing the info, even if it is publicly available, the whole thing, again, demonstrates how complex this is. I know that some people will claim that these are easy calls — but, just for fun, try flipping the equation a bit. If you're anti-Trump, how would you feel if a prominent alt-right person compiled and posted your info — even if publicly available — on a site where alt-right folks gather, with the clear intent of having hordes of Trump trolls harass you? Be careful what precedent you set.

If it were up to me, I think I would have come down differently than Medium, Github and Twitter in this case. My rationale: (1) all of this info was public information, (2) those individuals chose to place it on a public website, knowing it was public, (3) they are all employed by the federal government, meaning they are public servants, and (4) while the compilation was done by someone who is clearly against the border policy, Lavigne never encouraged or suggested harassment of ICE agents. Instead, he wrote: "While I don't have a precise idea of what should be done with this data set, I leave it here with the hope that researchers, journalists and activists will find it useful." He separately noted that he believed "it's important to document what's happening, and by whom." That seems to actually make a strong point in favor of leaving the data up, as there is value in documenting what's happening.

That said, reasonable people can disagree on this question (even if there should be no disagreement about how inhumane the policy at the border has been*) of what is the appropriate way for different platforms to handle these situations — taking into account that this situation could play out with very different players in the future, and there is value in being consistent.

This is the very point that we were demonstrating with that game that we ran at COMO. Many people seem to think that content moderation decisions are easy: you just take down the content that is bad, and leave up the content that is good. But it’s pretty rare that the content is easily classified in one of those categories. There is an enormous gray area — and much of it involves nuance and context, which is not always easy to come by — and which may look incredibly different depending on where you sit and what kind of world you think we live in. I still think there are strong arguments that the platforms should have left much of the content discussed in this post up, but I’m not the one making that call.

When we ran that game in DC last month, it was notable that on every single example we used — even the ones we thought were “easy calls” — there were some audience members who selected every option in the game. That is, there was not a single situation in our examples in which everyone agreed what should be done. Indeed, since there were four options, and all four were chosen by at least one person in every single example, it shows just how difficult it really is to make these calls. They are subjective. And what plays into that subjective decision making includes your own views, your own perspective, your own reading of the content and the rules — and sometimes third party factors, including how people are reacting and what public pressure you’re getting (in both directions). It is an impossible situation.

This is also why the various calls to mandate that platforms do this or face legal liability are even more ridiculous and dangerous. There are no "right" answers to these decisions. There are solutions that seem better to lots of people, but plenty of others will disagree. If you think you know the "right" way that all of these questions should be handled, I guarantee you're wrong, and if you were in charge of these platforms, you'd end up feeling just as conflicted.

This is why it’s really time to start thinking about and talking about better solutions. Simply calling on platforms to be the final arbiters of what goes online and what stays offline is not a workable solution.

* Just a side note: if you are among the small minority of ethically-challenged individuals who gets upset that I describe the policy as inhumane: fuck off. The policy is inhumane and if you’re defending it, you should seriously take time to re-evaluate your ethics and your life choices. On a separate note, if you are among the people who are then going to try to justify this policy as “but Obama/others did it too,” the same applies. Whataboutism is no argument here. The policy is inhumane no matter who did it, and pointing out that others did it too doesn’t change that. And, as inhumane as it may have been in the past, it has been severely ramped up. There is no defense for it. Attempting to defend it only serves to out yourself as a horrible person who has issues. Seriously: get help.

** This doesn't even fit anywhere in with this story, but scraping LinkedIn is (stupidly) incredibly dangerous. LinkedIn has a history of suing people for scraping public info off of its site. And even if it's lost some of those cases, the company appears to take a pretty aggressive stance towards scrapers. We can argue about how ridiculous this is, but, dammit, this post is already too long talking about other stuff, so we'll discuss it separately.

Filed Under: activism, content moderation, doxing, harassment, ice, internet platforms, phone numbers, stephen miller, takedowns
Companies: github, linkedin, medium, twitter

Incentivizing Better Speech, Rather Than Censoring 'Bad' Speech

from the there-are-other-solutions dept

This has gone on for a while, but in the last year especially, the complaints about "bad" speech online have gotten louder and louder. While we have serious concerns with the idea that so-called "hate speech" should be illegal — in large part because any such laws are almost inevitably used against those the government wishes to silence — that doesn't mean that we condone or support speech designed to intimidate, harass or abuse people. We recognize that some speech can, indeed, create negative outcomes, and even chill the speech of others. However, we're increasingly concerned that people think the only possible way to respond to such speech is through outright censorship (often to the point of requiring online services, like Facebook and Twitter, to silence any speech that is deemed "bad").

As we've discussed before, we believe that there are alternatives. Sometimes that involves counterspeech — including a wide spectrum of ideas, from making jokes, to community shaming, to simple point-for-point factual refutation. But that's on the community side. On the platform side — for some reason — many people seem to think there are only two options: censorship or free-for-all. That's simply not true, and focusing on just those two solutions (neither of which tends to be that effective) shows a real failure of imagination, and often leads to unproductive conversations.

Thankfully, some people are finally starting to think through the larger spectrum of possibilities. On the "fake news" front, we've seen more and more suggestions that the best "pro-speech" way to deal with such things is with more speech as well (though there are at least some concerns about how effective this can be). Over at Quartz, reporter Karen Hao recently put together a nice article about how some platforms are thinking about this from a design perspective… using Techdirt as one example of how we've created small incentives in our comment system for better comments. The system is far from perfect, and we certainly don't suggest that every comment we receive is fantastic. But I think we do a pretty good job of having generally good discussions in our comments that are interesting to read. Certainly a lot more interesting than on other sites.

The article also discusses how Medium has experimented with different design ideas to encourage more thoughtful comments as well, and quotes professor Susan Benesch (who we’ve mentioned many times in the past), discussing some other creative efforts to encourage better conversations online, including Parlio (which sadly was shut down after being purchased by Quora) and League of Legends — which used some feedback loops to deal with abusive behavior:

In one experiment, Lin measured the impact of giving players who engaged in toxic behavior specific feedback. Previously, if a player received a suspension for making racist, homophobic, sexist, or harassing comments, they were given an error message during login with no specifics on why the punishment had occurred. Consequently, players often got angry and engaged in worse behavior once they returned to the game.

As a response, Lin implemented "reformation cards" to tell players exactly what they had said or done to earn their suspension and included evidence of the player engaging in that behavior. This time, if a player got angry and posted complaints about their reformation card on the community forum, other members of the community would reinforce the card with comments like, "You deserve every ban you got with language like that." The team saw a 70% increase in their success with avoiding repeat offenses from suspended users.

However, the key thing, as Benesch notes, is getting past the idea that the only responses to speech that a large majority of people think is "bad" are to take it down and/or punish the individual who made it:

"There is often the assumption in public discourse and in government policymaking and so forth that there are only two things you can do to respond to harmful speech online," says Benesch. "One of those is to censor the speech, and the other is to punish the person who has said or distributed it." Instead, she says, we could be persuading people not to post the content in the first place, ranking it lower in a feed, or even convincing people to take it down and apologize for it themselves.

Obviously, there are limits on all of these options — and anything can and will be abused over time. But by at least thinking through a wider range of possibilities than “censor” or “leave everything exactly as is” we can hopefully get to a better overall solution for many internet discussion platforms.

Meanwhile, Josh Constine at TechCrunch recently had some good suggestions specifically for Twitter and Facebook on ways they can encourage more civility without resorting to censorship. Here's one example:

Practically, Twitter needs to change how replies work, as they are the primary vector of abuse. Abusers can @ reply you and show up in your notifications, even if you don't follow them. If you block or mute them, they can create a new throwaway account and continue the abuse. If you block all notifications from people you don't follow, you sever your connection to considerate discussion with strangers or potential friends — what was supposed to be a core value-add of these services.

A powerful way to prevent this @ reply abuse would be to prevent accounts that aren't completely registered with a valid phone number, haven't demonstrated enough rule-abiding behavior or have been reported for policy violations from having their replies appear in recipients' notifications.

This would at least make it harder for harassers to continue their abuse, and to create new throwaway accounts that circumvent previous blocks and bans in order to spread hatred.

There may be concerns with that approach as well, but it's encouraging that more people are thinking about ways that design decisions can make things better, rather than resorting to out-and-out censorship.
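For concreteness, here is a rough sketch of the kind of notification gate Constine describes. The account fields and the seven-day threshold are invented for illustration; they are not anything Twitter actually exposes:

```python
# Rough sketch of Constine's proposed reply gate. Field names and the
# seven-day threshold are invented for illustration.
from dataclasses import dataclass

@dataclass
class Account:
    phone_verified: bool
    days_of_good_standing: int
    open_policy_reports: int

def reply_reaches_notifications(sender: Account, recipient_follows_sender: bool) -> bool:
    """Should this reply show up in the recipient's notifications?"""
    if recipient_follows_sender:
        return True   # people you follow always reach you
    if not sender.phone_verified:
        return False  # unverified throwaways lose notification reach
    if sender.open_policy_reports > 0:
        return False  # reported accounts lose it too
    return sender.days_of_good_standing >= 7  # assumed proxy for rule-abiding history

throwaway = Account(phone_verified=False, days_of_good_standing=0, open_policy_reports=2)
print(reply_reaches_notifications(throwaway, recipient_follows_sender=False))  # False
```

Note that under this scheme the reply itself still gets posted; it just stops pinging the target. That is the design-level middle ground between censorship and a free-for-all that this whole discussion is about.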

Filed Under: abuse, comments, design, free speech, harassment
Companies: facebook, medium, twitter

What If You Published Half Your Book For Free Online?

from the interesting-experiments-in-publishing dept

Almost exactly 17 years ago, we wrote about an interesting experiment in the movie world, in which the makers of the film Chicken Run chose to put the first 7 minutes of the film online for free (in my head, I remember it being the first 20 minutes, but I'll chalk that up to inflation). I thought it was a pretty clever experiment and am still surprised that this didn't become the norm. The idea is pretty straightforward — rather than just doing a flashy trailer that may give away much of the movie anyway, you give people the beginning of the actual movie, get them hooked, and convince them it's worthwhile to go pay to see the whole thing. Of course, that only works with good movies whose beginnings hook people. But… it's also interesting to think about whether or not this kind of thing might work for books as well.

In this always-on world, where some fear that people are so hooked on short-attention-span bits of information raining down from Facebook, Twitter and Snapchat, there's a reasonable concern that people just aren't willing or able to disconnect for long enough to actually read a full book. Some argue that we may be reading more, but getting less out of all of it. But now author/entrepreneur Rob Reid and Random House are experimenting with something similar. If they have to convince people to put down the internet to read a full book, why not go to the internet first? Rob has announced that he (and Random House) has teamed up with Medium to publish the first 40% of his latest novel, After On, which comes out in full on August 1st.

Now, you may recall, five years ago Rob came out with a fun book called Year Zero, a hilarious comic sci-fi story about aliens needing to destroy the earth… because of massive copyright infringement (no, really). With that book, we were able to publish a short excerpt, but that isn't always enough to get people hooked. With After On, a massive chunk of the (admittedly massive!) book will be published online in a dozen segments over the next few weeks leading up to the release of the actual book (the first few segments are entirely free — after that, they want you to become a "member" of Medium, though you can get your first two weeks of membership free — or you can just go buy the book by that point).

Rob has written a blog post talking about this experiment and what went into it — and he'll also be on the Techdirt podcast tomorrow to talk more about it. While the book is not about copyright, it does touch on a number of other issues that we frequently write about here, including patents, privacy, AI, terms of service and… the general nature of startup culture. The book is super interesting and engaging, but this experiment is interesting in its own right as well:

After putting 7,500 hours of my life into it, I want After On to reach lots of people. But I'm even more interested in reaching the people it will truly resonate with. It's quirky, costs money, and entails a real time commitment. So if it's not right for you, I'd rather not take your dollars or hours (which is arguably bad for business — but good businesspeople don't write sprawling novels for a living). Whereas if it is right for you, I want you to discover it with as little friction as possible. Both goals made a big excerpt on Medium seem like a good idea.

My pitch to Random House evoked the largely bygone practice of US magazines excerpting new books. Licensing fees cost editors less than a major article, and publishers were pleased to generate income while promoting new titles. This practice is now rare. Reasons include the drop-off in print advertising, which has lowered magazine page counts, squeezing content. So why not transplant this pillar of the publishing ecosystem? Without trees to topple or ink to smear, we can release much longer excerpts online. Digital excerpts travel globally, and widespread excerpts will help books reach their most natural audiences. Better fits between books and readers will make reading more delightful, which means more books should sell — and hey, presto, everybody wins!

Anyway, check out the first excerpt, which went up a little while ago, in which Rob (or the book's narrator…?) dares you to read the whole damn thing…

Filed Under: always on, books, publishing, rob reid
Companies: medium, random house

Reddit, Mozilla, Others Urge FCC To Formally Investigate Broadband Usage Caps And Zero Rating

from the get-off-your-duff dept

Tue, May 24th 2016 11:44am - Karl Bode

We've noted how the FCC's latest net neutrality rules do a lot of things right, but they failed to seriously address zero rating or broadband usage caps, opening the door to ISPs violating net neutrality — just as long as they're relatively clever about it. And plenty of companies have been walking right through that open door. Both Verizon and Comcast, for example, now exempt their own streaming services from these caps, giving them an unfair leg up in the marketplace. AT&T, meanwhile, is now using usage caps to force customers to subscribe to TV services if they want to enjoy unlimited data.

In each instance you’ve got companies using usage caps for clear anti-competitive advantage, while industry-associated think tanks push misleading studies and news outlet editorials claiming that zero rating’s a great boon to consumers and innovation alike.

The FCC's net neutrality rules don't ban usage caps or zero rating, unlike rules in Chile, Slovenia, Japan, India, Norway and the Netherlands. The FCC did, however, state that the agency would examine such practices on a "case by case" basis under the "general conduct" portion of the rules. But so far, that has consisted of closed-door meetings and a casual, informal letter sent to a handful of carriers as part of what the FCC says is an "information exercise," not a formal inquiry.

But in a letter sent to FCC Commissioners (pdf) this week, a coalition of companies including Yelp, Vimeo, Foursquare, Kickstarter, Medium, Mozilla and Reddit has urged the agency to launch a more formal — but also transparent — probe of ISP behavior on this front:

"Zero-rating profoundly affects Internet users' choices. Giving ISPs the power to favor some sites or services over others would let ISPs pick winners and losers online — precisely what the Open Internet rules exist to prevent… Given how many stakeholders participated in the process to make these rules, including nearly 4 million members of the public, it would be unacceptable not to seek and incorporate broad input and expertise at this critical stage."

Given the FCC's decision to ban usage caps at Charter as a merger condition, the agency is clearly aware of the threat zero rating and caps pose to a healthy Internet. It's possible the FCC is waiting for the courts to settle the broadband industry's lawsuit against the agency, which could gut some or all of the net neutrality rules. But it's also entirely possible that the FCC does nothing. Usage caps are a glorified price hike, and even in its latest, more consumer-friendly iteration, the FCC has historically been afraid to so much as acknowledge that high prices are a problem in the sector.

Things have been muddied further by T-Mobile's Binge On program, which gives users the illusion of "free data" by setting arbitrary usage caps, then exempting the biggest video services from them. And while many consumers applaud the idea, even T-Mobile's implementation sets a potentially dangerous precedent in that it fails to whitelist smaller video providers and non-profits — most of which have no idea they're even being discriminated against. There's a contingent at the FCC and elsewhere that believes efforts like this are a positive example of "creative pricing experimentation."

Either way it’s increasingly clear that the FCC needs to take some public position on the subject as ISPs continue to test the agency’s murky boundaries to the detriment of users and small companies alike. Should the FCC win its court case, pressure will grow exponentially for the FCC to actually put its money where its mouth is — and put the rules so many people fought for to actual use.

Filed Under: broadband, data caps, fcc, usage caps, zero rating
Companies: foursquare, kickstarter, medium, mozilla, reddit, vimeo, yelp

Techdirt Podcast Episode 72: The Tough Choices Platforms Make

from the balancing-act dept

Back in March, Mike moderated a panel at RightsCon on the subject of intermediary liability and the delicate balancing act that platform providers have to perform on that front, with lawyers from Meetup, Change.org, and Medium. This week, in lieu of a regular podcast episode, we've got a recording of that discussion, which delves deeply into some of the difficult choices companies like these have to make.

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.

Filed Under: intermediary liability, podcast
Companies: change.org, medium, meetup

Tech Companies Ask European Commission Not To Wreck The Internet — And You Can Too

from the step-up dept

Late last year, we told you about a worrisome effort by the European Commission to saddle the internet with unnecessary regulations. They had released an online “consultation” which was ostensibly part of the effort to create a “Digital Single Market” (a good idea in the world of a borderless internet), but which appears to have been hijacked by some bureaucrats who saw it as an opportunity to attack big, successful internet companies and saddle them with extra regulations. It’s pretty clear from the statements and the questions that the Commission is very much focused on somehow attacking Google and Facebook (and we won’t even get into the fact that the people who are looking to regulate the internet couldn’t even program a working online survey form properly). However, as we noted, Google and Facebook are big enough that they can handle the hurdles the EU seems intent on putting on them: it’s the startups and smaller tech firms that cannot. The end result, then, would actually be to entrench the more dominant players.

We helped create a "survival guide" for those who wished to fill out the (long, arduous) survey, and many of you did. As a follow-up, via our think tank, the Copia Institute, we've now spearheaded a new effort, which we've put up on the Don't Wreck The Net site. It's a letter to the EU Commission, signed by a number of internet companies and investors who care deeply about keeping the internet open and competitive. You can see the letter on that site, and it has already been signed by investors such as Union Square Ventures and Homebrew and a bunch of great internet companies, including Reddit, Medium, DuckDuckGo, Patreon, Automattic (WordPress), Yelp, CloudFlare, Shapeways and more.

Before sending it on to the EU, however, we'd love to get more companies, entrepreneurs, technologists, investors and others signed on. So if you go to the Don't Wreck The Net site, you can not only see the letter we're sending but also sign on yourself. If you're signing on as yourself, that's easy. If you're signing on on behalf of your organization, then we'll need to reach back out to you to obtain proof that you have the ability to sign on behalf of that organization. No matter what, please look it over and consider signing on, as it's important for the EU to recognize the consequences that any regulations it places on the internet may have for the wider tech and startup ecosystem.

Filed Under: competition, don't wreck the net, eu commission, innovation, intermediary liability
Companies: automattic, cloudflare, duckduckgo, indiegogo, medium, patreon, reddit, shapeways, topix, yelp