narrative – Techdirt

Academic Paper Shows How Badly The Mainstream Media Misled You About Section 230

from the proving-the-media-distortion-field dept

We’ve had to publish many, many articles highlighting just how badly the mainstream media has misrepresented Section 230, with two of the worst culprits being the NY Times and the Wall Street Journal. Professor Eric Goldman now points us to an incredible 200-page master’s thesis by a journalism student at UNC named Kathryn Alexandria Johnson, whose analysis is devoted entirely to how badly both the NYT and the WSJ flubbed their reporting on Section 230.

The paper is actually more than just that, though. It includes a really useful description of Section 230 itself, along with its history, and some of the often confused nuances around the law. Johnson clearly did her homework here, and it actually is one of the best summaries of the issues around 230 I’ve seen. The paper is worth reading for just that section (the first half of the paper) alone.

But then we get to the analysis. Johnson notes that the Times and the Journal are basically the most powerful “agenda setting” newspapers in the US, so how they cover issues like Section 230 can have a huge impact on actual policy. And they failed. Badly.

The thesis explores the data in multiple ways, but one chart stands out: when talking about the impact of 230, both newspapers almost always frame the law as having a negative impact. They almost never describe it as having a positive impact.

That is, out of 116 articles in the NY Times that talk about the impact of Section 230, 107 described it negatively. Another six gave a combination of negative and positive, and only two (two!) described the impact positively. For the WSJ, it’s basically the same story: 88 articles discussing the impact of Section 230, 80 of them purely negative. Another four with a combination of negative and positive, and just three describing the law’s impact positively. That means, grand total, 91.7% of the articles in these two agenda-setting newspapers described the law’s impact as negative, with another 4.9% describing both negative and positive impacts, and just 2.5% describing the impact positively.
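Those aggregate percentages can be reproduced with a few lines of arithmetic. A quick sketch, using only the counts quoted above:

```python
# Counts quoted above: articles coded negative, mixed (both), and positive,
# plus each paper's stated total of articles discussing Section 230's impact.
nyt = {"negative": 107, "mixed": 6, "positive": 2, "total": 116}
wsj = {"negative": 80, "mixed": 4, "positive": 3, "total": 88}

total = nyt["total"] + wsj["total"]  # 204 articles across both papers
for frame in ("negative", "mixed", "positive"):
    share = 100 * (nyt[frame] + wsj[frame]) / total
    print(f"{frame}: {share:.1f}%")
# negative: 91.7%, mixed: 4.9%, positive: 2.5%
```

The output matches the grand-total figures cited in the thesis.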

That’s pretty amazing. Now, some may argue that if you truly believe that the impact of Section 230 is negative, then these two publications are only being accurate in their descriptions. But, for those of us who have studied Section 230, and understand its broadly positive aspects, the whole thing seems crazy.

I’ve had many people argue over the years that the big newspapers like the Times and the Journal have an institutional interest in trashing social media and the internet, because it takes away from their gatekeeping powers. And I’ve always brushed that aside as an exaggeration. But the numbers here are pretty damn stark.

The paper also explores how these newspapers framed Section 230, and found that they did a very poor job explaining its multiple functions, often choosing a single framing rather than the more accurate picture of a law structured to encourage multiple things. Section 230 protects websites from being held liable as the publisher of third-party content, which encourages more websites to allow more speech; it also protects them from liability over their content moderation decisions, enabling them to cultivate their communities in the way they see fit. Understanding both of these is kind of important to understanding Section 230, but it appears these papers rarely gave a complete description. Also, perhaps oddly (or perhaps because they’re just super confused themselves), they often used the publisher framing even when they were really talking about the content moderation function, which may very well be why so many others, including politicians, are so confused about 230.

As previously discussed, the majority of definitions included only the “publisher” frame. Interestingly, despite a majority of definitions referencing only platforms’ protection from liability for the content posted by third parties (59.5%), a large majority of articles were focused on the societal impacts of censorship and deplatforming. Such issues most closely map to the “content moderation” frame. And despite many of the articles’ focus on censorship and deplatforming, very few articles included definitions with only the “content moderation” frame.

For the purposes of creating the most informed electorate, the most helpful definitions are those that present both of Section 230’s functions. These articles were coded as “Both” when discussed above. Only a third of the definitions of Section 230 included both the publisher and content moderation frame, indicating a weakness in journalists’ reporting on this issue. Coverage in The Wall Street Journal more frequently defined Section 230 in terms of both publisher activity and content moderation activity than The New York Times, but coverage in The Wall Street Journal still mentioned both legal frames less than half the time. Journalists could improve coverage by including definitions that explain both legal frames associated with Section 230, regardless of the focus of the article.

Then there’s the question of how often these two famed newspapers just flat out got things wrong about Section 230. The rate may be lower than you might expect, as Johnson found it happened 16.2% of the time, but that’s still kind of astounding. This is a fundamental issue that has gotten a ton of attention, and to still get it wrong in about one out of every six articles is indefensible.

It is interesting, though, to note that the WSJ misrepresented the law at nearly double the rate of the NY Times. Again, people have pointed out that Rupert Murdoch, who owns the WSJ, has more or less declared war on the entire internet, and noted that could impact the coverage of things like Section 230. I always assumed that would be a stretch, but the data here is, once again, noteworthy.

As Johnson notes in her paper, many of the misrepresentations were not necessarily outright falsehoods (though there were some of those), but “rather statements lacking enough important context or requiring clarification.”

Then there’s this:

Every misrepresentation identified in the entire sample could be credited to an unattributed source. Therefore, journalists themselves were the source of each misrepresentation. This finding suggests that either journalists themselves do not fully understand the nuance of how Section 230 is applied or that journalists do understand how Section 230 functions but are not accurately conveying that knowledge to the reader.

For what it’s worth, it may also be the fault of the editors, rather than the journalists. I am familiar with at least one situation in which a major newspaper misrepresented Section 230, and the journalist later explained to me that they had fought for the correct representation, but their editor insisted on running a misleading one.

Johnson’s paper also highlights how these misrepresentations can lead to further misunderstanding of Section 230.

Understanding that the First Amendment, and not Section 230, enables platforms to moderate content is important to social understanding regarding how platforms would function if Section 230 was reformed or repealed. Without the portion of Section 230 that precludes publisher liability, platforms would still be able to remove content that, for example, violated their community standards; however, platforms would be less likely to do so because they would, once again, be liable for any unlawful content that they did not remove.

Johnson also, correctly, summarizes what would actually happen with the removal of Section 230: there would be fewer places to speak online.

In fact, Australia’s high court recently ruled that news media outlets are to be treated as “publishers” of the unlawful content that is posted in comments sections on social media. In response, news media outlets began disabling their comments sections due to their inability to constantly moderate all comments. Removing the comments section was the easiest way to protect themselves from legal liability. This anecdote suggests that if Section 230 was changed and platforms were treated as publishers of third-party content, platforms would begin restricting users’ ability to post on their sites—severely stifling the ability of the public to share content and ideas online. Limiting the public’s ability to communicate online has negative implications for self-governance beyond just debate and discussion regarding Section 230. The internet provides a forum for citizens to ask questions, seek answers, and engage in debate about important policy issues. As a “vast democratic forum[ ]” the internet has democratized speech by lowering the barrier of entry for individuals to speak, be heard, and engage in debates about important issues facing society. In this way, Section 230 creates a causality dilemma. Section 230 is necessary to create the speech environment online that is required for individuals to debate and discuss issues related to Section 230.

Johnson’s paper also highlights how many stories about 230 inaccurately refer to it as a “safe harbor” rather than an “immunity.” As the paper notes, this is an important distinction. DMCA 512 is a safe harbor, and in order to make use of it, you need to meet a bunch of qualifications. This is why there is a long history of case law involving extensive litigation about a bunch of different factors to determine if a site qualifies for the DMCA safe harbor or if it “loses” the safe harbor. But 230 is an immunity, which is different. You can’t lose an immunity. You don’t have to take any steps to get the immunity. And one of the biggest misconceptions about 230 is that sites can take some sort of action that loses them the protections. That’s not true, but when news organizations report on it as a safe harbor, they reinforce that misconception.

There’s much, much more in the paper, but it’s quite an excellent thesis, incredibly detailed, including getting a lot of very nuanced and complex topics correct that (as the paper itself shows) journalists often get very, very wrong. And it also adds clear data to the discussion. Just an all around excellent piece of scholarship.

Filed Under: 1st amendment, immunity, kathryn johnson, misleading, narrative, reporting, safe harbor, section 230
Companies: ny times, wall street journal, wsj

Media Spends Years Insisting Facebook Makes Society Worse; Then Trumpets A Poll Saying People Think Facebook Makes Society Worse

from the nice-work-there dept

It still is amazing to me how many people in the more traditional media insist that social media is bad and dangerous and infecting people’s brains with misinformation… but who don’t seem to recognize that every single such claim made about Facebook applies equally to their own media houses. Take, for example, CNN. Last week it excitedly blasted out the results of a poll showing that three-fourths of adults believe Facebook is making society worse.

Now, there is an argument that Facebook has made society worse, though I don’t think it’s a particularly strong one. For many, many people, Facebook has been a great way to connect and communicate with friends and family — especially during a pandemic when many of us have been unable to see many friends and family in person.

Either way, it’s undeniable that the traditional media — which, it needs to be noted, compete with social media for ad dollars — has spent the last five years blasting out the story over and over again that pretty much everything bad in the world can be traced back to Facebook, despite little to no actual evidence to support this. So, then, if CNN, after five years of reporting about how terrible and evil Facebook is, turns around and polls people, of course most of them are going to parrot back what CNN and other media have been saying all this time. Hell, I’m kind of surprised that it’s only 76% of people who claim Facebook has made society worse.

I mean, just in the past couple months, every CNN story I can find about Facebook seems to be completely exaggerated, with somewhat misleading claims blaming pretty much everything wrong in the world on Facebook. It’s almost like CNN (and other media organizations) are in the business of hyping up stories to manipulate emotions — the very thing that everyone accuses Facebook of doing. Except with CNN, there are actual human employees making those decisions about what you see. Which is not how Facebook works.

I mean, if all my info about Facebook came from CNN, I’d agree that it was making society worse. But I could just as easily argue that CNN is making society worse by presenting a very misleading and one-sided analysis of anything having to do with Facebook. Hell, CNN is owned by AT&T, which (1) has been trying and failing to compete with Facebook in the internet ads business, and (2) literally paid to set up an outright propaganda network known as OAN. I think there’s tremendous evidence to suggest that AT&T is making society way worse than anything that Facebook has ever done.

This is not a defense of Facebook, because I still believe the company has lots and lots of problems. But the idea that a poll from CNN tells us anything even remotely useful or enlightening is just pure misinformation.

Filed Under: media, narrative, polls, society
Companies: cnn, facebook

The Privacy Paradox: When Big Tech Is Good On Privacy, They're Attacked As Being Bad For Competition

from the tradeoffs.-it's-all-tradeoffs dept

For many years I’ve tried to point out that no one seems to have a very good conceptual framework for “privacy.” Many people act as if privacy is a concrete thing — and that we want our information kept private. But as I’ve pointed out for years, that doesn’t make much sense. Privacy is a set of tradeoffs: it’s information about ourselves that we often offer up freely, if we feel the tradeoff is worth it. And, related to that, things get confusing when we consider just who is controlling what data. If we’re controlling our own data, then we have some degree of autonomy over our privacy tradeoffs. But when we hand that data off to a third party, then they have much more say over our privacy — and even if they agree to “lock down and protect” that data, the end result might not be what we want. For one, we’re giving those companies more power over our data than we ourselves have. And that can be a problem!

Because of this, privacy questions are often highly contextual — and often conflict with other issues. For example, after the Cambridge Analytica scandal, Facebook was yelled at over and over again regarding its poor data privacy efforts — leading the company to say “okay, fine, we’ll lock down your data, and just keep it for ourselves.” Which is a totally reasonable response to the complaint that “Facebook leaked our data.” But, of course, the end result of that is… worse. Then we’ve handed Facebook even more control over our data, and left competitors significantly less able to come along. That’s not good!

There’s a similar issue with advertising and privacy, which we discussed just last month. Google clarified its plans to block third-party cookies. In many ways, this is good for privacy. Third-party cookies are often abused in creepy ways to track people. So it’s good that Google won’t support them (Firefox and Safari made this move earlier). But lots of people then vocally complained that this would only give more power to Google, because it can deal with the lack of data, while competitive (smaller) advertising firms cannot.

These issues are often in conflict — and many of the big tech critics out there don’t want to recognize that. In fact, it lets them attack these companies no matter what they do. If they do something that’s good for privacy, but bad for competition, focus on how it’s bad for competition. If they do something that’s good for competition, but bad for privacy, focus on how it’s bad for privacy.

A recent article in Wired by Gilad Edelman highlights this tension in the antitrust context. Noting that in the big antitrust fights against Facebook and against Google, the two companies are being attacked in very different ways: one for being more protective of private data in a way that gives the company more power, and one for violating privacy of users.

Here’s something to puzzle over. In December, the Federal Trade Commission and a coalition of states filed antitrust lawsuits against Facebook, alleging that as the company grew more dominant and faced less competition, it reneged on its promises to protect user privacy. In March, a different coalition of states, led by Texas, accused Google of exclusionary conduct related to its plan to get rid of third-party cookies in Chrome. In other words, one tech giant is being sued for weakening privacy protections while another is being sued for strengthening them. How can this be?

Edelman tries to solve this seeming paradox by suggesting that there might be a way to sort out the actual intent of these actions:

Maybe, then, the right way to think about what should happen when the privacy and competition dials diverge is to ask whether a company is cutting off access to personal data that it intends to keep using itself. That could help distinguish between a case like the Privacy Sandbox, on the one hand, and Apple’s App Tracking Transparency framework, on the other. Apple’s new policy will force all iPhone app developers to ask for permission before tracking users. That is expected to hurt companies that make money by tracking users across the web, most notably Facebook, which has reportedly considered filing an antitrust lawsuit to block the change. But since Apple doesn’t make its money by selling personalized ads based on surveilling user behavior, it’s harder to argue that it is hoarding access to user data for its own purposes. That makes the tension between privacy and competition easier to resolve.

Not that Apple will always come out ahead in this analysis. Contrast the Facebook spat with the ongoing feud between Apple and Tile, which sells tracking technology to help users find lost stuff and thus competes with Apple’s own “Find My” software. According to Tile, Apple has discriminated against the company by prohibiting certain practices, like background location tracking, that it requires to function. Apple says the rules are meant to protect user privacy. If it were to sue, Tile might have a stronger case than Facebook because its product competes more directly with Apple.

That’s an interesting idea — but I’m not sure it would be so easy in practice. There are so many competing interests at play, and so many actions may seem good for one particular concern, but less good for others.

Obviously, I’ve long been an advocate of simply removing much of this data from these large companies’ control entirely — via a system of protocols that moves control out to the end users. But in most versions of that system, most users are going to eventually entrust that data to some third-party company, and in some ways that puts us back where we started. My hope is that such a world would end up with more neutral third-party “data banks,” and you could even imagine an information fiduciary model, in which these companies are legally required to act in your best interests.

But, even that model runs into some trouble, and we end up talking about questionable ideas like a DMCA for privacy, which seems like it would be a true horror online.

It would be nice, though, if we could have this kind of debate and conversation in a reasonable manner, rather than everyone jumping immediately to their own corners about who’s evil and who’s good. Every one of these decisions has tradeoffs, and it would be more productive if we could recognize that and debate the relative merits of all of those tradeoffs. But, having nuanced discussions about subjects with no easy answers does not seem to be in fashion these days.

Filed Under: antitrust, big tech, competition, narrative, privacy
Companies: facebook, google

Donald Trump Caused The Techlash

from the 2016-election-was-the-tipping-point dept

In October 2016, I pitched USC a research proposal about tech coverage’s non-investigative nature and the influence of corporate PR. I thought that at the end of this project, I’d have damning documentation of how the tech media is too promotional and not tough enough. When I sat down to analyze a full year of tech coverage, the data showed quite the opposite. 2017 was suddenly full of tech scandals and mounting scrutiny. The flattering stories about consumer products evolved into investigative pieces on business practices, which caught the tech companies and their communications teams off guard.

Like any good startup, I needed to pivot. I changed my research entirely and focused on this new type of backlash against Big Tech. The research was based on an AI-media monitoring tool (by MIT and Harvard), content analysis, and in-depth interviews. I had amazing interviewees: senior tech PR executives and leading tech journalists from BuzzFeed News, CNET, Recode, Reuters News, TechCrunch, Techdirt, The Atlantic, The Information, The New York Times, The Verge, and Wired magazine. Together, they illuminated the power dynamics between the media and the tech giants it covers. Here are some of the conclusions regarding the roots of the shift in coverage and the tech companies’ crisis responses.

The election of Donald Trump

After the U.K.’s Brexit referendum in June 2016, and specifically after Donald Trump became president at the end of 2016, the media blamed the tech platforms for widespread misinformation and disinformation. The most influential article, from November 2016, was BuzzFeed’s piece entitled, “This analysis shows how viral fake election news stories outperformed real news on Facebook.” It was the first domino to topple.

When I asked what story formed the Techlash, all the interviewees answered, in one way or another, that it was the election of Donald Trump. “Even though it wasn’t the story that people wrote about the most, it was the underlying theme.” Then, new revelations regarding Russian interference in the U.S. election evolved into a bigger story. On November 1, 2017, Facebook, Google, and Twitter testified in front of the U.S. Congress. The alarming effect came from combining the three testimonies.

In the tech sector, there’s a sentence that you hear a lot: “change happens gradually, then suddenly.” There were years and years of build-up for the flip, but the flip itself came at the pivotal moment of Donald Trump’s victory and the post-election reckoning that followed it. The main discussion was the role of social media in helping him win the election.

If Hillary Clinton had been elected in November 2016, the Techlash might have been much smaller. “We would not have seen the amount of negative coverage. It is not just because almost every tech journalist is reflexively anti-Donald Trump; it is that almost every tech person is anti-Donald Trump.” As a result, Silicon Valley began to regret the foundational elements of its own success. The most dire warnings started to come from inside the industry as more sources spoke up and exposed misdeeds.

Then, in 2018, the Cambridge Analytica scandal unlocked larger concerns about social media’s influence and the careless approach toward user privacy. It also shed light on the fact that technology is progressing faster than consumers’ ability to process it and faster than the government’s ability to regulate it.

The companies’ bigness and scandals around fake news, data breaches, and sexual harassment

There were more factors at play here. It was also the tech companies’ scale and bigness, being too big to fail. All the tech giants are at a place where they are getting scrutiny, if nothing else, because of how big and powerful they are. On the one hand, growth-at-all-cost is a mandate. On the other, there are unforeseen consequences of that same growth.

According to the tech journalists, those unintended consequences are due to the companies’ profound lack of foresight. They were blind, and this blindness came back to bite them. Thus, it’s the companies’ fault for not listening to the journalists’ concerns.

However, the big data analytics and content analysis showed that focusing only on the post-election reckoning or the tech platforms’ growing power wouldn’t fully explain the Techlash. A large number of events across a variety of issues shaped it. Their combination led to the “It’s enough” feeling, the mounting calls for tougher regulation, and the #BreakUpBigTech proposition.

We had cases of extremist content, hate speech, and misinformation/disinformation, like the fake news after the Las Vegas shooting; privacy and data security issues, following major cyber-attacks like “WannaCry” and data breaches at Equifax, but also at Facebook, Uber, and Yahoo, which raised the alarm about data privacy and data protection challenges; and allegations of a culture of anti-diversity, sexual harassment, and discrimination. It was in February 2017 that Susan Fowler published her revelations about Uber (prior to the #MeToo movement). It symbolized the toxicity in Silicon Valley. All of those time-bombs started to detonate at once.

The tech companies’ responses didn’t help

When I analyzed the tech companies’ crisis responses, I had different companies and a variety of negative stories, and yet the responses were very much alike. They formed what I call “The Tech PR Template for Crises.” The companies rolled out the same playbook, over and over again. It was clear: big tech had gotten used to resting on its laurels and was not ready to give real answers to tough questions. Instead, they published the responses they kept under “open in case of emergency.”

One strategy was “The Victim-Villain framing”: “We’ve built something good, with good intentions/previous good deeds and great policies -but- our product/platform was manipulated/misused by bad/malicious actors.”

The second was pseudo-apologies: Many responses included messages of “we apologize,” “deeply regret,” and “ask for forgiveness.” They were usually intertwined with “we need to do better.” This message typically comes in this order: “While we’ve made steady progress … we have much more work to do, and … we know we need to do better.” Every tech reporter heard this specific combination a million times by now.

They said “sorry,” so why call these pseudo-apologies? Well, because they repeatedly tried to reduce their responsibility, with all the elements identified in the first strategy: reminder (past good work), excuse (good intentions), victimization (basically saying, “We are the victim of the crisis”), and scapegoating (blaming others). They emphasized their suffering as “an unfair victim of some malicious, outside entity.”

The third thing was to state that they are proactive: “We are currently working on those immediate actions to fix this. Looking forward, we are working on those steps for improvements, minimizing the chances that it will happen again.” It’s Crisis Communication 101. But then, they added, “But our work will never be done.” I think those seven words encapsulate everything. Is the work never done because, by now, the problems are too big to fix?

It is the art of avoiding responsibility

One way to look at the companies’ PR template is to say: “Well, of course this is their messaging. They are being asked to stop big, difficult societal problems, and that is an impossible request.”

In reality, all of those Techlash responses backlashed. Tech companies should know (as Spider-Man fans already know) that “with great power comes great responsibility.” Since they tried to reduce their responsibility, the critics claimed that tech companies need to stop taking the role of the victim and stop blaming others. The apology tours received comments such as “don’t ask for forgiveness, ask for permission.” The critics also said that “actions should follow words.” Even after the companies specified their corrective actions, the critics claimed the companies “ignore the system” because they have no incentive for dramatic changes, like their business models. In such cases, where the media push for fundamental changes, PR can’t fix it.

The Techlash coverage is deterministic

On the one hand, there’s the theme of: “We are at a point where the baby is being thrown out with the bathwater. There was a perhaps ridiculous utopianism. But it has become just as ridiculous – if not more so – on the flip side now, of being dystopian. The pendulum has swung too far” (Evil List articles, for example). On the other hand, there’s the theme of “Journalism’s role is to hold power to account. We are just doing our job, speak truth to power, reveal wrongdoing, and put a stop to it. Whoever is saying that the media is over-correcting doesn’t understand journalism at all.”

While I articulated both themes in the book, one of the concepts that helped me organize my thoughts was ‘technological determinism.’ In a nutshell, some argue that technology is deterministic: the state of technological advancement is the determining factor of society. Others dispute that view, claiming the opposite: social forces shape and design technology, and thus, it is the society that affects technology. I realized that we could describe the Techlash coverage as deterministic: technology drives society in bad directions. Period.

Then, perhaps what the few tech advocates are pointing out is that this narrative doesn’t consider the social context or human agency. A good example was The Social Dilemma: in a documentary filled with scare tactics used to enrage people, the tech critics attacked the platforms’ scare tactics used to enrage people. And they didn’t even notice the irony. Sadly, since they exaggerated and the arguments were too simplistic, they made it easier to dismiss the claims, even though they were extremely important. My fear here is that the exaggerations overshadow the real concerns, and the companies become even more tone-deaf. So, perhaps, we deserve a more nuanced discussion.

“It’s cool — it’s evil” “saviors — threats”

From the glorious days of the dot-com bubble to today’s Techlash, there were two pendulum swings: the first between “It’s cool” and “It’s evil,” the second between “saviors” and “threats.” Moving forward, I would suggest dropping them altogether. Tech is not an evil threat, nor our ultimate savior. The reality is not those extremes, but somewhere in the middle.

Dr. Nirit Weiss-Blatt is the author of The Techlash and Tech Crisis Communication

Filed Under: donald trump, journalism, narrative, techlash

Turns Out Most People Still Don't Hate 'Big Internet' As Much As Politicians And The Media Want Them To

from the these-things-make-our-lives-better dept

“The narrative” over the past few years concerning internet companies has clearly shifted. It went from one that generally praised the wonders and power of the internet to one that now blames the internet for everything. The hagiographic coverage of the past clearly went too far, but the current “techlash” seems to have gone way too far in the other direction as well — much of it from people grasping at straws over why things they don’t like have happened in the world. The good folks over at The Verge have done a big consumer survey of people’s general opinions of various big internet companies, and it shows that most people still like these internet services and believe, on the whole, that they make their lives better, not worse. Even the services that get the “worst” grades still get over a 60% “favorable” rating, while Amazon, Google, YouTube, Netflix, Microsoft, and Apple all come in over 80% positive (with Amazon, Google, and YouTube breaking 90%).

A separate question asked how people view these companies’ impact on society, and again, they are mostly positive — and even in the cases where there is some level of negativity (mainly: Facebook, Instagram, and Twitter), the positive feelings greatly outweigh the negative:

There are many more fascinating findings and I recommend checking out the full Verge story on this, though I will note a bit of generational shock, as someone who lived through the 90s era of everyone in tech absolutely hating Microsoft and not trusting the company one bit, to Microsoft now being listed as the company that people trust the most with their data. Times sure have changed.

Still, as the general narrative — and a lot of political rhetoric — is focused on how awful these companies are and how “something must be done” about them, it does seem worth noting that most of the public seems to really like these services and feel the world is a better place because of them.

Now, take that information and compare it to just how little people trust companies in the telecom sector, and you might wonder why none of the narrative seems to focus on those companies. Indeed, the only political pressure on those companies seems to be to get them to merge and consolidate faster. Also, I should note that as fond as people are of repeating the silly and misleading line that “if you’re not paying for it, you’re the product,” compare the levels of trust between all of these free internet services (very high) and the telco services you pay for (very low), and perhaps realize that it’s not the “free” or “not free” part that engenders trust.

Filed Under: big internet, big tech, consumers, narrative, techlash, users
Companies: amazon, apple, facebook, google, microsoft, twitter, youtube

Music Piracy Continues To Drop Dramatically, But The Industry Hates To Admit That Because It Ruins The Narrative

from the let's-try-this-again dept

This was wholly predictable, of course. Back in 2015, we released a detailed analytical report showing that the absolute easiest and most effective way to reduce piracy was to enable more and better licensed services that actually gave users what they were seeking, at reasonable prices and with fewer restrictions. The data in that report showed that focusing on greater legal enforcement had no long-term effects on piracy, but more and better authorized services did the trick every time. Then, earlier this year, we released another report showing that the music industry is in the midst of a massive upswing thanks almost entirely to the rapidly increasing success of licensed music streaming platforms. It was incredibly dramatic to look at the numbers.

Put two and two together, and you’d fully expect to see a corresponding dramatic drop in piracy. And, indeed, it appears that’s exactly what happened, but the recording industry doesn’t want you to realize that. In IFPI’s latest release, they play up the idea that piracy is still this huge existential problem.

Sounds bad, right? Later in the report it insists that:

Using unlicensed sources to listen to or download music, otherwise known as copyright infringement, remains a threat to the music ecosystem.

A “threat to the music ecosystem”? It also attacks stream ripping: “Stream ripping is the illegal practice of creating a downloadable file from content that is available to stream online. It is now the most prevalent form of online music copyright infringement.” Of course, place shifting/time shifting copyright content has been found to be fair use in the past, so it’s pretty rich for the industry to act like it’s all bad. My own love of music was fueled from back in the day when I was a kid carefully setting up a tape player to tape my favorite songs off the radio. But, hey, to IFPI it’s all evil.

Of course, what IFPI conveniently left out of its report is that these piracy numbers are dropping dramatically. Indeed, IFPI doesn’t bother to mention the historical numbers here, because, boy would that really upset the narrative they’re pushing.

This year 27% of Internet users classify themselves as music pirates, compared to 38% last year. Similarly, the percentage of stream-rippers dropped from 32% to 23% between 2018 and 2019, which is a rather dramatic decrease.

To put this into perspective, out of every 100 persons who were classified as music pirates last year, 29 kicked the habit. And for every 100 stream-rippers, 28 stopped. These groups obviously overlap, but it's certainly a major shift.

It is, indeed, a major shift. And it certainly correlates quite closely with the similarly dramatic rise in the use of licensed services. And this was during a period before draconian new copyright enforcement laws were put in place, so it’s not like the IFPI has a story to tell about how its new legal regimes helped out here. It seems that the most likely story is exactly what we’ve said for years. Invest in giving the public what they want, in a reasonable manner at a reasonable price, and piracy mostly goes away as a problem.

What an idea.

If only the IFPI would actually recognize that.

Instead, as Torrentfreak notes, IFPI seems to conveniently ignore its historical narratives when the data proves their fear-mongering was exaggerated or wrong:

Another thing we observed is that the role of search engines is no longer highlighted. This used to be a top priority. In 2016 IFPI reported that 66% of all music pirates used general search engines (e.g. Google) to find pirated music. A year later this went down to 54%, last year it dipped under 50%, and in 2019 it's not mentioned at all.

For some reason, we think this may have been different if these trends had gone in the other direction. For example, in 2016, IFPI sounded the alarm bell when stream-ripping grew 10% while the 28% drop this year isn't mentioned.

One wonders why a 10% increase was worth setting off the alarm bells, but a much more massive decrease is wholly ignored or, worse, still presented as evidence of a problem. Actually, no, no one wonders why. We know. It would just be nice if politicians finally recognized that IFPI isn’t particularly honest in its framing of all of this. Might have saved us quite a bit of trouble.

Filed Under: music, narrative, piracy, streaming
Companies: ifpi

Mistakes And Strategic Failures: The Killing Of The Open Internet

from the unfortunate dept

Sometime tomorrow, it’s widely expected that the House will approve a terrible Frankenstein bill that merges two separate bills we’ve spoken about, FOSTA and SESTA. The bills are bad. They will not actually do what the passionate and vocal supporters of those bills claim they will do — which is take on the problem of sex trafficking. Neither bill actually targets sex traffickers (which, you know, one would think would be a prime consideration in pushing a bill that you claim will take on sex trafficking). Instead, they seek to hold third parties (websites) responsible if people involved in sex trafficking use them. This has all sorts of problems that we’ve been discussing for months, so I won’t reiterate all of them here, but suffice it to say if these bills were really about stopping sex trafficking, they sure do a horrible job of it. If you want to try to stop these bills, check out EFF’s action page and please call your Congressional Rep., and let them know they’re about to do a really bad thing. If you want more in-depth information, CDT has you covered as well. Finally, Professor Eric Goldman details piece by piece what this Frankenstein bill does and how bolting SESTA and FOSTA together makes two bad bills… even worse, and even less clear as to what it actually does.

Over the last week, I’ve spoken, either on background or off the record, to over a dozen different people on a variety of sides and in a variety of different positions concerning these bills, trying to understand how we got to the point that horrible bills that will undoubtedly do serious harm to the internet — without actually doing much of anything to stop sex trafficking — are actually likely to get passed. And the story that emerges is one of a series of blunders, misunderstandings, strategic errors and outside forces that drove things in this direction — helped along quietly by some anti-internet industries that were all too willing (if not eager) to exploit legitimate concerns about sex trafficking to get what they wanted (without actually helping sex trafficking victims).

Let’s start with the blundering. There were both large scale blunders and small scale ones. The large scale blunder is that too many folks who work at the big internet companies failed to recognize how the narrative was shifting on “the internet” over the past two years or so. Despite some efforts to warn people that the tide was shifting, many in the internet world insisted it was all overblown. And, to some extent, they are right. Recent polls show that the public still views all the big internet firms very favorably. But, sometimes a narrative can trump reality and, over the past year especially, the “narrative” is that the public doesn’t trust those companies anymore. Some of that is driven by the results of the 2016 election and the (exaggerated) claims of “fake news.”

But a narrative can be so powerful that even if it doesn’t match up with reality, it can become reality as more and more people buy into it. And, right now, many in the media and in politics have both grabbed onto the “people no longer trust big internet” narrative with a chokehold and won’t let go. And the big internet companies seemed wholly unprepared for this.

The second blunder appears to be more specific to Facebook — and it involves a complete misunderstanding of CDA 230. Last week, I pointed to a big Wired cover story about Facebook, where I called out the reporters for explaining CDA 230 exactly backwards — falsely claiming that CDA 230 meant they couldn’t take a more proactive role in moderating the site. Of course, that’s wrong. CDA 230 is explicitly why they can take a more active role.

However, since posting that article, I’ve heard from a few people at Facebook who told me that the view expressed in the article was actually the view within Facebook. That is, Facebook’s own legal and policy team pushed the idea internally that heavy moderation may run afoul of CDA 230. This is wrong. But, incredibly, Facebook’s own confusion about how the law works may now make their incorrect belief a reality, as it may have helped lead to the tech backlash, leading to things like SESTA, which would put in place a “knowledge” standard for losing CDA 230 immunity… meaning that companies will be much less proactive in monitoring.

That’s a huge, huge blunder.

Next up were the strategic errors. Back in November, the Internet Association — the trade group that represents the largest internet companies (but not the smaller ones) — surprised many people by coming out in favor of a modestly updated version of SESTA. As we pointed out at the time, this was selling out the internet way too early and way too cheaply. There are a few different explanations of how this happened making the rounds, but one that has come up repeatedly is that Facebook threw in the towel, believing two things: (1) that it’s getting hit so hard on so many things, it couldn’t risk (falsely) being labeled as “soft on sex trafficking” and (2) it knew that it could survive whatever legal mess was created by SESTA. Some smaller internet companies believe that this second point is one that Facebook actually likes, because it knows that smaller competitors will be hobbled. To say that these companies are pissed off at Facebook and the Internet Association would not accurately convey the level of anger that came across. But it wasn’t just Facebook. We heard that a few other Internet Association members — mainly those who don’t rely quite as much on CDA 230 — wanted to just “get past” the issue, and supported the Internet Association cutting whatever deal it could and moving on.

This has greatly pissed off a lot of people — including many other (smaller) Internet Association members who feel that their own trade association sold them out. And it has greatly pissed off many other groups, including other trade groups representing internet organizations and especially public interest, civil society and free speech organizations, who historically have aligned well with the Internet Association on efforts to protect an open internet. Within these groups, a feeling of trust with the Internet Association has been broken. There is plenty of support for the idea that the Internet Association, with the help of Facebook, got played and made a huge strategic mistake in settling. The Internet Association wouldn’t go on record with me, but suffice it to say the organization disputes my characterization of what happened and would really, really prefer I didn’t write this post. However, after talking to multiple other people who were deeply involved in negotiations over SESTA, there is a general feeling that the Internet Association caved and did so way too quickly when better, more workable solutions were still on the table. But, in caving, most of those discussions were tossed aside. Many people are mad that the Internet Association, with the help of Facebook, seemed to get desperate and got played right into a bad deal that harms the internet.

And note that unlike the RIAA/MPAA, which the Internet Association was basically set up to mimic as an opposing force, the Internet Association refused to take a hard line stance on this. The RIAA and MPAA don’t exactly have a history of caving on issues (even when they should). The Internet Association folded, and many people involved in protecting and building the internet are not at all happy about this. And just as the internet companies failed to recognize the power of the narrative, I’d argue that the Internet Association has failed to grasp the level of anger it has generated with its moves over the last few months as well.

Speaking of the MPAA, its fingerprints are all over SESTA, even as it’s tried to keep them mostly out of sight. For years, part of the MPAA’s “strategy” against the internet disrupting its business was to tar and feather internet companies for enabling illegal activity totally unrelated to copyright infringement (after realizing that whining about piracy wasn’t winning them any sympathy). They tried to focus on drug sales for a while. And terrorism. But it appears that sex trafficking was finally the one that caught on in Congress.

And that leads to the final point: the convenient exploitation of all of the above by “foes” of the open internet and free speech. The MPAA, officially, has been pretty quiet about SESTA, though some of its studios officially endorsed the bill. Going through lobbying records, Disney appears to be the only major studio that officially lobbied on behalf of SESTA, but multiple people suggested that former top 20th Century Fox lobbyist Rick Lane was heavily involved as well. While I don’t see his name in any official lobbying disclosure forms, a group pushing for SESTA officially thanked Lane for helping them go around Capitol Hill to stump for SESTA, calling him an “extraordinary partner.” And, not surprisingly, Lane recently posted a giddy LinkedIn post, excited about tomorrow’s vote, while totally misrepresenting both what SESTA does and the reasons many are concerned about it. Oh, and let’s not forget Oracle. The company that has seemingly decided that attacking internet companies is more important than actually innovating has been one of the most vocal supporters of SESTA, and also lobbied heavily in favor of it in Congress.

Thus, a key aspect of how the internet works — which many of this bill’s supporters don’t actually understand — is at serious risk. The internet companies probably should have realized sooner how the narrative was shifting. They probably should have better understood — and explained — how CDA 230 actually enables more monitoring and filtering, not less. But, that’s not what happened. The Internet Association could have continued to fight, rather than giving in. But none of that happened, creating an unfortunate perfect storm to do serious harm to the internet. And, again, perhaps that would all be worth it if SESTA would actually help stop sex trafficking. But it will almost certainly make the problem worse.

And, that doesn’t even get into the fact that the company almost always cited as an example of why we need SESTA, Backpage.com, is almost certainly about to face a ruling in a case saying that Backpage is not protected by CDA 230. The fact that Congress is unwilling to even wait and see how that case turns out (or what a grand jury that is supposedly investigating Backpage decides) suggests that this bill has never actually been about stopping sites like Backpage, but about punching a huge hole in CDA 230 and creating havoc for tons of internet platforms — especially smaller ones.

This situation is a pretty big mess, and it wasn’t helped by misjudgments and strategic errors by various internet companies and the Internet Association. But the effort to undermine aspects of the internet also has some “help” from those who are gleeful about how this is all working out. And it’s not because they think this will do a damn thing to stop sex trafficking. And it’s really too bad, as the end result of this bill may make it that much harder to actually deal with sex trafficking online.

Filed Under: cda 230, fosta, intermediary liability, narrative, rick lane, section 230, sesta
Companies: facebook, internet association, mpaa