frances haugen – Techdirt

Facebook Whistleblower Testifies Before 'Grand Committee On Disinformation,' Which Includes Countries That Lock People Up For Criticizing The Gov't

from the sure-that's-wise? dept

It didn’t get as much press as some of Facebook whistleblower Frances Haugen’s other high profile talks to government inquisitors, but last week, Haugen testified before the rather Orwellian International Grand Committee on Disinformation. This is a bizarre “committee” set up by regulators around the world, and both its focus and its membership are worth a closer look. Considering that tons of evidence shows that cable news is a much larger vector for disinformation reaching the general public, it seems notable that the “International Grand Committee on Disinformation” only wants to pay attention to online disinformation. I mean, it’s right in the group’s mission:

The rapid, unregulated expansion of social media is causing lasting harm to the world’s societies and democracies. So long as the technology giants who own these platforms are permitted to put profits ahead of people, malevolent actors will continue to be able to use social media to spread disinformation, spew hate, and disrupt elections.

Hmm. Only online? Only social media? No traditional media? No cable news? How… interesting.

Ah, but it gets even more interesting. Because the International Grand Committee on Disinformation this time included Singapore Parliamentarians who were very excited to show how bad social media is.

Amidst growing international convergence on the need to regulate the internet to protect vulnerable communities from online harms, two Singapore Members of Parliament participated in the fifth meeting of the International Grand Committee on Disinformation (IGC5) in Brussels, Belgium on 9 November 2021. They were Ms Sim Ann, Member of Parliament for Holland-Bukit Timah GRC, and Senior Minister of State, Ministry of Foreign Affairs and Ministry of National Development; and Ms Rahayu Mahzam, Member of Parliament for Jurong GRC, and Parliamentary Secretary for Ministry of Communications and Information and Ministry of Health. Ms Sim and Ms Rahayu are also the co-leads of Singapore’s Sunlight Alliance for Action

Great. Great. The readout from these Singaporean Parliamentarians made it clear that they are very, very concerned:

The key themes of this year’s IGC meeting were (i) COVID-19 misinformation and (ii) online hate directed towards historically marginalised groups.

They also note that it’s clear that Facebook must be regulated:

There was a clear consensus among the participants that self-regulation by social media companies has not been effective, and regulation is necessary. There was also general endorsement of Singapore’s position that beyond regulation, multi-pronged, multi-stakeholder approaches such as the Sunlight Alliance for Action were necessary to combat online harms effectively.

Oh, great. So, let’s see, how is Singapore fighting disinformation again? Oh, right, by jailing anyone who criticizes the Singaporean government.

In 2019, Singapore “regulated disinformation” online with its Protection from Online Falsehoods and Manipulation Act (POFMA). And how exactly has that worked there? According to Human Rights Watch, it’s been a total disaster for free speech and has been used against opposition politicians and critics of the government:

Ministers issued several correction notices to opposition politicians or political parties during the nine-day election campaign in July.

Singapore authorities also use existing laws to penalize peaceful expression and protest, with activists, lawyers, and online media facing prosecution, civil defamation suits, and threats of contempt of court charges. In March, the Court of Appeal upheld the conviction of activist Jolovan Wham for contempt of court for stating on Facebook that “Malaysia’s judges are more independent than Singapore’s for cases with political implications.”

On July 28, Prime Minister Lee Hsien Loong’s nephew, Li Shengwu, was found guilty of contempt and fined S$15,000 (US$11,000) for a 2017 private Facebook post in which he said the Singapore government is “very litigious and has a pliant court system.”

Ah. Notably, while opposition party members kept getting notices about how they violated the law, the ruling party politicians were somehow free of such notices. How odd.

This same “regulation” against “disinformation” was used to block access to a website criticizing the Singaporean government’s response to COVID.

So, yeah, sure, we can highlight the problems of misinformation online, but it’s difficult to take the “International Grand Committee on Disinformation” particularly seriously when its members include nations that are using “disinformation” regulations as an excuse to suppress opposition political parties and those who criticize the government. It kind of undermines any credibility such a group might have.

Filed Under: disinformation, fake news, frances haugen, international grand committee on disinformation, singapore
Companies: facebook

Techdirt Podcast Episode 303: The Facebook Papers & The Media

from the truth-and-coverage dept

The documents revealed by Facebook whistleblower Frances Haugen are full of important information — but the media hasn’t been doing the best job of covering that information and all its nuances. There are plenty of examples of reporters taking one aspect out of context and presenting it in the worst possible light, while ignoring the full picture. This week, we’re joined by law professor Kate Klonick to discuss the media’s failings in covering the Facebook Papers, and the unwanted outcomes this could produce.

Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.

Filed Under: frances haugen, journalism, kate klonick, podcast, social media
Companies: facebook

It's Ridiculous The 'Developing World' Wasn't Given Access To The Facebook Files

from the do-this-the-right-way dept

Mon, Nov 1st 2021 05:53am - Karl Bode

By now it’s fairly clear the Facebook leaks showcase a company that prioritized near-mindless international growth over the warnings of their own experts. They also show a company that continues to painfully struggle to be even marginally competent at scale, whether we’re talking about content moderation or rudimentary customer service. While this has become an all-encompassing media spectacle, the real underlying story isn’t particularly unique. It’s just a “growth for growth’s sake” mindset, where profit and expansion trumped all reason. It just happens to be online, and at unprecedented international scale.

One thing I haven’t seen talked about much is the fact that, if you look back a few years, an awful lot of folks in developing nations saw these problems coming a mile away, long before their Western counterparts. For a decade, international activists warned repeatedly about the perils of Facebook’s total failure to understand the culture/regulations/language/norms of the countries they rapidly flooded into. Yet bizarrely, Frances Haugen’s PR team somehow excluded most of these countries when it recently came time to release access to the Facebook files:

re: #facebookpapers — we're hearing from * a ton * of reporters in latin america / india and other regions outside of the west that aren't getting access. please, please dm or email dell/myself and we'll do what we can to help https://t.co/S3Vlw9w19A

— shoshana wodinsky (she/her) (@swodinsky) October 29, 2021

The “consortium” handling release of the files is basically a loose collaboration between 17 U.S. news orgs and a handful of European outlets. They’re all being given access to redacted versions of documents Haugen provided the Securities and Exchange Commission, showing Facebook repeatedly prioritized growth and profit over, well, everything else. The whole thing was handled by Haugen’s PR team via an ordinary embargo, which some oddly saw as itself somehow nefarious (it’s not; embargoes, though often kind of stupid, are commonly used to maximize impact).

The real problem was who was included in that consortium. And a bigger problem, oddly not talked about until a month into the Facebook leak news cycle, was that much of the developing world was just… excluded… from the coalition by her PR reps. The exclusion of academics and researchers who could make the most sense of the data was a problem. But restricting analysis to mostly white, Western newsrooms (despite Haugen’s very clear understanding that most of Facebook’s impact problems disproportionately harmed developing nations) is particularly odd and tasteless.

For all the problems Facebook (sorry, Meta) has had in the United States in regards to managing the company’s platform at scale, those problems have been dramatically worse internationally. Facebook was so excited to flood into dozens of international locations to wow investors that they didn’t dedicate the time, resources, or attention needed to actually understand what they were doing. Even if they had (and the whistleblowers keep showing they absolutely didn’t), the sheer scale of the expansion made it impossible to do well. That Facebook did so anyway despite being warned about it is an act of greed and hubris.

It was actually a net neutrality debate that keyed many overseas activists into Facebook’s problems more than a decade ago. Activists in India were particularly sensitive to Facebook’s attempts to conflate “the internet” with Facebook in developing nations. If you recall, activists in India successfully derailed Facebook’s Free Basics program, which was Facebook’s attempt to corner developing nation ad markets under the banner of altruism.

Basically, it involved Facebook striking deals with local wireless companies to offer discounted access to Facebook, under claims that this aggressively curated version of “online” was better than no online access at all. It was a highly curated, walled-garden, bastardized variation of AOL or CompuServe, in which Facebook decided what information, services, and resources mattered (they initially even banned encrypted services). But activists and international experts were quick to see the problem with giving Facebook this kind of power, especially in countries it didn’t take the time to understand.

We’ve seen repeatedly how conflating “Facebook” with “the internet” (or Whatsapp with “the internet”) has created a laundry list of problems that are especially pronounced in developing nations. The centralized approach of programs like Free Basics defeated the purpose of the open internet, reduced transparency, was a big boon for authoritarian governments, and helped create an even more potent funnel for propaganda. One recurring theme in whistleblower accounts is that Facebook’s own researchers generally warned about all of this, repeatedly, but were ignored for profit and growth’s sake.

As early as 2015 organizations like Mozilla were busy arguing that if Facebook genuinely cared about information access in developing nations, they should simply fund access to the internet itself. Facebook was ultimately forced to back off its plan in some countries like India and Egypt, but if you were a reporter or activist in these countries who pointed out the problems with Facebook’s ambition, you were accused of being an enemy of the poor. When the Free Basics brand became toxic, Facebook just named Free Basics something else (sound familiar?).

There has been some valid and not so valid criticism of the way Haugen handled these latest revelations. Some have tried to argue that because she was smart enough to hire lawyers and a PR team to maximize impact she can’t possibly be technically seen as a “whistleblower.” There was also some brief hyperventilation over a Politico report, with some trying to claim that because she had received some money from investor Pierre Omidyar, she shouldn’t be taken seriously. But a shrewd, organized whistleblower is still a whistleblower, and Omidyar proxy groups actually just funded whistleblower orgs Haugen was part of after she went public. It’s actually a good thing to see a whistleblower do the right thing and not be economically and reputationally devastated for once.

But it’s both weird and telling that people freaked out about these perceived injustices, but didn’t notice that the whistleblower’s PR coalition apparently just forgot the developing world existed:

I have massive respect for a lot of the reporters working on the files but whoever is keeping the document access limited to Americans is in the absolute wrong.

— Chillian J. Yikes! (@jilliancyork) October 29, 2021

Nobody I’ve talked to so far at news organizations seems clear on why this happened (a strange decision for an effort geared toward greater transparency). It’s not like it would be particularly difficult to coordinate the release via the same organizations in places like India that warned about Facebook’s consolidated power almost a decade ago (see: IFEX). Some outlets, like Gizmodo, have been trying to expand access to the source documents to everyone. That’s apparently to the chagrin of Haugen’s PR team, which seems to think it can put the genie back in the bottle.

There was a certain hubris in Facebook stumbling its way across the developing world in a quest for growth without bothering to understand the impact their platform would have on foreign cultures. But there’s a fairly substantial amount of hubris in excluding these developing nations from accessing raw data on a problem they’ve disproportionately been harmed by.

Filed Under: bias, developing nations, facebook files, facebook papers, frances haugen, india, journalists
Companies: facebook, meta

Let Me Rewrite That For You: Washington Post Misinforms You About How Facebook Weighted Emoji Reactions

from the let's-clean-it-up dept

Journalist Dan Froomkin, who is one of the most insightful commentators on the state of the media today, recently began a new effort, which he calls “let me rewrite that for you,” in which he takes a piece of journalism that he believes misled readers and rewrites parts of it — mainly the headline and the lede — to better present the story. I think it’s a brilliant and useful form of media criticism that I figured I might experiment with as well — and I’m going to start with a recent Washington Post piece, one of many the Post has written about the leaked Facebook Files from whistleblower Frances Haugen.

The piece is written by reporters Jeremy Merrill and Will Oremus — and I’m assuming that, as at many mainstream news orgs, editors wrote the headline and subhead rather than the reporters. I don’t know Merrill, but I will note that I find Oremus to be one of the most astute and thoughtful journalists out there today, and not one prone to fall into some of the usual traps that journalists fall for — so this one surprised me a bit (though, I’m also using this format on an Oremus piece because I’m pretty sure he’ll take the criticism in the spirit intended — to push for better overall journalism on these kinds of topics). The article’s headline tells a story in and of itself: “Five points for anger, one for a ‘like’: How Facebook’s formula fostered rage and misinformation,” with a subhead that implies something similar: “Facebook engineers gave extra value to emoji reactions, including ‘angry,’ pushing more emotional and provocative content into users’ news feeds.” There’s also a graphic that reinforces this suggested point: Facebook weighted “anger” much more than happy reactions. And it’s all under the “Facebook under fire” designation.

Seeing this headline and image, it would be pretty normal for you to come away with the clear implication: people reacting happily (e.g. with “likes”) on Facebook had those shows of emotion weighted at 1/5th the intensity of people reacting angrily (e.g. with “anger” emojis), and that is obviously why Facebook stokes tremendous anger, hatred, and divisiveness (as the story goes).

But… that’s not actually what the details show. The actual details show that initially when Facebook introduced its list of five different “emoji” reactions (to be added to the long iconic “like” button), it weighted all five of them as five times as impactful as a like. That means that “love,” “haha,” “wow,” and “sad” also were weighted at 5 times a single like, and identical to “angry.” And while the article does mention this in the first paragraph, it immediately pivots to focus only on the “angry” weighting and what that means. When combined with the headline and the rest of the article, it’s entirely possible to read the article and not even realize that “love,” “sad,” “haha,” and “wow” were also ranked at 5x a single “like” and to believe that Facebook deliberately chose to ramp up promotion of “anger” inducing content. It’s not only possible, it’s quite likely. Hell, it’s how I read the article the first time through, completely missing the fact that it applied to the other emojis as well.

The article also buries how quickly Facebook realized this was an issue and adjusted the policy. While it does mention the change, it comes very late in the story, as do some other relevant facts that paint the entire story in a very different light than the way many people are reading it.

When some people highlighted this, Oremus pointed out that the bigger story here is “how arbitrary initial decisions, set by humans for business reasons, become reified as the status quo.” And he’s right. That is the more interesting story and one worth exploring. But that’s not how this article is presented at all! And his own article suggests the “reified as the status quo” part is inaccurate as well, though, again, that’s buried further down in the story. The article is very much written in a way where the takeaway for most people is going to be “Facebook highly ranks posts that made you angry, because stoking divisiveness was good for business, and that’s still true today.” Except none of that is accurate.

So… let’s rewrite that, and try to better get across the point that Oremus claims was the intended point of the story.

The original title, again, is:

Five points for anger, one for a “like”: How Facebook’s formula fostered rage and misinformation

Let’s rewrite that:

Facebook weighted new emojis much more than likes, leading to unintended consequences

Then there’s the opening of the piece, which does mention very quickly that it applied to all five new emojis, but quickly pivots to just focusing on the anger:

Five years ago, Facebook gave its users five new ways to react to a post in their news feed beyond the iconic “like” thumbs-up: “love,” “haha,” “wow,” “sad” and “angry.”

Behind the scenes, Facebook programmed the algorithm that decides what people see in their news feeds to use the reaction emoji as signals to push more emotional and provocative content, including content likely to make them angry. Starting in 2017, Facebook’s ranking algorithm treated emoji reactions as five times more valuable than “likes,” internal documents reveal. The theory was simple: Posts that prompted lots of reaction emoji tended to keep users more engaged, and keeping users engaged was the key to Facebook’s business.

Facebook’s own researchers were quick to suspect a critical flaw. Favoring “controversial” posts, including those that make users angry, could open “the door to more spam/abuse/clickbait inadvertently,” a staffer, whose name was redacted, wrote in one of the internal documents. A colleague responded, “It’s possible.”

The warning proved prescient. The company’s data scientists confirmed in 2019 that posts that sparked angry reaction emoji were disproportionately likely to include misinformation, toxicity and low-quality news.

Let’s rewrite that, both using what Oremus claims was the “bigger story” in the article, and some of the information that is buried much later.

Five years ago, Facebook expanded the ways that users could react to posts beyond the iconic “like” thumbs-up, adding five more emojis: “love,” “haha,” “wow,” “sad,” and “angry.” With this new addition, Facebook engineers needed to determine how to weight these new engagement signals. Given the stronger emotions portrayed in these emojis, the engineers made a decision that had a large impact on how stories would be ranked: each of those reactions would count for five times the weight of the classic “like” button. While Facebook did publicly say at the time that the new emojis would be weighted “a little more” than likes, and that all the new emojis would be weighted equally, it did not reveal that the weighting was actually five times as much.

This move came around the same time as Facebook’s publicly announced plans to move away from promoting clickbait-style news to users, and to try to focus more on engagement with content posted by friends and family. However, it turned out that friends and family don’t always post the most trustworthy information, and by overweighting the “emotional” reactions, this new move by Facebook often ended up putting the most emotionally charged content in front of users. Some of that content was joyful — people reacting with “love” to engagements and births — but some of it was disruptive and divisive, people reacting with “anger” to false or misleading content.

Facebook struggled internally with this result — with employees also raising important points about how “anger,” as a “core human emotion,” is not always tied to bad things, and could be important for giving rise to protest movements against autocratic and corrupt governments. However, since other signals were weighted significantly more than even these emojis — for example, replies to posts had a weight up to 30 times a single “like” click — not much was initially done to respond to the concerns about how the weighting on anger might impact the kinds of content users were prone to see.

However, one year after launch, in 2018, Facebook realized weighting “anger” so highly was a problem, and downgraded the weighting on the “anger” emoji to four times a “like” while keeping the other four emojis, including “love,” “wow,” and “haha,” at five times a like. A year later the company realized this was still not enough, and even though “anger” is the least used emoji, by 2019 it had put in place a mechanism to “demote” content that was receiving a disproportionate level of “anger” reactions. There were also internal debates about reranking all of the emoji reactions to create better news feeds, though there was not widespread agreement within the company about how best to do this. Eventually, in 2020, following more internal research on the impact of this weighting, Facebook reweighted all of the emoji. By the end of 2020 it had cut the weight of the “anger” emoji to zero — taking it out of the equation entirely. The “haha” and “wow” emojis were weighted to one and a half times a like, and the “love” and “sad” were weighted to two likes.

From there, the article could then discuss a lot of what other parts of the article do discuss, about some of the internal debates and research, and also the point that Oremus raised separately, about the somewhat arbitrary nature of some of these ranking systems. But I’d argue that my rewrite presents a much more accurate and honest portrayal of the information than the current Washington Post article.
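Since so much of this story turns on a handful of weight values, here is a minimal sketch, in Python, of how reaction weights like these could feed into a simple engagement score used for ranking. The weights are the ones from the reporting described above; everything else (the linear scoring, the function names, the 50% demotion penalty, the sample posts) is a hypothetical illustration of the idea, not Facebook's actual ranking code, which involves vastly more signals.

```python
# Illustrative only: a toy engagement score built from the reaction
# weights reported in the Facebook Papers coverage above. Facebook's
# real ranking system uses far more signals; this just shows how
# changing a handful of weights changes which posts float to the top.

# Reported reaction weights at three points in time (a "like" = 1).
WEIGHTS_2017 = {"like": 1, "love": 5, "haha": 5, "wow": 5, "sad": 5, "angry": 5}
WEIGHTS_2018 = {"like": 1, "love": 5, "haha": 5, "wow": 5, "sad": 5, "angry": 4}
WEIGHTS_2020 = {"like": 1, "love": 2, "haha": 1.5, "wow": 1.5, "sad": 2, "angry": 0}

REPLY_WEIGHT = 30  # replies were reportedly worth up to 30x a like


def engagement_score(reactions, replies, weights, demote_angry=False):
    """Toy linear score: weighted reaction counts plus weighted replies.

    If demote_angry is set (loosely analogous to the 2019 demotion
    mechanism described above), a post whose reactions are mostly
    "angry" has its score cut in half -- an arbitrary penalty chosen
    purely for illustration.
    """
    score = sum(weights[name] * count for name, count in reactions.items())
    score += REPLY_WEIGHT * replies
    total = sum(reactions.values())
    if demote_angry and total and reactions.get("angry", 0) / total > 0.5:
        score *= 0.5
    return score


# Two hypothetical posts: one sparking mostly anger, one mostly love.
angry_post = {"like": 20, "love": 2, "haha": 0, "wow": 0, "sad": 3, "angry": 100}
happy_post = {"like": 80, "love": 40, "haha": 10, "wow": 5, "sad": 0, "angry": 1}

for year, weights in [("2017", WEIGHTS_2017), ("2018", WEIGHTS_2018), ("2020", WEIGHTS_2020)]:
    print(year,
          "angry post:", engagement_score(angry_post, replies=5, weights=weights),
          "happy post:", engagement_score(happy_post, replies=5, weights=weights))
```

Even in this toy version, the anger-heavy post comfortably outranks the happier one under the 2017 and 2018 weights, while the 2020 reweighting flips the order, which is roughly the dynamic the internal researchers were worried about.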

Anyone know how I can send the Washington Post an invoice?

Filed Under: algorithm, emoji, facebook papers, framing, frances haugen, journalism, let me rewrite that for you, ranking, reactions
Companies: facebook

When Facebook Turned Off Its News Feed Algorithm, It Made Everyone's Experience Worse… But Made Facebook More Money

from the oh,-look-at-that dept

For reasons I don’t fully understand, over the last few months, many critics of “big tech” and Facebook, in particular, have latched onto the idea that “the algorithm” is the problem. It’s been almost weird how frequently people insist to me that if only social media got rid of algorithmic recommendations and went back to the old-fashioned chronological news feed order, all would be good in the world again. Some of this seems based on the idea that algorithms are primed to lead people down a garden path from one type of video to ever more extreme videos (which certainly has happened, though how often is never made clear). Some of it seems to be a bit of a kneejerk reaction to the simple fact that these companies (which many people don’t really trust) are making decisions about what you may and may not like — and that feels kinda creepy.

In the past few weeks, there’s been a bit of a fever pitch on this topic, partly in response to whistleblower Frances Haugen’s leak of documents and her argument that Facebook’s algorithm is a big part of the problem. And then there’s the recent attempt by some Democrats in Congress to take away Section 230 protections for algorithmically recommended information. As I noted, that bill is so problematic that it’s not clear what it’s actually solving.

But underlying all of this is a general opinion that “algorithms” and “algorithmic recommendations” are inherently bad and problematic. And, frankly, I’m confused by this. At a personal level, the tools I’ve used that do algorithmic recommendations (mainly: Google News, Twitter, and YouTube) have been… really, really useful? And also pretty accurate over time in learning what I want, and thus providing me more useful content in a more efficient manner, which has been pretty good for me, personally. I recognize that not everyone has that experience, but at the very least, before we unilaterally declare algorithms and recommendation engines as bad, it might help to understand how often they’re recommending stuff that’s useful and helpful, as compared to how often they’re causing problems.

And, for all the talk about how Haugen’s leaking has shone a light on the “dangers” of algorithms, the actual documents that she’s leaked might suggest something else entirely. Journalist Alex Kantrowitz has reported on one of the leaked documents, regarding a study Facebook did on what happens when it turns off the algorithmic rankings, and… it was not pretty. But, contrary to common belief, Facebook actually made more money without the News Feed algorithm.

In February 2018, a Facebook researcher all but shut off the News Feed ranking algorithm for .05% of Facebook users. “What happens if we delete ranked News Feed?” they asked in an internal report summing up the experiment. Their findings: Without a News Feed algorithm, engagement on Facebook drops significantly, people hide 50% more posts, content from Facebook Groups rises to the top, and, surprisingly, Facebook makes even more money from users scrolling through the News Feed.

Given how often we’ve heard, including from Haugen herself, that Facebook’s decision-making is almost always driven by what will most beneficially impact the bottom line, this deserves some consideration. Because the document… suggests something quite different. In fact, what the researchers seemed to find was that people hated it, but it made them spend more time on the site and see more ads, because they had to poke around to try to find the interesting stuff they wanted to see, and that drove up ad rates. If Facebook were truly focused on just the bottom line, then, they should consider turning off the News Feed algorithm — or just supporting the awful JAMA bill in Congress, which would create incentives for the same result:

Turning off the News Feed ranking algorithm, the researcher found, led to a worse experience almost across the board. People spent more time scrolling through the News Feed searching for interesting stuff, and saw more advertisements as they went (hence the revenue spike). They hid 50% more posts, indicating they weren’t thrilled with what they were seeing. They saw more Groups content, because Groups is one of the few places on Facebook that remains vibrant. And they saw double the amount of posts from public pages they don’t follow, often because friends commented on those pages. “We reduce the distribution of these posts massively as they seem to be a constant quality complaint,” the researcher said of the public pages.

As always, there are lots of factors that go into this, and one experiment may not be enough to tell us much. Also, it’s entirely possible that over time the long-term result would be less revenue, because the increasing annoyance of not finding the more interesting stuff causes people to leave the platform entirely. But, at the very least, this leaked research pokes a pretty big hole in the idea that getting rid of algorithmic recommendations does anything particularly useful.
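For readers who want the mechanics spelled out, here is a small, purely illustrative sketch of the two feed modes being compared in that experiment: an engagement-ranked feed versus a plain reverse-chronological one. The data structures and the predicted-engagement numbers are hypothetical stand-ins; the point is just that “turning off the algorithm” doesn’t mean no ordering at all, it means falling back to ordering by timestamp.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Post:
    author: str
    created_at: datetime
    predicted_engagement: float  # hypothetical stand-in for a model's score


def ranked_feed(posts):
    """Engagement-ranked feed: highest predicted engagement first."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)


def chronological_feed(posts):
    """'Algorithm off' feed: newest post first, nothing else considered."""
    return sorted(posts, key=lambda p: p.created_at, reverse=True)


posts = [
    Post("friend", datetime(2018, 2, 1, 9, 0), predicted_engagement=0.2),
    Post("group", datetime(2018, 2, 1, 8, 0), predicted_engagement=0.9),
    Post("page", datetime(2018, 2, 1, 10, 0), predicted_engagement=0.5),
]

print([p.author for p in ranked_feed(posts)])         # ['group', 'page', 'friend']
print([p.author for p in chronological_feed(posts)])  # ['page', 'friend', 'group']
```

In the experiment described above, that chronological fallback meant people had to scroll further to find the things they actually cared about, which is why they ended up hiding more posts even as scrolling time and ad impressions went up.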

Filed Under: algorithms, chronological, facebook files, facebook papers, frances haugen, jama, leaks, news feed, section 230, user experience
Companies: facebook

The Whistleblower And Encryption: Everyone Has An Angle, And Not Everyone Is A Policy Expert

from the nuance,-nuance,-nuance dept

Over the weekend, the Telegraph (not the most trustworthy or reliable in a batch of UK news organizations that have long had issues with accuracy in reporting) claimed that the latest (and most high profile) Facebook whistleblower, Frances Haugen, was prepared to come out against encryption. This (quite rightly) raised the hackles of multiple encryption experts. As people were getting pretty worked up about it, the Telegraph (silently, and without notice) changed the headline of the piece (from "Facebook whistleblower warns ‘dangerous’ encryption will aid espionage by hostile nations" to "Facebook whistleblower warns company's encryption will aid espionage by hostile nations") as well as the actual text of the story, to suggest a slightly more nuanced (but still not great) view — effectively saying she supported encryption, but was concerned that Facebook would use encryption as a "see no evil" kind of blindfold to problems on its platform.

Ms Haugen said that she is generally pro-encryption, which enhances users’ privacy. However, she added that Facebook’s plan was also a way for the company to “sidestep” harmful content happening on its platform rather than address it.

She said: “End-to-end encryption definitely lets them sidestep and go ‘look we can’t see it, not our problem’.”

Of course, context and motives matter here, and the Telegraph, which tends to be quite supportive of the current UK government, seemed to be twisting Haugen’s (admittedly confused) statement in support of UK Home Secretary Priti Patel’s positively dangerous plan to get rid of end-to-end encryption in the UK. It sure looks like the Telegraph went looking for a way to support that argument, and used Haugen’s words to that effect.

A few hours later, Haugen actually testified before a UK Parliamentary committee and claimed her words were taken out of context. She said that she’s strongly pro-encryption… but then tried to claim that her comments to the Telegraph were more about how she doesn’t trust Facebook to actually implement encryption. Which is… a strange and almost nonsensical claim.

“I want to be very, very clear. I was mischaracterised in the Telegraph yesterday on my opinions around end-to-end encryption,” she said. “I am a strong supporter of access to open source end to end encryption software.

“I support access to end-to-end encryption and I use open source end-to-end encryption every day. My social support network is currently on an open source end-to-end encryption service.”

[…]

“Facebook’s plan for end-to-end encryption, I think, is concerning because we have no idea what they’re going to do. We don’t know what it means, we don’t know if people’s privacy is actually protected. It’s super nuanced and it’s also a different context. On the open source end-to-end encryption product that I like to use there is no directory where you can find 14 year olds, there is no directory where you can go and find the Uighur community in Bangkok. On Facebook it is trivially easy to access vulnerable populations and there are national state actors that are doing this.

“So I want to be clear, I am not against end-to-end encryption in Messenger but I do believe the public has a right to know what does that even mean? Are they really going to produce end-to-end encryption? Because if they say they’re doing end-to-end encryption and they don’t really do that, people’s lives are in danger. And I personally don’t trust Facebook currently to tell the truth… I am concerned about them misconstruing the product that they’ve built, and they need regulatory oversight for that.”

But… here’s the thing: Haugen may be a wonderful data scientist. And, she may have done the world tremendous good by leaking tons of internal Facebook documents, giving the world some insight into what’s going on at the company. But that doesn’t make her an expert on encryption. And, it shows. As Alec Muffett, a security expert who actually used to work on encryption at Facebook, noted in a detailed thread, what Haugen is asking for here is dangerous and shows a real lack of understanding about encryption.

First, she claims that there should be a government review of any Facebook end-to-end encryption to make sure it’s legit. And, yes, there are many reasons to not trust Facebook, but introducing the idea that government needs to review and approve encryption is worse. Is she completely unaware of the government’s history of constantly trying to undermine and backdoor encryption? I mean, it’s not exactly secret. And the US government has been trying to undermine and backdoor encryption pretty aggressively lately. Suggesting that there needs to be some government entity blessing the encryption opens the door to all sorts of mischief.

The separate issue is her claim that end-to-end encryption for Facebook is somehow different because you can use Facebook for more than just messaging, and the encryption is bolted on to other services. Again, as Muffett explains, this kind of thinking is dangerous as well. It suggests that encrypted chat needs to be siloed and kept distant from tons of internet services, when the reality is often that many more internet services should be embracing encryption much more widely to protect their users.

This is also why it’s difficult to understand Haugen’s claims — as they seem somewhat contradictory. Even if we take the Telegraph’s mission-driven editing with a grain of salt, Haugen doesn’t deny her claim that encryption makes it harder to protect Uighurs:

“A key part of [Chinese operatives’] strategy was to send malware to Uighurs who lived in places that weren’t China, as if they could compromise one phone they could compromise a whole community. We said we won’t be able to see the malware anymore [with encryption].”

But, that’s backwards. Do we think Uighurs will be more protected with encryption, or without it? As Riana Pfefferkorn pointed out just last week, encryption and security go hand in hand. It is not — as law enforcement would falsely have you believe — that encryption and security are at odds. Encryption provides security — especially against oppressive governments trying to genocide an entire culture. Uighurs need encryption much more than they "need" Facebook to be able to see what the Chinese are doing in order to protect them.

Haugen’s statement on the Uighurs seems ridiculous once you think it through: it’s basically arguing that without encryption Facebook can better protect the Uighurs from the Chinese government. Does anyone actually believe that? Or would they be better off with access to encryption? They shouldn’t necessarily rely on Facebook’s encryption, but arguing that it shouldn’t be there to better protect them is just silly.

Again, Haugen has likely done the world a great benefit in leaking a bunch of internal documents (I’ll have more on those soon). But it’s important to remember that blowing the whistle on Facebook’s research doesn’t make her an expert on everything else. She’s not an expert on content moderation, or antitrust, or encryption. She may be a useful source for exploring what Facebook’s research showed, or some of Facebook’s decision making, but it’s depressing how quickly eager politicians, looking to gain support for their already existing plans, are exploiting her to argue for their positions on topics she’s really not qualified to comment on. Doing so also dismisses the hard work of tons of actual experts on these topics, from practitioners in the field to the academics who study these issues.

Filed Under: encryption, frances haugen, going dark, tech policy, uk
Companies: facebook

If Your Takeaway From Facebook's Whistleblower Is That Section 230 Needs Reform, You Just Got Played By Facebook

from the that's-what-it-wants dept

Here we go again. Yesterday, the Facebook whistleblower, Frances Haugen, testified before the Senate Commerce Committee. Frankly, she came across as pretty credible and thoughtful, even if I completely disagree with some of her suggestions. I think she’s correct about some of the problems she witnessed, and the misalignment of incentives facing Facebook’s senior management. However, her understanding of the possible approaches to deal with it is, unfortunately, a mixed bag.

Of course, for the Senators in the hearing, it became the expected exercise in confirmation bias, in which they each insisted that their plan to fix the internet would solve the problems Haugen detailed. And, not surprisingly, many of them insisted that Section 230 was the issue, and that if you magically changed 230 and made companies more liable, they’d somehow be better. Leaving aside that there is zero evidence to support this (and plenty of evidence to suggest the opposite is true), the most telling bit in all of this is that if you think changing Section 230 is the answer, Facebook agrees with you. It’s exactly what Facebook wants. See the smarmy, tone-deaf, self-serving statement the company put out in response to the hearing:

Today, a Senate commerce subcommittee held a hearing with a former product manager at Facebook who worked for the company for less than two years, had no direct reports, never attended a decision-point meeting with C-level executives — and testified more than six times to not working on the subject matter in question. We don’t agree with her characterization of the many issues she testified about. Despite all this, we agree on one thing; it’s time to begin to create standard rules for the internet. It’s been 25 years since rules for the internet have been updated, and instead of expecting the industry to make societal decisions that belong to legislators, it is time for Congress to act.

Facebook has been blanketing Washington DC (and elsewhere, but mostly DC) with ads saying that it’s time to update internet laws that haven’t changed since 1996. And that message is very clearly talking about Section 230.

Earlier this year, also in front of Congress, Mark Zuckerberg came out in favor of Section 230 reform. The company’s plan really isn’t all that different than what some elected officials are now proposing in a variety of short-sighted bills: increase liability on the companies for failure to have “best practices.” But as we’ve noted, what’s clear is that Facebook is one of the few companies that can afford that liability. Facebook can afford the expensive lawyers to go to court and show that their system of dealing with this stuff (which we already know doesn’t work very well) is a “best practice” leading them to get these cases dismissed.

Smaller companies are going to be bankrupted by this. And those that aren’t bankrupted are going to end up turning to Facebook to handle their moderation, so that they can rely on Facebook’s legal might. It’s going to entrench Facebook’s power position, limit competition, and wipe out more innovative approaches. Of course Facebook supports this nonsense.

So for those who are still supporting changes to Section 230 and even pointing to Haugen’s testimony: congrats, you got played by Facebook. You’re advocating for exactly what Facebook wants.

I forget who I’ve heard say this (perhaps it was Cory Doctorow?), but, paraphrasing, the statement was that of course a company would prefer no regulation, but the second-best thing to no regulation is being heavily regulated, because as long as you’re at the top of the heap and know you can deal with the regulators, you’ve got a massive advantage that no other company can match. And, for a company that is so paranoid about competition, and is unsure how to continue winning against upstarts, leaning on the US government to “regulate” the space is a godsend. It’s exactly what Facebook wants, and those supporting it are playing into Facebook’s hands.

Filed Under: competition, congress, frances haugen, intermediary liability, section 230, whistleblower
Companies: facebook

Rethinking Facebook: We Need To Make Sure That 'Good For The World' Is More Important Than 'Good For Facebook'

from the these-things-matter dept

I’m sure by now most of you have either seen or read about Facebook whistleblower Frances Haugen’s appearance on 60 Minutes discussing in detail the many problems she saw within Facebook. I’m always a little skeptical about 60 Minutes these days, as the show has an unfortunately long history of misrepresenting things about the internet, and similarly a single person’s claims about what’s happening within a company are not always the most accurate. That said, what Haugen does have to say is still kind of eye opening, and certainly concerning.

The key takeaway that many seem to be highlighting from the interview is Haugen noting that Facebook knows damn well that making the site better for users will make Facebook less money.

Frances Haugen: And one of the consequences of how Facebook is picking out that content today is it is — optimizing for content that gets engagement, or reaction. But its own research is showing that content that is hateful, that is divisive, that is polarizing, it’s easier to inspire people to anger than it is to other emotions.

Scott Pelley: Misinformation, angry content– is enticing to people and keep–

Frances Haugen: Very enticing.

Scott Pelley:–keeps them on the platform.

Frances Haugen: Yes. Facebook has realized that if they change the algorithm to be safer, people will spend less time on the site, they’ll click on less ads, they’ll make less money.

Of course, none of this should be surprising to anyone. Mark Zuckerberg himself said as much in an internal email that was revealed a few years ago, in which he noted in response to a suggestion to make Facebook better: “that may be good for the world but it’s not good for us.”

Over the last few years that line has stuck with me, and I’ve had a few conversations trying to think through what it actually means. There is one argument, which partly makes sense to me, that much of this actually falls back on the problem being Wall Street and the (false) idea that a company’s fiduciary duty is solely to its shareholders above all else. This kind of thinking has certainly damned many companies that are so focused on hitting quarterly numbers that it becomes impossible to focus on long-term sustainability and on how, in the long term, being “good for the world” should also be “good for the company.” So many companies have been destroyed by the need to keep Wall Street happy.

And, of course, it’s tempting to blame Wall Street. And we’ve certainly seen it happen in other situations. The fear of “missing our numbers” drives so many stupid decisions in corporate America. I’m still nervous about how Wall St. is pressuring Twitter to make some questionable decisions. However, blaming Wall Street conveniently leaves Facebook off the hook, and that would also be wrong. As Haugen admits in the interview, she’s worked at other internet companies that weren’t like that.

I’ve seen a bunch of social networks and it was substantially worse at Facebook than anything I’d seen before.

So what is it about Facebook that leads it to believe that, when given the choice between “good for the world” and “good for Facebook,” it must lean in on “good for Facebook” at the cost of the world? That aspect has been less explored, and unfortunately Haugen’s revelations don’t tell us that much about why Facebook is so uniquely bad at this. I think some of it may be tied to what I wrote last week: Facebook’s internal hubris about what the company can and cannot accomplish — including a belief that maybe it can walk the fine line between pleasing Wall Street and beating its numbers… and not supporting genocide in Myanmar.

Part of me, though, wonders if the problem is not just the drive to meet Wall St.’s numbers, but that Zuckerberg, senior management, and (perhaps more importantly) Facebook’s Board actually believe that short-term fiduciary duty to shareholders really is more important than being good for the world. Looking at Facebook’s Board, it’s not exactly composed of the kind of people you’d expect to step up and point out that maybe “doing the right thing for society” outweighs keeping Wall Street happy. And that’s particularly disappointing given that Zuckerberg doesn’t need to keep Wall Street happy. The corporate structure of Facebook allows him to basically do what he wants (within certain limits) and still retain pretty much full control. He could come out and say that Facebook is going to stop worrying about its growth and focus on being better stewards. But he doesn’t seem interested in doing so.

This is obviously armchair psychologizing someone I do not know, but one of the most interesting traits that I’ve observed about Zuckerberg is that — more than just about any other CEO since Andy Grove — he truly seems to have internalized Andy Grove’s mantra that “only the paranoid survive.” I’ve talked before about how Facebook really seems to have completely bought into the idea of the Innovator’s Dilemma, and how competition can come from unexpected places and completely overwhelm incumbents before they even realize it. That has very clearly explained Facebook seeming to “overpay” for Instagram and WhatsApp (and then desperately try to buy out Snapchat, TikTok and others).

But that same thinking might easily apply to some of its other decisions as well, including a belief that if you’re not growing, you’re dying. And, as the NY Times notes, some of the recently leaked documents show real cracks in Facebook’s monolithic facade:

But if these leaked documents proved anything, it is how un-Godzilla-like Facebook feels. The documents, shared with The Journal by Frances Haugen, a former Facebook product manager, reveal a company worried that it is losing power and influence, not gaining it, with its own research showing that many of its products aren’t thriving organically. Instead, it is going to increasingly extreme lengths to improve its toxic image, and to stop users from abandoning its apps in favor of more compelling alternatives.

You can see this vulnerability on display in an installment of The Journal’s series that landed last week. The article, which cited internal Facebook research, revealed that the company has been strategizing about how to market itself to children, referring to preteens as a “valuable but untapped audience.” The article contained plenty of fodder for outrage, including a presentation in which Facebook researchers asked if there was “a way to leverage playdates to drive word of hand/growth among kids?”

It’s a crazy-sounding question, but it’s also revealing. Would a confident, thriving social media app need to “leverage playdates,” or concoct elaborate growth strategies aimed at 10-year-olds? If Facebook is so unstoppable, would it really be promoting itself to tweens as (and please read this in the voice of the Steve Buscemi “How do you do, fellow kids?” meme) a “Life Coach for Adulting”?

So if you’ve been brought up to believe with every ounce of your mind and soul that growth is everything, and that the second you take your eye off the ball it will stop, decisions that are “good for Facebook, but bad for the world” become the norm. Going back to my post on the hubris of Facebook, it also feels like Mark thinks that once Facebook passes some imaginary boundary, then they can go back and fix the parts of the world they screwed up. It doesn’t work like that, though.

And that’s a problem.

So what can be done about that? As an absolute first pass, it would be nice if Mark Zuckerberg realized that he can make some decisions that are “good for the world, but bad for Facebook,” and he should do so publicly and transparently, clearly explaining why he knows this will harm growth or the bottom line, but that it’s the right thing to do. To some small extent he tried to do something like that with the Oversight Board, but it was a half measure, with limited power. But it was something. He needs to be willing to step up and do more things like that, and if Wall Street doesn’t like it, he should just say he doesn’t care, this is too important. Other CEOs have done this. Hell, Jeff Bezos spent the first decade or so of Amazon’s life as a public company constantly telling Wall Street that’s how things were going to work (people now forget just how much Wall Street hated Amazon, and just how frequently Bezos told them he didn’t care, he was going to build a better customer experience). Google (perhaps somewhat infamously) launched its IPO with a giant middle finger to Wall Street, noting that it wasn’t going to play the bankers’ games (though… that promise has mostly disappeared from Google, along with the founders).

Of course, even if he does all of this, most people will dismiss whatever Zuckerberg decides to do as a cynical nothingburger. And they should. He’s not done nearly enough to build up the public trust on this. But if he can actually follow through and do the right thing over and over again, especially when it’s bad for Facebook, that would at least start things moving in the right direction.

There are plenty of other ideas on how to make Facebook better — and Haugen actually has some pretty good suggestions herself, first noting that the tools most people reach for won’t work:

While some have called for Facebook to be broken up or stripped of content liability protections, she disagrees. Neither approach would address the problems uncovered in the documents, she said: despite numerous initiatives, Facebook didn’t address or make public what it knew about its platforms’ ill effects.

That is, breaking up the company won’t make a difference for reasons we’ve discussed before, and taking away Section 230 will only give Facebook way more power — since smaller companies will be wiped out by the lack of liability protections.

Instead, Haugen notes, there needs to be way more transparency about how Facebook is doing what it’s doing:

In Ms. Haugen’s view, allowing outsiders to see the company’s research and operations is essential. She also argues for a radical simplification of Facebook’s systems and for limits on promoting content based on levels of engagement, a core feature of Facebook’s recommendation systems. The company’s own research has found that “misinformation, toxicity, and violent content are inordinately prevalent” in material reshared by users and promoted by the company’s own mechanics.

Tragically, Facebook has been going in the other direction and trying to make it harder for researchers to understand what’s going on there and study the impact.

I think there are some other structural changes that would also have some impact (a bunch of which I’ll lay out in an upcoming paper), but getting Zuckerberg, the Board, and the senior management team to be okay with focusing on something other than short term growth would be a huge step forward. Years back I noted that human beings have an unfortunate habit of optimizing for what we can measure and downplaying what we can’t. Engagement. Revenue. Daily average users. These are all measurable. What’s good for humanity is not measurable. It’s easy to prioritize one over the other — and somehow that needs to change.

There have been a few external steps in that direction. The Long Term Stock Exchange is an interesting experiment in getting companies past the “meet the quarterly numbers” mindset, and two big tech companies recently listed there — including Asana, which was founded by Zuckerberg’s co-founder and former right-hand man, Dustin Moskovitz. That’s not a solution in and of itself, but it does show a direction in which we can look for solutions that might get past the constant focus on growth at the expense of everything else.

In the end, there are many complicating factors, but as noted earlier, Facebook seems pretty extreme in its unwillingness to actually confront many of these issues. Some of that, no doubt, is that many people are complaining about things that are unfixable, or are blaming Facebook for things totally outside of its control. But there are many things that Facebook does control and could do a much better job in dealing with. Yet, Facebook to date has failed to make it clear on a companywide basis that “good for the world, but bad for Facebook” is actually okay, and maybe it should be the focus for a while.

This is not to say that there aren’t people within the company who are working on doing such things — because there clearly are. The problem is that when a big issue needs a decision from the top, the end result is always to choose what’s good for Facebook over what’s good for the world. And however that can change, it needs to change. And that’s really up to one person.

Filed Under: frances haugen, good for the world, growth, mark zuckerberg, quarterly numbers, short-term thinking, wall st.
Companies: facebook