ranking – Techdirt
Stories filed under: "ranking"
Incredibly, Facebook Is Still Figuring Out That Content Moderation At Scale Is Impossible To Do Well
from the impossibility-theorem-strikes-again dept
For years now, I’ve talked about the impossibility of doing content moderation well at scale. I know that execs at various tech companies, including top executives at Meta, have cited my work on this issue. But it still amazes me when those companies act as if it’s not true, and as if there’s some simple solution to all of the moderation troubles they face. The WSJ recently had a fascinating article about how Facebook thought that by simply silencing political opinions, they’d solve a bunch of their moderation problems. Turns out: it didn’t work. At all.
Meta’s leaders decided, however, that wasn’t enough. In late 2021, tired of endless claims about political bias and censorship, Chief Executive Mark Zuckerberg and Meta’s board pushed for the company to go beyond incremental adjustments, according to people familiar with the discussions. Presented with a range of options, Mr. Zuckerberg and the board chose the most drastic, instructing the company to demote posts on “sensitive” topics as much as possible in the newsfeed that greets users when they open the app—an initiative that hasn’t previously been reported.
The plan was in line with calls from some of the company’s harshest critics, who have alleged that Facebook is either politically biased or commercially motivated to amplify hate and controversy. For years, advertisers and investors have pressed the company to clean up its messy role in politics, according to people familiar with those discussions.
It became apparent, though, that the plan to mute politics would have unintended consequences, according to internal research and people familiar with the project.
This plan to “demote” posts on sensitive topics apparently… actually resulted in an increase in the flow of less trustworthy content, because oftentimes it’s the more established media outlets that are covering those “sensitive topics.”
The result was that views of content from what Facebook deems “high quality news publishers” such as Fox News and CNN fell more than material from outlets users considered less trustworthy. User complaints about misinformation climbed, and charitable donations via the company’s fundraiser product through Facebook fell in the first half of 2022. And perhaps most important, users didn’t like it.
I am guessing that some in the comments here will quibble with the idea that Fox News is a “high quality news publisher” but I assure you that compared to some of the other options, it’s much more along the spectrum towards quality.
The details of what Facebook did to deal with this kind of content are interesting… as were the “unintended” consequences:
The announcement played down the magnitude of the change. Facebook wasn’t just de-emphasizing reshares and comments on civic topics. It was stripping those signals from its recommendation system entirely.
“We take away all weight by which we uprank a post based on our prediction that someone will comment on it or share it,” a later internal document said.
That was a more aggressive version of what some Facebook researchers had been pushing for years: addressing inaccurate information and other integrity issues by making the platform less viral. Because the approach didn’t involve censoring viewpoints or demoting content via imprecise artificial-intelligence systems, it was deemed “defensible,” in company jargon.
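To make that concrete, here is a minimal sketch of the kind of engagement-prediction scoring being described, and what stripping the comment and reshare signals means in practice. The signal names and weight values here are illustrative assumptions, not Facebook’s actual formula (which is far more complex):

```python
# Minimal sketch of an engagement-weighted ranking score. The signal names
# and weight values are illustrative assumptions, not Meta's real formula.

CIVIC_WEIGHTS_BEFORE = {"p_like": 1.0, "p_comment": 15.0, "p_reshare": 30.0}
# "We take away all weight by which we uprank a post based on our prediction
# that someone will comment on it or share it" -- i.e., zero out those terms:
CIVIC_WEIGHTS_AFTER = {"p_like": 1.0, "p_comment": 0.0, "p_reshare": 0.0}

def rank_score(predicted: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of predicted engagement probabilities for one post."""
    return sum(weights[signal] * p for signal, p in predicted.items())

post = {"p_like": 0.20, "p_comment": 0.05, "p_reshare": 0.02}
print(rank_score(post, CIVIC_WEIGHTS_BEFORE))  # 1.55, driven mostly by comments/shares
print(rank_score(post, CIVIC_WEIGHTS_AFTER))   # 0.20, predicted virality no longer counts
```

The point of the sketch is just that removing those terms changes which civic posts float to the top, not that it stops those posts from being seen at all.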
The report notes that this did drive down usage of Facebook, but (contrary to the public narrative that Facebook only cares about greater engagement), the company apparently felt that this cost was worth it if it resulted in less anger and disinformation flowing across its platform. But, a big part of the issue seems to be that the narrative that Facebook is focused on unhealthy engagement is so prevalent, that nothing will get rid of it:
An internal presentation warned that while broadly suppressing civic content would reduce bad user experiences, “we’re likely also targeting content users do want to see….The majority of users want the same amount or more civic content than they see in their feeds today. The primary bad experience was corrosiveness/divisiveness and misinformation in civic content.”
Even more troubling was that suppressing civic content didn’t appear likely to convince users that Facebook wasn’t politically toxic. According to internal research, the percentage of users who said they thought Facebook had a negative effect on politics didn’t budge with the changes, staying consistently around 60% in the U.S.
To some extent, this is not surprising. The narrative always outlasts reality. I still, frequently, have people making claims to me about how social media works that haven’t been true in a decade.
Indeed, it seems like perhaps this plan would have had more of an impact if Facebook had been a lot more transparent about it — which often seems to be the problem in nearly everything that the company does on these subjects. It makes vague public statements and claims without the important details, and no one can really tell what’s going on.
But, the simple fact is that, again, doing content moderation at scale well is impossible. Making changes in one area will have far-reaching impacts that are equally impossible to anticipate.
There is no perfect answer here, but much more transparency from the company — combined with maybe giving end users a lot more control and say in how the platform works for them — seems like it would be more effective than this kind of thing where a bunch of decisions are made behind closed doors and then only leak out to the press way later.
Filed Under: content moderation, experiments, news feed, politics, ranking
Companies: facebook, meta
Dear Supreme Court: Judicial Curtailing Of Section 230 Will Make The Internet Worse
from the do-or-die-moment-for-the-Internet dept
Every amicus brief the Copia Institute has filed has been important. But the brief filed today is one where all the marbles are at stake. Up before the Supreme Court is Gonzalez v. Google, a case that puts Section 230 squarely in the sights of the Court, including its justices who have previously expressed serious misunderstandings about the operation and merit of the law.
As we wrote in this brief, the Internet depends on Section 230 remaining the intentionally broad law it was drafted to be, applying to all sorts of platforms and services that make the Internet work. On this brief the Copia Institute was joined by Engine Advocacy, speaking on behalf of the startup community, which depends on Section 230 to build companies able to provide online services, and Chris Riley, an individual person running a Mastodon server who most definitely needs Section 230 to make it possible for him to provide that Twitter alternative to other people. There seems to be this pervasive misconception that the Internet begins and ends with the platforms and services provided by “big tech” companies like Google. In reality, the provision of platform services is a profoundly human endeavor that needs protecting in order to be sustained, and we wrote this brief to highlight how personal Section 230’s protection really is.
Because ultimately without Section 230 every provider would be in jeopardy every time they helped facilitate online speech and every time they moderated it, even though both activities are what the Internet-using public needs platforms and services to do, even though they are what Congress intended to encourage platforms and services to do, and even though the First Amendment gives them the right to do them. Section 230 is what makes it possible, at a practical level, for them to do both, by taking away the risk of liability arising from how they do so.
This case risks curtailing that critical statutory protection by inventing the notion pressed by the plaintiffs that if a platform uses an algorithmic tool to serve curated content, it somehow amounts to having created that content, which would put the activity beyond the protection of Section 230, as it only applies when platforms intermediate content created by others and not content created by themselves. But this argument reflects a dubious reading of the statute, and one that would largely obviate Section 230’s protection altogether by allowing liability to accrue as a result of some quality in the content created by another, which is exactly what Section 230 is designed to forestall. As we explained to the Court in detail, the idea that algorithmic serving of third party content could somehow void a platform’s Section 230 protection is an argument that had been cogently rejected by the Second Circuit and should similarly be rejected here.
Oral argument is scheduled for February 21. While it is possible that the Supreme Court could take onboard all the arguments being brought by Google and the constellation of amici supporting its position, and then articulate a clear defense of Section 230 that platform operators could take back to any other court questioning their statutory protection, it would be a good result if the Supreme Court simply rejected this particular theory pressing for artificial limits to Section 230 that are not in the statute or supported by the facially obvious policy values Section 230 was supposed to advance. Just so long as the Internet and the platforms that make it up can live on to fight another day we can call it a win. Because a decision in favor of the plaintiffs curtailing Section 230 would be an enormous loss to anyone depending on the Internet to provide them any sort of benefit. Or, in other words, everyone.
Filed Under: algorithms, amicus, chris riley, content moderation, gonzalez v. google, ranking, section 230
Companies: copia institute, engine, google
The Latest Version Of Congress's Anti-Algorithm Bill Is Based On Two Separate Debunked Myths & A Misunderstanding Of How Things Work
from the regulating-on-myths dept
It’s kind of crazy how many regulatory proposals we see appear to be based on myths and moral panics. The latest, just introduced, is the House version of the Filter Bubble Transparency Act, which is the companion bill to the Senate bill of the same name. Both bills are “bipartisan,” which makes it worse, not better. The Senate version was introduced by Senator John Thune, and co-sponsored by a bevy of anti-tech grandstanding Senators: Richard Blumenthal, Jerry Moran, Marsha Blackburn, Brian Schatz, and Mark Warner. The House version was introduced by Ken Buck, and co-sponsored by David Cicilline, Lori Trahan, and Burgess Owens.
While some of the reporting on this suggests that the bill “targets” algorithms, it only does so in the stupidest, most ridiculous ways. The bill is poorly drafted, poorly thought out, and exposes an incredible amount of ignorance about how any of this works. It doesn’t target all algorithms — and explicitly exempts search based on direct keywords, or algorithms that try to “protect the children.” Instead, it has a weird attack on what it calls “opaque algorithms.” The definition itself is a bit opaque:
The term “opaque algorithm” means an algorithmic ranking system that determines the order or manner that information is furnished to a user on a covered internet platform based, in whole or part, on user-specific data that was not expressly provided by the user to the platform for such purpose.
The fact that it then immediately includes an exemption for “age-appropriate content filters” only hints at some of the problems with this bill — which starts with the fact that there are all sorts of reasons why algorithms recommending things to you based on more information than you provide directly might be kinda useful. For example, a straightforward reading of this bill would mean that no site can automatically determine you’re visiting with a mobile device and format the page accordingly. After all, that’s an algorithmic system that uses information not expressly provided by the user in order to present information to you ranked in a different way (for example, moving ads to a different spot). What’s more, “inferences about the user’s connected device” are explicitly excluded from being used even if they are based on data expressly provided by the user — so even allowing a user to set a preference for their device type, and serve optimized pages based on that preference, would appear to still count as an “opaque algorithm” under the bill’s definitions. You could argue that a mobile-optimized page is not necessarily a “ranking” system, except the bill defines “algorithmic ranking system” as “a computational process … used to determine the order or manner that a set of information is provided to a user.” At the very least, there are enough arguments either way that someone will sue over it.
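To see why that reading is at least plausible, here is a toy sketch (the code and names are mine, not language from the bill) of the sort of server logic at issue: it changes the manner in which information is furnished based on a signal, the User-Agent header, that the user never expressly provided for that purpose:

```python
# Toy example of device detection driving how a page is furnished to a user.
# Entirely illustrative; the function and field names are invented.

def choose_layout(user_agent: str, articles: list[str]) -> dict:
    """Pick a layout based on the User-Agent header the browser sends."""
    is_mobile = any(token in user_agent for token in ("Mobile", "Android", "iPhone"))
    if is_mobile:
        # Different "order or manner": fewer items above the fold, ads moved inline.
        return {"layout": "mobile", "above_the_fold": articles[:3], "ad_slot": "inline"}
    return {"layout": "desktop", "above_the_fold": articles[:8], "ad_slot": "sidebar"}

print(choose_layout("Mozilla/5.0 (iPhone) Mobile Safari", ["a1", "a2", "a3", "a4", "a5"]))
```

Nobody “expressly provides” their User-Agent string to a website for layout purposes, which is exactly the kind of gap litigants could argue over.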
Similarly, lots of media websites offer you a certain number of free articles before you hit their register or paywall — and again, that’s based on information not expressly provided by the user — meaning that such a practice might be in trouble (which will be fun to watch when media orgs who use those kinds of paywall tricks but are cheering this on as an “anti-big-tech” measure discover what they’re really supporting).
The point here is that lots of algorithm/ranking systems that work based on information not expressly provided by the user are actually doing important things that would be missed if they suddenly couldn’t be done any more.
And, even if the bill were clarified in a bill-of-attainder fashion to make it clear it only applies to social media news feeds, it still won’t do much good. Both Facebook and Twitter already let you set up a chronological feed if you want it. But, more to the point, the very rationale behind this bill makes no sense and is not based in reality.
Cicilline’s quote about the bill demonstrates just how ignorant he is of how all of this stuff actually works:
“Facebook and other dominant platforms manipulate their users through opaque algorithms that prioritize growth and profit over everything else. And due to these platforms’ monopoly power and dominance, users are stuck with few alternatives to this exploitative business model, whether it is in their social media feed, on paid advertisements, or in their search results.”
Except… as already noted, you can already turn off the algorithmic feed in Facebook, and as the Facebook Papers just showed, when Facebook experimented with turning off the algorithmic rankings in its newsfeed it actually made the company more money, not less.
Also, the name of the bill is based on the idea of “filter bubbles” and many of the co-sponsors of the bill claim that these websites are purposefully driving people deeper into these “filter bubbles.” However, as we again just recently discussed, new research shows that social media tends to expose people to a wider set of ideas and viewpoints, rather than more narrowly constraining them. In fact, they’re much more likely to face a “filter bubble” in their local community than by being exposed to the wider world through the internet and social media.
So, in the end, we have a well-hyped bill based on the (false) idea of filter bubbles and the (false) idea of algorithms only serving corporate profit, which would require websites to give users a chance to turn off an algorithm — which they already allow — and which would effectively kill off other useful tools like mobile optimization. It seems like the only purpose this legislation actually serves is to let these politicians stand up in front of the news media, claim they’re “taking on big tech!” and smile disingenuously.
Filed Under: algorithms, antitrust, big tech, david cicilline, filter bubble transparency act, filter bubbles, john thune, ken buck, opaque algorithms, ranking, richard blumenthal
Let Me Rewrite That For You: Washington Post Misinforms You About How Facebook Weighted Emoji Reactions
from the let's-clean-it-up dept
Journalist Dan Froomkin, who is one of the most insightful commentators on the state of the media today, recently began a new effort, which he calls “let me rewrite that for you,” in which he takes a piece of journalism that he believes misled readers, and rewrites parts of them — mainly the headline and the lede — to better present the story. I think it’s a brilliant and useful form of media criticism that I figured I might experiment with as well — and I’m going to start it out with a recent Washington Post piece, one of many the Post has written about the leaked Facebook Files from whistleblower Frances Haugen.
The piece is written by reporters Jeremy Merrill and Will Oremus — and I’m assuming that, like many mainstream news orgs, editors write the headlines and subheads, rather than the reporters. I don’t know Merrill, but I will note that I find Oremus to be one of the most astute and thoughtful journalists out there today, and not one prone to fall into some of the usual traps that journalists fall for — so this one surprised me a bit (though, I’m also using this format on an Oremus piece, because I’m pretty sure he’ll take the criticism in the spirit intended — to push for better overall journalism on these kinds of topics). The article’s headline tells a story in and of itself: Five points for anger, one for a “like”: How Facebook’s formula fostered rage and misinformation, with a subhead that implies something similar: “Facebook engineers gave extra value to emoji reactions, including ‘angry,’ pushing more emotional and provocative content into users’ news feeds.” There’s also a graphic that reinforces this suggested point: Facebook weighted “anger” much more than happy reactions. And it’s all under the “Facebook under fire” designation:
Seeing this headline and image, it would be pretty normal for you to assume the pretty clear implication: people reacting happily (e.g. with “likes”) on Facebook had those shows of emotions weighted at 1/5th the intensity of people reacting angrily (e.g. with “anger” emojis) and that is obviously why Facebook stokes tremendous anger, hatred and divisiveness (as the story goes).
But… that’s not actually what the details show. The actual details show that initially when Facebook introduced its list of five different “emoji” reactions (to be added to the long iconic “like” button), it weighted all five of them as five times as impactful as a like. That means that “love,” “haha,” “wow,” and “sad” also were weighted at 5 times a single like, and identical to “angry.” And while the article does mention this in the first paragraph, it immediately pivots to focus only on the “angry” weighting and what that means. When combined with the headline and the rest of the article, it’s entirely possible to read the article and not even realize that “love,” “sad,” “haha,” and “wow” were also ranked at 5x a single “like” and to believe that Facebook deliberately chose to ramp up promotion of “anger” inducing content. It’s not only possible, it’s quite likely. Hell, it’s how I read the article the first time through, completely missing the fact that it applied to the other emojis as well.
The article also completely buries how quickly Facebook realized this was an issue and adjusted the policy. While it does mention it, it’s very much buried late in the story, as are some other relevant facts that paint the entire story in a very different light than the way many people are reading it.
When some people highlighted this, Oremus pointed out that the bigger story here is “how arbitrary initial decisions, set by humans for business reasons, become reified as the status quo.” And he’s right. That is the more interesting story and one worth exploring. But that’s not how this article is presented at all! And, his own article suggested the “reified as the status quo” part is inaccurate as well, though, again, that’s buried further down in the story. The article is very much written in a way where the takeaway for most people is going to be “Facebook highly ranks posts that made you angry, because stoking divisiveness was good for business, and that’s still true today.” Except none of that is accurate.
So… let’s rewrite that, and try to better get across the point that Oremus claims was the intended point of the story.
The original title, again is:
Five points for anger, one for a “like”: How Facebook’s formula fostered rage and misinformation
Let’s rewrite that:
Facebook weighted new emojis much more than likes, leading to unintended consequences
Then there’s the opening of the piece, which does mention very quickly that it applied to all five new emojis, but quickly pivots to just focusing on the anger:
Five years ago, Facebook gave its users five new ways to react to a post in their news feed beyond the iconic “like” thumbs-up: “love,” “haha,” “wow,” “sad” and “angry.”
Behind the scenes, Facebook programmed the algorithm that decides what people see in their news feeds to use the reaction emoji as signals to push more emotional and provocative content — including content likely to make them angry. Starting in 2017, Facebook’s ranking algorithm treated emoji reactions as five times more valuable than “likes,” internal documents reveal. The theory was simple: Posts that prompted lots of reaction emoji tended to keep users more engaged, and keeping users engaged was the key to Facebook’s business.
Facebook’s own researchers were quick to suspect a critical flaw. Favoring “controversial” posts — including those that make users angry — could open “the door to more spam/abuse/clickbait inadvertently,” a staffer, whose name was redacted, wrote in one of the internal documents. A colleague responded, “It’s possible.”
The warning proved prescient. The company’s data scientists confirmed in 2019 that posts that sparked angry reaction emoji were disproportionately likely to include misinformation, toxicity and low-quality news.
Let’s rewrite that, both using what Oremus claims was the “bigger story” in the article, and some of the information that is buried much later.
Five years ago, Facebook expanded the ways that users could react to posts beyond the iconic “like” thumbs-up, adding five more emojis: “love,” “haha,” “wow,” “sad,” and “angry.” With this new addition, Facebook engineers needed to determine how to weight these new engagement signals. Given the stronger emotions portrayed in these emojis, the engineers made a decision that had a large impact on how the use of those emojis would be weighted in determining how to rank a story: each of those reactions would count for five times the weight of the classic “like” button. While Facebook did publicly say at the time that the new emojis would be weighted “a little more” than likes, and that all the new emojis would be weighted equally, it did not reveal that the weighting was actually five times as much.
This move came around the same time as Facebook’s publicly announced plans to move away from promoting clickbait-style news to users, and to try to focus more on engagement with content posted by friends and family. However, it turned out that friends and family don’t always post the most trustworthy information, and by overweighting the “emotional” reactions, this new move by Facebook often ended up putting the most emotionally charged content in front of users. Some of that content was joyful — people reacting with “love” to engagements and births — but some of it was disruptive and divisive, people reacting with “anger” to false or misleading content.
Facebook struggled internally with this result — while also raising important points about how “anger” as a “core human emotion” is not always tied to bad things, and could be important for giving rise to protest movements against autocratic and corrupt governments. However, since other signals were weighted significantly more than even these emojis — for example, replies to posts had a weight up to 30 times a single “like” click — not much was initially done to respond to the concerns about how the weighting on anger might impact the kinds of content users were prone to see.
However, one year after launch, in 2018, Facebook realized weighting “anger” so highly was a problem, and downgraded the weighting on the “anger” emoji to four times a “like” while keeping the four other emoji, including “love,” “wow,” and “haha” at five times a like. A year later, the company realized this was not enough and even though “anger” is the least used emoji, by 2019 the company had put in place a mechanism to “demote” content that was receiving a disproportionate level of “anger” reactions. There were also internal debates about reranking all of the emoji reactions to create better news feeds, though there was not widespread agreement within the company about how best to do this. Eventually, in 2020, following more internal research on the impact of this weighting, Facebook reweighted all of the emoji. By the end of 2020 it had cut the weight of the “anger” emoji to zero — taking it out of the equation entirely. The “haha” and “wow” emojis were weighted to one and a half times a like, and the “love” and “sad” were weighted to two likes.
From there, the article could then discuss a lot of what other parts of the article does discuss, about some of the internal debates and research, and also the point that Oremus raised separately, about the somewhat arbitrary nature of some of these ranking systems. But I’d argue that my rewrite presents a much more accurate and honest portrayal of the information than the current Washington Post article.
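To make the timeline above easier to follow, here is a small sketch that applies the reported weights to a hypothetical post. The scoring formula and the reaction counts are simplifications I made up for illustration; only the weight values come from the reporting described above:

```python
# Reaction weights as reported for 2017, 2018, and late 2020 (simplified).
# The example post and the bare weighted-sum formula are illustrative assumptions.

WEIGHTS_BY_YEAR = {
    2017: {"like": 1, "love": 5, "haha": 5, "wow": 5, "sad": 5, "angry": 5},
    2018: {"like": 1, "love": 5, "haha": 5, "wow": 5, "sad": 5, "angry": 4},
    2020: {"like": 1, "love": 2, "haha": 1.5, "wow": 1.5, "sad": 2, "angry": 0},
}

def reaction_score(reactions: dict[str, int], weights: dict[str, float]) -> float:
    """Weighted sum of reaction counts for one post."""
    return sum(weights[r] * count for r, count in reactions.items())

# A divisive post: modest likes, lots of angry reactions.
divisive = {"like": 100, "love": 5, "haha": 10, "wow": 5, "sad": 20, "angry": 200}

for year, weights in sorted(WEIGHTS_BY_YEAR.items()):
    print(year, reaction_score(divisive, weights))
# 2017: 1300  -- anger alone contributes 1000
# 2018: 1100  -- anger's weight drops from 5 to 4
# 2020: 172.5 -- anger contributes nothing at all
```

Seen this way, the interesting question really is the one Oremus raises: how those initial, somewhat arbitrary numbers got chosen in the first place, and how long they stuck around.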
Anyone know how I can send the Washington Post an invoice?
Filed Under: algorithm, emoji, facebook papers, framing, frances haugen, journalism, let me rewrite that for you, ranking, reactions
Companies: facebook
Senator Kennedy Continues To Push My Buttons With His Ridiculously Dumb 'Don't Push My Buttons' Act
from the that's-not-how-any-of-this-works dept
Last fall, Senator John Kennedy of Louisiana (a supposedly smart Senator who seems to have decided his political future lies in acting dumber than 95% of all other Senators) introduced an anti-Section 230 bill. He’s now done so again in the new Congressional session. The bill is, once again, called the “Don’t Push My Buttons” Act and introducing such a piece of total garbage legislation a second time does not speak well of Senator Kennedy.
The bill is pretty short. It would create an exception to Section 230 for any website that… uses algorithms to rank content for you based on user data it collects. Basically, it’s taking a roundabout way to try to remove Section 230 from Facebook, Twitter, and YouTube. It is not clear why algorithmic ranking has anything to do with Section 230. While social media sites do tend to rely on both, they are separate things. Indeed, part of the reason why social media sites rely on algorithms is that Section 230 lets them host so much user-generated content that algorithmic ranking becomes necessary to make those sites usable.
So, in practice, if this became law, all it would really serve to do is make social media sites totally unusable. Either websites would have to stop doing algorithmic ranking of content (which would make the sites unusable for many people) or they’d start massively moderating content to avoid liability — making sites nearly unusable.
And, of course, there’s an exemption to this exemption which makes the whole thing useless. The bill will allow algorithms… if the user “knowingly and intentionally elects to receive the content.” So, all that will happen is every social media service will show you total garbage with a pop up saying “hey, we can straighten this out for you via our algorithm if you just click here” and everyone will click that button.
And that’s not even getting into the constitutional problems with this bill. It’s literally punishing companies for their editorial (ranking) choices. That’s Congress regulating expression. I don’t see how this bill would possibly survive 1st Amendment scrutiny. But, of course, it’s not designed to survive any scrutiny at all. It’s to serve as ever more grandstanding for Senator Kennedy to pretend to be looking out for a base he knows is ignorant beyond belief — and rather than educating them, he’s playing down to them.
Filed Under: algorithms, content moderation, john kennedy, push my buttons act, ranking, section 230
Facebook Ranking News Sources By Trust Is A Bad Idea… But No One At Facebook Will Read Our Untrustworthy Analysis
from the you-guys-are-doing-it-wrong-again dept
At some point I need to write a bigger piece on these kinds of things, though I’ve mentioned it here and there over the past couple of years. For all the complaints about how “bad stuff” is appearing on the big platforms (mainly: Facebook, YouTube, and Twitter), it’s depressing how many people think the answer is “well, those platforms should stop the bad stuff.” As we’ve discussed, this is problematic on multiple levels. First, handing over the “content policing” function to these platforms is, well, probably not such a good idea. Historically they’ve been really bad at it, and there’s little reason to think they’re going to get any better no matter how much money they throw at artificial intelligence or how many people they hire to moderate content. Second, it requires some sort of objective reality for what’s “bad stuff.” And that’s impossible. One person’s bad stuff is another person’s good stuff. And almost any decision is going to get criticized by someone or another. It’s why suddenly a bunch of foolish people are falsely claiming that these platforms are required by law to be “neutral.” (They’re not).
But, as more and more pressure is put on these platforms, eventually they feel they have little choice but to do something… and inevitably, they try to step up their content policing. The latest, as you may have heard, is that Facebook has started to rank news organizations by trust.
Facebook CEO Mark Zuckerberg said Tuesday that the company has already begun to implement a system that ranks news organizations based on trustworthiness, and promotes or suppresses its content based on that metric.
Zuckerberg said the company has gathered data on how consumers perceive news brands by asking them to identify whether they have heard of various publications and if they trust them.
“We put [that data] into the system, and it is acting as a boost or a suppression, and we’re going to dial up the intensity of that over time,” he said. “We feel like we have a responsibility to further [break] down polarization and find common ground.”
But, as with the lack of an objective definition of “bad,” you’ve got the same problem with “trust.” For example, I sure don’t trust “the system” that Zuckerberg mentions above to do a particularly good job of determining which news sources are trustworthy. And, again, trust is such a subjective concept, that lots of people inherently trust certain sources over others — even when those sources have long histories of being full of crap. And given how much “trust” is actually driven by “confirmation bias” it’s difficult to see how this solution from Facebook will do any good. Take, for example, (totally hypothetically), that Facebook determines that Infowars is untrustworthy. Many people may agree that a site famous for spreading conspiracy theories and pushing sketchy “supplements” that you need because of conspiracy theory x, y or z, is not particularly trustworthy. But, for those who do like Infowars, how are they likely to react to this kind of thing? They’re not suddenly going to decide the NY Times and the Wall Street Journal are more trustworthy. They’re going to see it as a conspiracy for Facebook to continue to suppress the truth.
Confirmation bias is a hell of a drug, and Facebook trying to push people in one direction is not going to go over well.
To reveal all of this, Zuckerberg apparently invited a bunch of news organizations to talk about it:
Zuckerberg met with a group of news media executives at the Rosewood Sand Hill hotel in Menlo Park after delivering his keynote speech at Facebook’s annual F8 developer conference Tuesday.
The meeting included representatives from BuzzFeed News, the Information, Quartz, the New York Times, CNN, the Wall Street Journal, NBC, Recode, Univision, Barron’s, the Daily Beast, the Economist, HuffPost, Insider, the Atlantic, the New York Post, and others.
We weren’t invited. Does that mean Facebook doesn’t view us as trustworthy? I guess so. So it seems unlikely that he’ll much care about what we have to say, but we’ll say it anyway (though you probably won’t be able to read this on Facebook):
Facebook: You’re Doing It Wrong.
Facebook should never be the arbiter of truth, no matter how much people push it to be. Instead, it can and should be providing tools for its users to have more control. Let them create better filters. Let them apply their own “trust” metrics, or share trust metrics that others create. Or, as we’ve suggested on the privacy front, open up the system to let third parties come in and offer up their own trust rankings. Will that reinforce some echo chambers and filter bubbles? Perhaps. But that’s not Facebook’s fault — it’s part of the nature of human beings and confirmation bias.
Or, hey, Facebook could take a real leap forward and move away from being a centralized silo of information and truly disrupt its own setup — pushing the information and data out to the edges, where the users could have more control over it themselves. And not in the simplistic manner of Facebook’s other “big” announcement of the week about how it’ll now let users opt-out of Facebook tracking them around the web (leaving out that they kinda needed to do this to deal with the GDPR in the EU). Opting out is one thing — pushing the actual data control back to the end users and distributing it is something entirely different.
In the early days of the web, people set up their own websites, and had pretty much full control over the data and what was done there. It was much more distributed. Over time we’ve moved more and more to this silo model in which Facebook is the giant silo where everyone puts their content… and has to play by Facebook’s rules. But with that came responsibility on Facebook’s part for everything bad that anyone did on their platform. And, hey, let’s face it, some people do bad stuff. The answer isn’t to force Facebook to police all bad stuff, it should be to move back towards a system where information is more distributed, and we’re not pressured into certain content because that same Facebook thinks it will lead to the most “engagement.”
Push the content and the data out and focus on the thing that Facebook has always been better at, at its core: the connection function. Connect people, but don’t control all of the content. Don’t feel the need to police the content. Don’t feel the need to decide who’s trustworthy and who isn’t. Be the protocol, not the platform, and open up the system so that anyone else can provide a trust overlay, and let those overlays compete. It would take Facebook out of the business of having to decide what’s good and what’s bad and would give end users much more control.
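For what it’s worth, here is a rough sketch of what that kind of pluggable trust overlay could look like. Everything here, the interface, the provider names, and the scores, is invented for illustration; the point is only that the ranking choice moves from the platform to the user:

```python
# Hypothetical sketch of a pluggable "trust overlay": the platform handles
# connections and delivery, while trust scoring comes from whichever third-party
# provider the user picks. All names and scores here are invented.

from typing import Callable, Iterable

TrustOverlay = Callable[[str], float]  # maps a publisher/domain to a trust score

def rank_feed(posts: Iterable[dict], overlay: TrustOverlay) -> list[dict]:
    """Order posts by the user's chosen overlay, not a single platform-wide metric."""
    return sorted(posts, key=lambda p: overlay(p["source"]), reverse=True)

# Two competing overlays a user might choose between:
def fact_checker_overlay(source: str) -> float:
    return {"example-wire.com": 0.9, "conspiracy.example": 0.1}.get(source, 0.5)

def my_friends_overlay(source: str) -> float:
    return {"conspiracy.example": 0.8}.get(source, 0.4)

posts = [{"source": "conspiracy.example"}, {"source": "example-wire.com"}]
print([p["source"] for p in rank_feed(posts, fact_checker_overlay)])  # wire service first
print([p["source"] for p in rank_feed(posts, my_friends_overlay)])    # the other order
```

The design choice being illustrated is simply that competing overlays can coexist, which is what “let those overlays compete” would mean in practice.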
Facebook, of course, seems unlikely to do this. The value of the control is that it allows them to capture more of the money from the attention generated on their platform. But, really, if it doesn’t want to keep dealing with these headaches, it seems like the only reasonable way forward.
Filed Under: confirmation bias, fake news, mark zuckerberg, metrics, news, ranking, trust, trustworthy
Companies: facebook
DailyDirt: Who Cares if You Went To A Good School?
from the urls-we-dig-up dept
The field of education seems ripe for disruption — with Massive Open Online Courses (MOOCs) and other forms of online classes. However, it’s difficult to judge the quality of these online programs and compare them to the traditional classroom experience. The conventional wisdom has ranked prestigious universities in roughly the same order for decades, so it’ll be interesting to see how online courses and degrees might factor into these lists. Here are just a few interesting links on the quality of higher education.
- The college/university rankings from US News & World Report are not as meaningful as most people assume, according to Malcolm Gladwell. Any ranked list generated by weighting multiple variables depends greatly on those weights — and a single composite score may be too simple to capture the most important characteristics of a diverse set of schools (see the short sketch after this list). [url]
- The US News college rankings don’t heavily weight the value (or “bang for the buck”) of the schools it covers, but the Washington Monthly ranking does. In 2013, Amherst College is listed as the “best bang for the buck” school, and Harvard/Princeton don’t seem to be in the top ten…. [url]
- Northwestern University was once known as a horrible football school, setting a record 34-game losing streak in 1981, but its football team has turned itself around recently. Unfortunately, there’s no special recognition for schools that simultaneously have good academic and athletic performance. [url]
- Newsweek and the Daily Beast have ranked 2,000 US high schools according to their effectiveness to produce college-ready graduates. But after going to college, how much does high school matter? [url]
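As promised above, a toy sketch of Gladwell’s weighting point. The schools, metrics, and weights below are all made up; the only thing being illustrated is that the same underlying numbers can produce a different “number one” depending on how the variables are weighted:

```python
# Toy illustration of how a weighted composite ranking depends on the weights.
# Schools, metric values, and weight choices are all invented for the example.

schools = {
    "School A": {"selectivity": 0.95, "graduation": 0.85, "value": 0.40},
    "School B": {"selectivity": 0.70, "graduation": 0.90, "value": 0.90},
}

def composite(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Single composite score: weighted sum of the individual metrics."""
    return sum(weights[k] * v for k, v in metrics.items())

prestige_weights = {"selectivity": 0.6, "graduation": 0.3, "value": 0.1}
value_weights    = {"selectivity": 0.1, "graduation": 0.3, "value": 0.6}

for label, w in [("prestige-style", prestige_weights), ("bang-for-the-buck", value_weights)]:
    ranked = sorted(schools, key=lambda s: composite(schools[s], w), reverse=True)
    print(label, ranked)  # the order flips when the weights change
```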
If you’d like to read more awesome and interesting stuff, check out this unrelated (but not entirely random!) Techdirt post via StumbleUpon.
Filed Under: certification, class, college, degree, education, high school, mooc, ranking, school, university
Italian Newspapers Get Gov't To Investigate Google For Not Sharing Ranking Secret Sauce
from the seriously-delusional dept
A bunch of folks have been sending in the news that Italian regulators have begun an investigation into Google, at the request of some Italian newspapers. The complaint is a typical one from newspapers who seem slightly clueless about how Google works. They say that Google News is unfair — even though they can opt out, but don’t. The newspapers falsely claim that if they opt out of Google News, they also have to opt out of Google Search. That’s simply untrue. But even if it were true, I’m not sure what the point would be. Getting traffic is a good thing. It’s unclear why Italian newspapers (or any newspapers) don’t like it.
In fact, the whole idea that Google News is unfair for sending traffic is undermined by the other complaint from the newspapers: that Google doesn’t reveal how it ranks stories:
Because Google does not disclose the criteria for ranking news articles or search results, he said, newspapers are unable to hone their content to try to earn more revenue from online advertising.
Of course, that’s silly. First, plenty of people have figured out how to optimize for Google — there’s a whole industry called SEO that does that. That doesn’t mean that Google needs to reveal the secret sauce. But the best response to the demand for Google to reveal how it ranks stories comes from Danny Sullivan, who turns the story around, and wonders how newspaper would feel in the other direction:
No newspaper editor of any quality would allow an external interest to walk into their newsroom and demand to know exactly how to guarantee a front page article about whatever they want. But that’s what the Italian papers seem to desire. Google has an editorial process for producing rankings, one that’s done using automation — but the papers seem to want to bypass those editorial decisions.
Exactly. The newspapers are basically demanding that their stories get ranked higher, but how would newspaper editors feel about the subjects of stories in the paper demanding that their stories be on the front page? After all, being on the front page would get the subject of a story more attention, and the newspaper isn’t paying those subjects — so the newspaper is “getting all the value” — at least according to newspaper logic.
Sullivan also does a good job highlighting how useless it would be if the newspapers did get the details on how Google ranks stuff, because then everyone would just start writing stories to get to the top of the list, and any “advantage” would be lost. Separate from that, shouldn’t we be just a bit troubled to find out that the newspapers are interested in figuring out how to write stories that top Google, rather than writing stories to better inform the populace?
Filed Under: antitrust, editorial, google news, italy, journalism, news, ranking, seo
Companies: google
The Napoleon Dynamite Problem Stymies Netflix Prize Competitors
from the love-it-or-hate-it dept
We’ve been covering the ongoing race to claim the $1 million Netflix Prize for a while now, highlighting some surprising and unique methods for attacking the problem. Every time we write about it, it appears that the lead teams have inched just slightly closer to that 10% improvement hurdle, but progress has certainly been slow. Clive Thompson’s latest NY Times piece looks at the latest standings, noting that the issue now is “The Napoleon Dynamite problem.”
Apparently, the algorithms cooked up by various teams seem to work great for your typical mainstream movies, but they run into trouble when they hit quirky films, like Napoleon Dynamite or Lost in Translation or I Heart Huckabees, where people tend to have a rather strong and immediate love-or-hate reaction, with very little in between. No one seems quite sure what leads to such a strong polar reaction, and no algorithm can yet figure out how people will react to such films, which is where all of the various algorithms seem to run into a dead end.
Some folks believe that’s just the nature of taste. It really can’t just be programmed like an algorithm, but takes into account a variety of other factors, including what your friends think of something, or even whether you happened to go see that movie with certain friends. Basically, there are external factors that could play into taste that aren’t necessarily indicated by the fact that you may have liked some other set of quirky movies and therefore must love Napoleon Dynamite. In some ways, it makes you wonder if we’re all putting too much emphasis on an algorithmic approach to the issue, and if other recommendation systems, including what specific friends think of a movie, might be more effective. Of course, Netflix is hedging its bets. It’s been pushing social networking “friend recommendation” features for a while as well.
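A rough way to see the problem (with invented numbers, not Netflix data): a predictor that leans on a movie’s average rating does fine when ratings cluster around a consensus, and badly on a love-it-or-hate-it title, where the average describes almost nobody:

```python
# Illustrative only: why a bimodal, love-it-or-hate-it title defeats a predictor
# that falls back on the movie's average rating. Ratings below are made up.
from statistics import mean

mainstream = [4, 4, 3, 4, 5, 4, 3, 4]   # ratings cluster around 4
polarizing = [1, 5, 1, 5, 5, 1, 1, 5]   # a Napoleon Dynamite-style split

def rmse_of_mean_predictor(ratings: list[int]) -> float:
    """Error if we predict that every user gives this movie its mean rating."""
    m = mean(ratings)
    return mean((r - m) ** 2 for r in ratings) ** 0.5

print(round(rmse_of_mean_predictor(mainstream), 2))  # ~0.6: the mean is a decent guess
print(round(rmse_of_mean_predictor(polarizing), 2))  # 2.0: the mean (3.0) fits no one
```

Real Netflix Prize entries were far more sophisticated than a per-movie average, of course, but the same basic issue shows up whenever the signals a model relies on wash out for polarizing titles.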
Filed Under: movies, napoleon dynamite, netflix prize, ranking, recommendation engine
Companies: netflix