metrics – Techdirt

Elon’s ExTwitter Engagement Stat Exaggeration: Outside Stats Paint A Bleaker Picture

from the new-record-high-vibes dept

Does anyone actually trust Elon to be honest about, well, anything? Last week he claimed that ExTwitter hit a new “all-time high” on engagement, with “417 billion user-seconds globally” and that in the US it was 93 billion “user-seconds.”

[Screenshot of Musk’s post touting the new engagement record]

First off, what the fuck are “user-seconds”? This is not a typical measure in the internet world, and it’s a potentially misleading one. What even counts as a “user-second”? Does it include people just seeing tweets in the wild? If someone leaves a tab open, does that keep counting, or does the user have to be actively engaging with the site? There are so many questions about what that stat even means.

But, more importantly, as Media Post points out, Elon had announced numbers back in March that suggested even higher engagement data than what he claimed was this new “record” high. Of course, there appears to be some gamesmanship with the numbers, as the March numbers Media Post is discussing are per month, and Elon seems to be talking about a single (very newsworthy, right after the assassination attempt) day on ExTwitter:

Musk posted that X saw a cumulative “417 billion user-seconds globally” in one day — equating to 27.8 minutes per user — at 250 million daily active users, which does not align with X’s user reportage in March, when the company said users were spending 30 minutes per day with the app on average.

The company also claimed 8 billion total minutes in March, but 417 billion seconds only equates to 6.95 billion minutes, which either negates the “record high engagement” now or invalidates the previous numbers.

On July 15, Musk also posted that in the U.S., user seconds reached 93 billion — “23% higher than the previous record of 76B.” This equates to 15.5 minutes per user, on average, based on X’s previous reportage of 100 million U.S. users — a figure that is lower than expected.

The lack of standardized reporting makes it easy to play with the numbers and misrepresent how popular the site is.
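As a quick sanity check, the arithmetic behind the competing claims is easy to reproduce. The inputs below are just the figures quoted above (417 billion and 93 billion user-seconds, 250 million global and roughly 100 million US daily actives); everything else is division:

```python
# Reproducing the back-of-the-envelope math from the Media Post piece.
# All inputs are the publicly claimed figures quoted above; nothing here is
# independent data.

global_user_seconds = 417e9   # Musk's claimed one-day "all-time high"
global_daily_actives = 250e6  # X's claimed daily active users

per_user_minutes = global_user_seconds / global_daily_actives / 60
print(f"{per_user_minutes:.1f} min/user")           # ~27.8 — below the ~30 min/day X claimed in March

minutes_total = global_user_seconds / 60
print(f"{minutes_total / 1e9:.2f}B total minutes")  # ~6.95 — short of the 8 billion minutes claimed in March

us_user_seconds = 93e9
us_users = 100e6              # X's previously reported US user base
print(f"{us_user_seconds / us_users / 60:.1f} min/US user")  # ~15.5
```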

Meanwhile, that same article also highlights how outside observers see little to no evidence of higher engagement on the site, and plenty of evidence of decline:

… a new report by data intelligence platform Tracer shows “significant drops” in user engagement and “drastic drops” in advertising unlike competitors like YouTube, Instagram and Pinterest.

In June, X advertising saw drops month-over-month and year-over-year, the report shows, with click-through-rates (CTRs) declining 78% month-over-month, which the report suggests reflects a sharp downturn in user activity. In addition, cost-per-thousand (CPMs) decreased 17% from May to June, suggesting that advertisers are also leaving X.

[….]

Comparatively, Instagram, Pinterest and YouTube have seen dramatic user engagement increases recently. Instagram’s CTRs surged by 89% over the past year, while YouTube and Pinterest saw increases of 77% and 385%, respectively. The success these platforms are seeing is likely a direct result of the introduction of new video-first launches, such as Instagram Reels, YouTube Shorts, and a host of new features on Pinterest.
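For anyone who doesn’t spend their days in ad dashboards, the two metrics in that report are simple ratios. Here’s a minimal sketch with made-up numbers (nothing below comes from the Tracer report):

```python
# Illustrative only: invented impression, click, and spend figures to show
# how CTR and CPM are defined — not data from Tracer or X.

impressions = 1_000_000   # times an ad was shown
clicks = 4_000            # times someone clicked it
spend = 5_000.00          # dollars the advertiser paid

ctr = clicks / impressions           # click-through rate
cpm = spend / impressions * 1_000    # cost per thousand impressions

print(f"CTR: {ctr:.2%}")   # 0.40%
print(f"CPM: ${cpm:.2f}")  # $5.00

# A 78% month-over-month CTR drop means users are clicking far less often per
# impression; a falling CPM means advertisers are paying less for each
# impression — consistent with demand for the inventory drying up.
```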

Is it any wonder that the site is struggling, with Elon telling advertisers to go fuck themselves and then threatening to sue those advocating pulling ads from the site?

But really, when it comes down to details, does anyone believe Elon’s random “best day ever!” tweets to be trustworthy?

Filed Under: elon musk, metrics, stats, user seconds
Companies: twitter, x

Facebook Ranking News Sources By Trust Is A Bad Idea… But No One At Facebook Will Read Our Untrustworthy Analysis

from the you-guys-are-doing-it-wrong-again dept

At some point I need to write a bigger piece on these kinds of things, though I’ve mentioned it here and there over the past couple of years. For all the complaints about how “bad stuff” is appearing on the big platforms (mainly: Facebook, YouTube, and Twitter), it’s depressing how many people think the answer is “well, those platforms should stop the bad stuff.” As we’ve discussed, this is problematic on multiple levels. First, handing over the “content policing” function to these platforms is, well, probably not such a good idea. Historically they’ve been really bad at it, and there’s little reason to think they’re going to get any better no matter how much money they throw at artificial intelligence or how many people they hire to moderate content. Second, it requires some sort of objective reality for what’s “bad stuff.” And that’s impossible. One person’s bad stuff is another person’s good stuff. And almost any decision is going to get criticized by someone or another. It’s why suddenly a bunch of foolish people are falsely claiming that these platforms are required by law to be “neutral.” (They’re not).

But, as more and more pressure is put on these platforms, eventually they feel they have little choice but to do something… and inevitably, they try to step up their content policing. The latest, as you may have heard, is that Facebook has started to rank news organizations by trust.

Facebook CEO Mark Zuckerberg said Tuesday that the company has already begun to implement a system that ranks news organizations based on trustworthiness, and promotes or suppresses its content based on that metric.

Zuckerberg said the company has gathered data on how consumers perceive news brands by asking them to identify whether they have heard of various publications and if they trust them.

“We put [that data] into the system, and it is acting as a boost or a suppression, and we’re going to dial up the intensity of that over time,” he said. “We feel like we have a responsibility to further [break] down polarization and find common ground.”

But, as with the lack of an objective definition of “bad,” you’ve got the same problem with “trust.” For example, I sure don’t trust “the system” that Zuckerberg mentions above to do a particularly good job of determining which news sources are trustworthy. And, again, trust is such a subjective concept that lots of people inherently trust certain sources over others — even when those sources have long histories of being full of crap. And given how much “trust” is actually driven by confirmation bias, it’s difficult to see how this solution from Facebook will do any good. Say, for example (totally hypothetically), that Facebook determines that Infowars is untrustworthy. Many people may agree that a site famous for spreading conspiracy theories and pushing sketchy “supplements” that you need because of conspiracy theory x, y or z is not particularly trustworthy. But how are the people who do like Infowars likely to react? They’re not suddenly going to decide the NY Times and the Wall Street Journal are more trustworthy. They’re going to see it as yet another conspiracy by Facebook to suppress the truth.

Confirmation bias is a hell of a drug, and Facebook trying to push people in one direction is not going to go over well.

To reveal all of this, Zuckerberg apparently invited a bunch of news organizations to talk about it:

Zuckerberg met with a group of news media executives at the Rosewood Sand Hill hotel in Menlo Park after delivering his keynote speech at Facebook’s annual F8 developer conference Tuesday.

The meeting included representatives from BuzzFeed News, the Information, Quartz, the New York Times, CNN, the Wall Street Journal, NBC, Recode, Univision, Barron’s, the Daily Beast, the Economist, HuffPost, Insider, the Atlantic, the New York Post, and others.

We weren’t invited. Does that mean Facebook doesn’t view us as trustworthy? I guess so. So it seems unlikely that he’ll much care about what we have to say, but we’ll say it anyway (though you probably won’t be able to read this on Facebook):

Facebook: You’re Doing It Wrong.

Facebook should never be the arbiter of truth, no matter how much people push it to be. Instead, it can and should be providing tools for its users to have more control. Let them create better filters. Let them apply their own “trust” metrics, or share trust metrics that others create. Or, as we’ve suggested on the privacy front, open up the system to let third parties come in and offer up their own trust rankings. Will that reinforce some echo chambers and filter bubbles? Perhaps. But that’s not Facebook’s fault — it’s part of the nature of human beings and confirmation bias.

Or, hey, Facebook could take a real leap forward and move away from being a centralized silo of information and truly disrupt its own setup — pushing the information and data out to the edges, where the users could have more control over it themselves. And not in the simplistic manner of Facebook’s other “big” announcement of the week about how it’ll now let users opt out of Facebook tracking them around the web (leaving out that it kinda needed to do this to deal with the GDPR in the EU). Opting out is one thing — pushing the actual data control back to the end users and distributing it is something entirely different.

In the early days of the web, people set up their own websites and had pretty much full control over the data and what was done with it. It was much more distributed. Over time we’ve moved more and more to this silo model, in which Facebook is the giant silo where everyone puts their content… and has to play by Facebook’s rules. But with that came responsibility on Facebook’s part for everything bad that anyone did on its platform. And, hey, let’s face it, some people do bad stuff. The answer isn’t to force Facebook to police all the bad stuff; it’s to move back towards a system where information is more distributed, and we’re not pressured into certain content just because Facebook thinks it will lead to the most “engagement.”

Push the content and the data out and focus on the thing that Facebook has always been best at, at its core: the connection function. Connect people, but don’t control all of the content. Don’t feel the need to police the content. Don’t feel the need to decide who’s trustworthy and who isn’t. Be the protocol, not the platform, and open up the system so that anyone else can provide a trust overlay, and let those overlays compete. It would take Facebook out of the business of having to decide what’s good and what’s bad and would give end users much more control.
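To make the “trust overlay” idea a little more concrete, here’s a purely hypothetical sketch — none of this corresponds to any actual Facebook API — in which the platform only hands over a raw feed, and the user picks which third-party (or personal) trust provider re-ranks it:

```python
# Hypothetical sketch of "be the protocol, not the platform": the platform
# exposes a raw feed, and trust scoring is delegated to interchangeable
# providers chosen by the user. All names here are invented for illustration.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    source: str   # e.g. the outlet a shared link points to
    text: str

# A trust provider is just a function from a source to a score in [0, 1].
TrustProvider = Callable[[str], float]

def fact_checker_overlay(source: str) -> float:
    # One third party's rankings — a user could subscribe to this one...
    return {"example-tabloid.test": 0.2, "example-wire.test": 0.9}.get(source, 0.5)

def my_own_overlay(source: str) -> float:
    # ...or bring their own, or blend several together.
    return 0.9 if source == "example-wire.test" else 0.4

def rank_feed(posts: List[Post], provider: TrustProvider) -> List[Post]:
    """Re-rank the raw feed using whichever trust overlay the user selected."""
    return sorted(posts, key=lambda p: provider(p.source), reverse=True)

feed = [Post("a", "example-tabloid.test", "You won't believe this!"),
        Post("b", "example-wire.test", "Wire report")]
for post in rank_feed(feed, fact_checker_overlay):
    print(post.source, "->", post.text)
```

The point isn’t this particular interface; it’s that the scoring lives outside the silo, where users and competing third parties control it.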

Facebook, of course, seems unlikely to do this. The value of the control is that it allows them to capture more of the money from the attention generated on their platform. But, really, if it doesn’t want to keep dealing with these headaches, it seems like the only reasonable way forward.

Filed Under: confirmation bias, fake news, mark zuckerberg, metrics, news, ranking, trust, trustworthy
Companies: facebook

Techdirt Podcast Episode 155: Lies, Damned Lies & Audience Metrics

from the traffic-is-fake dept

In 2016, mostly out of frustration, I wrote a post about how traffic is fake, audience numbers are garbage, and nobody knows how many people see anything. My feelings haven’t changed much, and neither has the digital advertising ecosystem. And since regular podcast co-host Dennis Yang runs a digital metrics company, it only made sense for us to hash it out on an episode all about audience measurement and how it shapes online advertising.

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.

Filed Under: advertising, metrics, podcast, publishing, traffic

Facebook Finds More Broken Metrics, Metrics Industry Rejoices

from the i-bet-they'll-count-the-dollars-accurately dept

Back in September, Facebook issued a mea culpa when it realized it had been accidentally inflating a key video metric for over a year. Now, the company has turned up several more audience metrics that were being miscalculated:

The company publicly disclosed on Wednesday that a comprehensive internal metrics audit found that discrepancies, or “bugs,” led to the undercounting or overcounting of four measurements, including the weekly and monthly reach of marketers’ posts, the number of full video views and time spent with publishers’ Instant Articles.

None of the metrics in question impact Facebook’s billing, said Mark Rabkin, vice president of Facebook’s core ads team.

Facebook is always quick to add that last part, of course — and it’s technically true, though the indirect impact of performance metrics on how much publishers are willing to spend on Facebook ads is somewhat harder to be sure about. But what’s more interesting is Facebook’s plan to fix and improve the metrics going forward:

For starters, Facebook will provide viewability data from third-party metrics companies like Moat and Integral Ad Science for display ad campaigns. Previously, this data was limited to video campaigns.

… In addition, Facebook said it is working with Nielsen to count Facebook video views, including both on-demand views and live viewing, as part of Nielsen’s Digital Content Ratings metric.

… Lastly, Facebook said it plans to form a Measurement Council made up of marketers and ad agency executives, and will also roll out a blog to more regularly communicate updates about measurement.

Well, one thing is clear: fixing Facebook metrics is going to be a huge boon… for the metrics and marketing industries. Big new contracts for metrics companies! Executive jobs on Facebook’s new council! A new strut to prop up the ersatz monster of Nielsen ratings! Millions of dollars will be spent fixing and refining these metrics — which Facebook emphasizes are only four of over 220 it collects. Wow, 220! But online advertising still almost universally sucks, so you’d almost think the quantity of metrics isn’t helping, and might even be optimizing in the wrong direction…

So what exactly are the benefits for publishers and users going to be? Is advertising going to improve in quality? All the pressure on Facebook over this has come from marketing agencies, advertising networks and other tracking and metrics companies. And they’re the ones who are still complaining, since Facebook still doesn’t plan to allow ad buyers to add third-party tracking tags as some, like GroupM (the world’s largest advertising media company), have called for. But given how ultimately useless such metrics generally turn out to be, here’s my question: do these companies actually know or even care if any of these things improve the efficiency or efficacy of advertising’s ultimate goal — connecting consumers with products they want and generating a positive return on investment for advertisers — or does that not really matter, since they can profit just by showing clients fancier charts with more numbers and boasting about more elaborate tracking mechanisms in a whirlwind of marketing-speak about their new, revolutionary approach to serving everyone the same damn ad for a Thai dating website?

I know what I think. But hey, just because a castle is built on sand doesn’t mean Facebook can’t pay for a few new towers, right?

Filed Under: ads, broken metrics, metrics
Companies: facebook

Facebook Video Metrics Crossed The Line From Merely Dubious To Just Plain Wrong

from the fix-it-so-it's-just-normal-broke-again dept

Earlier this week, when I explained how basically all audience metrics are garbage both online and off, I trimmed the specifics on several platforms since knocking them all down in detail would have made that already-hefty post even longer. But, recent revelations about Facebook’s long-running inflation of a key video metric call for a deeper look at the world of Facebook video content and why, yet again, nobody has any idea how many people really see something (and this time, advertisers are unhappy):

Several weeks ago, Facebook disclosed in a post on its “Advertiser Help Center” that its metric for the average time users spent watching videos was artificially inflated because it was only factoring in video views of more than three seconds. The company said it was introducing a new metric to fix the problem.

… Ad buying agency Publicis Media was told by Facebook that the earlier counting method likely overestimated average time spent watching videos by between 60% and 80%, according to a late August letter Publicis Media sent to clients that was reviewed by The Wall Street Journal. … Publicis was responsible for purchasing roughly $77 billion in ads on behalf of marketers around the world in 2015, according to estimates from research firm Recma.

What happened here is actually pretty subtle, so bear with me. Facebook distinguishes “plays” from “views” — with the former being every single play of the video, including those auto-plays that you scroll straight past and never even look at, and the latter being only people who actually watched the video for three seconds or longer. Of course, there are still a million ways in which this metric is itself broken (I’ve certainly let plenty of videos play for more than three seconds or even all the way through while reading a post above or below them) but the distinction is a good one. All of the more detailed stats are based on either plays or views (mostly views) and are clearly labeled, but the one metric at issue was the “Average Duration of Video Viewed.” This metric could be fairly calculated as either the total amount of time from all plays divided by the total number of plays, or the same thing based only on time and number of views — but instead, it was erroneously being calculated as total play time divided by total number of views. In other words, all the second-or-two autoplays from idle newsfeed scrollers were being totalled up, and that time was being distributed among the smaller number of people who stayed on the video for more than three seconds as part of their average duration, leading to across-the-board inflation of that figure.
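A toy example makes the mistake easier to see. The durations below are invented; only the three formulas matter, and the inflated one happens to land in the 60–80% range Publicis was warned about:

```python
# Invented numbers to illustrate the three possible "average duration"
# calculations. Not real Facebook data.

autoplays = [2] * 50            # scrolled-past autoplays, a couple seconds each
real_views = [45, 90, 30]       # plays that lasted 3+ seconds ("views")
all_plays = autoplays + real_views

avg_per_play = sum(all_plays) / len(all_plays)    # ~5.0s  — fair, if unflattering
avg_per_view = sum(real_views) / len(real_views)  # 55.0s  — also fair

# What Facebook was actually reporting: all play time divided by views only.
inflated = sum(all_plays) / len(real_views)       # ~88.3s — ~60% higher than 55s

print(avg_per_play, avg_per_view, inflated)
```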

Now, in some ways this error is minor: it had no impact on billing for promoted videos, since average view time is not a factor there, and all the other more-detailed metrics about view duration were accurate, including a per-second graph of viewer drop-off for each video. But, indirectly, it’s a pretty big deal, because average view duration is a top-line metric for publishers figuring out which content is the most engaging. Beyond that, it’s the key metric for determining whether Facebook Video is truly engaging as a whole, and given the massive explosion of both publishers and advertisers putting all their focus on video recently, it’s worrying to think they might have been doing so at least in part based on a broken, inflated metric.

Of course, none of that changes the fact that even when the metrics are working properly, they most likely still suck. Much of the Facebook video boom has been in the live streaming arena, where publishers like BuzzFeed have been, well, buzzing about “peak concurrent viewer” numbers that rivaled the ratings of major cable networks. But television ratings represent something entirely different from this “peak” figure, and a similar system would likely peg these streams’ audiences at closer to zero. But then again, what we’re talking about here is Nielsen ratings, and I don’t need to reiterate just how many problems those have. All we’re doing is comparing a bunch of vague, hard-to-support and impossible-to-confirm numbers with each other, and ending up with almost no new insight into the reality of audiences.

Still, if the system is going to run on bullshit, it should at least be internally consistent bullshit — so it’s good that this latest Facebook error has been caught and fixed.

Filed Under: advertising, metrics, social media, videos
Companies: facebook

ESPN Gets Nielsen To Revise Its Data To Suggest Cord Cutting's No Big Deal

from the massaging-statistics dept

Thu, Feb 4th 2016 06:25am - Karl Bode

We’ve discussed for years that as an apparatus directly tied to the wallet of the cable and broadcast industry, TV viewing tracking company Nielsen has gladly helped reinforce the cable industry belief that cord cutting was “pure fiction.” Once the trend became too obvious to ignore, Nielsen tried to bury cord cutting — by simply calling it something else in reports. And while Nielsen was busy denying an obvious trend, it was simultaneously failing to track TV viewing on emerging platforms, something the company still hasn’t fully incorporated.

We’ve also been talking about how ESPN has been making the rounds, trying to “change the narrative” surrounding cord cutting to suggest that worries about ESPN’s long-term viability in the face of TV evolution have been overblown. Part of that effort this week apparently involved reaching out to Nielsen to demand the company fiddle with its cord cutting numbers, which ESPN then peddled to reporters in the hopes of creating an artificial, rosier tomorrow:

“On Thursday, ESPN reached out to reporters to let them know that cord-cutting isn’t nearly as bad as it sounds, and that the reason is the way Nielsen revised its pay-TV universe estimates. Nielsen (under client pressure) decided to remove broadband-only homes from its sample, but it didn’t restate historical data. It is now showing that, as of December, 1.2 million homes had cut the cord, a much smaller number than its earlier figure of 4.33 million homes for the year.”

Isn’t that handy! This of course isn’t the first time Nielsen has tweaked troubling numbers on demand to appease an industry eager to believe its cash cow will live forever. The irony is that the same industry that’s happy to gobble up potentially distorted data is simultaneously deriding Nielsen out of the other corner of its mouth as a company whose data is no longer reliable in the modern streaming video age. In a profile piece examining Nielsen’s struggle to adapt, the New York Times (and Nielsen itself) puts the problem rather succinctly:

“Yet Nielsen is established on an inherent conflict that can impede the adoption of new measurement methods. Nielsen is paid hundreds of millions of dollars a year by the television industry that it measures. And that industry, which uses Nielsen’s ratings to sell ads, is known to oppose changes that do not favor it. ‘People want us to innovate as long as the innovation is to their advantage,’ Mr. Hasker said.”

Obviously getting a distrusted metric company to fiddle with data even further won’t save ESPN. The company’s SEC filings still suggest ESPN lost 7 million subscribers in the last few years alone. Some of these subscribers have cut the cord, but others have simply “trimmed” the cord — signing up for skinny bundles that have started to boot ESPN out of the core TV lineup. Similarly, studies have recently shown that 56% of ESPN users would drop ESPN for an $8 reduction on their cable bill. This sentiment isn’t going to magically go away as alternative viewing options increase.

BTIG analyst Rich Greenfield, who funded that survey and has been a thorn in ESPN’s side for weeks (for, you know, highlighting facts and stuff), had a little advice for ESPN if it’s worried about accurate data:

“If this is an important issue for ESPN, they should start releasing actual subscriber numbers rather than relying on third parties [Nielsen]. If they are upset with the confusion, let’s see the actual number of paying subscribers in the US over five years.”

Wall Street’s realization that ESPN may not fare well under the new pay TV paradigm at one point caused $22 billion in Disney stock value to simply evaporate. As a result, ESPN executives have addressed these worries in the only way they know how: by massaging statistics and insulting departing subscribers by claiming they were old and unwanted anyway. One gets the sneaking suspicion that’s not going to be enough to shelter ESPN from the coming storm.

Filed Under: cord cutting, data, denial, measurement, metrics
Companies: espn, nielsen

Yahoo Pumps Up Viewership Numbers For NFL Game By Autoplaying It On Your Yahoo Home Page

from the you're-not-helping dept

Over the summer, when the NFL and Yahoo inked a deal for a one-game test run of an NFL game exclusively streamed by a service provider, I tried to temper everyone’s excitement. Baby steps, is what I called the deal, which it absolutely was. In many ways, the NFL either set this all up to minimize risk to its reputation and revenue, or set it up to fail, depending on who you believe. The game featured two teams expected to be bad, with followings and markets on the smaller side of the league, and the game was played overseas in the UK absurdly early in the morning in all the time zones state-side. That meant that the game would never have the viewership that a prime-time matchup between two good teams might have, but that probably worked for the NFL’s test run, in that any failure would be minimized for all the reasons listed above.

Well, the streamed game happened this past weekend. The results? Pretty good, actually.

Sunday’s live stream marked the first time that a single company had distributed an NFL telecast all around the world via the Internet. It was widely seen as a test for future NFL streaming efforts. The NFL said Monday that it was “thrilled with the results.” Technical reviews were mixed, with many fans reporting a seamless live stream, while others ran into connectivity trouble. Average viewership per minute was high by web live-streaming standards, but low by NFL TV standards.

The average viewership per minute reported by Yahoo was about 2.5 million, which is quite large by streaming standards. By comparison, though, that viewership is something like a third or a quarter of the viewership for most NFL television broadcasts. Still, considering the game started in the wee hours of the morning (4:30am Central Time, for instance), nobody was scoffing at the numbers. And, as noted above, the NFL announced it was “thrilled” with the results.

And that really should have been enough. Unfortunately, Yahoo also stated that 15 million viewers had watched at least part of the live stream as well, which was quickly shown to be largely bullshit the company concocted by counting, oh, anyone who visited Yahoo’s home page.

All morning long, if you visited Yahoo.com on a PC, you were greeted with an autoplay stream of the broadcast, including commercials, but without sound. Yahoo says 43 million people a day visit its homepage. That number is presumably lower on a Sunday morning. But if it can get a big chunk of those visitors to see a couple minutes each, it will be in good shape — at least by the low bar it laid out for itself.
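It’s worth being clear about why those two numbers aren’t comparable. “Average viewers per minute” is total watch time spread over the length of the broadcast, while “watched at least part of the stream” counts every unique person — including everyone whose only exposure was a muted autoplay on the homepage. A rough sketch with invented figures:

```python
# Invented figures: why a muted homepage autoplay pads "15 million viewers"
# while barely moving the average-per-minute number Yahoo also reported.

game_minutes = 200

# (number of people, minutes each actually had the stream playing)
audiences = [
    (2_000_000, 180),    # fans who watched most of the game
    (1_000_000, 60),     # casual viewers
    (12_000_000, 2),     # homepage visitors who scrolled past the autoplay
]

unique_viewers = sum(people for people, _ in audiences)
avg_per_minute = sum(people * minutes for people, minutes in audiences) / game_minutes

print(f"{unique_viewers / 1e6:.0f}M 'watched at least part of the stream'")  # 15M
print(f"{avg_per_minute / 1e6:.2f}M average viewers per minute")             # ~2.2M
```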

You’re not helping, Yahoo. Look, wider streaming of professional sports is going to happen. And advertisers and leagues are going to end up coming along for the ride because, no matter what your local cable company tells you, cord-cutting is a thing and it isn’t slowing down. But trying these little gimmicks to fudge the numbers will only set back the willingness of leagues and advertisers to jump into this. It creates a trust problem, similar to the one created by other internet advertising gimmicks, where ads are reported as “seen” after being autoplayed, whether anyone truly watched them or not. This was a big moment for those of us who believe streaming sports is the future. Whatever you think of the viewership results, they didn’t disappoint the principal players involved in this entertainment game. For Yahoo to muddy the waters in such a transparent and obvious way was silly, as it can only hurt its own effort to secure future streaming deals.

Still, the rest of the news about the streamed NFL game was positive.

Filed Under: football, metrics, streaming, video, viewership
Companies: nfl, yahoo

After Calling Cord Cutting 'Purely Fiction' For Years, Nielsen Decides Just Maybe It Should Start Tracking Amazon, Netflix Viewing

from the selective-hearing dept

Wed, Nov 26th 2014 06:14am - Karl Bode

For years, we’ve discussed that while cord cutting is a very real (though slow growing) phenomenon, the broadcast and cable industry has done its best to pretend that it doesn’t exist. For years, the industry blamed the slow trickle of defecting users on the recession or the sluggish housing market, and when the data failed to support that claim, they began going out of their way to argue that these users were middle-aged losers living in mom’s basement and therefore irrelevant. In fact, data shows that cord cutters are usually young, gainfully employed, and highly educated users who make plenty of money.

This silly denial included the TV ratings measurement firm Nielsen, which for several years insisted that cord cutting was “purely fiction.” When it became clear that cord cutting was very real, Nielsen didn’t admit error. It simply stopped calling it cord cutting and started calling it “zero TV households.”

Except here’s the rub: all that time that Nielsen was downplaying cord cutting, it wasn’t bothering to actually measure it. It was only late last year that Nielsen began to at least try tracking television viewers on PCs, tablets and phones (something still not fully implemented), and the firm only just announced last week that it would soon begin tracking Netflix viewing (did I mention that it’s 2014?). Shockingly, guess what the preliminary Nielsen data leaked to the Journal indicates?

“The Nielsen documents also contain some of the strongest data to date suggesting that time spent on these streaming services is meaningfully eating into traditional television viewing. Television viewing is down 7% for the month ended Oct. 27 from a year earlier among adults 18 to 49, a demographic that advertisers pay a premium to reach. Meanwhile, subscribership to streaming video services has jumped to 40% of households in September, up from 34% in January, Nielsen found. That is a rate of growth that advertising agency executives who saw the Nielsen document said they found shocking. Netflix accounts for the vast majority of the viewership.”

That’s on top of the small but meaningful net loss of 150,000 pay TV customers last quarter, including the first-ever third-quarter net loss for companies like DirecTV. Nielsen, like broadcast executives, has a vested interest in propping up the legacy cash cow and burying its head squarely in the sand, hoping the obvious cord cutting phenomenon is akin to yeti and unicorns. We’ve seen an increasing number of top telecom and cable industry analysts who spent years insisting cord cutting wasn’t real, only to sheepishly and quietly change their tune over the last year. Now that Nielsen’s actually bothering to measure the data, it should be only a matter of time before it too admits it was wrong, right?

Filed Under: analytics, cord cutting, measurement, metrics, online streaming, online video
Companies: amazon, netflix, nielsen

James Clapper Still Trying To Distance Intelligence Community From The Bogus 'Foiled Plots' Metric He Came Up With

from the but-of-course dept

In the Senate Intelligence Committee hearing on Wednesday, where James Clapper effectively accused journalists who reported on the Snowden documents of being criminals, he also tried to distance himself from the fact that the NSA’s surveillance activities have been basically useless. During the hearing, he attempted to claim that looking at “foiled plots” is the “wrong metric.”

Clapper answered by saying “it’s an important tool,” but he also added that “simply using the metric of plots foiled is not necessarily a way to get at the value of the program.”

This is not the first time he’s tried this. Last year, he started claiming that “peace of mind” was the better metric. Of course, that resulted in people pointing out that it would provide a lot more “peace of mind” if the NSA weren’t collecting all of our information all the time.

But there’s another issue here: the only reason Clapper now has to distance himself from that metric is that he and Keith Alexander made it the metric in the first place, trotting out the whole bogus 54 “terrorist events foiled” line to defend the program — only to see that number fall apart under scrutiny, to the point where it became clear the bulk metadata collection had stopped basically no US terrorist plots at all.

So, sure, Clapper wants to get away from that number now. But, perhaps he should have not brought it up in the first place in trying to defend the program…

Filed Under: foiled plots, james clapper, metrics, nsa, surveillance

DailyDirt: Measuring Scientific Impact Is Far From Simple

How do you measure the impact of a scientist’s research? Some common metrics include the number of publications in peer-reviewed and high-impact journals, the number of citations, etc. But it’s more complicated than just using the quantity and quality of a scientist’s peer-reviewed publications to determine their significance in the scientific community. Here are a few more things to consider.
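One of the metrics filed under this post, the h-index, shows just how blunt these measures can be: a researcher has index h if h of their papers each have at least h citations. A minimal sketch of the calculation:

```python
def h_index(citations: list[int]) -> int:
    """A researcher has index h if h of their papers have >= h citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Two very different publication records can land on the same number:
print(h_index([100, 90, 80, 4, 3]))    # 4 — a few heavily cited papers
print(h_index([5, 5, 4, 4, 4, 2]))     # 4 — many modestly cited ones
```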

If you’d like to read more awesome and interesting stuff, check out this unrelated (but not entirely random!) Techdirt post via StumbleUpon.

Filed Under: academics, citations, co-authors, h-index, merit, metrics, peer review, publication, science, scientific impact, tenure