AT&T Provided FCC Bunk Broadband Availability Data Across 20 States
from the driving-blind dept
We’ve noted repeatedly that despite a lot of talk from U.S. leaders and regulators about the “digital divide,” the United States doesn’t actually know where broadband is available. Historically the FCC has simply trusted major ISPs — with a vested interest in downplaying coverage and competition gaps — to tell the truth. The FCC’s methodology has also long been flawed, considering an entire area to be connected if just one home in a census tract has service. The results are ugly: the FCC’s $350 million broadband availability map all but hallucinates broadband availability and speed (try it yourself).
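The overstatement baked into that methodology is easy to see with a toy example. The sketch below uses made-up sample data (not real FCC figures) to contrast actual home-level coverage with Form 477-style block-level accounting, where one served home marks the whole block as served:

```python
# Toy illustration of how "one served home = whole block served" inflates coverage.
# Census blocks and per-home service flags are invented sample data, not FCC data.

blocks = {
    "block_A": [True, False, False, False],   # 1 of 4 homes actually served
    "block_B": [False, False, False, False],  # no service at all
    "block_C": [True, True, True, True],      # fully served
}

total_homes = sum(len(homes) for homes in blocks.values())
homes_served = sum(sum(homes) for homes in blocks.values())

# Form 477-style accounting: a block counts as "served" if ANY home in it has service.
homes_reported_served = sum(
    len(homes) for homes in blocks.values() if any(homes)
)

print(f"actual coverage:   {homes_served / total_homes:.0%}")           # 42%
print(f"reported coverage: {homes_reported_served / total_homes:.0%}")  # 67%
```

With just one partially served block, "reported" coverage jumps from 42% to 67% — and the gap only grows as blocks get larger and sparser.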
As pressure mounts on the agency to finally improve its broadband mapping, the scope of the problem continues to come into focus. Like this week, when AT&T was forced to acknowledge that the company provided the FCC with inaccurate broadband availability data across 20 states, impacting some 3,600 census blocks:
“AT&T disclosed the error to the FCC in a filing a week ago. The filing provides “a list of census blocks AT&T previously reported as having broadband deployment at speeds of at least 25Mbps downstream/3 Mbps upstream that AT&T has removed from its Form 477 reports.” The 78-page list includes nearly 3,600 blocks.”
You’ll recall that last year, Ajit Pai tried to claim that his “deregulatory agenda” (read: gutting oversight of an uncompetitive and hugely unpopular business sector) resulted in some amazing broadband expansion. Only later was it revealed that much of this growth either was triggered by things Pai’s FCC had nothing to do with (like fiber build-out requirements affixed to AT&T’s 2015 merger with DirecTV by the previous FCC), or a broadband mapping blunder by a small provider by the name of BarrierFree, which overstated its footprint to the FCC by a cool 1.5 million locations.
AT&T insists this latest error was caused by a “software bug,” and while the mistake is relatively small compared to AT&T’s overall service area, consumer groups are a little curious how it could have gone unnoticed for the better part of two years:
“Aside from one even bigger error by an ISP called BarrierFree last year, Turner said he hasn’t “seen any other ISP reporting error like this before” and that “it is curious that the [AT&T] error may have gone unnoticed for 2-plus years.” … “While relatively small errors like this don’t end up changing conclusions about national trends, it certainly can impact the FCC decisions about where to spend, and where to not spend, scarce subsidy funds,” Turner said. “AT&T should be quite a bit more forthcoming about the exact nature of this error and how it discovered it, so that other ISPs can be sure they’re not making similar errors.”
While there’s no evidence of intentional under-reporting by AT&T, the timing is curious all the same.
After several decades of complaints, pressure has mounted on the FCC and Congress to actually do something as states vie for additional deployment subsidies. That culminated in the recent passage of the Broadband Deployment Accuracy and Technological Availability (DATA) Act, which requires the FCC to use more precise geolocation and crowdsourced data to build better maps, and to actually verify where broadband’s available before doling out billions in subsidies or issuing policy (fancy that!).
It will take years to complete, the FCC has warned it can’t afford to finish the job without more funding, and the industry, which has spent years lobbying against mapping improvements for obvious reasons, could still find ways to either scuttle the effort or make access to the data difficult. Still, baby steps and all that. There are at least indications that the “what US broadband competition problem?” telecom policy set finally realizes this is a problem that needs fixing, even if truly better broadband maps are still several years away.
Filed Under: broadband, broadband map, data, digital divide, fake data, fcc
Companies: at&t
Miami Cops Flood Waze With Bogus Speed Trap Data, Don't Understand How Crowd Sourcing Works
from the you-think-you're-being-clever dept
Wed, Feb 11th 2015 06:12am - Karl Bode
We’ve been discussing how law enforcement organizations have started ramping up their war on the Google-owned, traffic info crowdsourcing app, Waze, in the belief that it’s hindering local revenue generation. More specifically, they’ve been trying to stop the app and its users from reporting police speed trap locations, going so far as to make the absurd argument that the app allows citizens to become police “stalkers.” Of course, as noted previously, these officers are usually in plain sight and obviously marked, meaning if you really had an insane hankering to annoy a cop, you could certainly do it without an app. It’s also worth reminding officers that Waze users are simply having a perfectly legal conversation (just like flashing headlights or even holding up signs is legal), at least for now.
With the “mean old citizens are stalking us” defense apparently not working so well, some law enforcement agencies are turning to another, more clever (or so they think) solution: pollute Waze’s data with false police speed trap locations. Officers in Miami have apparently taken to downloading the Waze app themselves just so they can flood the app with inaccurate data:
“Hundreds of officers in the Miami area have downloaded the app, which lets users provide real-time traffic information and identify areas where police are conducting speed enforcement. The local NBC affiliate says the officers are flooding Waze with false information on their activity in an attempt to make the app’s information less useful to drivers. Disclosing the location of police officers “puts us at risk, puts the public at risk, because it’s going to cause more deadly encounters between law enforcement and suspects,” Sgt. Javier Ortiz, president of the Miami Fraternal Order of Police, tells the news outlet.”
This was apparently something some Los Angeles homeowners tried as well late last year, when they reported false congestion to the app in the hopes of lessening local traffic load. Of course, the very nature of crowdsourced apps like this means that repeated false reports and unreliable users get weeded out, both by the system itself and by more trustworthy reports from reliable Waze users with higher reputation scores. Even if this dumb idea worked, and all Miami Waze users were confused into thinking speed traps were everywhere, wouldn’t they just drive slower and ruin revenue generation (what this is really about) anyway?
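That weeding-out process can be sketched in a few lines. The scoring scheme, function name, and threshold below are invented for illustration — Waze’s actual algorithm is not public — but they capture the basic idea: weigh each report by the reporter’s track record, so a pile of throwaway accounts loses to one trusted user:

```python
# A minimal sketch of reputation-weighted crowdsourced reports, loosely in the
# spirit of how an app like Waze can down-rank spam. The weights and threshold
# are made up for illustration; Waze's real algorithm is not public.

def trap_confidence(reports, threshold=1.0):
    """Sum reporter reputation for and against; accept the trap report only
    if the net weight clears the threshold."""
    net = sum(rep if confirms else -rep for rep, confirms in reports)
    return net >= threshold

# Each report is (reporter_reputation, confirms_speed_trap).
reports = [
    (0.2, True),   # brand-new account says "trap here"
    (0.2, True),   # another low-reputation account agrees
    (0.9, False),  # long-standing, reliable user drives by and sees nothing
]
print(trap_confidence(reports))  # two low-reputation claims lose to one trusted denial
```

In a scheme like this, the Miami officers’ burner accounts would start with negligible reputation, and every bogus pin contradicted by real drivers would drag their weight down further — which is exactly why flooding a crowdsourced app rarely works for long.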
All the Miami police force is doing is wasting time and taxpayer money in a war on perfectly legal conversation. In fact, you could argue they’re doing something worse by eroding their own safety. As it stands the Waze app isn’t specifically singling out speed traps — it allows users to mark any police location. As in, it allows users to mark any emergency vehicle at the side of the road for any reason, notifying Waze users that they should slow down. If this was truly about public safety and not revenue generation, you’d think this would at least be part of the conversation.
Still, law enforcement associations are increasing pressure on politicians (like Chuck Schumer), and Google’s shown at least some flexibility on this. For me personally, it’s all kind of a moot point anyway. I drove from New York to Seattle and back again last summer and found that police move positions so frequently, Waze probably indicated an accurate speed trap location around a third of the time anyway. Still, you’d hate to see any app made less useful just because it hurts a police department’s ability to turn public protection into a major revenue stream.
Filed Under: crowdsourcing, fake data, miami, police, waze
Companies: google, waze
DailyDirt: Mistakes In Science Publishing
from the urls-we-dig-up dept
It’s amazing some of the stuff that gets published in peer-reviewed scientific journals these days. For example, recently there was a paper published in a peer-reviewed journal in which the images appeared to be photoshopped. The photoshopping was so badly done that it was obvious upon looking at the images that they were doctored. The paper was withdrawn after this was discovered, but why didn’t the journal editors catch this before it was published? Here are some other examples of questionable things that have made their way into journals.
- The supporting information for a recently published chemistry paper contained an editorial note that was inadvertently left in the published document. Not only did the journal’s editors fail to catch this, but the note apparently shows one author telling another to make up fake data: “_Emma, please insert NMR data here! where are they? and for this compound, just make up an elemental analysis…_” [url]
- A bizarre and completely unintelligible journal article that was recently published in a peer-reviewed journal has people wondering if it’s a joke. The author spends most of the time explaining, in the most convoluted and incomprehensible manner, what the paper is apparently about, without really telling the audience what the paper is about. [url]
- The number of retracted scientific papers is increasing — but not necessarily because more scientists are fabricating, falsifying, or modifying data. It’s more likely because there is now an increased awareness of research misconduct, a greater audience thanks to the internet, and better software to detect plagiarism, image manipulation, etc. The blog Retraction Watch keeps track of scientific papers that have been retracted. [url]
- Check out some of last year’s worst scientific mistakes, missteps, and misdeeds. These include at least one author who faked the e-mail addresses of and impersonated his paper’s reviewers. [url]
If you’d like to read more awesome and interesting stuff, check out this unrelated (but not entirely random!) Techdirt post via StumbleUpon.
Filed Under: authorship, fake data, fraud, journals, peer review, publishing, retracted paper, science