disclosure – Techdirt

ExTwitter Accused Of Secretly Boosting Mr. Beast Video With Undisclosed Ad Placements

from the anyone-inform-the-ftc? dept

It’s kind of pathetic how desperate Elon Musk is to convince people that ExTwitter is a good platform for posting their video content. I assume it’s a perfectly okay place to post videos, but it’s hardly where most people go to watch videos, and Elon may discover sooner or later just how difficult it is to change the overall perception.

In recent weeks, Elon’s taken to begging big influencers to post on ExTwitter and retweeting misleading tweets from other users claiming that ExTwitter pays out way more than other sites (there are many, many asterisks attached to what ExTwitter actually pays out).

But one of his big targets has been the YouTube super influencer, MrBeast. If you somehow don’t know of MrBeast, he’s a massive YouTube star who makes elaborate and expensive videos. And Elon has been bugging him to post on ExTwitter basically ever since he took over the company.

Just a few weeks ago, MrBeast (real name Jimmy Donaldson) responded to the latest request to upload his videos to ExTwitter by pointing out that he didn’t think the revenue share from ExTwitter could help him out. “My videos cost millions to make and even if they got a billion views on X it wouldn’t fund a fraction of it.” Ouch.


Still, just a few weeks later, MrBeast decided to test it out, posting a new video to ExTwitter and saying explicitly that he wanted to see how well it performed.


Now, I’d argue that the view count on the video is not the most accurate measure. For example, I just went to find that tweet to screenshot it, and the video started playing immediately, even though I never clicked on it to play. So, I’m guessing a lot of the views are… not real.

But, either way, people are pointing out that it appears someone is juicing the views anyway, by promoting the video post as an ad… but without the (required by law) disclosure that it’s an ad. This certainly suggests that it’s being done by ExTwitter itself, rather than MrBeast directly. If it were being done by MrBeast or someone else, then it would say that it was a promoted/advertised slot. The fact that it’s hidden suggests the call is coming from inside the house.

The evidence that it’s an undisclosed ad is pretty strong. People are seeing it show up in their feeds without the time/date of the post, which is something that only happens with ads. Other tweets show that info.


Even stronger proof? If you click on the three dots next to the tweet… it says “Report ad” and “Why this ad?” which, um, is pretty damning.
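For what it’s worth, those tells are easy enough to check for mechanically. Below is a rough sketch, assuming you’ve saved the rendered HTML of a timeline entry locally; the markers it looks for (a “Why this ad?” menu item, a “Promoted”/“Ad” label, a <time> element for the timestamp) are assumptions about how X’s web client renders posts, not documented selectors.

```python
# Rough sketch (not X's documented markup): check a saved timeline entry
# for the ad tells described above.
from bs4 import BeautifulSoup

def looks_like_undisclosed_ad(entry_html: str) -> bool:
    soup = BeautifulSoup(entry_html, "html.parser")
    has_timestamp = soup.find("time") is not None
    has_ad_menu = soup.find(string=lambda s: s and "Why this ad?" in s) is not None
    has_ad_label = soup.find(string=lambda s: s and s.strip() in ("Promoted", "Ad")) is not None
    # Served with the ad menu, but missing both the timestamp and the usual ad label
    return has_ad_menu and not has_timestamp and not has_ad_label

# hypothetical saved snippet of the post as it appeared in someone's timeline
with open("timeline_entry.html") as f:
    print(looks_like_undisclosed_ad(f.read()))
```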


Cody Johnston notes that he hasn’t updated his Twitter app in ages, and on the old app the post is properly designated as a “Promoted” tweet, which is how ads were normally disclosed.


Elon is denying that he’s done anything to goose the numbers, but the evidence suggests someone at the company is doing so, whether or not Elon knows about it.


Of course, the evidence still suggests otherwise. Meanwhile, Ryan Broderick was told by an ExTwitter employee that they don’t have to label promoted tweets that have videos because there’s also a pre-roll ad before the video, and that is disclosed. Of course, that… makes no sense at all. Those are two totally separate things, and not labeling the promoted tweet is a likely FTC violation (and potentially fraudulent in misrepresenting to people how much they might make from videos posted to the platform).


Anyway, beyond raising even more questions about how (and how much) ExTwitter is actually paying content creators, it seems like this might just create a whole new headache for the company.

Filed Under: advertisements, disclosure, elon musk, ftc, mr beast, revenue share
Companies: twitter, x

OpenAI Says GPT-4 Is Great For Content Moderation, But Seems… A Bit Too Trusting

from the i-can't-block-that,-dave dept

In our Moderator Mayhem mobile browser game that we released back in May, there are a couple of rounds where some AI is introduced to the process, as an attempt to “help” the moderators. Of course, in the game, the AI starts making the kinds of mistakes that AI makes. That said, there’s obviously a role for AI and other automated systems within content moderation and trust & safety, but it has to be done thoughtfully, rather than thinking that “AI will solve it.”

Jumping straight into the deep end, though, is OpenAI, which just announced how they’re using GPT-4 for content moderation, and making a bunch of (perhaps somewhat questionable) claims about how useful it is:

We use GPT-4 for content policy development and content moderation decisions, enabling more consistent labeling, a faster feedback loop for policy refinement, and less involvement from human moderators.

The explanation of how it works is… basically exactly what you’d expect. They write policy guidelines, feed them to GPT-4, and then run it against some content to see how it performs against human moderators using the same policies. It’s also set up so that a human reviewer can “ask” GPT-4 why it classified some content a certain way, though it’s not at all clear that GPT-4 will give an accurate answer to that question.
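Mechanically, that pattern is just “policy in the prompt, content in, label out.” Here’s a minimal sketch of the idea using the OpenAI Python SDK; the policy text, labels, and prompt wording are illustrative assumptions on my part, not OpenAI’s actual internal setup.

```python
# Minimal "policy as prompt" moderation sketch -- not OpenAI's internal pipeline.
# Assumes the openai package (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

POLICY = """You are a content moderator. Apply this (illustrative) policy:
- REMOVE: credible threats of violence, doxxing.
- FLAG: borderline harassment, likely spam.
- ALLOW: everything else.
Respond with exactly one label (ALLOW, FLAG, or REMOVE) and a one-sentence reason."""

def moderate(content: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # reduces, but does not eliminate, run-to-run variation
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": f"Content to review:\n{content}"},
        ],
    )
    return resp.choices[0].message.content.strip()

print(moderate("You'd better watch your back after what you posted."))
```

Note that the “ask it why” step is just another prompt against the same model, which is part of why there’s no guarantee the explanation reflects whatever actually produced the label.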

The point that OpenAI makes about this is that it allows for faster iteration, because you can roll out new policy changes instantly, rather than having to get a large team of human moderators up to speed. And, well, sure, but that doesn’t necessarily deal with most of the other challenges having to do with content moderation at scale.

The biggest question for me, knowing how GPT-4 works, is how consistent the outputs are. The whole thing about LLM tools like this is that every time you give one the same input, it may give wildly different outputs. That’s part of the fun of these models: rather than looking up the “correct” answer, they create an answer on the fly by taking a probabilistic approach to figuring out what to say. Given that one of the big complaints about content moderation is “unfair treatment,” this seems like a pretty big question.
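If you wanted to put a rough number on that worry, the crude test is to feed the same item through the same policy prompt repeatedly and count how often the label changes. A quick sketch, reusing the hypothetical moderate() helper from the sketch above:

```python
from collections import Counter

def label_stability(content: str, runs: int = 20) -> Counter:
    """Tally the leading label from repeated runs of the same item."""
    tally = Counter()
    for _ in range(runs):
        verdict = moderate(content)  # hypothetical helper from the earlier sketch
        tally[verdict.split()[0].strip(":").upper()] += 1
    return tally

# Counter({'ALLOW': 20}) is what consistency looks like; something like
# Counter({'FLAG': 13, 'ALLOW': 7}) is the "unfair treatment" complaint in miniature.
print(label_stability("That group should be wiped off the map... in the game we're playing tonight."))
```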

But not just in the most direct manner: one of the reasons there are so many complaints about “unfair treatment” of “similar” content is that a trust & safety team often really needs to understand the deeper nuance and context behind claims. Stated more explicitly: malicious and nefarious users often try to hide or justify their problematic behavior by presenting it in a form that mimics content from good actors… and then use that to play victim, pretending there’s been unfair treatment.

This is why some trust & safety experts talk about the “I wasn’t born yesterday…” test that a trust & safety team needs to apply in recognizing those deliberately trying to game the system. Can AI handle that? As of now there’s little to no indication that it can.

And this gets even more complex when put into the context of the already problematic DSA in the EU, which requires that platforms explain their moderation choices in most cases. Now, I think that requirement is extremely problematic for a variety of reasons, and is likely going to lead to even greater abuse and harassment, but it will soon be the law in the EU and then we get to see how it plays out.

But, how does that work here? Yes, GPT-4 can “answer” the question, but again, there’s no way to know if that answer is accurate, or if it’s just saying what it thinks the person wants to hear.

I definitely think that AI tools like GPT-4 will be helpful in handling trust & safety issues. There is a lot they can do to assist a trust & safety team. But we should be realistic about their limits, and about where they can best be put to use. And OpenAI’s description sounds naively optimistic about some things, and just ill-informed about others.

Over at Platformer, Casey Newton got much more positive responses about this offering from a bunch of experts (all of them experts I trust), noting that, at the very least, it might be useful in raising the baseline for trust & safety teams, letting the AI handle the basic stuff and pass the thornier problems along for humans to handle. For example, Dave Willner, who until recently ran trust & safety for OpenAI, noted that it’s very good in certain circumstances:

“Is it more accurate than me? Probably not,” Willner said. “Is it more accurate than the median person actually moderating? It’s competitive for at least some categories. And, again, there’s a lot to learn here. So if it’s this good when we don’t really know how to use it yet, it’s reasonable to believe it will get there, probably quite soon.”

Similarly, Alex Stamos from the Stanford Internet Observatory said that in testing various AI systems, his students found GPT-4 to be really strong:

Alex Stamos, director of the Stanford Internet Observatory, told me that students in his trust and safety engineering course this spring had tested GPT-4-based moderation tools against their own models, Google/Jigsaw’s Perspective model, and others.

“GPT-4 was often the winner, with only a little bit of prompt engineering necessary to get to good results,” said Stamos, who added that overall he found that GPT-4 works “shockingly well for content moderation.”

One challenge his students found was that GPT-4 is simply more chatty than they are used to in building tools like this; instead of returning a simple number reflecting how likely a piece of content is to violate a policy, it responded with paragraphs of text.

Still, Stamos said, “my students found it to be completely usable for their projects.”

That’s good to hear, but it also… worries me. Note that both Willner and Stamos highlighted how it was good with caveats. But in operationalizing tools like this, I’m wondering how many companies are going to pay that much attention to those caveats as opposed to just going all in.

Again, I think it’s a useful tool. I keep talking about how important it is, in all sorts of areas, to look at AI as a tool that helps raise the baseline for all sorts of jobs, and how that could even revitalize the middle class. So, in general, this is a good step forward, but there’s a lot about it that makes me wonder whether those implementing it will really understand its limitations.

Filed Under: ai, consistency, content moderation, disclosure, gpt-4, llm
Companies: openai

Google Strikes $9.4 Million Settlement With FTC For Paying DJs And Influencers To Praise Phones They Never Touched

from the artificial-enthusiasm dept

Fri, Dec 2nd 2022 01:54pm - Karl Bode

The FTC and four state attorneys general this week struck a $9.4 million settlement with Google over allegations that Google covertly paid celebrities money to promote a phone none of them had ever used.

The FTC’s announcement states that the agency had previously filed suit against Google and iHeartMedia for airing nearly 29,000 deceptive endorsements by radio personalities and influencers, promoting their use of and experience with Google’s Pixel 4 phone in 2019 and 2020. The FTC and state AGs said the DJs and influencers had never actually so much as touched the phones, violating truth in advertising rules:

“It is common sense that people put more stock in first-hand experiences. Consumers expect radio advertisements to be truthful and transparent about products, not misleading with fake endorsements,” said Massachusetts Attorney General Maura Healey. “Today’s settlement holds Google and iHeart accountable for this deceptive ad campaign and ensures compliance with state and federal law moving forward.”

Of course, this kind of obscured financial relationship is happening constantly, especially in the influencer space. But like most U.S. regulators, the FTC lacks the staff, finances, or overall resources to police this stuff with any meaningful consistency. So instead, they occasionally fire a warning shot over the bow of the biggest and worst offenders, in the hopes that it scares others into behaving.

The Pixel 4 is a three-generation-old phone, so, as usual, any regulatory action on this kind of stuff happens pretty late, if it happens at all. It sounds like Google would have been fine if it had just had the influencers more generally imply that they loved the phone; it was the phony first-person endorsements that got Google and iHeartMedia in trouble.

More generally, poorly or non-disclosed influencer marketing arrangements are everywhere, and the FTC’s simply too inundated with other responsibilities to take aim at the problem with any real consistency. Still, the agency issued warnings to 700 companies in 2021 that it was at least paying attention to the problem, something that can’t be said of previous incarnations of the agency.

Filed Under: disclosure, ftc, influencers, marketing, phones, regulatory enforcement
Companies: google

The Problem With The Otherwise Very Good And Very Important Eleventh Circuit Decision On The Florida Social Media Law

from the the-hole-in-the-donut dept

There are many good things to say about the Eleventh Circuit’s decision on the Florida SB 7072 social media law, including that it’s a very well-reasoned, coherent, logical, sustainable, precedent-consistent, and precedent-supporting First Amendment analysis explaining why platforms moderating user-generated speech still implicates their own protected rights. And not a moment too soon, while we wait for the Supreme Court to hopefully grant relief from the unconstitutional Texas HB20 social media bill.

But there’s also a significant issue with it: it only found most of the provisions of SB 7072 presumptively unconstitutional, so some of the law’s less obvious, yet still pernicious, provisions have been allowed to go into effect.

These provisions include the need to disclose moderation standards (§501.2041(2)(a)) (the court only took issue with needing to post an explanation for every moderation decision), disclose when the moderation rules change (§501.2041(2)(c)), disclose to users the view counts on their posts (§501.2041(2)(e)), disclose that it has given candidates free advertising (§106.1072(4)), and give deplatformed users access to their data (§501.2041(2)(i)). The analysis gave short shrift to these provisions that it allowed to go into effect, despite their burdens on the same editorial discretion the court overall recognized was First Amendment-protected, despite the extent that they violate the First Amendment as a form of compelled speech, and despite how they should be pre-empted by Section 230.

Of course, the court did acknowledge that these provisions might yet be shown to violate the First Amendment. For instance, in the context of the data-access provision the court wrote:

It is theoretically possible that this provision could impose such an inordinate burden on the platforms’ First Amendment rights that some scrutiny would apply. But at this stage of the proceedings, the plaintiffs haven’t shown a substantial likelihood of success on the merits of their claim that it implicates the First Amendment. [FN 18]

And it made a somewhat similar acknowledgment for the campaign advertising provision:

While there is some uncertainty in the interest this provision serves and the meaning of “free advertising,” we conclude that at this stage of the proceedings, NetChoice hasn’t shown that it is substantially likely to be unconstitutional. [FN 24]

And for the other disclosure provisions as well:

Of course, NetChoice still might establish during the course of litigation that these provisions are unduly burdensome and therefore unconstitutional. [FN 25]

Yet because the court could not already recognize how these rules chill editorial discretion, they will now get the chance to do exactly that. For example, it is unclear how a platform could even comply with them, especially a platform like Techdirt (or Reddit, or Wikimedia) that uses community-based moderation, where the community’s moderating whims are impossible to know, let alone disclose, before they are implemented. Such a provision would seem to chill editorial discretion by making it impossible to choose such a moderation system, even when doing so aligns with the expressive values of the platform. (True, SB 7072 may not yet reach the aforementioned platforms, but that is little consolation if it means that the platforms it does reach could still be chilled from making such editorial choices.)

The analysis was also scant with respect to the First Amendment prohibition against compelled speech, which these provisions implicate by forcing platforms to say certain things. Although this prohibition against compelled speech supported the court’s willingness to enjoin the other provisions, its analysis glossed over how this constitutional rule should have applied to these disclosure provisions:

These are content-neutral regulations requiring social-media platforms to disclose “purely factual and uncontroversial information” about their conduct toward their users and the “terms under which [their] services will be available,” which are assessed under the standard announced in Zauderer. 471 U.S. at 651. While “restrictions on non-misleading commercial speech regarding lawful activity must withstand intermediate scrutiny,” when “the challenged provisions impose a disclosure requirement rather than an affirmative limitation on speech . . . the less exacting scrutiny described in Zauderer governs our review.” Milavetz, Gallop & Milavetz, P.A. v. United States, 559 U.S. 229, 249 (2010). Although this standard is typically applied in the context of advertising and to the government’s interest in preventing consumer deception, we think it is broad enough to cover S.B. 7072’s disclosure requirements—which, as the State contends, provide users with helpful information that prevents them from being misled about platforms’ policies. [p. 57-8]

And because these provisions were not enjoined, platforms will now be compelled to publish information they weren’t already publishing, or even potentially to significantly re-engineer their systems (such as to give users view count data).

In addition, the decision gave similarly short shrift to how Section 230 pre-empts such requirements. This oversight may in part be due to how the court found it was not necessary to reach Section 230 in finding that most of the law’s provisions should be enjoined (“Because we conclude that the Act’s content-moderation restrictions are substantially likely to violate the First Amendment, and because that conclusion fully disposes of the appeal, we needn’t reach the merits of the plaintiffs’ preemption challenge.” [p.18]).

But for the provisions where it couldn’t find the First Amendment to be enough of a reason to enjoin them, the court ideally should have moved on to this alternative basis before allowing the provisions to go into effect. Unfortunately, it’s also possible that the court really didn’t recognize how Section 230 was a bar to them:

Nor are these provisions substantially likely to be preempted by 47 U.S.C. § 230. Neither NetChoice nor the district court asserted that § 230 would preempt the disclosure, candidate-advertising, or user-data-access provisions. It is not substantially likely that any of these provisions treat social-media platforms “as the publisher or speaker of any information provided by” their users, 47 U.S.C. § 230(c)(1), or hold platforms “liable on account of” an “action voluntarily taken in good faith to restrict access to or availability of material that the provider considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable,” id. § 230(c)(2)(A). [FN 26]

Fortunately, however, there will likely be opportunities to brief that issue more clearly in the future, as the case has now been remanded to the district court for further proceedings – this appeal was only about whether the law was likely to be so legally dubious as to warrant being enjoined while it was challenged, but the challenge itself can continue. And it will happen in the shadow of this otherwise full-throated defense of the First Amendment in the context of platform content moderation.

Filed Under: 11th circuit, 1st amendment, content moderation, disclosure, florida, sb 7072

WatchGuard Plays The Ostrich, Patches Exploit Without Informing Customers

from the heads-in-the-sand dept

Firewalls. You know, boring old IT stuff. So why are we talking about them at Techdirt? Well, one thing we regularly talk about is how companies tend to respond to exploits and breaches that are uncovered and, far too often, how horrifically bad those responses are. Oftentimes, breaches and exploits end up being far more severe than originally reported, and some companies actually try to go after those reporting on breaches and exploits through legal action.

And then there’s WatchGuard, which was informed in February of 2021 by the FBI that an exploit in one of its firewall lines was being used by Russian hackers to build a botnet, yet the company only patched the exploit out in May of 2021. Oh, and the company didn’t bother to alert its customers to the specifics of any of this until court documents unsealed in the past few days revealed the entire issue.

In court documents unsealed on Wednesday, an FBI agent wrote that the WatchGuard firewalls hacked by Sandworm were “vulnerable to an exploit that allows unauthorized remote access to the management panels of those devices.” It wasn’t until after the court document was public that WatchGuard published this FAQ, which for the first time made reference to CVE-2022-23176, a vulnerability with a severity rating of 8.8 out of a possible 10.

The WatchGuard FAQ said that CVE-2022-23176 had been “fully addressed by security fixes that started rolling out in software updates in May 2021.” The FAQ went on to say that investigations by WatchGuard and outside security firm Mandiant “did not find evidence the threat actor exploited a different vulnerability.”

Note that there was an initial response from WatchGuard almost immediately after the advisory from US and UK law enforcement, with a tool to let customers identify whether they were at risk and instructions for mitigation. Which is all well and good, but customers weren’t given any real specifics as to what the exploit was or how it might be used. That’s the sort of thing IT administrators dig into. The company also basically suggested it was withholding those details to keep the exploit from being more widely used.

When WatchGuard released the May 2021 software updates, the company made only the most oblique of references to the vulnerability.

“These releases also include fixes to resolve internally detected security issues,” a company post stated. “These issues were found by our engineers and not actively found in the wild. For the sake of not guiding potential threat actors toward finding and exploiting these internally discovered issues, we are not sharing technical details about these flaws that they contained.”

Unfortunately, there doesn’t seem to be much that is true in that statement. Law enforcement uncovered the security issue, not some internal WatchGuard team. The exploit was found in the wild, with the FBI assessing that roughly 1% of the firewalls the company sold were compromised with malware called Cyclops Blink, another specific that doesn’t appear to have been communicated to clients.

“As it turns out, threat actors *DID* find and exploit the issues,” Will Dormann, a vulnerability analyst at CERT, said in a private message. He was referring to the WatchGuard explanation from May that the company was withholding technical details to prevent the security issues from being exploited. “And without a CVE issued, more of their customers were exposed than needed to be.

WatchGuard should have assigned a CVE when they released an update that fixed the vulnerability. They also had a second chance to assign a CVE when they were contacted by the FBI in November. But they waited for nearly 3 full months after the FBI notification (about 8 months total) before assigning a CVE. This behavior is harmful, and it put their customers at unnecessary risk.”

And it’s not the kind of thing you can get away with when your business is literally threat detection and prevention in IT. This stinks of a coverup, which is always worse than the crime, cliché though that might be.

Filed Under: botnet, disclosure, fbi, firewall, hackers, security, vulnerability disclosure
Companies: watchguard

New Washington Law Requires Home Sellers Disclose Lack Of Broadband Access

from the it's-a-utility,-stupid dept

Thu, Jan 13th 2022 12:43pm - Karl Bode

For decades the U.S. newswires have been peppered with stories where somebody bought a house after being told by their ISP it had broadband access, only to realize the ISP didn’t actually serve that address. Generally, the homeowner then realizes they have to spend a stupid amount of money to pay the local telecom monopoly to extend service… or move again. Time after time, local ISPs are found to be flat out lying when they claim they can offer an essential utility (broadband), and the home buyer has little recourse thanks to the slow, steady erosion of U.S. state and federal telecom regulatory oversight.

So yeah, one problem is that we continue to lobotomize our state and federal telecom regulators under the bullshit claim that this results in some kind of free market Utopia (you’d think everyday reality would have cured folks of this belief by now, but nope). The other underlying culprit has generally been America’s notoriously shitty broadband maps, which let regional monopolies obscure the patchy coverage, slow speeds, and high prices created by regional monopolization so American policymakers can more easily pretend none of this is a problem.

State telecom consumer protection is generally feckless, with the entirety of telecom policy in most corrupt state legislatures directly dictated by AT&T or Comcast. Washington State continues to be one of just a few exceptions. In the last few years the state has killed a protectionist law designed to hamstring community broadband, passed its own net neutrality laws in the wake of federal apathy, and actually stood up to the longstanding telecom industry practice of ripping off consumers with bullshit fees. Now, the state has also passed a new law requiring that home sellers disclose whether the home actually has broadband:

“Starting in the new year, home sellers in Washington will be required to share their internet provider on signed disclosure forms that include information about plumbing, insulation and structural defects…”

“Does the property currently have internet service?” the disclosure form will now ask, along with a space to say who the provider is. The law doesn’t require sellers to detail access speeds, quality or alternative providers.

In short, the law now gives potential buyers three days after receiving the disclosure to back out of the deal if they are concerned over any of the details indicated on the form. It basically just requires the seller to confirm whether or not they currently have access, and keeps the buyer from having to confirm service availability from a regional monopoly with a vested interest in lying about their footprint reach.

It’s a small shift, but it reflects the growing COVID-era understanding that broadband is more of an essential utility than a luxury. For years telecom monopolies (and the consultants, think tankers, and proxy policy wonks paid to love them) fought tooth and nail against the idea of broadband as a utility because it would require being regulated like one. That means stricter safeguards on things like metered usage (which would be a problem given most broadband usage caps are bullshit), sneaky and completely bogus fees, predatory pricing, and other bad practices the industry has engaged in for a generation.

Telecom monopoly lobbyists and policy folks for decades were allowed to get away with the argument that broadband isn’t an essential utility and therefore should not be subject to functional, meaningful oversight. That helped build the monopolies like Comcast we all know and love. But that rhetoric was thrown immediately in the dumpster courtesy of the harsh reality of the COVID home telecommuting and home education boom (apparently the visual of kids huddled in the dirt outside of a Taco Bell just to attend class in the richest country in history had a motivating impact, who knew?).

The problem continues to be that reform like this remains hard to come by. Federal policymakers are intentionally hamstrung and gridlocked by a Congress awash in telecom campaign contributions, and for every state like Washington there are five states that simply couldn’t care less about the laundry list of predatory behaviors regional telecom monopolies engage in on a daily basis. Like, say, falsely claiming for thirty straight years they offer broadband to areas they don’t, in part due to incompetence, and in part to help mask a widespread lack of actual coverage and competition.

Filed Under: broadband, disclosure, washington

Journalists In St. Louis Discover State Agency Is Revealing Teacher Social Security Numbers; Governor Vows To Prosecute Journalists As Hackers

from the wtf-missouri? dept

Last Friday, Missouri’s Chief Information Security Officer Stephen Meyer stepped down after 21 years working for the state, heading into the private sector. His timing is noteworthy because it seems like Missouri really could use someone in its government who understands basic cybersecurity right now.

We’ve seen plenty of stupid stories over the years about people who alert authorities to security vulnerabilities and are then threatened as hackers, but this story may be the most ridiculous one we’ve seen. Journalists for the St. Louis Post-Dispatch discovered a pretty embarrassing leak of private information for teachers and school administrators. The state’s Department of Elementary and Secondary Education (DESE) website included a flaw that allowed the journalists to find Social Security numbers of the teachers and administrators:

Though no private information was clearly visible nor searchable on any of the web pages, the newspaper found that teachers’ Social Security numbers were contained in the HTML source code of the pages involved.

The newspaper asked Shaji Khan, a cybersecurity professor at the University of Missouri-St. Louis, to confirm the findings. He called the vulnerability “a serious flaw.”

“We have known about this type of flaw for at least 10-12 years, if not more,” Khan wrote in an email. “The fact that this type of vulnerability is still present in the DESE web application is mind boggling!”

Having the data in the HTML source code means the state’s own site sent that information to the computers/browsers of anyone who knew which pages to visit. It also appears that the journalists used proper disclosure procedures, alerting the state and waiting until it had been patched before publishing their article:

The Post-Dispatch discovered the vulnerability in a web application that allowed the public to search teacher certifications and credentials. The department removed the affected pages from its website Tuesday after being notified of the problem by the Post-Dispatch.

Based on state pay records and other data, more than 100,000 Social Security numbers were vulnerable.

The newspaper delayed publishing this report to give the department time to take steps to protect teachers’ private information, and to allow the state to ensure no other agencies’ web applications contained similar vulnerabilities.

Also, it appears that the problems here go back a long ways, and the state should have been well aware that this problem existed:

The state auditor’s office has previously sounded warning bells about education-related data collection practices, with audits of DESE in 2015 and of school districts in 2016.

The 2015 audit found that DESE was unnecessarily storing students’ Social Security numbers and other personally identifiable information in its Missouri Student Information System. The audit urged the department to stop that practice and to create a comprehensive policy for responding to data breaches, among other recommendations. The department complied, but clearly at least one other system contained an undetected vulnerability.

This is where a competent and responsible government would thank the journalists for finding the vulnerability and disclosing it in an ethical manner designed to protect the info of the people the state failed to properly protect.

But that’s not what happened.

Instead, first the Education Commissioner tried to make viewing the HTML source code sound nefarious:

In the letter to teachers, Education Commissioner Margie Vandeven said “an individual took the records of at least three educators, unencrypted the source code from the webpage, and viewed the social security number (SSN) of those specific educators.”

It was never “encrypted,” Commissioner, if the journalists could simply look at the source code and get the info.

Then DESE took it up a notch and referred to the journalists as “hackers.”

But in the press release, DESE called the person who discovered the vulnerability a “hacker” and said that individual “took the records of at least three educators” – instead of acknowledging that more than 100,000 numbers had been at risk, and that they had been available to anyone through DESE’s own search engine.

And then, it got even worse. Missouri Governor Mike Parson called a press conference in which he again called the journalists hackers and said he had notified prosecutors and the Highway Patrol’s Digital Forensic Unit to investigate. Highway Patrol? He also claimed (again) that they had “decoded the HTML source code.” That’s… not difficult. It’s called “view source” and it’s built into every damn browser, Governor. It’s not hacking. It’s not unauthorized.

Through a multi-step process, an individual took the records of at least three educators, decoded the HTML source code, and viewed the SSN of those specific educators.

We notified the Cole County prosecutor and the Highway Patrol’s Digital Forensic Unit will investigate. pic.twitter.com/2hkZNI1wXE

— Governor Mike Parson (@GovParsonMO) October 14, 2021

It gets worse. Governor Parson claims that this “hack” could cost $50 million. I only wish I was joking.

This incident alone may cost Missouri taxpayers up to $50 million and divert workers and resources from other state agencies. This matter is serious.

The state is committing to bring to justice anyone who hacked our system and anyone who aided or encouraged them – in accordance with what Missouri law allows AND requires.

A hacker is someone who gains unauthorized access to information or content. This individual did not have permission to do what they did. They had no authorization to convert and decode the code. This was clearly a hack.

We must address any wrongdoing committed by bad actors.

If it costs $50 million to properly secure the data on your website (data that previous audits had already flagged as a problem), then that’s on the incompetent government that failed to properly secure it in the first place. Not on journalists ethically alerting you so you could fix the vulnerability. And there’s no “unauthorized access.” Your system put that info into people’s browsers. There’s no “decoding” involved in viewing the source. That’s not how any of this works.
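To make that concrete: the “source code” here is nothing more than the text the state’s web server transmits to every visitor’s browser. Here is a hypothetical sketch (made-up URL, illustrative SSN-shaped regex; the real DESE pages have since been pulled) of everything the “multi-step process” actually amounts to.

```python
# Everything the "multi-step process" amounts to: request a page, then read
# the text the server voluntarily sent back. No decoding, no decryption.
import re
import requests

def ssn_like_strings(url: str) -> list[str]:
    """Fetch a page and list anything SSN-shaped sitting in its HTML source."""
    page_source = requests.get(url, timeout=10).text  # the "source code" is just the response body
    return re.findall(r"\b\d{3}-\d{2}-\d{4}\b", page_source)

# e.g. ssn_like_strings("https://example.invalid/educator-credential-search?cert=12345")
# (hypothetical URL for illustration; the pattern assumes dash-formatted SSNs)
```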

As people started loudly mocking Governor Parson, he decided to double down, insisting that it was more than a simple “right click” and repeating that journalists had to “convert and decode the data.”

We want to be clear, this DESE hack was more than a simple “right click.”

THE FACTS: An individual accessed source code and then went a step further to convert and decode that data in order to obtain Missouri teachers’ personal information. (1/3) pic.twitter.com/JKgtIpcibM

— Governor Mike Parson (@GovParsonMO) October 14, 2021

Again, even if it took a few steps, that’s still not hacking. It’s still a case where the state agency made that info available. That’s not on the journalists who responsibly disclosed it. It’s on the state for failing to protect the data properly (and for collecting and storing too much data in the first place).

Indeed, in doing this ridiculous show of calling them hackers and threatening prosecution, all the state of Missouri has done is make damn sure that the next responsible/ethical journalists and/or security researchers will not alert the state to their stupidly bad security. Why take the risk?

Filed Under: blame the messenger, dese, disclosure, ethical disclosure, hacking, mike parson, private information, schools, social security numbers, st. louis, teachers, vulnerabilities
Companies: st. louis post dispatch

European Law Enforcement Officials Upset Facebook Is Warning Users Their Devices May Have Been Hacked

from the screw-the-little-people,-we've-got-bad-guys-to-hack dept

Oh boy. Facebook has just added fuel to the anti-encryption fire. And by doing nothing more than something it should be doing: notifying users that their device may have been compromised by malware.

The Wall Street Journal article covering this standard notification is full of quotes from government officials who aren’t happy a suspected terrorist was informed his phone had possibly been infected by targeted malware. [Non-paywalled version here.]

A team of European law-enforcement officials was hot on the trail of a potential terror plot in October, fearing an attack during Christmas season, when their keyhole into a suspect’s phone went dark.

WhatsApp, Facebook Inc.’s popular messaging tool, had just notified about 1,400 users—among them the suspected terrorist—that their phones had been hacked by an “advanced cyber actor.” An elite surveillance team was using spyware from NSO Group, an Israeli company, to track the suspect, according to a law-enforcement official overseeing the investigation.

Facebook is no fan of NSO Group. In fact, very few people are fans of NSO Group, other than their customers, which have included UN-blacklisted countries and a number of governments that rank pretty high on the Most Human Rights Violated charts. Facebook sued NSO back in November, making very questionable allegations about CFAA violations. Facebook’s servers were never targeted by NSO’s malware. Only end users were, which makes it pretty difficult for Facebook to claim it has been personally (so to speak) injured by NSO’s actions.

Back to the matter at hand, Facebook didn’t just warn suspected terrorists about detected malware.

WhatsApp’s Oct. 29 message to users warned journalists, activists and government officials that their phones had been compromised, Facebook said. But it also had the unintended consequence of potentially jeopardizing multiple national-security investigations in Western Europe about which Facebook hadn’t been alerted—and about which government agencies can’t formally complain, given their secret nature.

Would these government officials rather not have been warned about threats? Were any of the government officials who received warnings the same ones now complaining that the warning allowed a terrorism suspect to vanish? Maybe so. The one quoted in the article seems very short-sighted.

On the day WhatsApp sent its alert, the official overseeing the terror investigation in Western Europe said, he was stuck in traffic on his way to work when a call came in from Israel. “Have you seen the news? We’ve got a problem,” he said he was told. WhatsApp was notifying suspects whom his team was tracking that their phones had been hacked. “No, that can’t be right. Why would they do that?” the official said he asked his contact, thinking it a joke.

“Why would they do that” indeed. Maybe to protect their users from cybercriminals and state-sponsored hackers. It’s not about allowing suspected criminals to dodge law enforcement, even though it will undoubtedly have that effect. It’s about keeping users and their communications protected — users that include journalists, activists, and government officials.

This response indicates the investigators pursuing the suspected terrorist would rather hundreds of innocent people be harmed than someone suspected of terrorism go free. But it really doesn’t matter what unnamed officials think about Facebook’s “you may have been compromised” notifications or the harm these might do to ongoing investigations. Facebook’s voluntary warnings will soon be mandatory in Europe. By the end of 2020, all service providers and telcos will be obligated to warn customers of security threats.

That fact — and the apparent willingness to allow innocent people to be victimized by targeted attacks — makes the article’s closing statement all the more ridiculous.

Gilles de Kerchove, the European Union’s counterterrorism coordinator, says encryption shouldn’t allow criminals to be “less accountable online than in real life.”

I have no idea what that means. I know what the official thinks it’s supposed to mean — that “online” is bad because sometimes criminals get away — but even that interpretation doesn’t make sense. Criminals discover their phones have been tapped and stop using those lines. Criminals talk to each other in person to avoid creating records of conversations. Criminals get tips from other criminals that they’re under surveillance. This stuff just happens. Investigations don’t always run smoothly.

A standard warning about possibly-compromised devices and services is just good business — something that protects everyone who uses the service, not just the people governments think are OK to protect. These warnings are essential and they benefit everyone, not just the people governments want to lock up.

Filed Under: disclosure, europe, hacking, law enforcement, malware, terrorists
Companies: facebook, nso group, whatsapp

China Actively Collecting Zero-Days For Use By Its Intelligence Agencies — Just Like The West

from the no-moral-high-ground-there,-then dept

It all seems so far away now, but in 2013, during the early days of the Snowden revelations, a story about the NSA’s activities emerged that apparently came from a different source. Bloomberg reported (behind a paywall, summarized by Ars Technica) that Microsoft was providing the NSA with information about newly-discovered bugs in the company’s software before it patched them. It gave the NSA a window of opportunity during which it could take advantage of those flaws in order to gain access to computer systems of interest. Later that year, the Washington Post reported that the NSA was spending millions of dollars per year to acquire other zero-days from malware vendors.

A stockpile of vulnerabilities and hacking tools is great — until they leak out, which is precisely what seems to have happened several times with the NSA’s collection. The harm that lapse can cause was vividly demonstrated by the WannaCry ransomware. It was built on a Microsoft zero-day that was part of the NSA’s toolkit, and caused very serious problems to companies — and hospitals — around the world.

The other big problem with the NSA — or the UK’s GCHQ, or Germany’s BND — taking advantage of zero-days in this way is that it makes it inevitable that other actors will do the same. An article on the Access Now site confirms that China is indeed seeking out software flaws that it can use for attacking other systems:

In November 2017, Recorded Future published research on the publication speed for China’s National Vulnerability Database (with the memorable acronym CNNVD). When they initially conducted this research, they concluded that China actually evaluates and reports vulnerabilities faster than the U.S. However, when they revisited their findings at a later date, they discovered that a majority of the figures had been altered to hide a much longer processing period during which the Chinese government could assess whether a vulnerability would be useful in intelligence operations.

As the Access Now article explains, the Chinese authorities have gone beyond simply keeping zero-days quiet for as long as possible. They are actively discouraging Chinese white hats from participating in international hacking competitions because this would help Western companies learn about bugs that might otherwise be exploitable by China’s intelligence services. This is really bad news for the rest of us. It means that China’s huge and growing pool of expert coders are no longer likely to report bugs to software companies when they find them. Instead, they will be passed to the CNNVD for assessment. Not only will bug fixes take longer to appear, exposing users to security risks, but the Chinese may even weaponize the zero-days in order to break into other systems.

Another regrettable aspect of this development is that Western countries like the US and UK can hardly point fingers here, since they have been using zero-days in precisely this way for years. The fact that China — and presumably Russia, North Korea and Iran amongst others — have joined the club underlines what a stupid move this was. It may have provided a short-term advantage for the West, but now that it’s become the norm for intelligence agencies, the long-term effect is to reduce the security of computer systems everywhere by leaving known vulnerabilities unpatched. It’s an unwinnable digital arms race that will be hard to stop now. It also underlines why adding any kind of weakness to cryptographic systems would be an incredibly reckless escalation of an approach that has already put lives at risk.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

Filed Under: china, cybersecurity, disclosure, intelligence, nsa, security, surveillance, vulnerabilities, zero days

Security Researcher At The Center Of Emoji-Gate Heading Home After Feds Drop Five Felony Charges

from the good-news-for-one-of-the-actual-good-guys dept

The security researcher who was at the center of an audacious and disturbing government demand to unmask several Twitter accounts on the basis of an apparently menacing smiley emoji contained in one of them is now facing zero prison time for his supposed harassment of an FBI agent. Justin Shafer, who was originally facing five felony charges, has agreed to plead guilty to a single misdemeanor charge. Shafer, who spent eight months in jail for blogging about the FBI raiding his residence repeatedly, is finally going home.

Here are the details of the plea agreement [PDF] Shafer has agreed to. (h/t DissentDoe)

Mr. Shafer is pleading to a single misdemeanor of simple assault, based on his sending a Facebook direct message to an FBI Agent’s immediate relative’s public Facebook account. There is no allegation of any physical contact.

The government agrees to recommend a sentence of time served. Mr. Shafer already served 8 months in jail before trial for criticizing the government’s prosecution in a blog post. He was released after the defense filed a motion arguing his pre-trial detention violated First Amendment free speech rights and the statute governing pre-trial detention.

The government is not seeking any restitution.

The United States Attorney’s Office has agreed not to prosecute Mr. Shafer for the events leading to the initial armed FBI raid of his family’s home.

Mr. Shafer has agreed to a no contact order with the FBI agent, the agent’s family, and the company involved in the initial investigation.

What started out as normal security research soon became a nightmare for Shafer. His uncovering of poor security practices in the dental industry — particularly the lack of attention paid to keeping HIPAA information secured — led to his house being raided by FBI agents. The FBI raided his house again after he blogged about the first raid. The FBI justified its harassment of Shafer with vague theories about his connection to infamous black hat hacker TheDarkOverlord. To do this, the FBI had to gloss over — if not outright omit — the warnings Shafer had sent to victims of TheDarkOverlord, as well as the information on the hacker Shafer had sent to law enforcement agencies including the FBI.

Blogging about his interactions with the FBI led the judge presiding over his criminal trial to revoke his release and jail him for exercising his First Amendment rights. This was ultimately reversed by a federal judge who agreed Shafer was allowed to call FBI agents “stupid” and blog about his treatment by the federal agency. (He was not to reveal personal info about FBI agents, however.)

This trial has come to a swift end because the presiding judge sees zero merit in the government’s case.

[T]he case probably would have gone to trial had it not been for Judge Janis Graham Jack letting the prosecution know that she saw no evidence of any threat to support the felony charges and that she might rule on the defense’s motion to dismiss if the prosecution didn’t come up with some reasonable plea deal.

This case comes to an end, but it does not absolve the government of its abusive behavior. Here’s what Shafer’s defense team (Tor Ekeland, Fred Jennings, and Jay Cohen) had to say about their client’s treatment by federal law enforcement.

Mr. Shafer first contacted us after he [was] raided by armed federal law enforcement for alleged computer crimes the government has never charged him for. When he complained to the government about it, he was arrested and thrown in jail for his criticism. He was freed after the defense filed a motion arguing his pre-trial detention violated the First Amendment. Fortunately, when presented with the facts of this case, the Court understood the magnitude of the issues here and helped us resolve this case without the hassle, expense, and stress of a jury trial. We are grateful to the Northern District of Texas for recognizing this case for what it was: an attack on internet free speech and a citizen’s right to criticize the government.

And what can we learn from this debacle? Here’s what Shafer has learned: never help anybody.

I think the next time someone finds social security numbers that is considered protected health information under HIPAA they should just turn a blind eye. Nobody is going to call you a hero (except the enlightened), and you run the risk of being harassed by the FBI. Doctors responsible for alerting patients will now have yet another reason not to. Already, only about 10% of doctors notified patients that their patient information was publicly available. Law enforcement or the Office of Civil Rights won’t care, and will most likely ignore it. Punishing health information researchers for reporting these issues only puts patients at greater risk. I think it would benefit society greatly if people who find publicly accessible data were not threatened by the people who put it there.

Thank god the FBI was there to help ensure “public safety,” by which we mean ensure no one publicly badmouthed one of its agents. Shooting the messenger is the expected response when security breaches are discovered. If it’s not those leaving personal info exposed threatening researchers with lawsuits or criminal charges, it’s the government itself stepping in to “protect” entities that can’t even protect the data of paying customers.

Filed Under: disclosure, fbi, hacking, justin shafer, security, security research