public info – Techdirt

from the f#$@-it-we'll-do-it-live dept

Perhaps it’s a result of spending many years now writing about intellectual property matters, but it is still shocking just how little understanding there is of how fair use works in conjunction with copyright law. It’s especially irritating when the folks who don’t understand it come from the government itself.

Which brings us to the Louisiana parole board. Interestingly, parole hearings are all publicly streamed live, but the Parole Board does not make those videos available for viewing outside of the live stream. But one YouTuber, going by the name of Mandoo, records those streams and then adds commentary to them, with the stated purpose of making the system transparent and commenting on how the justice system works. Mandoo was then hit with 52 takedowns of videos on his channel after a local news organization used them in its own reporting on a specific parole hearing.

After our report aired though Mandoo said 52 of his uploaded parole board hearings received copyright claims and were deleted from his page, including Thomas Cisco’s hearing.

Mandoo said he had been recording the hearings for about a year without any issues.

“Maybe it has something to do with the controversy behind that [Cisco’s] hearing. Maybe it didn’t. I don’t know,” he said. Mandoo said he’s concerned this could be a violation of his rights.

And I agree with him. This seems to be pretty squarely in the realm of fair use. These are public hearings that the government is streaming itself, meaning there is a serious degree of public interest here. The Parole Board, for its part, claims it doesn’t make video recordings available for download in order to “protect the victims” who testify in those hearings. Which, you know… doesn’t make any fucking sense. They’re not protected in the live stream.

Add to all of that a couple of things. First, the commentary and purpose of Mandoo’s videos add to the claim of fair use. The fact that the commentary centers on government action makes the case even clearer. And there are laws outside of fair use that make all of this legal in Louisiana anyway.

Scott Sternberg, who represents media organizations across Louisiana, said that, besides fair use, Louisiana’s open meetings law makes clear the public has a right to record a meeting.

“In the day of cell phones where everybody’s got a camera and can take 4K or even 8K video, you know, people record stuff in public meetings all the time and yes it is perfectly legal to do so,” Sternberg said.

All of which is leading to Mandoo appealing the copyright claims and takedowns with YouTube. I expect those videos will be reinstated soon, as they absolutely should be, perhaps even by the time this post is published.

Filed Under: copyright, louisiana, louisiana parole board, mandoo, parole hearings, public info
Companies: youtube

Parler Forced To Explain The First Amendment To Its Users After They Complain About Parler Turning Over Info To The FBI

from the delicious dept

Parler — the social media cesspool that claimed the only things that mattered to it were the First Amendment and, um… FCC standards — has reopened with new web hosting after Amazon decided it no longer wished to host the sort of content Parler has become infamous for.

Parler has held itself up as the last bastion of the First Amendment and a protector of those unfairly persecuted by left-wing tech companies. The users who flocked to the service also considered themselves free speech absolutists. But like far too many self-ordained free speech “absolutists,” they think the only speech that should be limited is the moderation efforts of companies like Twitter and Facebook.

And, like a lot of people who mistakenly believe the First Amendment guarantees them access to an active social media account, a lot of Parler users don’t seem to understand the limits of First Amendment protections. Parler, like every other social media service, has had to engage in moderation efforts that removed content undeniably protected by the First Amendment but that it did not want to host on its platform. It has also had to remove illegal content, and that’s where its most recent troubles began.

Over the weekend, the resurrected Parler crossed over into meta territory, resulting in an unintentionally hilarious announcement to its aggrieved users upset about the platform’s decision to forward Capitol riot-related posts to law enforcement. It really doesn’t get any better than this in terms of schadenfreude and whatever the German word is for an ad hoc group of self-proclaimed First Amendment “experts” having their second favorite right explained to them.

Here’s Matt Binder for Mashable:

The reaction to the news that Parler “colluded” with the FBI in order to report violent content was so strong on the right wing platform, the company was compelled to release a statement addressing those outraged users.

In doing so, Parler found itself unironically explaining the First Amendment to its user base filled with members who declare themselves to be “Constitutionalists” and “Free Speech” advocates.

Parler’s statement spells it out: the First Amendment does not protect the speech shared with law enforcement by the social media platform.

In reaction to yesterday’s news stories, some users have raised questions about the practice of referring violent or inciting content to law enforcement. The First Amendment does not protect violence inciting speech, nor the planning of violent acts. Such content violates Parler’s TOS. Any violent content shared with law enforcement was posted publicly and brought to our attention primarily via user reporting. And, as it is posted publicly, it can properly be referred to law enforcement by anyone. Parler remains steadfast in protecting your right to free speech.

That’s a very concise and accurate reading of the First Amendment and how it applies to the content Parler forwarded to the FBI. It’s not covered. But that hasn’t stopped a few vocal complainants from telling Parler to try reading the Constitution again and, apparently, decide it means not only hosting violent content, but refusing to pass these threats on to law enforcement.

The core user base being unable to understand the limits of the right it believes allows it to say anything anywhere is partially a byproduct of Parler’s promise to erect a Wild West internet playground for bigots and chauvinists who had nowhere else to go. Once it had some users, Parler realized it too needed to engage in moderation, even if only to rid itself of porn and outsiders who showed up solely to troll its stable of alt-right “influencers.”

The January 6th insurrection appears to have forced the platform to grow up a little. Of course, some of that growth was forced on it by the leak of thousands of users’ posts, which were examined by journalists and forwarded to law enforcement to assist in identifying Parler users who attended the deadly riot in DC earlier this year. Illegal content is still illegal, and being beholden only to the First Amendment doesn’t change that.

Filed Under: 1st amendment, content moderation, fbi, insurrection, public info
Companies: parler

Harvard Opens Up Its Massive Caselaw Access Project

from the good-to-see dept

Almost exactly three years ago, we wrote about the launch of an ambitious project by Harvard Law School to scan all federal and state court cases and get them online (for free) in a machine readable format (not just PDFs!), with open APIs for anyone to use. And, earlier this week, case.law officially launched, with 6.4 million cases, some going back as far as 1658. There are still some limitations — some placed on the project by its funding partner, Ravel, which was acquired by LexisNexis last year (though the structure of the deal means some of these restrictions will likely decrease over time).

Also, the focus right now is really on providing this setup as a tool for others to build on, rather than as a straight up interface for anyone to use. As it stands, you can either access data via the site’s API, or by doing bulk downloads. Of course, the bulk downloads are, unfortunately, part of what’s limited by the Ravel/LexisNexis deal. Bulk downloads are available for cases in Illinois and Arkansas, but that’s only because both of those states already make cases available online. Still, even with the Ravel/LexisNexis limitation, individual users can download up to 500 cases per day.
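To make the access model concrete, here’s a minimal sketch of querying the project’s API from Python. The endpoint and field names (`api.case.law/v1/cases/`, `search`, `jurisdiction`, `page_size`, `citations`) reflect the v1 API as publicly documented, but treat the specifics as assumptions and check the current docs before relying on them:

```python
# Minimal sketch of a Caselaw Access Project API query. Endpoint and field
# names are assumptions based on the documented v1 API; no key is needed for
# unrestricted cases, but rate limits apply.
import json
from urllib.parse import urlencode

API_BASE = "https://api.case.law/v1/cases/"

def build_case_query(search=None, jurisdiction=None, page_size=100):
    """Build a full-text case search URL for the CAP API."""
    params = {"page_size": page_size}
    if search:
        params["search"] = search
    if jurisdiction:
        params["jurisdiction"] = jurisdiction
    return API_BASE + "?" + urlencode(params)

def extract_citations(response_page):
    """Pull (case name, first citation) pairs out of one response page."""
    out = []
    for case in response_page.get("results", []):
        cites = case.get("citations", [])
        out.append((case["name_abbreviation"], cites[0]["cite"] if cites else None))
    return out

# A canned response in the shape the API returns, for illustration only:
sample = json.loads("""
{"count": 1,
 "results": [{"name_abbreviation": "Marbury v. Madison",
              "citations": [{"type": "official", "cite": "5 U.S. 137"}]}]}
""")

print(build_case_query(search="habeas corpus", jurisdiction="ill"))
print(extract_citations(sample))
```

In a real client you’d fetch that URL with any HTTP library and page through the results via the pagination links the API includes in each response.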

The real question is what will others build with the API. The site has launched with four sample applications that are all pretty cool.

Her son Julius is a confirmed thief. He did not turn over a new leaf. The vessel, not. the parking lot. Respondent concedes this in its brief.

The quality overall is… a bit mixed. But it’s fun.

Hopefully this inspires a lot more on the development side as well.

Filed Under: caselaw, caselaw access project, legal data, public info, public records, transparency
Companies: harvard, lexisnexis, ravel

Facebook Derangement Syndrome: Don't Blame Facebook For Company Scraping Public Info

from the it's-public-info dept

Earlier this month I talked a little bit about “Facebook Derangement Syndrome” in which the company, which has real and serious issues, is getting blamed for other stuff. It’s fun to take potshots at Facebook, and we can talk all we want about the actual problems Facebook has (specifically its half-hearted attempts at transparency and user control), but accusing the company of all sorts of things that are not actually a problem doesn’t help. It actually makes it that much harder to fix things.

The latest case in point: Zack Whittaker, who is one of the absolute best cybersecurity reporters out there, recently had a story up on ZDNet about a data mining firm called Localblox, which was pulling all sorts of info to create profiles on people… and leaking 48 million profiles by failing to secure an Amazon S3 instance (like so many such Amazon AWS leaks, this one was spotted by Chris Vickery at UpGuard, who seems to spot leaks from open S3 instances on a weekly basis).

There is a story here and Whittaker’s coverage of it is good and thorough. But the story is in Localblox’s crap security (though the company has tried to claim that most of those profiles were fake and just for testing). However, many people are using the story… to attack Facebook. Digital Trends claims that this story is “the latest nightmare for Facebook.” Twitter users were out in force blaming Facebook.

But, if you look at the details, this is just Facebook Derangement Syndrome all over again. Localblox built up its data via a variety of means, but the Facebook data was apparently scraped. That is, it used its computers to scrape public information from Facebook accounts (and Twitter, LinkedIn, Zillow, elsewhere) and then combined that with other data, including voter rolls (public!) and other data brokers, to build more complete profiles. Now, it’s perfectly reasonable to point out that combining all of this data can raise some privacy issues — but, again, that’s a Localblox issue if there’s a real issue there, rather than a Facebook one.
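To see why combining datasets is the crux, here’s an illustrative sketch (all data, field names, and logic are hypothetical; this is not Localblox’s actual code) of joining scraped public profiles with another public dataset, such as a voter roll, on a normalized name key:

```python
# Hypothetical illustration of public-data aggregation: each source is public
# on its own, but joining them produces a richer dossier than any one source.

def normalize(name):
    """Crude name normalization: lowercase, drop punctuation, collapse spaces."""
    kept = "".join(c for c in name.lower() if c.isalnum() or c.isspace())
    return " ".join(kept.split())

def merge_profiles(scraped, voter_rolls):
    """Enrich scraped profiles with voter-roll fields matched by name."""
    by_name = {normalize(v["name"]): v for v in voter_rolls}
    merged = []
    for p in scraped:
        profile = dict(p)
        match = by_name.get(normalize(p["name"]))
        if match:
            # The join, not either dataset alone, is what builds the dossier.
            profile.update({k: v for k, v in match.items() if k != "name"})
        merged.append(profile)
    return merged

scraped = [{"name": "Jane Q. Public", "employer": "Acme Corp"}]
rolls = [{"name": "jane q public", "address": "12 Main St", "party": "Independent"}]
print(merge_profiles(scraped, rolls))
```

Each input record is innocuous on its own; it’s the join that yields a profile no single source ever published, which is why the privacy question belongs to the aggregator rather than to Facebook.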

And, this is clearly the kind of thing that Facebook actively tries to prevent. Remember, as we’ve covered, the company went on a legal crusade against another scraper company, Power.com, using the CFAA to effectively kill that company’s useful service.

Here’s why this kind of thing matters: if you blame Facebook for this kind of thing, then you actively encourage Facebook to go out of its way to block scraping or other efforts to free up user data. That means it ends up giving Facebook more control over user data. Allowing scrapers of public info (again, the fact that this is public info is important) could actually limit Facebook’s power, and enable other companies to pop up and make use of the data inside Facebook to build other (competing) services. The ability to scrape Facebook would allow third parties to build tools to give users more control over their Facebook accounts.

But when we treat scraping of public info as somehow a “breach” of Facebook (which, again, is separate from the messed up nature of Localblox leaking data itself), we’re pushing everyone towards a world where Facebook has more control, more dominance and less competition. And that should be the last thing that anyone (outside of Facebook) wants.

Filed Under: chris vickery, data collectors, facebook derangement syndrome, public info, scraping
Companies: facebook, localblox, upguard

EU Looks To Prevent Employers From Viewing An Applicant's Publicly Available Social Media Information

from the well-that's-dumb dept

Ever since social media sites like Facebook and Twitter became household names here in America, we’ve occasionally had really stupid debates about just what type of access to those accounts employers should get from their employees. Some states have even passed laws that would allow employers to demand social media passwords from employees and applicants, presumably so that company reps can comb through private messages and posts shared only with the employee’s or applicant’s friends. If all of that seems stupid to you, that’s because it totally is!

But it’s not remotely as dumb as what the EU has decided to do: regulate corporations so that they are disallowed from viewing public social media information about an applicant unless it directly relates to the job for which they have applied. To be clear, this new regulation is non-binding at the moment, but it will be the basis of data protection laws set to come out in the future. Still, preventing a company from viewing publicly available information doesn’t make much sense.

Employers who use Facebook, Twitter and other social media to check on potential job candidates could be breaking European law in future. An EU data protection working party has ruled that employers should require “legal grounds” before snooping. The recommendations are non-binding, but will influence forthcoming changes to data protection laws.

The guidelines from the Article 29 working party will inform a radical shake-up of European data protection laws, known as the General Data Protection Regulation (GDPR), which are due to come into force in May 2018. Their recommendations also suggest that any data collected from an internet search of potential candidates must be necessary and relevant to the performance of the job.

When it comes to privacy restrictions on matters of social media, it seems to me that there is an easy demarcation line that ought to suffice here: that which is public and that which is not. Most social media sites come with handy tools to keep some or all portions of an account private, or shareable only amongst connections within the platform. If an applicant wants something kept from the eyes of an employer, they need only hide it behind those privacy options. This regulation, however, would restrict a company from accessing public information, which should plainly be viewed as nonsensical.

The post notes that recruitment sites like CareerBuilder have found that roughly 70% of employers check the public social media accounts of applicants they’re considering hiring. That’s about as surprising as the sun rising each morning. It’s barely even considered creepy any longer to google the names of friends, never mind people you’re looking to hire. Somehow I don’t see any regulation curbing that across a continent.

Filed Under: data protection, eu, interviews, jobs, privacy, public info, social media

Judge: Using Publicly-Available Twitter Profile Info Is Like Stealing Social Security Numbers

from the #wat dept

A potential class action lawsuit against Twitter and the creators of a short-lived app that allowed users to “buy” and “sell” celebrities’ Twitter accounts has raised some questions about a federal judge’s grasp on social media reality and the First Amendment.

The background: Jason Parker — fronting an Alabama-based class action suit [original filing here] — sued Twitter and Hey, Inc. back in August, claiming Hey’s “Famous” app violated the state’s right of publicity law. (We won’t get into how ridiculous many “right of publicity” laws are as this lawsuit may not even survive a motion to dismiss even after it’s amended.) The app, called “Famous: The Celebrity Twitter” allowed users to collect, buy, and trade Twitter profiles of famous people using virtual currency.

For some reason, this made a bunch of people angry. The app’s gameplay — buying and selling people — was somewhat unsavory, but it was all based on publicly-available Twitter profile information. Twitter allowed the app to pull this data for use in the game. The game underwent some changes after Congresswoman Katherine Clark sent a letter to Twitter telling it to remove all “unconsenting” profiles, whatever that meant.

Hey, Inc. pulled the app and retooled it, releasing it a month later as simply “Famous.” Gone was the virtual currency (almost) and the buying and selling of Twitter profiles. Instead, players “invested” in celebrity Twitter accounts with “hearts,” which could be purchased with real money.

The class action suit persisted, as Parker’s right of publicity claims weren’t based on whether Twitter profiles were bought/sold/stolen, but rather on the argument that Twitter didn’t have the right to make this information available to the app creators. Despite voluntarily using a service and providing Twitter with profile information, Parker (and users similarly situated) somehow believe they should be able to control how their Twitter profile information is used.

Supposedly, Hey, Inc. — with Twitter’s “collusion” — is “exploiting” thousands of profiles for profit without their “consent.” This must be Parker’s first experience with a social media platform if he thinks Twitter is the only one “exploiting” users and their data for profit. Sure, it looks a bit more unseemly when an app allows users to buy and sell other people’s profiles for in-game currency/hearts, but all of this voluntarily-provided data can be accessed by anyone, with or without Twitter’s strict approval. (Use of Twitter’s API is subject to some restrictions, but public profile information can be seen by anyone, even without a Twitter account.)

Twitter has been in court arguing that Parker’s claims — if upheld — will violate its own and its users’ First Amendment rights, as reported by Helen Christophi of Courthouse News.

At oral argument Thursday, [Judge William] Alsup assailed Twitter’s argument that Parker’s right-of-publicity claim violates the First Amendment by seeking to curb users’ activities with each other’s profiles.

“I don’t see how you can even make that argument with a straight face,” Alsup told Twitter attorney Matthew Brown.

Brown replied: “Twitter has the First Amendment right to disseminate the information.”

This is a legitimate argument. Dissemination of information is protected speech. Judge William Alsup — who has done good IP work elsewhere — somehow managed to make the following retort without realizing how completely off-base his comparison is.

Alsup took issue with that, likening it to criminals stealing and sharing Social Security numbers.

“I can’t believe the First Amendment allows that kind of criminal conduct,” Alsup said. “You’re telling me that’s protected by the First Amendment? No way. You’re disclosing their personas.”

WTF.

Information voluntarily provided to Twitter for profiles is in NO WAY comparable to other personally-identifiable information that is traditionally safeguarded by users and platforms alike. The app’s use of Twitter’s API only pulls publicly-available profile information that has been provided by users. Anyone whose Twitter account is public is “disclosing their persona.” No one’s doing that with their Social Security numbers. (If they are, good lord please get off the internet.) Twitter isn’t digging up information not voluntarily provided by users and adding it to the pool of data used by Hey, Inc.’s game. Jason Parker’s “right of publicity” isn’t being violated, and the use of publicly-available data is decidedly not a criminal act.

Parker’s lawyer didn’t do any better than Judge Alsup with his assertions.

Parker’s attorney Tievsky said the Constitution does not protect Twitter in this case.

“The point here is, in the use of my client’s name and likeness, there is no creative expression, there is a mere taking of information and putting it in another place and another context, and that’s what becomes problematic,” Tievsky said. “We’re not talking about the kind of expressive work we recognize for First Amendment protection.”

The First Amendment doesn’t just protect expressive works. As was stated earlier, the publication of information/data is protected by the First Amendment. If it weren’t, every person with a beef about their failed lawsuits and/or criminal convictions would be able to scrub the web of public documents containing these details (along with any reporting using those documents as source material), because that publication would no longer be protected speech.

Just because Parker and Judge Alsup don’t like the app’s gameplay doesn’t mean Hey, Inc. or Twitter are committing some form of new digital crime and/or working outside of the confines of protected speech.

Hopefully, these arguments won’t become worse as the case moves forward. Judge Alsup has given Parker permission to amend his filing and claims he needs an “expert on consent” to help sort things out. Parker claims he never consented to Twitter allowing third parties to use his voluntarily-provided profile data, but that’s a claim that’s going to be extremely difficult to assert successfully. The information was already out there for any third party to access. Twitter just made it simpler.

Filed Under: first amendment, public info, publicly available info, social media, william alsup
Companies: famous, hey, twitter

Is It Really That Big A Deal That Twitter Blocked US Intelligence Agencies From Mining Public Tweets?

from the it's-public-info dept

Over the weekend, some news broke about how Twitter was blocking Dataminr, a (you guessed it) social media data mining firm, from providing its analytics of real-time tweets to US intelligence agencies. Dataminr — which, as everyone makes sure to note, has investments from both Twitter and the CIA’s venture arm, In-Q-Tel — has access to Twitter’s famed “firehose” API of basically every public tweet. The company already has relationships with financial firms, big companies and other parts of the US government, including the Department of Homeland Security, which has been known to snoop around on Twitter for quite some time.

Apparently, the details suggest, some (unnamed) intelligence agencies within the US government had signed up for a free pilot program, and it was as this program was ending that Twitter reminded Dataminr that part of the terms of their agreement in providing access to the firehose was that it not then be used for government surveillance. Twitter insists that this isn’t a change, it’s just it enforcing existing policies.

Many folks are cheering Twitter on in this move, and given the company’s past actions, the stance is perhaps not that surprising. The company was one of the very first to challenge government attempts to get access to Twitter account info (well before the whole Snowden stuff happened). Also, some of the Snowden documents revealed that Twitter was alone among internet companies in refusing to sign up for the NSA’s PRISM program, which made it easier for internet firms to supply the NSA with info in response to FISA Court orders. And, while most other big internet firms “settled” with the government over revealing government requests for information, Twitter has continued to fight on, pushing for the right to be much more specific about how often the government asks for what kinds of information. In other words, Twitter has a long and proud history of standing up to attempts to use its platform for surveillance purposes — and it deserves kudos for its principled stance on these issues.

That said… I’m not sure that blocking this particular usage really makes any sense. This is public information, rather than private information. And, yes, not everyone has access to “the firehose,” so Twitter can put whatever restrictions it wants on usage of that firehose, but seeing as it’s public information, it’s likely that there are workarounds that others have (though perhaps not quite as timely). But separately, reviewing public information actually doesn’t seem like a bad idea for the intelligence community. Yes, we can all agree (and we’ve been among the most vocal in arguing this) that the intelligence agencies have a long and horrifying history of questionable datamining of other databases that they should not have access to. But publicly posted tweet information seems like a weird thing for anyone to be concerned about. There’s no reasonable expectation of privacy in that information, and not because of some dumb “third party doctrine” concept, but because the individuals who tweet do, in fact, make a proactive decision to post that information publicly.

So, perhaps I’m missing something here (and I expect that some of you will explain what I’m missing in the comments), but I don’t see why it’s such a problem for intelligence agencies to do datamining on public tweets. We can argue that the intelligence community has abused its datamining capabilities in the past, and that’s true, but that’s generally over private info where the concern is raised. I’m not sure that it’s helpful to argue that the intelligence community shouldn’t even be allowed to scan publicly available information as well. It feels like it’s just “anti-intelligence” rather than “anti-abusive intelligence.”

Filed Under: data mining, intelligence, intelligence community, public info, surveillance, tweets
Companies: dataminr, twitter