It's Time to Talk About Internet Companies' Content Moderation Operations
from the transparency dept
As discussed in the post below, on February 2nd, Santa Clara University is hosting a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants have written short essays about the questions that will be discussed at the event — and over the next few weeks we’ll be publishing a number of those essays. This first one comes from Professor Eric Goldman, who put together the conference, explaining the rationale behind the event and this series of essays.
Many user-generated content (UGC) services aspire to build scalable businesses where usage and revenues grow without increasing headcount. Even with advances in automated filtering and artificial intelligence, this goal is not realistic. Large UGC databases require substantial human intervention to moderate anti-social and otherwise unwanted content and activities. Despite policymakers’ often-misguided assumptions, problematic content usually does not have flashing neon signs saying “FILTER ME!” Instead, humans must find and remove that content, especially in borderline cases, where machines can’t make sufficiently nuanced judgments.
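To make that division of labor concrete, here is a deliberately toy sketch (in Python) of the kind of threshold-based triage being described: an automated classifier acts alone on clear-cut cases, while everything in the gray zone is queued for human moderators. Every name and threshold here is hypothetical, invented for illustration, and not drawn from any actual service’s systems.

```python
# Hypothetical moderation triage: machines handle the obvious cases,
# humans handle the borderline ones. Thresholds are invented for illustration.

REMOVE_THRESHOLD = 0.95  # classifier is confident enough to act on its own
REVIEW_THRESHOLD = 0.60  # anything between the two needs human judgment

def triage(item: str, score: float, review_queue: list) -> str:
    """Route one piece of user-generated content by classifier score."""
    if score >= REMOVE_THRESHOLD:
        return "removed"                # clear-cut violation, no human needed
    if score >= REVIEW_THRESHOLD:
        review_queue.append(item)       # borderline: machines can't make the call
        return "pending human review"
    return "published"                  # clearly acceptable

if __name__ == "__main__":
    queue: list = []
    for item, score in [("spam link farm", 0.99),
                        ("heated political rant", 0.72),
                        ("cat photo", 0.03)]:
        print(f"{item} -> {triage(item, score, queue)}")
    print("awaiting human review:", queue)
```

Note that all of the hard questions this series is about live in that middle branch: where the thresholds sit, who the human reviewers are, and what policies guide them.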
At the largest UGC services, the number of people working on content moderation is eye-popping. By 2018, YouTube will have 10,000 people on its “trust & safety teams.” Facebook’s “safety and security team” will grow to 20,000 people in 2018.
Who are these people? What exactly do they do? How are they trained? Who sets the policies about what content the service considers acceptable?
We have surprisingly few answers to these questions. Occasionally, companies have discussed these topics in closed-door events, but very little of this information has been made public.
This silence is unfortunate. A UGC service’s decision to publish or remove content can have substantial implications for individuals and the community, yet we lack the information to understand how those decisions are made and by whom. Furthermore, the silence has inhibited the development of industry-wide “best practices.” UGC services can learn a lot from each other, but only if they start sharing information publicly.
On Friday, a conference called “Content Moderation and Removal at Scale” will take place at Santa Clara University. (The conference is sold out, but we will post recordings of the proceedings, and we hope to make a live-stream available). Ten UGC services will present “facts and figures” about their content moderation operations, and five panels will discuss cutting-edge content moderation issues. For some services, this conference will be the first time they’ve publicly revealed details about their content moderation operations. Ideally, the conference will end the industry’s norm of silence.
In anticipation of the conference, we assembled ten essays from conference speakers discussing various aspects of content moderation. These essays provide a sample of the conversation we anticipate at the conference. Expect to hear a lot more about content moderation operational issues in the coming months and years.
Eric Goldman is a Professor of Law, and Co-Director of the High Tech Law Institute, at Santa Clara University School of Law. He has researched and taught Internet Law for over 20 years, and he blogs on the topic at the Technology & Marketing Law Blog.
Filed Under: companies, content moderation, filtering, intermediary liability, internet platforms, moderation
Surveillance Software Company Gamma Found To Have Violated Human Rights; Receives Unprecedented Slap On The Wrist
from the critical-decisions dept
As Techdirt has reported on the increasingly active world of commercial spyware, one name in particular has cropped up several times: Gamma, with its FinFisher suite of spyware products. In October last year, we reported that Privacy International had filed a criminal complaint against the company with the National Cyber Crime Unit of the UK’s National Crime Agency. There’s no update on that move, but it seems that a parallel action has had more success (pdf):
> British-German surveillance company Gamma has been condemned by a human rights watchdog for its failure to adhere to human rights and due diligence standards, after a two-year investigation into the company’s sale of surveillance technology to Bahrain.
Here’s what Privacy International says was happening in Bahrain:
> The complaint alleged that Gamma sold its notorious FinFisher intrusion software product to Bahrain as early as 2009, after which time it was used by the Bahraini government to violate the human rights of three Bahraini nationals and human rights activists, Ala’a Shehabi, Husain Abdulla and Shehab Hashem.
You’re probably wondering what the penalty is if you are found in breach of human rights in this way — clearly a serious matter. Well, here it is:
> The Organisation for Economic Cooperation and Development’s UK National Contact Point (“NCP”) concluded today that Gamma International should make changes to its business practices in order to ensure that in the future it respects the human rights of those affected by the surveillance technologies it sells.
Yes, you are told to do better next time. However, looking at things more positively, Privacy International points out:
> Today’s decision is the first time that the OECD has found a company selling surveillance technologies to be in violation of human rights guidelines, and one of the most critical decisions ever issued by the OECD. In it, the NCP sets out in strong terms that Gamma has no human rights policies and due diligence processes that would protect against the abusive use of its products.
In other words, just as with the recent court victories against the UK government over its surveillance activities, what’s important here is not so much the punishment — or lack of it — as the fact that, for the first time, a company selling invasive surveillance tools was condemned in this way. At the very least, it puts such companies on notice that they are being watched and will be hauled up before these kinds of bodies for public shaming. Well, it’s a start.
Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+
Filed Under: bahrain, companies, finfisher, human rights, privacy, surveillance
Companies: gamma, privacy international
Forbes Praises YouTube Censoring Steven Sotloff Beheading Video
from the getting-it-wrong dept
Following the horrific actions of ISIS/ISIL, in which the group beheaded American journalist James Foley and plastered the video across online forums like Twitter and YouTube, I argued that it is important that the American public be given the chance to repudiate the aim of the video: paralyzing us with fear. Adding to that thought, Glenn Greenwald argued that the reason one must fight against censorship in the most egregious of speech cases is that such cases are often where the limitation of speech is legitimized. While this may not be a First Amendment consideration, since those sites are not affiliated with the government, it would be a mistake to suggest that free speech is limited as a concept to that narrow legal definition. Free and open speech is an ideal, one that is codified into law in some places, and one that enjoys a more relaxed but important status within societal norms.
I can only assume it’s a lack of understanding of both arguments above that has led one Forbes writer to rush to praise YouTube for taking down the latest ISIS/ISIL video. You’ve almost certainly heard that another American has been beheaded at the hands of civilization’s enemy, yet you’ll have a much harder time finding the video of Steven Sotloff’s death on YouTube this time around. Jeff Bercovici suggests this is a good thing.
> With 100 hours of new footage uploaded every minute, YouTube says it doesn’t, and couldn’t, prescreen content, relying on users to flag violations. In this case, its monitors were, unfortunately, expecting the Sotloff video to be posted after weeks of threats by his captors and a widely circulated video plea by his mother to spare his life. That readiness allowed them to remove the video and shut down the account that posted it within hours.
This is how you get an American public uninformed about the brutality of groups like ISIS/ISIL. It’s how you legitimize terror groups who themselves wish to impose limitations on the types of things the people under their rule are allowed to see and do. It’s the start of refusing the American public the opportunity to witness the full story. And that last part is especially egregious in a time and place where images rule the news cycle. Here the public is inundated with the story of an American journalist being murdered at the hands of a group that considers that public a target for violence, and yet the public isn’t even given the opportunity to see the images at hand.
This, of course, isn’t to argue that people should be forced to watch the brutality. But, as I argued before, denying the American people the opportunity to disabuse ISIS/ISIL of the notion that they can scare us into inaction is something we shouldn’t stand for. YouTube can do this, but it shouldn’t, and it certainly shouldn’t be praised for it.
> YouTube, on the other hand, has given itself more latitude to make judgement calls by basing its policies on common sense rather than First Amendment absolutism… For tech companies to embrace the principle of free expression is laudable — but they should also leave themselves the maneuverability to deal with bad actors who care nothing for that or any other civilized value.
This misunderstands the most important value of free speech: allowing the evil in the world to identify itself. Once we start down the road of disappearing the speech we deem to have no value, we open the door to alternative interpretations of the value of a whole host of other speech. Censoring the bad actors doesn’t make them go away; it only refuses to shine the public light on them. It keeps people from being able to confront the horrible reality that exists and the group that wants to do us harm. That can’t be allowed to continue.
Filed Under: censorship, companies, free speech, isis, james foley, jeff bercovici, steven sotloff, youtube
Companies: google, youtube
One More Benefit From Snowden: Companies No Longer Lulled Into Helping NSA Without Legal Basis
from the sunshine-does-amazing-things dept
One of the earliest Snowden revelations wasn’t just that the big telcos were closely cooperating with the NSA, but that they had sometimes proactively volunteered to hand over much more information than the law required. So far, there has not been any evidence that tech/internet companies were quite so cooperative, but it does seem clear that many were, at the very least, incurious about what the NSA was up to and somewhat apathetic about the actual requests that came in from the NSA. While there were a few companies (not many) that appeared to push back in a few circumstances (again, not many), for the most part, when someone from the federal government showed up with requests, companies were pretty quick to comply.
To some extent, you can understand why: not only is it easy to assume that government officials demanding information from you have some sort of serious reason for doing so, but all of it happened in secret — meaning that the “benefits” of pushing back often seemed slight, while the costs could be tremendous.
Ed Snowden changed that cost-benefit equation in a big, big way.
And it’s most obvious in this simple way: companies are now proactively doing everything possible to counteract the NSA, realizing that they actually have to think about the impact on their customers when (not if) these programs become public. What used to be a simple relationship, with a lot of help from various companies, has changed into something much closer to a directly adversarial one.
> As fast as it can, Google is sealing up cracks in its systems that Edward J. Snowden revealed the N.S.A. had brilliantly exploited. It is encrypting more data as it moves among its servers and helping customers encode their own emails. Facebook, Microsoft and Yahoo are taking similar steps.
>
> After years of cooperating with the government, the immediate goal now is to thwart Washington — as well as Beijing and Moscow. The strategy is also intended to preserve business overseas in places like Brazil and Germany that have threatened to entrust data only to local providers.
>
> [….]
>
> A year after Mr. Snowden’s revelations, the era of quiet cooperation is over. Telecommunications companies say they are denying requests to volunteer data not covered by existing law. AT&T, Verizon and others say that compared with a year ago, they are far more reluctant to cooperate with the United States government in “gray areas” where there is no explicit requirement for a legal warrant.
There is still a long way to go, but just the fact that companies now have to take into account “how will this look when splashed across the internet” means that they’re already going much, much further in protecting the privacy of their users and customers.
Filed Under: companies, ed snowden, nsa, surveillance, voluntary assistance, warrants
Companies: at&t, facebook, google, microsoft, verizon, yahoo
Snapchat Comes In Dead Last On EFF's Privacy Protecting List; Just Days After Getting Spanked By FTC
from the what-privacy dept
Snapchat is often pitched as a more “private” alternative to other messaging apps, considering that a key part of its appeal is that the messages/images you send to others quickly disappear. For years, people have pointed out that Snapchat was overstating the reality when making those claims, and last week the FTC spanked the company for misleading its users about the privacy and security of their messages. And, this week, the “privacy” claims of Snapchat get another black eye as the EFF’s latest Who Has Your Back? chart has come out, detailing how various services deal with protecting your privacy from the US government. Want to know who came in dead last? Snapchat.
Just a couple weeks ago, we had noted that a bunch of tech companies had been improving their policies in an attempt to score better on this annual report from the EFF. And, indeed, as you look down the full list, you see a lot more stars than when EFF started this list. Back then, lots of companies only got one star (or less!), though the categories weren’t exactly the same (EFF has added a few over the years).
Want to know just how bad Snapchat is? Even AT&T and Comcast score better. Snapchat was the only company with one star. Amazon and AT&T only got two. Comcast (along with Foursquare and Myspace) had three. At the top of the list, Apple, CREDO Mobile, Dropbox, Facebook, Google, Microsoft, Sonic.net, Twitter and Yahoo all got five stars. Some might question parts of this, given stories of things like Microsoft changing Skype to grant greater government access, but on the specific categories that EFF judges for the ratings, the ratings appear to be accurate.
Of course, to be fair, one of the categories is whether or not the company “fights for users’ privacy rights in courts.” That’s an important measure, but it’s also a conditional one. All of the other categories can be satisfied by any company of its own volition, but a company can’t fight in court if no opportunity arises to protect its users’ privacy there. Either way, it’s good to see that the EFF chart is having an impact in getting companies to be more aggressive in protecting the privacy of their users from the government. But, really, shame on Snapchat for positioning itself as a privacy option when it appears to do very little to actually protect people’s privacy.
Filed Under: companies, government, privacy, surveillance, who's got your back
Companies: eff, snapchat
UK Police And Companies Will Have Access To Database Of All England's Medical Records
from the privacy-disaster-waiting-to-happen dept
The UK government is currently building a database called care.data that will contain all of England’s medical records. It’s being promoted as providing valuable information for healthcare management and medical researchers that will lead to improved treatment.
Given the extremely sensitive nature of the material that will be stored, you might have expected this to be opt-in, but instead the UK government has chosen to make it opt-out. Not only that, but the relatively sparse information about what was happening was sent out as a generic, unaddressed letter that differed little from the dozens of junk mail pieces most households receive each week, and that failed to include any easy-to-use opt-out form.
This has fuelled suspicions that the UK government is making it hard to opt out in order to keep the numbers enrolled in the database as high as possible. More recently, good reasons why people might want to avoid the scheme have emerged. For example, it was revealed that as well as being provided to research scientists, the database could also be bought by companies:
> Drug and insurance companies will from later this year be able to buy information on patients — including mental health conditions and diseases such as cancer, as well as smoking and drinking habits — once a single English database of medical data has been created.
Now we learn that the UK police will also have access:
> The database that will store all of England’s health records has a series of “backdoors” that will allow police and government bodies to access people’s medical data.
As the UK MP David Davis told the Guardian:
> “The idea that police will be able to request information from a central database without a warrant totally undermines a long-held belief in the confidentiality of the doctor-patient relationship,” he said.
That means that, in addition to the risk of a privacy disaster of unprecedented proportions if the consolidated health data is lost or stolen as a result of being passed to third parties (as has already happened with a similar but smaller database), patients may be less likely to confide in their doctors, knowing that the details will end up in a database sold to companies and freely available to the police. Nice work, Mr Cameron.
Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+
Filed Under: companies, database, healthcare data, medical records, nhs, police, privacy, uk
Why, Yes, Of Course The NSA Spying Involves More Companies Than Already Listed
from the them-too?-the-club-is-getting-bigger dept
This shouldn’t be a surprise to anyone, but the NSA’s spying on Verizon call logs was not, of course, limited to just Verizon. The WSJ has confirmed that AT&T and Sprint are both under similar orders. That article also says that a number of internet firms and credit card companies are participating as well.
And, of course, as the story gets bigger and bigger, we’re now getting quotes from ex-government officials saying that even they are surprised at how comprehensive the surveillance appears to be.
> “It looks from what I’ve seen to be larger than anything I thought we were doing,” says Paul Rosenzweig, author of a recent book, Cyber Warfare.
>
> Rosenzweig should know. As a former acting assistant secretary at the Department of Homeland Security, he was one of those people given the kind of Top Secret / Sensitive Compartmented Information clearances needed to work on any project as sensitive as this. But, he says, “I wasn’t read in on this.”
I heard the same basic thing from another ex-government official who didn’t want to be named, but who had some knowledge of these kinds of programs back at the beginning, in the 2008/2009 timeframe: if what’s being said is true, the program has greatly expanded from where it originated.
Filed Under: companies, nsa, surveillance
Companies: at&t, sprint, verizon
Former DHS Head On Google Glass: Intrusive Surveillance Is Bad — If It's A Corporation Doing It
from the you-know,-it's-completely-possible-that-BOTH-are-bad dept
With Google’s eyewear seemingly headed to the general public in the not-too-distant future, many people have expressed concern about being recorded against their wishes. As Mike pointed out, there’s a bit of a backlash/moral panic on display right now, which has resulted in a petition requesting the White House ban the devices. He also mentioned briefly that former DHS head Michael Chertoff had written an editorial about the privacy implications of Google Glass.
Chertoff analyzes some of the privacy implications raised by Google Glass but, considering his former position in the DHS and his current role as the head of The Chertoff Group, a “global security advisory firm,” this editorial comes off as one-sided and tone deaf. Why would someone who seemingly has no concern about government intrusion into people’s privacy care about a corporation’s move onto the same turf? Bruce Schneier addresses this dissonance briefly in his post linking to Chertoff’s editorial.
> It’s not unusual for government officials — the very people we disagree with regarding civil liberties issues — to agree with us on consumer privacy issues.
Deep down, we’re all human, I suppose. Or, at the very least, we have common enemies. Chertoff is concerned about the potential of a corporation collecting and controlling this massive amount of data. But is his concern genuine? Schneier addresses that as well.
> But don’t forget that this person advocated for full-body scanners at airports while on the payroll of a scanner company.
Chertoff gets off on the wrong foot by comparing Google Glass with surveillance drones, referring to government and law enforcement’s “acceptable” surveillance while trying to paint a horrific portrait of a sky filled with corporate surveillance.
> Imagine a world in which every major company in America flew hundreds of thousands of drones overhead, 24 hours a day, seven days a week, 365 days a year, collecting data on what Americans were doing down below. It’s a chilling thought that would engender howls of outrage.
>
> Now imagine that millions of Americans walk around each day wearing the equivalent of a drone on their head: a device capable of capturing video and audio recordings of everything that happens around them. And imagine that these devices upload the data to large-scale commercial enterprises that are able to collect the recordings from each and every American and integrate them together to form a minute-by-minute tracking of the activities of millions.
There’s really no need to imagine any part of this scenario. Law enforcement entities all over the US are purchasing drones and our government is using this same equipment to patrol borders and keep tabs on large crowds.
There are legitimate privacy concerns, but Chertoff’s background distracts from his message, especially when he himself brings up drone usage that likely concerns Americans more than privacy invasions from Glass wearers.
> So, who owns and what happens to the user’s data? Can the entire database be mined and analyzed for commercial purposes? What rules will apply when law enforcement seeks access to the data for a criminal or national security investigation? For how long will the data be retained?
These are the questions that should be raised, and Google and its competitors should probably seek some answers before turning interactive eyewear into a tool for second-hand government surveillance. More importantly, the government itself should probably answer a few of these questions. What are the rules that apply when law enforcement (or larger security agencies) seeks to obtain this handily compiled data? As it stands right now, most of this process is shrouded in secrecy, and attempts to pry some answers out of the government’s hands have been rebuffed via claims of “national security” or in the form of redacted-to-abstraction FOIA “responses.”
The length of data retention should be addressed as well. As Chertoff points out, Google will probably handle these questions with a lengthy Terms of Service agreement, one that most users will never read until something undesirable happens. A convoluted TOS is a company’s best friend, but at least the information is freely available. The same can’t be said for law enforcement and government entities.
> Ubiquitous street video streaming will capture images of many people who haven’t volunteered to have their images collected, collated and analyzed. Even those who might be willing to forgo some degree of privacy to enhance national security should be concerned about a corporate America that will have an unrestricted continuous video record of millions.
Yes, this is a definite downside to Google Glass. But Chertoff muffs this by worrying that even good citizens (those willing to “forgo some privacy to enhance [ha!] national security”) won’t be thrilled that any citizen could be “taping” them at any time. Once again, we’re contrasting the actions of a corporation with the actions of government and law enforcement. But Chertoff fails to see how both can be undesirable. Instead, he frames Google’s product as an encroachment but paints government surveillance as, at worst, a very necessary evil.
> We need to consider what rights consumers have, and what rights nonparticipant third parties should have.
Sure, consumers should have rights, “nonparticipant third parties” especially. Unless they’re American citizens being increasingly surveilled by the “good guys.” This huge number of “nonparticipant third parties” doesn’t even warrant a mention by Chertoff.
Chertoff has a suggestion for a fix, but it’s nothing more than a power grab presented as a “solution.”
> Maybe the market can take care of this problem. But the likely pervasiveness of this type of technology convinces me that government must play a regulatory role.
A regulatory role does nothing more than give the government (and law enforcement) an opportunity to insert a “back door,” either via coding changes or by placing themselves in a middleman position, much in the way they have with telcos and ISPs. There are a lot of unintended consequences and perverse incentives that go hand-in-hand with government regulation, and no one should be in a hurry to unpack those.
Finally, Chertoff comes full circle back to his strained starting point: drones.
> The new data collection platforms right in front of us are much more likely to affect our lives than is the prospect of drones overhead surveilling American citizens.
If there’s a more noticeable effect from Google Glass, it’s only because it’s a consumer product the public can access (or be subjected to). Drones are an abstraction. The general public is severely limited in its response to state-deployed drones. A response to a consumer product can be felt immediately. If you feel uncomfortable around a Google Glass wearer, you have a few options (ask the wearer to take them off or leave/exit the “filming” area). If you feel uncomfortable being surveilled by eyes in the sky, well, you can set any number of lengthy plans in motion, but it’s unlikely your concerns will be addressed, much less result in curtailed surveillance.
While it’s nice to see Chertoff recognizes the privacy issues inherent in a consumer product like this, it’s rather annoying to see him treat government/law enforcement surveillance as something far less problematic.
Filed Under: companies, google glass, government, michael chertoff, privacy, surveillance
Companies: google
As Congress Debates CISPA, Companies Admit No Real Damage From Cyberattacks
from the the-truth-is-so-inconvenient dept
Since the beginning of the cybersecurity FUDgasm from Congress, we’ve been asking for proof of the actual problem. All we get are stories about how airplanes might fall from the sky, but not a single, actual example of any serious problem. Recently, some of the rhetoric shifted to how it wasn’t necessarily planes falling from the sky but Chinese hackers eating away at our livelihoods by hacking into computers to get our secrets and destroy our economy. Today, Congress is debating CISPA (in secret) based on this assumption. There’s just one problem: it’s still not true.
The 27 largest U.S. companies to report cyberattacks have now admitted to the SEC that those attacks were basically meaningless, doing little to no damage.
> The 27 largest U.S. companies reporting cyber attacks say they sustained no major financial losses, exposing a disconnect with federal officials who say billions of dollars in corporate secrets are being stolen.
>
> MetLife Inc., Coca-Cola Co. (KO), and Honeywell International Inc. were among the 100 largest U.S. companies by revenue to disclose online attacks in recent filings with the Securities and Exchange Commission, according to data compiled by Bloomberg. Citigroup Inc. (C) reported “limited losses” while the others said there was no material impact.
So what’s this all really about? It goes back to what we said from the very, very beginning. This is all FUD, engineered by defense contractors looking for a new way to charge the government tons of money, combined with a willing government that sees this as an opportunity to further take away the public’s privacy by claiming that it needs to see into corporate networks to prevent these attacks.
If this was a real problem, wouldn’t we see at least some evidence?
Filed Under: cispa, companies, cybersecurity, harm, threats
Last Chance To Help With Some Research On The Impact Of Patent Trolls
from the go-for-it dept
A few weeks ago, we wrote about some research being done by Colleen Chien, a law professor, concerning the impact of patent trolls on startups and tech companies. If you represent a tech company that has had to deal with patent trolls, please take a few minutes to answer these survey questions. The research results that come out of this should be quite helpful in understanding the impact of trolls on innovative startups.
Filed Under: companies, innovation, patent trolls, patents, research