w3c – Techdirt

How Sketchy Data Scavengers Are Using Hatred Of 'Big Tech' To Attack Plans To Make The Web More Private

from the be-careful-what-you-wish-for dept

We warned that this was likely back when Google announced plans to phase out third-party cookies in Chrome (something all the other major browser makers had already done): that the move would be used to attack Google as anti-competitive, even as it was pro-privacy. Privacy and competition do not need to be in conflict, but they can be. And what’s happening now is that sketchier ad companies are abusing the constant drumbeat of fear over “Big Tech” to attack privacy protections. That behind-the-scenes story is getting missed, as people are more focused on the breaking news that Google has decided to push back its move away from third-party cookies for two more years.
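For readers unfamiliar with the mechanics, the privacy problem with third-party cookies is simple: a tracker embedded on many different sites reads back the same cookie on each of them, letting it stitch a user's visits into one profile. Here's a minimal, hypothetical Python simulation of that flow (all domain names invented for illustration):

```python
import uuid

class Browser:
    """Toy browser: stores one cookie per domain, as real browsers do."""
    def __init__(self):
        self.cookie_jar = {}  # domain -> cookie value

    def visit(self, site, embedded_trackers):
        """Visit a first-party site; each embedded third party gets a request
        carrying that third party's cookie (created on first contact)."""
        log = []
        for tracker in embedded_trackers:
            # The tracker's cookie is keyed by the TRACKER's domain, so it is
            # sent along no matter which first-party site embeds the tracker.
            cookie = self.cookie_jar.setdefault(tracker, str(uuid.uuid4()))
            log.append((tracker, cookie, site))
        return log

browser = Browser()
events = []
events += browser.visit("news.example", ["ads.tracker.example"])
events += browser.visit("shop.example", ["ads.tracker.example"])

# The tracker sees the SAME id on both sites: a cross-site profile.
ids = {cookie for _, cookie, _ in events}
print(len(ids))  # 1
```

Blocking third-party cookies breaks exactly this linkage: the embedded tracker no longer gets a shared identifier across different first-party sites, which is why companies built on that linkage are fighting the change.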

Issie Lapowsky, over at Protocol, has a must-read story on how sketchy ad and data brokers have crashed the W3C, riding a wave of anti-big-tech sentiment to push for worse privacy outcomes for everyone (of course, Facebook is on the wrong side of this as well; it’s basically all the sketchy companies and Facebook against all the other companies). It’s quite a story.

On one side are engineers who build browsers at Apple, Google, Mozilla, Brave and Microsoft. These companies are frequent competitors that have come to embrace web privacy on drastically different timelines. But they’ve all heard the call of both global regulators and their own users, and are turning to the W3C to develop new privacy-protective standards to replace the tracking techniques businesses have long relied on.

On the other side are companies that use cross-site tracking for things like website optimization and advertising, and are fighting for their industry’s very survival. That includes small firms like Rosewell’s, but also giants of the industry, like Facebook.

Much of the story focuses on James Rosewell, who runs a “data analytics” company, and joined the W3C apparently just to throw a wrench in any real effort to beef up privacy protections as a core element of the internet.

Rosewell has become one of this side’s most committed foot soldiers since he joined the W3C last April. Where Facebook’s developers can only offer cautious edits to Apple and Google’s privacy proposals, knowing full well that every exchange within the W3C is part of the public record, Rosewell is decidedly less constrained. On any given day, you can find him in groups dedicated to privacy or web advertising, diving into conversations about new standards browsers are considering.

Rather than asking technical questions about how to make browsers’ privacy specifications work better, he often asks philosophical ones, like whether anyone really wants their browser making certain privacy decisions for them at all. He’s filled the W3C’s forums with concerns about its underlying procedures, sometimes a dozen at a time, and has called upon the W3C’s leadership to more clearly articulate the values for which the organization stands.

His exchanges with other members of the group tend to have the flavor of Hamilton and Burr’s last letters: overly polite, but pulsing with contempt. “I prioritize clarity over social harmony,” Rosewell said.

Rosewell and his companions are leaning hard on the idea that if privacy is actually protected, then only Google, Apple, and Microsoft will control the internet. But that’s bullshit — and thankfully, the privacy-focused browser Brave has people willing to call it out as such:

“They use cynical terms like: ‘We’re here to protect user choice’ or ‘We’re here to protect the open web’ or, frankly, horseshit like this,” said Pete Snyder, director of privacy at Brave, which makes an anti-tracking browser. “They’re there to slow down privacy protections that the browsers are creating.”

The article also notes that these more sketchy data brokers (and Facebook) are not just looking to derail the more privacy protective approaches of the browser makers, but to create new standards that would enshrine the surveillance aspect of the web. And, again, they’re playing on the general sentiment against “big tech” to try to push that across.

That’s only the beginning of the article; it then goes into great detail on the history here (including the fight over “Do Not Track”), and on how Rosewell and his colleagues are trying to throw sand in the gears of the W3C to wipe out any real movement towards better privacy standards. And also how Facebook is quietly cheering Rosewell and company on.

It may sound like a boring and wonky discussion, but it’s hugely important for understanding the future of the open internet and privacy, and how competition questions are impacted by these things. All of this is important, but the various issues (privacy, competition, content moderation, etc.) are often looked at as separate buckets. But bad choices in one will inevitably impact the others as well. For those who care about privacy online, this story (which goes into a lot more detail) is a must-read.

Filed Under: 3rd party cookies, data brokers, privacy, standards, tracking, web browsing
Companies: apple, facebook, google, w3c

EFF Officially Appeals Tim Berners-Lee Decision On DRM In HTML

from the last-ditch-effort dept

Last week, we wrote about the unfortunate and short-sighted decision by Tim Berners-Lee to move forward with DRM in HTML. To be more exact, the move forward is on Encrypted Media Extensions in HTML, which will allow third-party DRM to plug easily into the web. It’s been a foregone conclusion that EME was going to get approved, but there was a smaller fight about whether or not W3C would back a covenant not to sue the security and privacy researchers who would be investigating (and sometimes breaking) that encryption. Due to massive pushback from the likes of the MPAA and (unfortunately) Netflix, Tim Berners-Lee rejected this covenant proposal.

In response, W3C member EFF has now filed a notice of appeal on the decision. The crux of the appeal is that the claimed benefits of EME that Berners-Lee put forth won’t actually materialize without the freedom of security researchers to audit the technology, and that the wider W3C membership should have been able to vote on the issue. This appeals process has never been used before at the W3C, even though it’s officially part of its charter, so no one’s entirely sure what happens next.

The appeal is worth reading so we’re reposting a big chunk of it here:

1. The enhanced privacy protection of a sandbox is only as good as the sandbox, so we need to be able to audit the sandbox.

The privacy-protecting constraints the sandbox imposes on code only work if the constraints can’t be bypassed by malicious or defective software. Because security is a process, not a product and because there is no security through obscurity, the claimed benefits of EME’s sandbox require continuous, independent verification in the form of adversarial peer review by outside parties who do not face liability when they reveal defects in members’ products.

This is the norm with every W3C recommendation: that security researchers are empowered to tell the truth about defects in implementations of our standards. EME is unique among all W3C standards past and present in that DRM laws confer upon W3C members the power to silence security researchers.

EME is said to be respecting of user privacy on the basis of the integrity of its sandboxes. A covenant is absolutely essential to ensuring that integrity.

2. The accessibility considerations of EME omits any consideration of the automated generation of accessibility metadata, and without this, EME’s accessibility benefits are constrained to the detriment of people with disabilities.

It’s true that EME goes further than other DRM systems in making space available for the addition of metadata that helps people with disabilities use video. However, as EME is intended to restrict the usage and playback of video at web-scale, we must also ask ourselves how metadata that fills that available space will be generated.

For example, EME’s metadata channels could be used to embed warnings about upcoming strobe effects in video, which may trigger photosensitive epileptic seizures. Applying such a filter to (say) the entire corpus of videos available to Netflix subscribers who rely on EME to watch their movies would safeguard people with epilepsy from risks ranging from discomfort to severe physical harm.

There is no practical way in which a group of people concerned for those with photosensitive epilepsy could screen all those Netflix videos and annotate them with strobe warnings, or generate them on the fly as video is streamed. By contrast, such a feat could be accomplished with a trivial amount of code. For this code to act on EME-locked videos, EME’s restrictions would have to be bypassed.
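The EFF's claim that this "could be accomplished with a trivial amount of code" holds up: flagging potentially seizure-inducing strobes is, at its core, just watching for rapid, large swings in frame brightness. A rough, hypothetical sketch in Python (the thresholds here are invented for illustration; real accessibility guidelines such as WCAG's "three flashes" criterion define flash limits far more precisely):

```python
def mean_luma(frame):
    """Average brightness of a frame given as a flat list of 0-255 pixel values."""
    return sum(frame) / len(frame)

def flash_warnings(frames, fps, delta=80, flashes_per_sec=3):
    """Return timestamps (seconds) where brightness swings exceed the allowed
    flash rate -- a crude stand-in for a real strobe detector."""
    lumas = [mean_luma(f) for f in frames]
    # A "flash" is a large luma swing between consecutive frames.
    flashes = [i for i in range(1, len(lumas))
               if abs(lumas[i] - lumas[i - 1]) >= delta]
    warnings = []
    window = fps  # one-second sliding window, measured in frames
    for i in flashes:
        recent = [j for j in flashes if i - window < j <= i]
        if len(recent) > flashes_per_sec:
            warnings.append(i / fps)
    return warnings

# Synthetic clip: black/white alternation is a worst-case strobe.
dark, bright = [0] * 16, [255] * 16
clip = [dark, bright] * 12          # 24 frames at 24 fps = 1 second
print(bool(flash_warnings(clip, fps=24)))  # True
```

The point of the EFF's example is that code like this has to decode the video frames to inspect them, and that is exactly the step EME's restrictions forbid without a license or a covenant.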

It is legal to perform this kind of automated accessibility analysis on all the other media and transports that the W3C has ever standardized. Thus the traditional scope of accessibility compliance in a W3C standard — “is there somewhere to put the accessibility data when you have it?” — is insufficient here. We must also ask, “Has W3C taken steps to ensure that the generation of accessibility data is not imperiled by its standard?”

There are many kinds of accessibility metadata that could be applied to EME-restricted videos: subtitles, descriptive tracks, translations. The demand for, and utility of, such data far outstrips our whole species’ ability to generate it by hand. Even if we all labored for all our days to annotate the videos EME restricts, we would but scratch the surface.

However, in the presence of a covenant, software can do this repetitive work for us, without much expense or effort.

3. The benefits of interoperability can only be realized if implementers are shielded from liability for legitimate activities.

EME only works to render video with the addition of a nonstandard, proprietary component called a Content Decryption Module (CDM). CDM licenses are only available to those who promise not to engage in lawful conduct that incumbents in the market dislike.

For a new market entrant to be competitive, it generally has to offer a new kind of product or service, a novel offering that overcomes the natural disadvantages that come from being an unknown upstart. For example, Apple was able to enter the music industry by engaging in lawful activity that other members of the industry had foresworn. Likewise Netflix still routinely engages in conduct (mailing out DVDs) that DRM advocates deplore, but are powerless to stop, because it is lawful. The entire cable industry — including Comcast — owes its existence to the willingness of new market entrants to break with the existing boundaries of “polite behavior.”

EME’s existence turns on the assertion that premium video playback is essential to the success of any web player. It follows that new players will need premium video playback to succeed — but new players have never successfully entered a market by advertising a product that is “just like the ones everyone else has, but from someone you’ve never heard of.”

The W3C should not make standards that empower participants to break interoperability. By doing so, EME violates the norm set by every other W3C standard, past and present.

It’s unclear to me why Tim Berners-Lee has been so difficult on this issue, as he’s been so good for so long on so many other issues. I understand that someone you usually agree with won’t agree with you on everything, but this seems like a very weird hill to die on.

Filed Under: appeal, drm, eme, html, tim berners-lee
Companies: eff, w3c

Tim Berners-Lee Sells Out His Creation: Officially Supports DRM In HTML

from the this-is-bad dept

For years now, we’ve discussed the various problems with the push (led by the MPAA, but with some help from Netflix) to officially add DRM to the HTML 5 standard. Now, some will quibble with even that description, as supporters of this proposal insist that it’s not actually adding DRM, but rather that “Encrypted Media Extensions” (EME) is merely a system by which DRM might be implemented. That’s a bunch of semantic hogwash. EME is bringing DRM directly into HTML and killing the dream of a truly open internet. Instead, we get a functionally broken internet. Despite widespread protests and concerns about this, W3C boss (and inventor of the Web) Tim Berners-Lee has signed off on the proposal. Of course, given the years of criticism over this, that signoff has come with a long and detailed defense of the decision… along with a tiny opening to stop it.

There are many issues underlying this decision, but there are two key ones that we want to discuss here: whether EME is necessary at all and whether or not the W3C should have included a special protection for security researchers.

First, the question of whether or not EME even needs to be in HTML at all. Many — even those who dislike DRM — have argued that it was kind of necessary. The underlying argument here is that certain content producers would effectively abandon the web without EME being in HTML5. However, this argument rests on the assumption that the web needs those content producers more than those content producers need the web — and I’m not convinced that’s an accurate portrayal of reality. It is fair to note that, especially with the rise of smart devices from phones to tablets to TVs, you could envision a world in which the big content producers “abandoned” the web and only put their content in proprietary DRM’d apps. And maybe that does happen. But my response to that is… so what? Let them make that decision and perhaps the web itself is a better place. And plenty of other, smarter, more innovative content producers can jump in and fill the gaps, providing all sorts of cool content that doesn’t require DRM, until those with outdated views realize they’re missing out. Separately, I tend to agree with Cory Doctorow’s long-held view that DRM is an attack on basic computing principles — one that sets up the user as a threat, rather than the person who owns the computer in question. That twisted setup leads to bad outcomes that create harm. That view, however, is clearly not in the majority, and many people admitted it was a foregone conclusion that some form of EME would move forward.

The second issue is much more problematic. A bunch of W3C members had made a clear proposal that if EME is included, there should be a covenant that W3C members will not sue security researchers under Section 1201 of the DMCA should they crack any DRM. There is no reason not to support this. Security researchers should be encouraged to be searching for vulnerabilities in DRM and encryption in order to better protect us all. And, yet, for reasons that no one can quite understand, the W3C has rejected multiple versions of this proposal, often with little discussion or explanation. The final decision from Tim Berners-Lee on this is basically “sure a covenant not to sue would have been nice, and we think companies shouldn’t sue, but… since this wasn’t raised at the very beginning, we’re not supporting it”:

We recommend organizations involved in DRM and EME implementations ensure proper security and privacy protection of their users. We also recommend that such organizations not use the anti-circumvention provisions of the Digital Millennium Copyright Act (DMCA) and similar laws around the world to prevent security and privacy research on the specification or on implementations. We invite them to adopt the proposed best practices for security guidelines [7] (or some variation), intended to protect security and privacy researchers. Others might advocate for protection in public policy fora, an area that is outside the scope of W3C, which is a technical standards organization. In addition, the prohibition on “circumvention” of technical measures to protect copyright is broader than copyright law’s protections against infringement, and it is not our intent to provide a technical hook for those paracopyright provisions.

Given that there was strong support to initially charter this work (without any mention of a covenant) and continued support to successfully provide a specification that meets the technical requirements that were presented, the Director did not feel it appropriate that the request for a covenant from a minority of Members should block the work the Working Group did to develop the specification that they were chartered to develop. Accordingly the Director overruled these objections.

This is unfortunate. What’s bizarre is that the supporters of DRM basically refuse to discuss any of this. Even just a few days ago, the Center for Democracy and Technology proposed a last-ditch “very narrow” compromise to protect a limited set of security and privacy researchers (just those examining implementations of W3C specifications for privacy and security flaws). Netflix flat out rejected this compromise, saying that it’s “similar to the proposal” that was made a year ago. Even though it’s not: it was more narrowly focused and designed to respond to whatever concerns Netflix and others had.

The problem here seemed to be that Netflix and the MPAA realized that they had enough power to push this through without needing to protect security researchers, and just decided “we can do it, so fuck it, let’s do it.” And Tim Berners-Lee — who had the ability to block it — caved in and let it happen. The whole thing is a travesty.

Cory Doctorow has a thorough and detailed response to the W3C’s decision that pushes back on many of the claims that the W3C and Berners-Lee have made in support of this decision. Here’s just part of it:

We’re dismayed to see the W3C literally overrule the concerns of its public interest members, security experts, accessibility members and innovative startup members, putting the institution’s thumb on the scales for the large incumbents that dominate the web, ensuring that dominance lasts forever.

This will break people, companies, and projects, and it will be technologists and their lawyers, including the EFF, who will be the ones who’ll have to pick up the pieces. We’ve seen what happens when people and small startups face the wrath of giant corporations whose ire they’ve aroused. We’ve seen those people bankrupted, jailed, and personally destroyed.

This was a bad decision done badly, and Tim Berners-Lee, the MPAA and Netflix should be ashamed. The MPAA breaking the open internet I can understand; it’s what that organization has wanted to do for over a decade. But Netflix should be a supporter of the open internet, rather than an out-and-out detractor.

As Cory notes in his post, there is an appeals process, but it’s never been used before. The EFF and others are exploring it now, but it’s a hail mary process at this point. What a shame.

Filed Under: circumvention, copyright, dmca, drm, eme, html, html 5, research, security, tim berners-lee
Companies: mpaa, netflix, w3c

from the what-a-world dept

Last week, the Copyright Office finally released a report that it had been working on for some time, looking specifically at Section 1201 of the DMCA. In case you’re new around here, or have somehow missed all the times we’ve spoken about DMCA 1201 before, that’s the “anti-circumvention” part of the DMCA. It’s the part that says it’s against copyright law to circumvent (or provide tools to circumvent) any kind of “technological protection measures,” by which it means DRM. In short: getting around DRM or selling a tool that gets around DRM — even if it’s not for the purpose of infringing on any copyrights — is seen as automatically infringing copyright law. This is dumb for a whole host of reasons, many of which we’ve explored in the past. Not only is the law dumb, it’s so dumb that Congress knew that it would create a massive mess for tons of legitimate uses. So it built in an even dumber procedure to try to deal with the fact it passed a dumb law (have you noticed I have opinions on Section 1201?).

Specifically, every three years, people and companies can petition the Copyright Office/Librarian of Congress to “exempt” certain technologies or uses from 1201, saying that it is legal to circumvent the technological protection measures in that case, for the succeeding three years (yes, after three years, the original exemption expires, unless it is renewed). This triennial review process has historically been an (annoying) joke, where people basically have to beg the Copyright Office to let them, say, get around DVD DRM, in order to make documentaries. Or, famously, that time in 2012 when the Librarian of Congress refused to renew the phone unlocking exemption, magically making it illegal to unlock your phone for no clear reason at all. The whole thing is fairly described as a hot mess.

And, it really harms our own security the most.

That’s because security researchers often need these exemptions the most, because they don’t want to be accused of violating copyright law for doing their jobs in figuring out where there are weaknesses and vulnerabilities in various technologies. So, many of the applied for exemptions tend to come from the security community — and sometimes they’re granted, and other times they are not. A year ago, some security researchers (along with the EFF) sued the US government, arguing that 1201 violates the First Amendment, scaring off security researchers, and providing none of the usual defenses against infringement, such as fair use (which the Supreme Court has argued is a necessary First Amendment valve on copyright). That case is still waiting for a judge to rule on early motions (and it’s waiting a long time).

Given all that as background, it’s somewhat fascinating (and marginally surprising) to see that the Copyright Office officially agrees that the 1201 setup totally sucks for security researchers, and it would actually like Congress to fix that. The report specifically recommends expanding the existing “permanent exemption” for certain types of “security testing” to make it more applicable to a wider set of security practices:

… the Office recommends that Congress consider expanding the exemption for security testing under section 1201(j). This could include expanding the definition of security testing, easing the requirement that researchers obtain authorization, and abandoning or clarifying the exemption’s multifactor test for eligibility.

There’s another section in the law for “encryption research” and, again, the Copyright Office recognizes that should be expanded:

The exemption for encryption research under section 1201(g) may benefit from similar revision, including removal of the requirement to seek authorization and clarification or removal of the multifactor test.

For what it’s worth, the report (obviously remembering how it got basically mocked and burned by everyone for removing the cell phone unlocking exemption in 2012) now asks for phone unlocking to be designated a permanent exemption under the law.

These are fairly small changes being sought by the Copyright Office, but it strikes me as somewhat incredible (and very disappointing) that this small bit of enlightenment goes much further than the World Wide Web Consortium’s (W3C) view on DRM and security research. As you may recall, there’s this ongoing battle over DRM in HTML 5. When the W3C refused to block it outright, some members came up with a fairly straightforward no-brainer rule: all members had to agree not to go after security researchers for circumventing the DRM in HTML 5. And the W3C rejected that proposal.

In other words, the Copyright Office — famous for its historically expansionist view of copyright, as well as its general tilt towards supporting Hollywood over everyone else — is now recognizing that it’s obvious that security researchers should have the right to circumvent DRM without violating copyright law, while the W3C — famous for promoting an open web — is against this. This is “up is down, night is day, cats & dogs living together” kind of stuff. Maybe someone should let the W3C know that its position on security researchers and DRM is now more extremist than the Copyright Office’s?

Either way, at the very least, Congress should follow up on this report and expand the exemptions for security research. It doesn’t just help out those researchers, it helps all of us when security researchers are able to do their jobs and help to protect us all.

Filed Under: anti-circumvention, copyright office, dmca, dmca 1201, drm, research, security, triennial review
Companies: w3c

Unesco Says Adding DRM To HTML Is A Very Bad Idea

from the when-unesco-is-against-you... dept

For years now, we’ve written about the years-long effort, led by the MPAA and others, to put DRM directly into the standard for HTML5 (via “Encrypted Media Extensions” or EME), which continues to move forward with Tim Berners-Lee acting as if there’s nothing that can be done about it. It appears that not everyone agrees. Unesco, the United Nations Educational, Scientific and Cultural Organization, has come out strongly against adding DRM to HTML5 in a letter sent to Tim Berners-Lee (found via Boing Boing).

… should Internet browsers become configured to work with EME to act as a framed gateway rather than serving as intrinsically open portals, there could be risks to Rights, to Openness and Accessibility.

Primarily, there is the issue of the Right to seek and receive information. To date, most filtering and blocking of content has been done at the level of the network, whereas the risk now is that this capacity could also become technically effective at the level of the browser. With standardized EME incorporated in the browser, a level of control would cascade to the user interface level. This could possibly undercut the use of circumvention tools to access content that is illegitimately restricted.

While a case can be made for exceptional limitations on accessing certain content, as per international human rights standards such as the International Covenant on Civil and Political Rights, the same human rights standards are clear that this should never be a default setting. Unfortunately, many instances of limitation of access are not legitimate in international standards as they do not meet the criteria of legality, necessity and proportionality, and legitimate purpose, and it would be regrettable if standardized EME could end up reinforcing this unfortunate situation.

One would hope that when even organizations like Unesco are speaking up, the W3C would take a step back from the ledge and reconsider its position.

Filed Under: drm, eme, free speech, human rights
Companies: unesco, w3c

Tim Berners-Lee Endorses DRM In HTML5, Offers Depressingly Weak Defense Of His Decision

from the welcome-to-the-locked-down-web dept

For the last four years, the Web has had to live with a festering wound: the threat of DRM being added to the HTML 5 standard in the form of Encrypted Media Extensions (EME). Here on Techdirt, we’ve written numerous posts explaining why this is a really stupid idea, as have many, many other people. Despite the clear evidence that EME will be harmful to just about everyone — except the copyright companies, of course — the inventor of the Web, and director of the W3C (World Wide Web Consortium), Sir Tim Berners-Lee, has just given his blessing to the idea:

The question which has been debated around the net is whether W3C should endorse the Encrypted Media Extensions (EME) standard which allows a web page to include encrypted content, by connecting an existing underlying Digital Rights Management (DRM) system in the underlying platform. Some people have protested “no”, but in fact I decided the actual logical answer is “yes”. As many people have been so fervent in their demonstrations, I feel I owe it to them to explain the logic.

He does so in a long, rather rambling post that signally fails to convince. Its main argument is defeatism: DRM exists, the DMCA exists, copyright exists, so we’ll just have to go along with them:

could W3C make a stand and just because DRM is a bad thing for users, could just refuse to work on DRM and push back wherever they could on it? Well, that would again not have any effect, because the W3C is not a court or an enforcement agency. W3C is a place for people to talk, and forge consensus over great new technology for the web. Yes, there is an argument made that in any case, W3C should just stand up against DRM, but we, like Canute, understand our power is limited.

But there’s a world of difference between recognizing that DRM exists, and giving it W3C’s endorsement. Refusing to incorporate DRM in HTML5 would send a strong signal that it has no place in an open Internet, which would help other efforts to get rid of it completely. That’s a realistic aim, for reasons that Berners-Lee himself mentions:

we have seen [the music] industry move consciously from a DRM-based model to an unencrypted model, where often the buyer’s email address may be put in a watermark, but there is no DRM.

In other words, an industry that hitherto claimed that DRM was indispensable, has now moved to another approach that does not require it. The video industry could do exactly the same, and refusing to include EME in HTML5 would be a great way of encouraging them to do so. Instead, by making DRM an official part of the Web, Berners-Lee has almost guaranteed that companies will stick with it.

Aside from a fatalistic acceptance of DRM’s inevitability, Berners-Lee’s main argument seems to be that EME allows the user’s privacy to be protected better than other approaches. That’s a noble aim, but his reasoning doesn’t stand up to scrutiny. He says:

If put it on the web using EME, they will get to record that the user unlocked the movie. The browser though, in the EME system, can limit the amount of access the DRM code has, and can prevent it “phoning home” with more details. (The web page may also monitor and report on the user, but that can be detected and monitored as that code is not part of the “DRM blob”)

In fact, there are various ways that a Web page can identify and track a user. And if the content is being streamed, the company will inevitably know exactly what is being watched and when, so Berners-Lee’s argument that EME is better than a closed-source app, which could be used to profile a user, does not hold up. Moreover, harping on the disadvantages of closed-source systems is disingenuous, since the DRM modules used with EME are all closed source.
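Those "various ways" to identify a user don't even require cookies: a handful of attributes any page can read (user agent, screen size, fonts, timezone) often combine into a near-unique, stable identifier. A hypothetical Python sketch of the idea — the attribute names and values here are invented for illustration:

```python
import hashlib

def fingerprint(attrs):
    """Derive a stable id from browser-observable attributes:
    no cookie, no storage, nothing for the user to clear."""
    # Canonicalize so attribute ordering doesn't change the hash.
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

visitor = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "screen": "2560x1440",
    "timezone": "Europe/Berlin",
    "fonts": "Arial,DejaVu Sans,Noto Sans",
}

# The same browser yields the same id on every site it visits...
same = fingerprint(visitor) == fingerprint(dict(visitor))

# ...while any attribute change yields a different id.
other = dict(visitor, screen="1920x1080")
print(same and fingerprint(visitor) != fingerprint(other))  # True
```

This is why "EME protects privacy better than apps" is a weak claim: the page surrounding the DRM blob retains plenty of identification channels of its own.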

Also deeply disappointing is Berners-Lee’s failure to recognize the seriousness of the threat that EME represents to security researchers. The problem is that once DRM enters the equation, the DMCA comes into play, with heavy penalties for those who dare to reveal flaws, as the EFF explained two years ago. The EFF came up with a simple solution that would at least have limited the damage the DMCA inflicts here:

a binding promise that W3C members would have to sign as a condition of continuing the DRM work at the W3C, and once they do, they not be able to use the DMCA or laws like it to threaten security researchers.

Berners-Lee’s support for this idea is feeble:

There is currently (2017-02) a related effort at W3C to encourage companies to set up “bug bounty” programs to the extent that at least they guarantee immunity from prosecution to security researchers who find and report bugs in their systems. While W3C can encourage this, it can only provide guidelines, and cannot change the law. I encourage those who think this is important to help find a common set of best practice guidelines which companies will agree to.

One of the biggest problems with the defense of his position is that Berners-Lee acknowledges only in passing one of the most serious threats that DRM in HTML5 represents to the open Web. Talking about concerns that DRM for videos could spread to text, he writes:

For books, yes this could be a problem, because there have been a large number of closed non-web devices which people are used to, and for which the publishers are used to using DRM. For many the physical devices have been replaced by apps, including DRM, on general purpose devices like closed phones or open computers. We can hope that the industry, in moving to a web model, will also give up DRM, but it isn’t clear.

So he admits that EME may well be used for locking down e-book texts online. But there is no difference between an e-book text and a Web page, so Berners-Lee is tacitly admitting that DRM could be applied to basic Web pages. An EFF post spelt out what that would mean in practice:

A Web where you cannot cut and paste text; where your browser can’t “Save As…” an image; where the “allowed” uses of saved files are monitored beyond the browser; where JavaScript is sealed away in opaque tombs; and maybe even where we can no longer effectively “View Source” on some sites, is a very different Web from the one we have today.

It’s also totally different from the Web that Berners-Lee invented in 1989, and then generously gave away for the world to enjoy and develop. It’s truly sad to see him acquiescing in a move that could destroy the very thing that made the Web such a wonderfully rich and universal medium — its openness.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

Filed Under: drm, eme, html, html5, tim berners-lee
Companies: w3c

The Codification Of Web DRM As A Censorship Tool

from the exceptions-that-create-a-rule dept

The ongoing fight at the W3C over Encrypted Media Extensions — the HTML5 DRM scheme that several companies want ensconced in web standards — took two worrying turns recently. Firstly, Google slipped an important change into the latest Chrome update that removed the ability to disable its implementation of EME, further neutering the weak argument of supporters that the DRM is optional. But the other development is even more interesting — and concerning:

Dozens of W3C members — and hundreds of security professionals — have asked the W3C to amend its policies so that its members can’t use EME to silence security researchers and whistleblowers who want to warn web users that they are in danger from security vulnerabilities in browsers.

So far, the W3C has stonewalled on this. This weekend, the W3C executive announced that it would not make such an agreement part of the EME work, and endorsed the idea that the W3C should participate in creating new legal rights for companies to decide which true facts about browser defects can be disclosed and under what circumstances.

One of the major objections to EME has been the fact that, due to the anti-circumvention copyright laws of several countries, it would quickly become a tool for companies to censor or punish security researchers who find vulnerabilities in their software. The director of the standards body called for a new consensus solution to this problem but, unsurprisingly, “the team was unable to find such a resolution.” So the new approach will be a forced compromise of sorts in which, instead of attempting to carve out clear and broad protections for security research, they will work to establish narrower protections only for those who follow a set of best practices for reporting vulnerabilities. In the words of one supporter of the plan, it “won’t make the world perfect, but we believe it is an achievable and worthwhile goal.”

But this is not a real compromise. Rather, it’s a tacit endorsement of the use of DRM for censoring security researchers. The argument is not about the degree to which such use is acceptable, but about whether it is appropriate at all. It isn’t, but this legitimizes the idea that it is.

Remember: it’s only illegal to circumvent DRM due to copyright law, which is not supposed to have anything to do with the act of exploring and researching software and publishing findings about how it functions. On paper, that’s a side effect (though obviously a happy and intentional side effect for many DRM proponents). The argument at the W3C did not start because of an official plan to give software vendors a way to censor security research, but because that would be the ultimate effect of EME in many places thanks to copyright law. Codifying a set of practices for permissible security disclosures might be “better” than having no exception at all in that narrow practical sense, but it’s also worse for effectively declaring that to be an acceptable application of DRM technology in the first place. It could even make things worse overall, arming companies with a classic “they should have used the _proper channels_” argument.

In other words, this is a pure example of the often-misunderstood idea of an exception that proves a rule — in this case, the rule that DRM is a way to control security researchers.

Of course, security research isn’t the only thing at stake. Cory Doctorow was active on the mailing list in response to the announcement, pointing out the significant concerns raised by people who need special accessibility tools for various impairments, and the lack of substantial response:

The document with accessibility use-cases is quite specific, while all the dismissals of it have been very vague, and made appeals to authority (“technical experts who are passionate advocates for accessibility who have carefully assessed the technology over years have declared that there isn’t a problem”) rather than addressing those issues.

How, for example, would the 1 in 4000 people with photosensitive epilepsy be able to do lookaheads in videos to ensure that upcoming sequences passed the Harding Test without being able to decrypt the stream and post-process it through their own safety software? How would someone who was colorblind use Dankam to make realtime adjustments to the gamut of videos to accommodate them to the idiosyncrasies of their vision and neurology?

I would welcome substantive discussion on these issues — rather than perfunctory dismissals. The fact that W3C members who specialize in providing adaptive technology to people with visual impairments on three continents have asked the Director to ensure that EME doesn’t interfere with their work warrants a substantive reply.

For the moment, it doesn’t look like any clear resolution to this debate is on the horizon inside the W3C. But these latest moves raise the concern that the pro-DRM faction will quietly move forward with making EME the norm (Doctorow also questioned the schedule for this stuff, and whether these “best practices” for security research will lag behind the publication of the standard). Of course, the best solution would be to reform copyright and get rid of the anti-circumvention laws that make this an issue in the first place.

Filed Under: browsers, drm, eme, exceptions, html 5, research, security
Companies: google, w3c

Why Won't W3C Carve Security Research Out Of Its DRM-In-HTML 5 Proposal?

from the questions-to-ponder dept

A few years back, we wrote a few stories about the unfortunate move by the W3C to embrace DRM as a part of the official HTML5 standard. It was doubly disappointing to then see Tim Berners-Lee defending this decision as well. All along, this was nothing more than an effort by the legacy content providers to hinder perfectly legal uses and competition on the web by baking damaging DRM systems into the standard. Even Mozilla, which held out the longest, eventually admitted that it had no choice but to support DRM, even if it felt bad about doing so.

There are, of course, many problems with DRM, and baking it directly into HTML5 raises a number of concerns. A major one: since part of the DMCA (Section 1201) makes it infringing merely to get around any technological protection measure — even if for perfectly legal reasons — it creates massive chilling effects on security research. To try to deal with this, Cory Doctorow and the EFF offered up something of a compromise, asking the W3C to adopt a “non-aggression covenant,” such that the W3C still gets its lame DRM, but W3C members agree not to go after security researchers.

Who could possibly object to that? But, for whatever reason, the W3C still won’t agree to it. Cory and the EFF are looking for security researchers to sign on to tell the W3C to get with the program and protect security research. They’ve already got some great names signed on, but if you’re in the security research field, please consider signing on as well. Or, if you know people in the field, please point them to the EFF’s letter and ask them to sign on too.

Filed Under: drm, html5, security research
Companies: w3c

Hollywood Needs The Internet More Than The Internet Needs Hollywood… So Why Is The W3C Pretending Otherwise?

from the get-on-with-it dept

Last week, we wrote about the MPAA joining the W3C almost certainly as part of its ongoing effort to push for DRM to be built into HTML5. Cory Doctorow has a beautifully titled blog post about all of this, saying that “we are Huxleying ourselves into the full Orwell.” It’s a great way to think about it, and Cory’s quite pessimistic about the outcome:

Try as I might, I can’t shake the feeling that 2014 is the year we lose the Web. The W3C push for DRM in all browsers is going to ensure that all interfaces built in HTML5 (which will be pretty much everything) will be opaque to users, and it will be illegal to report on security flaws in them (because reporting a security flaw in DRM exposes you to risk of prosecution for making a circumvention device), so they will be riddled with holes that creeps, RATters, spooks, authoritarians and crooks will be able to use to take over your computer and fuck you in every possible way.

While I am quite frequently in agreement with Cory, I’m at least marginally more optimistic than he is about the eventual result here. Putting DRM into HTML is monumentally stupid, and he’s right that it will create massive security and legal liabilities for almost everyone. But the history of DRM has shown, over and over again, that it gets broken to bits in minutes once released, and even if there is legal liability involved in such things, it always happens. It will happen again here, and there will be “patches” made pretty quickly. It will be a tremendous waste of resources on pretty much everyone’s part, but I’m not convinced that it will be effective in making things that much more unsafe.

That said, the larger point that Cory raises reminds me of the key point that I’m still at a loss to understand here: why the W3C and others who support this proposal are so willing to kowtow to Hollywood. Yes, as Cory explains, it’s really Netflix driving the bus here, but it’s the Hollywood studios that are out there telling Netflix they need DRM:

And it’s basically all being driven by Netflix. Everyone in the browser world is convinced that not supporting Netflix will lead to total marginalization, and Netflix demands that computers be designed to keep secrets from, and disobey, their owners (so that you can’t save streams to disk in the clear).

But here’s the thing: the internet wasn’t built to be the next broadcast medium for big Hollywood blockbusters. It was built as a computing and communications platform. That’s what made it special and it’s why so many people have flocked to it. It’s why it’s “the internet.” Hollywood came late to the party and has been trying to redesign the web in its own image ever since — and that means locking it down so it’s more about a broadcasting model, in which the “professionals” in Hollywood get to determine what you, the peons, get to do.

But there’s a reason Hollywood so desperately wants to control the internet: because all the people are here. And that’s an important point. Hollywood (and, by extension, Netflix) need the internet much more than the internet needs Hollywood. Sure, the Hollywood folks like to claim that the reason the internet is so popular is because of professional content, but there’s little to no evidence to support that. Yes, people like to have access to that content, but it’s never been the driving force for why people want to be online.

And we’ve seen this game before. The record labels demanded DRM from online music stores for years… until they realized that this was a complete waste of time and money for no benefit. And they finally agreed to what the public wanted: no DRM. There’s no reason for the W3C to make this same mistake. The studios and Netflix can resist all they want. They can stick with their proprietary Silverlight players no matter how annoying and technologically backwards they might be. But, in the end, they’ll come around to better, more open technologies like HTML5 because that’s where the people are and that’s what the people want.

So, the W3C has a serious choice to make here, and it’s been betting on the wrong horse. Sure, Netflix will resist HTML5 for a while. Because that’s what it feels it needs to do. But it won’t last. Because it can’t. History has shown over and over again that companies will eventually follow the will of the public and it will happen again here. There’s no reason to go through a stupid, shortsighted and wasteful process of DRMing everything, only to see it cracked and broken within hours of being launched. Just drop the DRM, focus on building a system that better provides what the public wants… and Hollywood and Netflix will get there eventually as well.

Filed Under: drm, hollywood, html 5, internet, movies
Companies: mpaa, netflix, w3c

Not Cool: MPAA Joins The W3C

from the that's-not-going-to-end-well dept

The W3C has been at the forefront of open standards and an open internet for many years, obviously. So it’s somewhat distressing to see it announced this morning that the MPAA has now joined the group. After all, it was not that long ago that the MPAA flat out tried to break the open internet by imposing rules, via SOPA, that would have effectively harmed security protocols and basic DNS concepts. All because it refuses to update its business model at the pace of technology. The MPAA has never been about supporting open standards or an open and free internet. The W3C states that its “principles” are “web for all, web for everything” and that its vision is “web of consumers and authors, data and services, trust.” The MPAA has basically been opposed to… well…. all of that. It has tried to take a consumer web of authors and turn it into a broadcast medium for major producers. It’s tried to destroy trust, and put in place locks and keys.

In short, the MPAA has no place at all in the W3C. If there had been any indication that this was a shift in the MPAA’s thinking, that actually would be interesting. If the MPAA had shown even the slightest indication that it was finally willing to embrace real internet principles and standards, and move Hollywood into the 21st century, that would be a good thing, and they should participate. But that’s not what this is about, at all.

Instead, I fear that this is because of the stupid fight, which the W3C supports, to put DRM in HTML5. Tim Berners-Lee, who created the web and heads the W3C, has (for reasons that still don’t make any sense) supported this dangerous proposal. Despite detailed explanations for why this is a bad idea, he has continued to defend the idea, which appears to go against nearly everything he’s said in the past. Having the MPAA join the W3C is not encouraging at all.

Berners-Lee’s support of DRM in HTML5 seems to be based on the short-sighted (and simply wrong) idea that the web needs the legacy entertainment industry more than the legacy entertainment industry needs the web. Building truly open standards that the world adopts will get the MPAA and others to come along eventually, because they’ll realize they need to go where the people are, even if it isn’t crippled with restrictions and locks. Bringing the MPAA into the process only continues to perpetuate this idea that we should be building a broadcast platform for the entertainment industry to push a message at consumers, rather than building a platform for creators of all kinds to communicate and share.

Filed Under: creators, drm, html, html 5, open standards, tim berners-lee, web standards
Companies: mpaa, w3c