freedom – Techdirt

Right Wingers ‘Fight’ AT&T By Embracing ‘Anti-Woke’ Cell Carrier…That’s Just Rebranded AT&T

from the anti-woke-brain-broke dept

You’d be pretty hard pressed to find a company that leans more right wing than AT&T. The company was a big ally to President Trump and drove most of his telecom policy (which was basically to give AT&T everything it wants). AT&T has a long, long record of supporting politicians who oppose civil rights and supported the January 6 insurrection. They even funded and helped create OAN.

But for some reason, the right wing propaganda echoplex recently got it stuck in its craw that AT&T was too “woke” (read: anything a modern Trump era Conservative does not like, especially if it involves showing empathy to marginalized populations). Apparently AT&T was deemed too “woke” because it owned CNN (it doesn’t any more, of course, as that property was spun off as part of the Discovery merger).

So the right wing grifter and propaganda echoplex has been pushing a wireless carrier alternative dubbed Puretalk, which portrays itself as a small business alternative to big companies purportedly hostile to “Conservative values.” Right wing bullshit artist Mark Levin put it this way:

“AT&T customers, your company owns far-left CNN. And T-Mobile, your CEO reportedly advised Democrats how to beat [former President Donald] Trump,” Levin exclaimed this week while reading an ad script during his radio show. “Don’t give your money to these corporatists, these corporatist wireless companies. Instead, choose PureTalk.”

But not only is AT&T a fairly right wing company that no longer owns CNN; PureTalk is just another MVNO (mobile virtual network operator) that runs over the AT&T network under a different brand name, with most of the money being funneled back to AT&T:

As a mobile virtual network operator (MVNO) of AT&T, PureTalk effectively purchases bulk network service access at wholesale prices and then resells that access at retail rates to their customers, setting the prices based on the data limit plans. While they will handle the customer service, billing and packaging, PureTalk does not have networking licensing of its own. Instead, they currently need to have a business agreement in place with AT&T in order to access its mobile network operating system.
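To make that arithmetic concrete, here's a minimal sketch of the MVNO money flow, using entirely hypothetical wholesale and retail numbers (neither PureTalk's nor AT&T's actual rates are public in this form); the point is simply that the reseller keeps the margin while the bulk of each subscriber's bill goes to the host network:

```python
# Hypothetical illustration of MVNO economics: an MVNO buys bulk network access
# from a host carrier at wholesale and resells it at retail. All numbers below
# are invented for illustration; real wholesale rates are confidential.

def mvno_money_flow(retail_price: float, wholesale_cost: float) -> dict:
    """Split a subscriber's monthly bill between the host network and the MVNO."""
    to_host_network = wholesale_cost                # paid to the host carrier (e.g. AT&T)
    mvno_margin = retail_price - wholesale_cost     # covers billing, support, marketing, profit
    return {
        "retail_price": retail_price,
        "to_host_network": to_host_network,
        "mvno_margin": mvno_margin,
        "host_share_pct": round(100 * to_host_network / retail_price, 1),
    }

if __name__ == "__main__":
    # e.g. a hypothetical $35/month plan where $25 covers wholesale capacity
    print(mvno_money_flow(retail_price=35.0, wholesale_cost=25.0))
    # {'retail_price': 35.0, 'to_host_network': 25.0, 'mvno_margin': 10.0, 'host_share_pct': 71.4}
```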

A huge part of the Trumpist grift is fostering a perpetual victimization and aggrievement complex with endless lies, then selling rubes alternative services that cater to said victimization complex. Like that $500 MAGA Freedom Phone that promised “privacy” from “Big Tech,” but wound up being a cheap-ass Chinese-made phone running Google software in disguise.

Of course none of the used-car salesmen hawking this service on right wing media platforms could be bothered to note that the network runs over AT&T. And media trust among the MAGA sect has been so eroded by propaganda they’ll never see or hear any reports pointing out how this is just dumb showmanship. It’s all part of a seemingly endless ouroboros of bullshit and propaganda we have no real answers for, yet whose impact is painfully evident everywhere you look.

Filed Under: cellular, conservative values, freedom, mvno, right wing, wireless
Companies: at&t

Can We Save A Truly Global Internet?

from the one-would-hope... dept

As we’ve been noting for years now, the global internet is at risk. China walled off its part of the internet early on, and other authoritarian regimes followed suit, with Russia and Iran taking the lead. But, at the same time, we’ve seen other governments start to layer on their own regulatory regimes that effectively cut off other parts of the world, including the EU, which seems to believe it’s writing rules for the global internet, but may only be hastening the further fragmentation of the internet.

And yet, some of us still would like to believe that the concept of a truly global internet is one worth saving. Recently, the Council on Foreign Relations put out a report that basically calls that belief naïve, saying that we need to “confront reality in cyberspace,” with that apparent “reality” being that a global internet is impossible.

The United States has heavily influenced every step of the internet’s development. The technologies that undergird the internet were born out of U.S. federal research projects, while U.S. companies and technical experts made significant contributions. Similarly, the internet’s governance structures reflected American values, with a reliance on the private sector and technical community, light regulatory oversight, and the protection of speech and the promotion of the free flow of information.

For many years, this global internet served U.S. interests, and U.S. leaders often called for countries to embrace an open internet or risk being left behind. But this utopian vision became just that: a vision, not the reality. Instead, over time the internet became less free, more fragmented, and less secure. Authoritarian regimes have managed to limit its use by those who might weaken their hold and have learned how to use it to further repress would-be or actual opponents.

The lack of regulation around something so integral to modern economies, societies, political systems, and militaries has also become dangerous. This openness presents a tempting target for both states and nonstate actors seeking to undermine democracy, promote terrorism, steal intellectual property, and cause extraordinary disruption. Even more dangerous is the vulnerability of critical infrastructure to cyberattacks. Making the circumstances all the more difficult, figuring out who is behind a given attack remains challenging, allowing states and nonstate actors to carry out cyberattacks with a high degree of deniability and avoid significant consequences. In addition, because most cyberattacks occur well below the threshold of the use of force, the threat of retaliation is less credible.

Frankly, U.S. policy toward cyberspace and the internet has failed to keep up. The United States desperately needs a new foreign policy that confronts head on the consequences of a fragmented and dangerous internet.

I guess it’s not that surprising that a group like CFR would strike such a stance. Reading it feels very much like the stance of political bureaucrats with a philosophical bent, and a belief in politics, rather than those who understand the underlying nature and promise of the internet.

It’s good to see the report getting some serious pushback. Jason Pielemeier and Chris Riley have a strong piece in response, In Defense of the Global, Open Internet.

Cyber warfare and information warfare are undoubtedly in our midst. However, embracing the CFR report’s narrative and changing the course of U.S. policy in response to the continued trajectory of attacks not only would undermine human rights, democracy, and the internet itself but also would empower governments like China and Russia that benefit most from the “every country for itself” approach to the digital world. Instead, the United States should recommit to its vision for internet freedom by articulating and demonstrating how democratic states can address complex cybersecurity threats and digital harms through innovative, collaborative, and democratic means.

As the response notes, by giving up on the belief in a global, open, and interconnected internet, we’re actually aiding authoritarians tremendously:

If the United States, in particular, portrays the future of the internet as inevitably isolationist, it is as likely to push governments toward authoritarian models as it is to incentivize governments away from them. This could result in a potentially disastrous fait accompli that will likely imperil innovation, equity, economic growth, and human rights in the decades ahead.

But I think the most important part of this response is that it points out that CFR’s underlying assumptions are not just wrong… but fundamentally weird.

In sum, the CFR report seems to equate a free and global internet with anarchy at worst and naive insecurity at best. That is simply not true. Internet freedom posits a rights-centered and rules-based approach to internet governance. Necessary efforts that restrict rights are allowed under international human rights law, when they are clearly articulated, serve legitimate purposes, are proportionately tailored, and are accompanied by relevant accountability and transparency measures. These are the yardsticks against which future actions will continue to be measured, regardless of how the United States frames its cyber policy. They also happen to be the clearest principles policymakers and analysts can use to draw distinctions between authoritarian approaches and democratic ones.

They also highlight something that is true across a wide scope of discussions about internet policy. Everyone focuses solely on the negative aspects they see as being caused by the internet, rather than even acknowledging the massive positive benefits that have accrued as well.

Focusing on negatives also risks ignoring much of the value that the internet has created and continues to create. And the primary remaining value that the United States must prioritize is freedom. As one of us has argued previously, when compared to offline spaces, the internet continues to create significant opportunities for courageous, consequential, and U.S.-interest-aligned activities including independent journalism, accountability, and the protection of minority rights.

Frankly, the fact that a group like CFR is now arguing for effectively walling up the internet should be seen as a scary turn of events. It’s exactly what countries like China and Russia want. The interconnectedness of the internet, and the freedom it has enabled (especially of expression) have long been threats to them. For the US to go back on that would be seen as a huge win for Russia and China, and suggest that (1) their approach had been correct all along, and that (2) the US’s commitment (as hollow as it may ring) to freedom was a disaster.

If you don’t think that will be used against the US, you haven’t been paying attention.

Obviously, the US has plenty of problems right now (as it always has), but even when our commitment to it is exaggerated, keeping our guiding star pointed towards more freedom has always been good policy. Our failures tend to be when we move away from that (and this isn’t the first time that CFR has tried to point the country in the wrong direction).

Filed Under: china, freedom, global internet, internet, open internet, regulations, splinternet, us
Companies: cfr

How Can Anyone Argue With A Straight Face That China's Approach To Speech Online Is Better Than The US's During A Pandemic?

from the authoritarian-nonsense dept

We’ve been writing a number of pieces lately about how incredibly dangerous China’s internet censorship has been during COVID-19. From silencing medical professionals, to hiding research results, to trying to ignore Taiwan’s success in fighting COVID-19, it has shown a pretty clear pattern: Chinese internet censorship is literally killing people. This is not to say that the US government’s response has been much better — it’s obviously been a disaster — but at least we have more free speech online and in the press, which is enabling all sorts of useful information to spread.

But you might not know that if you read this odd piece in the Atlantic by Jack Goldsmith and Andrew Keane Woods arguing that China has the right approach to handling free speech online during a pandemic, and the US does not. While the overall piece is, perhaps, a bit more thoughtful than the headline and tagline, it has moments that simply defy any sense of what’s happening in the world.

In the great debate of the past two decades about freedom versus control of the network, China was largely right and the United States was largely wrong. Significant monitoring and speech control are inevitable components of a mature and flourishing internet, and governments must play a large role in these practices to ensure that the internet is compatible with a society’s norms and values.

Again, this defies all evidence of what we’ve seen to date.

The piece, bizarrely, conflates pervasive digital surveillance with open free speech online:

Two events were wake-up calls. The first was Edward Snowden’s revelations in 2013 about the astonishing extent of secret U.S. government monitoring of digital networks at home and abroad. The U.S. government’s domestic surveillance is legally constrained, especially compared with what authoritarian states do. But this is much less true of private actors. Snowden’s documents gave us a glimpse of the scale of surveillance of our lives by U.S. tech platforms, and made plain how the government accessed privately collected data to serve its national-security needs.

And that’s got literally nothing to do with America’s approach to free speech online.

The “second” wake up call does relate to speech, but perhaps not in the way the authors mean:

The second wake-up call was Russia’s interference in the 2016 election. As Barack Obama noted, the most consequential misinformation campaign in modern history was “not particularly sophisticated — this was not some elaborate, complicated espionage scheme.” Russia used a simple phishing attack and a blunt and relatively limited social-media strategy to disrupt the legitimacy of the 2016 election and wreak still-ongoing havoc on the American political system. The episode showed how easily a foreign adversary could exploit the United States’ deep reliance on relatively unregulated digital networks. It also highlighted how legal limitations grounded in the First Amendment (freedom of speech and press) and the Fourth Amendment (privacy) make it hard for the U.S. government to identify, prevent, and respond to malicious cyber operations from abroad.

Yes, the Russians conducted a misinformation campaign — but it still remains unclear how effective that was beyond at the margins (and, to be fair, in a close election, the margins can be meaningful). But that’s hardly a reason to throw out the 1st Amendment. The 1st Amendment has also allowed there to be widespread discussion and debate about all of this, and has helped to get companies better situated to deal with and respond to disinformation campaigns. It has also allowed tons of people to be on the digital frontlines pointing out mis- and dis-information and working on responding to it to limit its impact. There will always be some and there will always be attempts to exploit it, but the idea that China’s approach is better seems totally counterfactual to reality (or what plenty of people who have suffered from Chinese internet censorship will tell you).

Incredibly, the authors blame Section 230 for “the free for all” online… but then when they talk about the companies trying to combat disinfo just two paragraphs later, they somehow miraculously leave out the fact that it’s Section 230 and the 1st Amendment that allow them to moderate the content on the platform:

Ten years ago, speech on the American Internet was a free-for-all. There was relatively little monitoring and censorship — public or private — of what people posted, said, or did on Facebook, YouTube, and other sites. In part, this was due to the legal immunity that platforms enjoyed under Section 230 of the Communications Decency Act. And in part it was because the socially disruptive effects of digital networks — various forms of weaponized speech and misinformation — had not yet emerged. As the networks became filled with bullying, harassment, child sexual exploitation, revenge porn, disinformation campaigns, digitally manipulated videos, and other forms of harmful content, private platforms faced growing pressure from governments and users to fix the problems.

[….]

After the 2016 election debacle, for example, the tech platforms took aggressive but still imperfect steps to fend off foreign adversaries. YouTube has an aggressive policy of removing what it deems to be deceptive practices and foreign-influence operations related to elections. It also makes judgments about and gives priority to what it calls “authoritative voices.” Facebook has deployed a multipronged strategy that includes removing fake accounts and eliminating or demoting “inauthentic behavior.” Twitter has a similar censorship policy aimed at “platform manipulation originating from bad-faith actors located in countries outside of the US.”

It’s the American approach to free speech that makes this even possible.

Then the article argues that misinformation in the age of COVID-19 is something… new. And that it’s so serious that perhaps we should change how we think about free speech:

What is different about speech regulation related to COVID-19 is the context: The problem is huge and the stakes are very high. But when the crisis is gone, there is no unregulated “normal” to return to. We live — and for several years, we have been living — in a world of serious and growing harms resulting from digital speech. Governments will not stop worrying about these harms. And private platforms will continue to expand their definition of offensive content, and will use algorithms to regulate it ever more closely. The general trend toward more speech control will not abate.

Note that they seem to be conflating a few things here. There is the US government’s approach to speech (bound by the 1st Amendment, there are very few areas where speech may be limited), and there are internet companies’ approaches to hosting speech upon their private platforms. And while those platforms are becoming more aggressive in cracking down on misinformation, there remain plenty of other platforms online that are chock full of misinformation as well. But that’s got little to do with our laws (beyond the fact that, as noted above, the 1st Amendment enables platforms to decide for themselves how to handle these things).

But it seems odd for an article to suggest that a governmental approach to stifling speech is a good idea literally days after the US President suggested injecting disinfectant into people as a way to deal with COVID-19. It’s not the internet that is the cause of misinformation, guys. And saying that government should crack down on misinformation isn’t going to work when it’s the head of state spouting off the misinformation, which is then broadcast live by TV networks.

The article then tries to tie free speech to surveillance, but I’m unclear why or how those two things are as connected as the article suggests they are. You can have one without the other — yet the article continues to assume that if you want free speech, then you must have mass surveillance along with it. It uses the examples of Clearview AI and Ring as examples of greater surveillance, but those have little to nothing to do with the American approach to free speech.

The article all too glibly insists that private company data tracking is the “functional equivalent” of the infamous social score now used in China, without recognizing a number of fundamental differences — with the largest being the fact that the social score in China is a government program and is used in all sorts of nefarious ways. Yes, the article argues that thanks to COVID-19 it’s likely that the US government and companies will be more closely tied, but gives no reason to support that conclusion as inevitable:

Apple and Google have told critics that their partnership will end once the pandemic subsides. Facebook has said that its aggressive censorship practices will cease when the crisis does. But when COVID-19 is behind us, we will still live in a world where private firms vacuum up huge amounts of personal data and collaborate with government officials who want access to that data. We will continue to opt in to private digital surveillance because of the benefits and conveniences that result. Firms and governments will continue to use the masses of collected data for various private and social ends.

The harms from digital speech will also continue to grow, as will speech controls on these networks. And invariably, government involvement will grow. At the moment, the private sector is making most of the important decisions, though often under government pressure. But as Zuckerberg has pleaded, the firms may not be able to regulate speech legitimately without heavier government guidance and involvement. It is also unclear whether, for example, the companies can adequately contain foreign misinformation and prevent digital tampering with voting mechanisms without more government surveillance.

The First and Fourth Amendments as currently interpreted, and the American aversion to excessive government-private-sector collaboration, have stood as barriers to greater government involvement. Americans’ understanding of these laws, and the cultural norms they spawned, will be tested as the social costs of a relatively open internet multiply.

COVID-19 is a window into these future struggles.

Perhaps. It will certainly be interesting to see where the future heads, but the idea that COVID-19 inevitably means that the US will be less speech protective in the future is far from the only possible path forward. And the idea that China somehow has the right idea has little support anywhere. The authors may be correct that the government will try to expand surveillance and limit speech, but that’s been happening for years. COVID-19 changes little in that regard.

Filed Under: andrew keane woods, authoritarianism, censorship, content moderation, covid-19, free speech, freedom, internet, jack goldsmith, moderation, pandemic, surveillance

End Of An Era: Saying Goodbye To John Perry Barlow

from the pioneer dept

I was in a meeting yesterday, when the person I was meeting with mentioned that John Perry Barlow had died. While he had been sick for a while, and there had been warnings that the end might be near, it’s still somewhat devastating to hear that he is gone. I had the pleasure of interacting with him both in person and online multiple times over the years, and each time was a joy. He was always insightful, thoughtful and deeply empathetic.

I can’t remember for sure, but I believe the last time I saw him in person was a few years back at a conference (I don’t even recall what conference), where he was on a panel that had no moderator, and literally seconds before the panel was to begin, I was asked to moderate the panel with zero preparation. Of course, it was easy to get Barlow to talk, and to make it interesting, even without preparation. But that day the Grateful Dead’s Bob Weir (for whom Barlow wrote many songs — after meeting as roommates at boarding school) was in the audience — and while the two were close, they disagreed on issues related to copyright, leading to a public debate between the two (even though Weir was not on the panel). It was fascinating to observe the discussion, in part because of the way in which Barlow approached it. Despite disagreeing strongly with Weir, the discussion was respectful, detailed and consistently insightful.

Lots of people are, quite understandably, pointing to Barlow’s famous Declaration of the Independence of Cyberspace (which was published 22 years ago today). Barlow later admitted that he dashed most of that off in a bar during the World Economic Forum, without much thought. And that’s why I’m going to separately suggest two other things by Barlow to read as well. The first is his Wired piece, The Economy of Ideas, from 1994, the second year of Wired’s existence, when Barlow’s wisdom could be found in every issue. Despite being written almost a quarter of a century ago, The Economy of Ideas is still fresh and relevant today. It is more thoughtful and detailed than his later “Declaration” and, if anything, I would imagine that Barlow was annoyed that the piece is still so relevant today. He’d think we should be way beyond the points he was making in 1994, but we are not.

The other piece, which is more recent, and which I’ve seen a few people pointing to, is his Principles of Adult Behavior, a list of 25 rules to live by — rules that we should be reminded of constantly. Rules that many of us (and I’m putting myself first on this list) fail to live up to all too frequently. Update: I stupidly assumed that this was a more recent writing by Barlow, but as noted in the comments (thanks!), it’s actually from 1977, when Barlow turned 30.

Cindy Cohn, who is now the executive director of EFF, which Barlow co-founded, mentions in her writeup how unfair it is that Barlow (and, specifically, his Declaration) is often held up as a kind of prototype for the “techno-utopian” vision of the world that has become so frequently mocked today. Yet, as Cohn points out, that’s not at all how Barlow truly viewed the world. He saw the possibilities of that utopia, while recognizing the potential realities of something far less good. The utopianism that Barlow presented to the world was not — as many assume — a claim that these things were a sort of manifest destiny, but rather a belief that, by presenting such a utopia, we might all strive and push and fight to actually achieve it.

Barlow was sometimes held up as a straw man for a kind of naive techno-utopianism that believed that the Internet could solve all of humanity’s problems without causing any more. As someone who spent the past 27 years working with him at EFF, I can say that nothing could be further from the truth. Barlow knew that new technology could create and empower evil as much as it could create and empower good. He made a conscious decision to focus on the latter: “I knew it’s also true that a good way to invent the future is to predict it. So I predicted Utopia, hoping to give Liberty a running start before the laws of Moore and Metcalfe delivered up what Ed Snowden now correctly calls ‘turn-key totalitarianism.’”

Just yesterday, before I learned of Barlow’s passing, we officially launched a new website, EveryoneCreates.org, which shows just how ridiculous the myth — pushed by the RIAA and MPAA and their friends — of some sort of "war" between "content and tech" really is. According to that narrative, the internet has done much to harm content creators. Yet, everywhere we look, we see the opposite: content creators have been enabled by these technologies to create, to share, to distribute and, yes, to make money from their creations. Barlow was one of the first, if not the first, content creators from the "old" world to wholeheartedly see the promise of the internet, and he spent his life dedicated to making the internet such a powerful place for all of us content creators.

Either way, this is the end of an era. We’re in an age now where the general narrative making the rounds is, once again, touching on the moral panic of how terrible everything in technology is. Barlow spent decades teaching us about the possibilities of a better world on the internet, and nudging us, sometimes gently, sometimes forcefully, in that direction. And, now, just at a point where that vision is most at risk, he’s left us to continue that fight on our own. The internet world has many challenges ahead of it — and we should all strive to be guided both by Barlow’s principles and his vision of constantly pushing to mold the technology world into the world we want it to be — not ignoring the negatives, but looking for ways to get beyond them and expand the opportunities for the good to come out. It will be harder without him being there to help guide us.

Filed Under: freedom, internet, internet freedom, john perry barlow, music
Companies: eff

Rohingya Ethnic Cleansing (Once Again) Demonstrates Why Demanding Platforms Censor Bad Speech Creates Problems

from the happens-again-and-again dept

We keep pointing to examples like this, but the examples are getting starker and more depressing. Lots of people keep arguing that internet platforms (mainly Facebook) need to be more aggressive in taking down “bad” speech — often generalized under the term “hate speech.” But, as we’ve pointed out, that puts tremendous power into the hands of those who determine what is “hate speech.” And, while the calls for censorship often come from minority communities, it should be noted that those in power have a habit of claiming criticism of the powerful is “hate speech.” Witness the news from Burma that Rohingya activists have been trying to document ethnic cleansing, only to find Facebook deleting all their posts. When questioned about this, Facebook (after a few days) claimed that the issue was that these posts were coming from a group it had designated a “dangerous organization.”

So, is it a dangerous organization or a group of activists fighting against ethnic cleansing? Like many of these things, it depends on which side you stand on. As the saying goes, one person’s terrorist is another’s freedom fighter. And this just highlights the tricky position that Facebook has taken on — often at the urging of people who demand that it block certain content. Facebook shouldn’t be the one determining who’s a terrorist vs. who’s a freedom fighter, and when we keep asking the site to be that final arbiter, we’re only inviting trouble.

The real issue is how we’ve built up these silos of centralized repositories of information — rather than actually taking advantage of the distributed web. In the early days of the web, everyone controlled their own web presence, for the most part. You created your own site and posted your own content. Yes, there were still middlemen and intermediaries, but there were lots of options. But when we centralize all such content onto one giant platform and then demand that the platform regulate the content, these kinds of problems are going to happen again and again and again.

Filed Under: activism, burma, censorship, filters, free speech, freedom, platforms, rohingya, terrorism
Companies: facebook

Sonos Users Forced To Choose Between Privacy And Working Hardware

from the obey-or-suffer dept

Wed, Aug 23rd 2017 10:50am - Karl Bode

For years now, we’ve highlighted how, these days, you don’t technically own the things you buy. And thanks to a rotating crop of firmware and privacy policy updates delivered over the internet, what you thought you owned can very easily change — or be taken away from you entirely. Time and time again we’ve discussed how companies love to impose new restrictions on hardware via software update, then act shocked when consumers are annoyed because they’ve had either their rights — or device functionality — stripped away from them.

The latest example of this comes courtesy of Sonos, which informed users this week that “over time,” they won’t be able to use their pricey speaker systems if they refuse a new privacy policy update:

A spokesperson for the home sound system maker told ZDNet that "if a customer chooses not to acknowledge the privacy statement, the customer will not be able to update the software on their Sonos system, and over time the functionality of the product will decrease."

“The customer can choose to acknowledge the policy, or can accept that over time their product may cease to function,” the spokesperson said.

In an e-mail to users, Sonos informed customers that they can “opt out of submitting certain types of personal information to the company; for instance, additional usage data such as performance and activity information.” But users won’t be able to opt out of data collection Sonos deems necessary to the system’s core functionality. The problem with that, as we’ve seen with companies like Microsoft, is that companies aren’t traditionally transparent about just what this “necessary” data entails, and tend to be overly generous when it comes to determining what personal data is “essential” in the first place.

In this case, the "functional data" Sonos won’t let you avoid collection of includes email addresses, IP addresses, Sonos account login information, device data, information about Wi-Fi antennas and other hardware information, room names, system error data, and more. Needless to say, privacy advocacy groups like the EFF and the Center for Democracy and Technology aren’t thrilled about users having to choose between their privacy rights and working hardware. Nor are they impressed by companies’ apparent inability to cordon off essential functionality from data collection and sales:

“Sonos is a perfect illustration of how effective privacy, when it comes to not just services but also physical objects, requires more than just ‘more transparency’ — it also requires choices and effective controls for users,” said Joe Jerome, a policy analyst at the Center for Democracy & Technology.

“We’re going to see this more and more where core services for things that people paid for are going to be conditioned on accepting ever-evolving privacy policies and terms of use,” he said. “That’s not going to be fair unless companies start providing users with meaningful choices and ensure that basic functionality continues if users say no to new terms.”

Occasionally, consumer revolt is enough to change the tide. Media software developer Plex was forced to backtrack this week from an announcement that it would be issuing a new privacy policy that prevented users of its software from opting out of data collection and sales, including users that had paid for lifetime access to the service. Plex’s retreat was forced after the company’s forums lit up with complaints about what one customer called “super-duper bullshit.” A subsequent Plex blog post stated that the company heard its users loud and clear, and would be reversing course on the decision.

Filed Under: bricking, freedom, freedom to tinker, ownership, privacy, privacy policy, terms of service
Companies: sonos

Chinese Police Dub Censorship Circumvention Tools As 'Terrorist Software'

from the nice-social-score-you-have-there;-be-a-shame-if-anything-happened-to-it dept

The Great Firewall of China is pretty well-known these days, as is the fact that it is by no means impenetrable. The Chinese authorities aren’t exactly happy about that, and we have seen a variety of attempts to stop its citizens from using tools to circumvent the national firewall. These have included Chinese ISPs trying to spot and block the use of VPNs; deploying China’s Great Cannon to take out anti-censorship sites using massive DDoS attacks; forcing developers of circumvention tools to shut down their repositories; and pressuring Content Delivery Networks to remove all illegal circumvention, proxy and VPN services hosted on their servers.

Despite years of clampdown, anti-censorship tools are still being used widely in China — one estimate is that 1-3% of China’s Internet users do so, which would equate to millions of people. However, Global Voices has a report of police action in the Chinese region of Xinjiang, whose indigenous population is Turkic-speaking and Muslim, that may be the harbinger of even tougher measures against circumvention tools. It concerns a leaked police report, which contains the following passage:

> A netizen in Changji (online account number: XXXXX IP: XXXXX) is suspected of downloading a violent and terrorist circumvention software at 12:42:21 on October 13. The software can run on mobile for sending different types of documents. Once installed, the software can be operated on the mobile management tool set for searching documents, games, backing up photos and sending text messages. This software has been classified by Public Security Bureau as second class violent and terrorist software.

What’s worrying here is that the unnamed circumvention tool is classed as “violent and terrorist software,” albeit only of the second class. As Global Voices points out:

> Judging by the “function” described in the case report, the circumvention tool is merely giving its user access to overseas search engines and cloud storage. While the document specifically says that the circumvention tool has been classified by the public security bureau as a type of “second class violent and terrorist software”, there is no public information describing how it was classified as such, or what other products share this classification. This leaves Internet users with no way to know if their software or other tools are legal or not.

Labelling something as “terrorist” is an easy way to justify making it illegal, and to try to head off any criticism of doing so. The Global Voices article notes that this move may be a purely local one, reflecting the continuing state of unrest in Xinjiang. But if it proves a useful way of framing things, it could easily be rolled out across the country.

The Global Voices article points out that there’s another way the Chinese authorities might start to make the use of VPNs and other circumvention tools more risky: by making it a part of the “citizen score” system that is currently being developed to spot “pre-crime”.

> A person with no overseas business using circumvention tool to communicate with people outside the country can be viewed as suspicious. And in the case of Xinjiang, where the authorities see 90% of the violent and terrorist crimes are related to getting access to censored information, detecting the downloading and usage of circumvention tool is part of the pre-crime crackdown.

That approach would have the advantage that those doing business abroad using VPNs would be largely unaffected — important for the Chinese economy. But those who are using them purely as circumvention tools to access “forbidden” material beyond the Great Firewall might find that their citizen score drops as a result. Even if circumvention tools aren’t classed as terrorist software and banned outright, increasing the social cost of being seen to employ them might be just as effective.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

Filed Under: china, circumvention, freedom, internet freedom, law enforcement, police, terrorist software, vpn

French Prime Minister Says Government Not Considering Banning Open WiFi, Tor Connections

from the that-proposed-encryption-ban,-tho dept

The French government has issued a statement indicating it will not be participating in the nation’s law enforcement agencies’ perversely masturbatorial power fantasies. A few days ago, Le Monde published a few “highlights” from a law enforcement “wishlist,” crafted in response to the terrorist attacks in Paris.

Among the many, many things law enforcement jotted down in response to a call for input on future terrorist-related legislation was a ban on public WiFi, Tor connections and encrypted communications. This was in addition to requests for warrantless/consent-less searches of people and vehicles, and the power to arbitrarily set up roadblocks for the purposes of executing even more warrantless/consent-less searches of people and vehicles.

The French Prime Minister has now confirmed that at least one of the items on the disturbing list will not be implemented. (via the Daily Dot)

A ban on Wifi internet access will not be introduced as part of new security measures in response to the Paris attacks in November, the Prime Minister has said.

There also appears to be no government interest in banning Tor connections, although this was stated a little less firmly. And the whole “demand encryption keys from third parties” request goes entirely unaddressed, suggesting the French government still has an eye on inserting itself into encrypted relationships as a “trusted partner.”

Interestingly, Prime Minister Manuel Valls appears to have not seen the same document Le Monde did.

The prime minister denied any knowledge of such police requests, adding: “Internet is a freedom, is an extraordinary means of communication between people, it is a benefit to the economy.”

Mr Valls said he understood the security services’ need for tough measures to fight terrorism but stressed that those measures had to be “effective”.

This could be taken to mean that the law enforcement wishlist compiled by the police liaison office isn’t viewed as an "official" request in any way, shape or form — that it may as well have been drunken scrawls on the back of a cocktail napkin as far as the Prime Minister is concerned. It could also mean the Prime Minister isn’t yet willing to go on record as to the numerous other, unaddressed requests made by law enforcement, most of which deal with the terrestrial realm, rather than the more ethereal 'net.

France is still under a state of emergency, which has already given law enforcement increased discretionary powers. The government will likely move forward with harmful legislation because that’s what governments tend to do in response to violent attacks. But, for now, it appears law enforcement will have to make do with the arbitrary house arrests and warrantless searches it’s already engaging in.

Filed Under: encryption, france, freedom, law enforcement, manuel valls, public wifi, surveillance, tor, wifi

San Franciscans: Please Join Carl Malamud's Campaign To Help Free Up Court Documents

from the join-now dept

For many years, we’ve discussed various Carl Malamud projects to help make government information and documents more widely available (especially ones that are locked up for no good reason). One particular target of his is PACER, the courts’ electronic document system that is ridiculously cumbersome to use and costs an insane amount for even pretty mundane tasks. Earlier this year, we wrote about his National Day of PACER Protest, designed to be held on May 1st (this Friday). At the time, we suggested everyone sign up for a new PACER account (because, as if to demonstrate how stupid the PACER system is, you have to wait for the system to snail mail you your username and password before you can start using your account) to download a few documents on May 1st (PACER waives your fees if you accrue less than $15 in fees per quarter).

However, as we noted, this was just part of a three-pronged approach to convincing the courts to free up the PACER system. Another part was reaching out to judges to exempt certain courts and certain documents from PACER’s charges. As part of that, Malamud is trying to send judges some beautiful postcards, and he’s asking for people to help him do so. Here are just two of the postcards:

If you’re in San Francisco this Friday, May 1st, please try to make your way to the Internet Archive’s headquarters at 300 Funston Avenue, where you can send a post card (or two, or six, or 60) to Chief Judge Sidney Thomas of the Ninth Circuit appeals court, asking him to free up access to PACER for several of the courts in the 9th circuit.

I am writing to you for help. If you are in San Francisco on Friday, May 1, from 8 AM to 5 PM, I’m hoping you can stop by the Internet Archive at 300 Funston Avenue.

May 1 is Law Day, and I’m asking people to come in and write a brief postcard about why you think that access to PACER is important. More specifically, you’ll be writing a postcard to Chief Judge Thomas of the Ninth Circuit of the U.S. Court of Appeals in support of my request that the Court grant us free access to PACER for several courts in the Ninth Circuit. It would be a really big deal if the Court said yes, we’re trying to show public support in a way the judges can relate to.

You can also go and send your postcard directly if you can’t make it to the Internet Archive:

Clerk of the Court
Attn: Docket 15-80056
United States Court of Appeals
James Browning Courthouse
95 7th Street
San Francisco, CA 94103

This is a worthwhile and fun project. If you’re in San Francisco, please try to stop by. If you’re not, please consider sending your own postcard.

Filed Under: 9th circuit, carl malamud, fees, freedom, pacer

Singapore's Precarious Surveillance State The Envy Of US Intelligence Agencies

from the total-information-awareness-test-market dept

If you want to build a surveillance state with a minimum of backlash, you’ll need a very controllable environment. Shane Harris at Foreign Policy has a detailed report on Singapore’s relatively peaceful coexistence with Big Brother that includes the United States’ involvement in its creation, as well as the many reasons pervasive surveillance and an out-sized government presence have been accepted, rather than rebelled against.

The genesis of Singapore’s surveillance net dates all the way back to 2002 and to former US National Security Advisor John Poindexter. Peter Ho, Singapore’s Secretary of Defense, met with Poindexter and was introduced to the Dept. of Defense’s Total Information Awareness (TIA) aspirations.

It would gather up all manner of electronic records — emails, phone logs, Internet searches, airline reservations, hotel bookings, credit card transactions, medical reports — and then, based on predetermined scenarios of possible terrorist plots, look for the digital “signatures” or footprints that would-be attackers might have left in the data space. The idea was to spot the bad guys in the planning stages and to alert law enforcement and intelligence officials to intervene.

Though initially presented as an anti-terrorism tool (something Singapore was looking for after several recent terrorist attacks), it first found usefulness as a way to track and predict the spread of communicable diseases.

Ho returned home inspired that Singapore could put a TIA-like system to good use. Four months later he got his chance, when an outbreak of severe acute respiratory syndrome (SARS) swept through the country, killing 33, dramatically slowing the economy, and shaking the tiny island nation to its core. Using Poindexter’s design, the government soon established the Risk Assessment and Horizon Scanning program (RAHS, pronounced “roz”) inside a Defense Ministry agency responsible for preventing terrorist attacks and “nonconventional” strikes, such as those using chemical or biological weapons — an effort to see how Singapore could avoid or better manage “future shocks.”

Singapore politicians sold "big data" to citizens by playing up the role it would play in public safety. Meanwhile, back in the US, the program began to fall apart as privacy advocates and legislators expressed concerns about the amount of information being gathered. In Singapore, this was just the beginning of its surveillance state. In the US, it became an expansion of post-9/11 intelligence gathering. Rather than ending the program, sympathetic lawmakers simply parted it out to the NSA and other agencies under new names.

Singapore’s TIA program soon swelled to include nearly anything the government felt it could get away with gathering. The government used the data to do far more than track potential terrorists. It used the massive amount of data to examine — and plan for — nearly every aspect of Singaporean existence.

Across Singapore’s national ministries and departments today, armies of civil servants use scenario-based planning and big-data analysis from RAHS for a host of applications beyond fending off bombs and bugs. They use it to plan procurement cycles and budgets, make economic forecasts, inform immigration policy, study housing markets, and develop education plans for Singaporean schoolchildren — and they are looking to analyze Facebook posts, Twitter messages, and other social media in an attempt to “gauge the nation’s mood” about everything from government social programs to the potential for civil unrest.

Making this data collection even easier is the Singaporean government’s demand that internet service can only be issued to citizens with government-issued IDs. SIM cards for phones can only be purchased with a valid passport. Thousands of cameras are installed and government law enforcement agencies actively prowl social media services to track (and punish) offensive material.

But this is accepted by Singapore citizens, for the most part. The mix of Indians, Chinese and Malays makes the government especially sensitive to racially-charged speech. The country’s dependence on everyone around it makes everyday life a bit more unpredictable than that enjoyed by its much larger neighbors. In exchange for its tightly-honed national security aims (along with housing and education), Singaporeans have given up their freedom to live an unsurveilled life. And for the doubters, the government has this familiar rationale to offer.

“In Singapore, people generally feel that if you’re not a criminal or an opponent of the government, you don’t have anything to worry about,” one senior government official told me.

What goes unmentioned is just how easy it is to become an "opponent" of the Singaporean state. It can take nothing more than appearing less than grateful for the many government programs offered in "exchange" for diminished civil liberties. While the government goes above and beyond to take care of its citizens’ needs, it acts swiftly to punish or publicly shame those who are seen to spurn its advances, so to speak. Not for nothing did sci-fi writer William Gibson call Singapore "Disneyland with the Death Penalty."

So, to make the perfect police/security state, you need a small country and a mixture of government largesse and palpable threats. You need a nation so precariously balanced that it “shouldn’t [even] exist,” according to one top-ranking government official. You also need a nation not built on civil liberties. Despite this, US intelligence agencies still view Singapore as a prime example of what could have been.

[M]any current and former U.S. officials have come to see Singapore as a model for how they’d build an intelligence apparatus if privacy laws and a long tradition of civil liberties weren’t standing in the way. After Poindexter left DARPA in 2003, he became a consultant to RAHS, and many American spooks have traveled to Singapore to study the program firsthand. They are drawn not just to Singapore’s embrace of mass surveillance but also to the country’s curious mix of democracy and authoritarianism, in which a paternalistic government ensures people’s basic needs — housing, education, security — in return for almost reverential deference. It is a law-and-order society, and the definition of “order” is all-encompassing.

If this was what the NSA and others were pushing for, there’s no hope of achieving it. The Snowden leaks have undermined a lot of these agencies’ stealthy nudges in this direction. The US government can never hope to achieve the same level of deference, not even in the best of times. A melting pot that has folded in refugees from authoritarian nations — along with the country’s founding principles — has made many Americans predisposed against views of the government as an entity worthy of reverence. Widespread abuse of the public’s trust has further separated the government from any reverential thought.

This isn’t to say the desire to convert US citizens into nothing more than steady streams of data doesn’t exist. The NSA’s previous director often stated his desire to “collect it all.” In the hands of the government, useful things could be done with all of this data (like possibly heading off epidemics, etc.), but the more likely outcome would be collecting for collecting’s sake — which violates the civil liberties the country was built on — and the use of the information in abusive ways.

It may work for Singapore, an extremely controlled environment. But that doesn’t necessarily make it right. And it certainly shouldn’t be viewed as some sort of surveillance state utopia.

Filed Under: civil liberties, freedom, nsa, singapore, surveillance