theresa may – Techdirt

Turkish President Visits UK To Remind Everyone He Still Wants To Punish Critical Speech

from the UK-speech-laws-really-don't-need-to-be-any-worse dept

I’m not sure why any nation with at least a passing respect for civil liberties would continue treating Turkish president Recep Tayyip Erdogan as a world leader worth discussing ideas with. Erdogan rolled into the United States with his entourage of thugs and thought he could have critics beaten and unfriendly journalists tossed from press conferences. He continually petitions other countries to punish their own citizens for insulting him.

Back at home, Erdogan is jailing journalists by the hundreds, claiming they’re terrorists. A failed coup set off the latest wave of censorial thuggery, with Erdogan bolstering his terrorist claims by pointing to criminal acts like… robbing ATMs. A massive backlog of “insulting the president” cases sits in the country’s court system — a system that’s certainly aware it’s not supposed to act as a check against executive power.

And yet, world leaders continue to act as though Erdogan is an equal, rather than an overachieving street thug with an amazingly fragile ego. UK Prime Minister Theresa May, hoping to strike a trade deal with Turkey, invited Erdogan to not only discuss a possible deal, but speak publicly.

May tried to keep Erdogan from being Erdogan

May said that while it was right that those who sought to overthrow a democratically elected government were brought to justice, “it is also important that in the defence of democracy… Turkey does not lose sight of the values it is seeking to defend”.

May added: “That is why today I have underlined to President Erdoğan that we want to see democratic values and international human rights obligations upheld.”

But Erdogan was always going to be Erdogan:

At a press conference in Downing Street alongside May, Erdoğan made no reference to May’s remarks about human rights, but instead urged her to do more to extradite Turkish exiles from the Gulenist or Kurdish movements, saying that if she did not act against terrorists, it would come back to bite her.

And went on to make it clear that by “terrorists,” he also meant journalists who may or may not have been caught engaging in burglary, but otherwise can be assumed to be political targets jailed to ensure silence.

You can’t keep treating an overgrown child like an adult. No one should be doing business with Turkey until it cleans up its civil rights violation record. And that’s not going to happen as long as Erdogan is president. Gently nudging him towards not being a completely evil asshole obviously doesn’t work. All it does is make the governments hosting his off-the-cuff remarks on censorship and jailing journalists look like enablers of oppression.

Filed Under: criticism, free speech, golum, insults, journalism, recep tayyip erdogan, theresa may, turkey

Theresa May Again Demands Tech Companies Do More To Right The World's Social Media Wrongs

from the in-return,-politicians-promise-to-provide-more-bad-legislation dept

In the face of “extremist” content and other internet nasties, British PM Theresa May keeps doing something. That something is telling social media companies to do something. Move fast and break speech. Nerd harder. Do whatever isn’t working well already, but with more people and processing power.

May has been shifting her anti-speech, anti-social media tirades towards the Orwellian in recent months. Her speeches and platform stances have tried to make direct government control of internet communications sound like a gift to the unwashed masses. May’s desire to bend US social media companies to the UK’s laws has been presented as nothing more than a “balancing” of freedom of speech against some imagined right to go through life without being overly troubled by social media posts.

Then there’s the terrorism. Terrorists use social media platforms to connect with like-minded people. May would like this to stop. She’s not sure how this should be accomplished but she’s completely certain smart people at tech companies could bring an end to world terrorism with a couple of well-placed filters. So sure of this is May that she wants “extremist” content classified, located, and removed within two hours of its posting.

May’s crusade against logic and reality continues with her comments at the Davos Conference. Her planned speech/presentation contains more of her predictable demand that everyone who isn’t a UK government agency needs to start doing things better and faster.

Although she is expected to praise the potential of technology to “transform lives”, she will also call on social media companies to do much more to stop allowing content that promotes terror, extremism and child abuse.

She will say: “Technology companies still need to go further in stepping up to their responsibilities for dealing with harmful and illegal online activity.

“These companies simply cannot stand by while their platforms are used to facilitate child abuse, modern slavery or the spreading of terrorist and extremist content.

“We need to go further, so that ultimately this content is removed automatically. These companies have some of the best brains in the world. They must focus their brightest and best on meeting these fundamental social responsibilities.”

“Go further…” but to what point? This is all May has said for years. Social media companies continue to struggle with moderating content, but it’s not for a lack of trying. They’re dealing with contradictory demands from multiple world governments, each of them declaring different types of speech to be unacceptable. The pressure isn’t imaginary. Twitter has taken proactive measures in response to Germany’s new hate speech law, resulting in some spectacular collateral damage. Other platforms are doing the same thing, even if the damage hasn’t been as ironically glorious.

May wants harder nerding, up to and including all-knowing bots that kill objectionable content before it reaches human eyeballs. She wants the impossible. Even if it were theoretically possible to police speech better with AI, that’s still years away from being deployed at scale. Efforts that have been deployed have been routinely disastrous. Ask anyone how YouTube’s Content ID is doing handling copyright infringement and you’ll get a general idea of just how well algorithms police content.

For now, the problem is handled by a mixture of algorithms, human moderators, and crowdsourcing. The algorithms can’t reliably target unwanted content. The humans are, well, human — prone to error and bias. The last part — reporting functions for users — basically gives every heckler a veto button, resulting in abuse of the system to bury content certain users don’t want to see. All these efforts work well for the governments demanding them — and these governments are the entities most likely to abuse them to silence dissent.

This is what the argument has been reduced to: calls for “more” without any interest in determining whether “more” will be helpful or even possible. The result will be the suppression of speech, rather than a victory over terrorism.

Filed Under: censorship, filtering, magic wand, social media, tech companies, theresa may

Insanity: Theresa May Says Internet Companies Need To Remove 'Extremist' Content Within 2 Hours

from the a-recipe-for-censorship dept

It’s fairly stunning just how much people believe that it’s easy for companies to moderate content online. Take, for example, this random dude who assumes it’s perfectly reasonable for Facebook, Google and Twitter to “manually review all content” on their platforms (and since Google is a search engine, I imagine this means basically all public web content that can be found via its search engine). This is, unfortunately, a complete failure of basic comprehension about the scale of these platforms and how much content flows through them.

Tragically, it’s not just random Rons on Twitter with this idea. Ron’s tweet was in response to UK Prime Minister Theresa May saying that internet platforms must remove “extremist” content within two hours. This is after the UK’s Home Office noted that they see links to “extremist content” remaining online for an average of 36 hours. Frankly, 36 hours seems incredibly low. That’s pretty fast for platforms to be able to discover such content, make a thorough analysis of whether or not it truly is “extremist content” and figure out what to do about it. Various laws on takedowns usually have statements about a “reasonable” amount of time to respond — and while there are rarely set numbers, the general rule of thumb seems to be approximately 24 hours after notice (which is pretty aggressive).

But for May to now be demanding two hours is crazy. It’s a recipe for widespread censorship. Already we see lots of false takedowns from these platforms as they try to take down bad content — we write about them all the time. And when it comes to “extremist” content, things can get particularly ridiculous. A few years back, we wrote about how YouTube took down an account that was documenting atrocities in Syria. And the same thing happened just a month ago, with YouTube deleting evidence of war crimes.

So, May calling for these platforms to take down extremist content in two hours confuses two important things. First, it shows a near total ignorance of the scale of content on these platforms. There is no way possible to actually monitor this stuff. Second, it shows a real ignorance about the whole concept of “extremist” content. There is no clear definition of it, and without a clear definition, wrong decisions will be made. Frequently. Especially if you’re not giving the platforms any time to actually investigate. At best, you’re going to end up with a system with weak AI flagging certain things, and then low-paid, poorly trained individuals in far off countries making quick decisions.
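The failure mode described above is easy to demonstrate with a toy sketch. The following is a hypothetical example, not any platform's actual moderation system: a naive keyword matcher flags a recruitment post and a human-rights report identically, because keywords alone carry no context — which is exactly how documentation of war crimes ends up deleted alongside propaganda.

```python
# Toy illustration (NOT any real platform's system): a naive keyword
# filter cannot tell propaganda from documentation of atrocities.
FLAGGED_TERMS = {"attack", "bomb", "jihad"}

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any watchlisted term."""
    words = {w.strip(".,!?:").lower() for w in post.split()}
    return bool(words & FLAGGED_TERMS)

propaganda = "Join us and prepare for the coming attack."
documentation = "Video evidence: the bomb struck a hospital, a war crime."

# Both trip the filter -- the keyword match carries no context,
# so the evidence gets censored along with the recruitment pitch.
print(naive_flag(propaganda))     # True
print(naive_flag(documentation))  # True
```

Real classifiers are more sophisticated than this, but the underlying problem — distinguishing intent without context, at scale, in two hours — remains.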

And since the “penalty” for leaving content up will be severe, the incentives will all push towards taking down the content and censorship. The only pushback against this is the slight embarrassment if someone makes a stink about mistargeted takedowns.

Of course, Theresa May doesn’t care about that at all. She’s been bleating on about censoring the internet to stop terrorists for quite some time now — and appears willing to use any excuse and make ridiculous demands along the way. It doesn’t appear she has any interest in understanding the nature of the problem, as it’s much more useful to her to blame others for terrorist attacks on her watch than to actually do anything legitimate to stop them. Censoring the internet isn’t a solution, but it allows her to cast blame on foreign companies.

Filed Under: censorship, extremist content, theresa may, uk
Companies: facebook, google, twitter

May And Macron's Ridiculous Adventure In Censoring The Internet

from the these-are-bad-ideas,-marc dept

For some observers, struggling UK Prime Minister Theresa May and triumphant French President Emmanuel Macron may seem at somewhat opposite ends of the current political climate. But… apparently they agree on one really, really bad idea: that it’s time to massively censor the internet and to blame tech companies if they don’t censor enough. We’ve been explaining for many years why this is a bad idea, but apparently we need to do so again. First, the plan:

The prime minister and Emmanuel Macron will launch a joint campaign on Tuesday to tackle online radicalisation, a personal priority of the prime minister from her time as home secretary and a comfortable agenda for the pair to agree upon before Brexit negotiations begin next week.

In particular, the two say they intend to create a new legal liability for tech companies if they fail to remove inflammatory content, which could include penalties such as fines.

It’s no surprise that May is pushing for this. She’s been pushing to regulate the internet for quite some time, and it’s a core part of her platform (which is a bit “weak and wobbly” as they say these days). But, Macron… well, he’s been held up repeatedly as a “friend” to the tech industry, so this has to be seen as a bit of a surprise in the internet world. Of course, there were hints that he might not really be all that well versed in the way technology works when he appeared to support backdoors to encryption. This latest move just confirms an unfortunate ignorance about the technology/internet landscape.

Creating a new legal liability for companies that fail to remove inflammatory content is going to be a massive disaster in many, many ways. It will damage the internet economy in Europe. It will create massive harms to free speech. And, it won’t do what they seem to think it will do: it won’t stop terrorists from posting propaganda online.

First, a regime that fines companies for failing to remove “inflammatory content” will lead companies to censor broadly, out of fear that any borderline content they leave up may open them up to massive liability. This is exactly how the Great Firewall of China works. The Chinese government doesn’t just say “censor bad stuff”; it tells ISPs that they’ll get fined if they allow bad stuff through. And thus, the ISPs over-censor to avoid leaving anything online that might put them at risk. And, when it comes to free speech, doing something “the way the Chinese do things” tends not to be the best idea.

Second, related to that, once they open up this can of worms, they may not be happy with how it turns out. It’s great to say that you don’t think “inflammatory content” should be allowed online, but who gets to define “inflammatory” makes a pretty big difference. As we’ve noted, you always want to design regulations as if the people you trust the least are in power. This is not to say that May or Macron themselves would do this, but would you put it past some politicians in power to argue that online content from political opponents is too “inflammatory” and thus must be removed? What about if the press reveals corruption? That could be considered “inflammatory” as well.

Third, one person’s “inflammatory content” is another’s “useful evidence.” We see this all the time in other censorship cases. I’ve written before about how YouTube was pressured to take down inflammatory “terrorist videos” in the past, and ended up taking down the account of a human rights group documenting atrocities in Syria. It’s easy to say “take down terrorist content!” but it’s not always easy to recognize what’s terrorist propaganda versus what’s people documenting the horrors that the terrorists are committing.

Fourth, time and time again, we’ve seen the intelligence community come out and argue against this kind of censorship, noting that terrorists posting inflammatory content online is a really useful way to figure out what they’re up to. Demanding that platforms take down these useful sources of open source intelligence will actually harm the intelligence community’s ability to monitor and stop plans of attack.

Fifth, this move will almost certainly be used by autocratic and dictatorial regimes to justify their own widespread crackdown on free speech. And, sure, they might do that already, but removing the moral high ground can be deeply problematic in diplomatic situations. How can UK or French diplomats push for more freedom of expression in, say, China or Iran, if they’re actively putting this in place back home? Sure, you can say that they’re different, but the officials from those countries will argue it’s the exact same thing: you’re censoring the internet to “protect” people from “dangerous content.” Well, they’ll argue, that’s the same thing that we do — it’s just that we have different threats we need to protect against.

Sixth, this will inevitably be bad for innovation and the economy in both countries. Time and time again, we’ve seen that leaving internet platforms free from liability for the actions of their users is what has helped those companies develop, provide useful services, employ lots of people and generally help create new economic opportunities. With this plan, sure, Google and Facebook can likely figure out some way to censor some content — and can probably stand the risk of some liability. But pretty much every other smaller platform? Good luck. If I were running a platform company in either country, I’d be looking to move elsewhere, because the cost of complying and the risk of failing to take down content would simply be too much.

Seventh, and finally, it won’t work. The “problem” is not that this content exists. The problem is that lots of people out there are susceptible to such content and are interested and/or swayed by it. That’s a much more fundamental problem, and censoring such content doesn’t do much good. Instead, it tends to only rally up those who were already susceptible to it. They see that the powers-that-be — who they already don’t trust — find this content “too dangerous” and that draws them in even closer to it. And of course that content will find many other places to live online.

Censoring “bad” content always seems like an easy solution if you haven’t actually thought through the issues. It’s not a surprise that May hasn’t — but we had hopes that perhaps Macron wouldn’t be swayed by the same weak arguments.

Filed Under: censorship, emmanuel macron, filtering, france, free speech, inflammatory content, intermediary liability, terrorism, terrorist content, theresa may, uk
Companies: facebook, google

Theresa May Tries To Push Forward With Plans To Kill Encryption, While Her Party Plots Via Encrypted Whatsapp

from the so-about-that... dept

As we’ve discussed a few times, Theresa May and her colleagues have been pushing to break real encryption as part of the party’s manifesto. And they’ve used recent terrorist attacks as an excuse to ramp up that effort — even though the perpetrators of recent attacks were already known to law enforcement and there’s no evidence encryption played any role. Earlier in the year, Home Secretary Amber Rudd had insisted that encrypted communications were completely unacceptable, and specifically namechecked Whatsapp:

It is completely unacceptable. There should be no place for terrorists to hide.

We need to make sure that organisations like WhatsApp, and there are plenty of others like that, don’t provide a secret place for terrorists to communicate with each other.

Of course, as you’ve certainly heard by now, last Thursday’s general election in the UK (understatement alert!) didn’t quite go according to Theresa May’s plan, and she’s now left in a much weaker position with many people expecting she will not survive long as Prime Minister. And yet, showing her uncanny ability to double down on the absolute wrong thing, May is insisting she’s moving ahead with her plans to regulate the internet, which will require vast censorship and a breaking of encryption. On the encryption angle, she’s already got some Parliamentary support given that (as we and others warned at the time!) the Snooper’s Charter (“Investigatory Powers Act”) included a bit that would require anyone offering encrypted communications to unencrypt those communications (which is impossible if the encryption is strong end-to-end, and only possible with broken, insecure, fake “encryption”).
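The impossibility claim here is mathematical, not rhetorical, and can be sketched with a toy one-time pad — a deliberately simplified stand-in for the authenticated ciphers real messengers like WhatsApp use, but one that shows the same principle: the key lives only on the two endpoints, so the server in the middle has nothing readable to hand over.

```python
import secrets

# Toy end-to-end encryption via one-time pad (illustration only --
# real apps use authenticated ciphers, but the principle holds:
# the key is shared only between the endpoints, never the server).
def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # held only by the endpoints

ciphertext = encrypt(key, message)  # this is all the provider ever sees
assert decrypt(key, ciphertext) == message  # only key holders can read it

# A mandate to "unencrypt" on the server fails here: without the key,
# the ciphertext reveals nothing about the message. A provider can only
# comply by weakening the scheme -- i.e., by not doing real end-to-end
# encryption in the first place.
```

This is why the Investigatory Powers Act's decryption requirement and genuine end-to-end encryption cannot coexist: compliance requires the provider to hold keys (or weaken the cipher), which is exactly the "broken, insecure, fake" encryption described above.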

This is both dumb and… hilarious. Because while all of this is going on, Theresa May’s own party has been trying to figure out what the hell they’re going to do. And, of course, the way they’re communicating with each other is with the encrypted Whatsapp software that Amber Rudd was trashing just months ago:

Former minister Ed Vaizey told the BBC that he supports May staying on, but that Tories were discussing possible replacements. Asked whether members were calling one another to plot May’s ouster this weekend, he denied it.

“That’s so 20th century,” he said. “It’s all on WhatsApp.”

Indeed, soon after this came out, one of the possible candidates to replace May, Boris Johnson, claimed that he was sticking by May and… released screenshots of his own Whatsapp conversations to prove that he was supporting her.

So, there you have it. Theresa May is pushing forward to break encrypted chat apps like Whatsapp, while her own party is using encrypted chat apps, like Whatsapp, to discuss whether or not to keep her in place as Prime Minister.

Filed Under: backdoors, boris johnson, encryption, privacy, theresa may, uk
Companies: whatsapp

Strong Crypto Is Not The Problem: Manchester And London Attackers Were Known To The Authorities

from the adding-hay-to-the-stack-makes-it-harder-to-find-the-needles dept

Soon after the attack in Manchester, the UK government went back to its “encrypted communications are the problem” script, which it has rolled out repeatedly in the past. But it has now emerged that the suicide bomber was not only known to the authorities, but that members of the public had repeatedly warned about his terrorist sympathies, as the Telegraph reports:

Counter Terrorism agencies were facing questions after it emerged Salman Abedi told friends that “being a suicide bomber was okay”, prompting them to call the Government’s anti-terrorism hotline.

Sources suggest that authorities were informed of the danger posed by Abedi on at least five separate occasions in the five years prior to the attack on Monday night.

Following the more recent attacks on London Bridge, the UK prime minister, Theresa May, has gone full banana republic dictator, declaring herself ready to rip up human rights “because terrorism”. But once more, we learn that the attackers were well known to the authorities:

London attack ringleader Khuram Butt was identified as a major potential threat, leading to an investigation that started in 2015, UK counterterrorism sources tell CNN.

…

Butt was seen as a heavyweight figure in al-Muhajiroun, whose hardline views made him potentially one of the most dangerous extremists in the UK, the sources said Tuesday. The investigation into Butt involved a “full package” of investigatory measures, the sources told CNN.

Butt was filmed in a 2016 documentary with the self-explanatory title “The Jihadis Next Door”, in which a black flag associated with ISIS was publicly unfurled in London’s Regent’s Park. Even though police were present during the filming, they did not follow up that incident, according to the Guardian:

Police did not make a formal request for footage or information from the makers of a Channel 4 documentary that featured Khuram Butt, one of the London Bridge attackers.

The broadcaster of The Jihadis Next Door said no police requests were made for film or programme maker’s notes to be handed over under the Police and Criminal Evidence Act or Terrorism Act.

The UK authorities were warned last year about another of the London Bridge attackers, Youssef Zaghba, by Italian counter-terrorism officials:

An Italian prosecutor who led an investigation into the London Bridge attacker Youssef Zaghba has insisted that Italian officials did send their UK counterparts a written warning about the risk he posed last year and monitored him constantly while he was in Italy.

Giuseppe Amato, the chief prosecutor in Bologna, who investigated Zaghba when he tried to travel from Italy to join Islamic State in Syria in March 2016, told the Guardian that information about the risk he posed was shared with officials in the UK.

Amato added that he personally saw a report that had been sent to London by the chief counter-terrorism official in Bologna about the Moroccan-born Italian citizen.

Manchester and London are not the only cases where the authorities were informed in advance about individuals. A 2015 article in The Intercept looked at ten high-profile terrorist attacks around the world, and found that in every single case, at least some of the perpetrators were already known to the authorities. Strong encryption is not the problem: it is the inability of the authorities to act on the information they have that is the problem. That’s not to suggest that the intelligence services and police were incompetent, or that there were serious lapses. It’s more a reflection of the fact that far from lacking vital information because of end-to-end encryption, say, the authorities have so much information that they are forced to prioritize their scarce resources, and sometimes they pursue the wrong leads and miss threats.

We wrote about this problem back in 2014, when an FBI whistleblower confirmed what many have been trying to explain to governments keen to extend their surveillance powers: that when you are looking for a needle, adding more hay to the stack makes things worse, not better. What is needed is less mass surveillance, and a more targeted approach. Until Theresa May and leaders around the world understand and act on that, it is likely that more attacks will occur, carried out by individuals known to the authorities, and irrespective of whether they use strong crypto or not.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

Filed Under: attacks, encryption, fud, human rights, london, manchester, theresa may, uk

Theresa May's Plan To Regulate The Internet Won't Stop Terrorism; It Might Make Things Worse

from the counterproductive-not-counterterrorism dept

In the wake of Saturday’s horrific attack on London—the third high-profile terrorist incident in the United Kingdom in the past three months—British policymakers were left scrambling for better ways to combat violent extremism. Prime Minister Theresa May called for new global efforts to “regulate cyberspace to prevent the spread of extremism and terrorist planning,” charging that the internet cannot be allowed to be a “safe space” for terrorists.

While May’s desire for a strong response is easy to understand, her call for more expansive internet regulation and censorship is wrongheaded and will make it harder to win the war against violent extremism.

May didn’t specify the details of her proposal, but to many observers it was clear that she’s asking for sweeping new powers to compel tech companies to help spy on citizens and censor online content. Unfortunately, this isn’t simply a knee-jerk response to horrible circumstances, but reflects a longstanding ambition of May’s Conservative Party to impose draconian controls on cyberspace.

As home secretary, May introduced and oversaw passage of the Investigatory Powers Act, legislation that civil-liberties advocates have called the worst surveillance bill of any western democracy. Following last month’s attack in Manchester, May’s government purportedly briefed newspapers of its intent to invoke the law to compel internet companies to “break their own security so that messages can be read by intelligence agencies.” David Cameron, May’s predecessor, argued for internet companies to be compelled to create backdoors in their software so that there would be no digital communications “we cannot read.”

Even if the U.K. government got the expansive new powers it seems to want, there’s no reason to think it would stop terrorism in its tracks. Researchers have found that suicide attacks are a social phenomenon involving support networks that radicalize the perpetrators. Most people in these networks aren’t themselves terrorists. Allowing them to operate openly makes it easier both for moderating voices to intervene and for intelligence agencies to track. If the communities are forced underground and offline, they’ll be harder to infiltrate and monitor.

Moreover, there’s no way to create communications backdoors that only apply to bad guys. While committed terrorists could easily adapt to open source or analog means of communication in response to a government-mandated backdoor, law-abiding civilians would be exposed to new cybersecurity risks and have their economic and civil liberties compromised. Experience has shown that backdoors inevitably will be hacked, making everyone less safe. As the U.S. House Homeland Security Committee noted in its report on the topic, all of the proposed solutions to access encrypted information “come with significant trade-offs, and provide little guarantee of successfully addressing the issue.”

The policy also would have serious consequences for the United Kingdom’s global competitiveness. As the MIT report “Keys Under Doormats” notes, mandating architectures that allow access to encrypted communications “risks the real economic, geopolitical, and strategic benefits of an open and secure internet for law enforcement gains that are at best minor and tactical.” One of the factors behind the West’s dominance in technology and innovation is that its apps are not government-sanctioned, as they are in China or Russia. After all, what consumer would want to buy an app or device that had a built-in backdoor?

All this isn’t to say that governments should stand back and do nothing to stop terrorist activity online. It’s illegal almost everywhere in the world to provide material support to terrorist activities, not to mention the obvious crimes of murder and conspiracy. But terrorists don’t have free rein in cyberspace. In addition to the United Kingdom’s comparatively robust domestic snooping powers, the nation’s Counter Terrorism Internet Referral Unit (CTIRU) already coordinates flagging and removing unlawful terrorist-related content. Since its launch in 2010, it has worked with online service providers to remove a quarter million pieces of terrorist material.

There also are already international agreements to help authorities uncover and track people engaged in these activities and to exchange intelligence about them across borders. For instance, Mutual Legal Assistance Treaties (MLATs) allow the cross-border flow of data about criminal matters between investigative bodies. While the current MLAT agreements can be slow and cumbersome, efforts are underway to create a new process and allow U.K. authorities to go directly to U.S.-based online service providers, upon meeting certain conditions.

The United Kingdom also is already a key part of the national security data-sharing arrangements between the “Five Eyes,” under which intelligence from Canada, Australia, New Zealand and, of course, the United States is shared almost in real time. While the details are classified, there is evidence that this intelligence sharing has prevented numerous attacks.

To her credit, May emphasized the importance of improving these sorts of international agreements in her speech about fighting terrorism. This is an area where we can and should make positive steps toward reform, increasing the capacity for intelligence sharing in real time and improving cooperation, while ensuring that the right checks and balances are in place.

Combating violent extremism online doesn’t have to be a Pyrrhic victory for democratic societies. Certain risks are unavoidable, and no level of internet regulation will stop the most determined attackers. But there are real steps policymakers can take now to enhance our tools without sacrificing our security, liberty or global competitiveness in the process.

Zach Graves is tech policy director and Arthur Rizer is national security and justice policy director for the R Street Institute.

Filed Under: censorship, encryption, internet, terrorism, theresa may, uk

Theresa May Blames The Internet For London Bridge Attack; Repeats Demands To Censor It

from the not-very-subtle dept

It’s no secret that Theresa May is no fan of the internet and will use basically any excuse at all to push for greater censorship on the internet. Going back to the time when she was Home Secretary, she was already slamming the internet as being responsible for ISIS and promising to censor it. Since she’s become Prime Minister it’s only gotten worse. As part of her manifesto for the general election coming up later this week, a key part of her party’s promise was to censor the internet. And May and her friends seem to leave no tragedy unexploited. With the attack in Manchester a couple weeks back, she used it as an excuse to push the plan to kill end-to-end encryption. And with this weekend’s London Bridge attack, she immediately blamed the internet and promised more censorship:

“We cannot allow this ideology the safe space it needs to breed – yet that is precisely what the internet, and the big companies that provide internet-based services provide,” Ms May said.

“We need to work with allied democratic governments to reach international agreements to regulate cyberspace to prevent the spread of extremism and terrorism planning.”

Of course, there’s no indication that the internet had anything to do with the attack at all. Indeed, another news report claimed that one of the suspects had to ask a neighbor where he could rent the van that was later used in the attack, leading some to point out that if someone can’t even Google that kind of info… the internet might not be to blame here:

Dude couldn't use Google, but yeah internet safe spaces fault. https://t.co/6sEZbLfcl6

— Aidan Walsh (@aidan_walsh) June 4, 2017

Sky Sources: one of the suspects asked a neighbour where he could hire a van prior to the London Bridge attack

— Sky News Newsdesk (@SkyNewsBreak) June 4, 2017


In response to all of this nonsense, Charles Arthur has an excellent column at the Guardian pointing out that responding to all this by censoring the internet not only won’t help, it will almost certainly make things worse.

The problem is this: things can be done, but they open a Pandora’s box. The British government could insist that the identities of people who search for certain terror-related words on Google or YouTube or Facebook be handed over. But then what’s to stop the Turkish government, or embassy, demanding the same about Kurdish people searching on “dangerous” topics? The home secretary, Amber Rudd, could insist that WhatsApp hand over the names and details of every communicant with a phone number. But then what happens in Iran or Saudi Arabia? What’s the calculus of our freedom against others’?

Similarly, May and Rudd and every home secretary back to Jack Straw keep being told that encryption (as used in WhatsApp particularly) can’t be repealed, because it’s mathematics, not material. People can write apps whose messages can’t be read in transit, only at the ends. Ban WhatsApp, and would-be terrorists will find another app, as will those struggling against dictators.
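Arthur's point that encryption is mathematics rather than material is easy to demonstrate. Here is a minimal, purely illustrative sketch using only Python's standard library (a one-time pad, not any real messaging protocol): the two endpoints share a key, and whatever relays the ciphertext in the middle learns nothing without it.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the corresponding key byte."""
    return bytes(a ^ b for a, b in zip(data, key))

# The two ends share a secret key; the relay in the middle never sees it.
message = b"meet at the bridge"
key = secrets.token_bytes(len(message))  # one-time pad: key as long as the message

ciphertext = xor_bytes(message, key)  # this is all that travels over the wire

# Anyone holding the key (i.e., the endpoints) recovers the plaintext;
# anyone without it sees only random-looking bytes.
assert xor_bytes(ciphertext, key) == message
```

Ban one app and the same dozen lines of arithmetic can be rewritten anywhere, by anyone, which is why "repealing" encryption is not a coherent policy goal.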

Blaming the internet for some angry individuals committing violent acts isn’t just dumb and nonsensical, it’s counterproductive and will almost certainly do more harm than good. It’s a way for May and her colleagues to try to pin the blame on “something else” rather than admit that they don’t appear to have a real strategy or plan for much of anything. Blame goes a long way, but blaming a tool that people use every day for all sorts of useful reasons seems really short-sighted.

Filed Under: censorship, internet, london bridge, moral panic, terrorism, theresa may, uk

UK Government Using Manchester Attacks As An Excuse To Kill Encryption

from the say-what-now? dept

It’s no secret that there are those in the current UK government who are just itching to kill encryption. Earlier this year, Home Secretary Amber Rudd made some profoundly ill-informed comments about how encryption on the internet was “completely unacceptable” and saying that they needed to stop companies from providing end-to-end encryption. And, in the recently leaked Tory Manifesto, it was made clear that the current government sees breaking encryption as a priority:

In addition, we do not believe that there should be a safe space for terrorists to be able to communicate online and will work to prevent them from having this capability.

As has been explained time and time again, the only way you prevent bad guys from having encryption is by preventing everyone from having effective encryption… and that makes everyone significantly less safe. Seriously, the only way to do this is to put dangerous vulnerabilities into encryption that will certainly be hacked fairly quickly. This doesn’t make people safer. It makes them less safe.

But, of course, like so many politicians these days (of all major parties), it appears that the Conservative Party in the UK can’t let a good tragedy go to waste. The Independent is reporting that, because of the attack in Manchester this week, the party is ramping up its plans to outlaw encrypted communications:

Government officials appear to have briefed newspapers that they will put many of the most invasive parts of the relatively new Investigatory Powers Act into effect after the bombing at Manchester Arena.

The specific powers being discussed — named Technical Capability Orders — require big technology and internet companies to break their own security so that messages can be read by intelligence agencies.

Again, in case you’re just joining us, requiring that internet companies “break their own security so that messages can be read by intelligence agencies” is the nice way of saying “kill real encryption.” It means that these companies will be deliberately forced to leave vulnerabilities in encryption that will be a goldmine for hackers of all kinds, from foreign surveillance to online criminals.
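To see why a mandated way in is a vulnerability rather than a neatly targeted tool, consider a hypothetical key-escrow scheme. This is a toy sketch using Python's standard library; the names, the escrow database, and the toy stream cipher are all illustrative, not any real system. The escrowed copy of the key decrypts traffic exactly as well as the user's original, so whoever breaches the escrow database reads everything it covers.

```python
import hashlib
import secrets

def toy_stream_xor(data: bytes, key: bytes) -> bytes:
    """Toy cipher for illustration: XOR against a SHA-256-derived keystream."""
    keystream = hashlib.sha256(key).digest()  # 32 bytes, so keep messages short
    return bytes(a ^ b for a, b in zip(data, keystream))

user_key = secrets.token_bytes(32)
message = b"private message"
on_the_wire = toy_stream_xor(message, user_key)

# A mandated "exceptional access" regime forces the provider to keep a copy:
escrow_database = {"user@example.com": user_key}

# A breach of that database is a breach of every conversation it covers:
stolen_key = escrow_database["user@example.com"]
assert toy_stream_xor(on_the_wire, stolen_key) == message
```

The stolen escrow copy is indistinguishable from the legitimate key, which is the whole problem: the "goldmine for hackers" isn't a side effect of the mandate, it is the mandate.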

And, so far, there is zero evidence that the Manchester attack had anything to do with encryption. And, even if it did, so what? If the UK forced companies to break encryption, people planning terrorist attacks would just switch to other encryption products that don’t have corporate entities in the UK. Or they’d come up with other ways to communicate. It will do basically nothing to stop terrorist attacks, but will instead make it much, much easier for all sorts of people with nefarious intent to hack into the private communications of everyone.

Filed Under: amber rudd, encryption, manchester, privacy, security, theresa may, uk

Theresa May Plans To Regulate, Tax And Censor The Internet

from the who-would-vote-for-that? dept

With UK Prime Minister Theresa May recently calling for a new election there, which she is expected to win easily (despite recent reports of narrowing polls), last week May’s Conservative party released its Manifesto (what we in the US tend to call a party’s “platform”). There are all sorts of things in there that are getting press attention, but for the stuff that matters here on Techdirt, let’s just say May’s view of the internet is not a good one. A part of the plan is basically to regulate, tax and censor the internet, because the Conservative Party leadership doesn’t seem to much like the internet — and they especially dislike the fact that Google and Facebook are so successful.

What’s hilarious is that the manifesto basically promises to put in place all sorts of rules that will absolutely kill off any internet economy in the UK, as no company in its right mind would agree to these restrictions, while, at the same time, it talks up how important it is to support digital businesses in the UK. Of course, some of the plan is couched in nice sounding language that should actually scare you:

A Conservative government will develop a digital charter, working with industry and charities to establish a new framework that balances freedom with protection for users, and offers opportunities alongside obligations for businesses and platforms. This charter has two fundamental aims: that we will make Britain the best place to start and run a digital business; and that we will make Britain the safest place in the world to be online.

“Balances” freedoms? Freedoms aren’t supposed to be “balanced.” They’re supposed to be supported and protected. And when you have your freedoms protected, that also protects users. Those two things aren’t in opposition. They don’t need to be balanced. As for “obligations for businesses and platforms” — those five words are basically the ones that say “we’re going to force Google and Facebook to censor stuff we don’t like, while making it impossible for any new platform to ever challenge the big guys.” It’s a bad, bad idea.

Of course, immediately after that, there’s a bunch of nonsense about how the UK will be the “best” place to run a digital business. That’s, uh, not even remotely true based on what is said in the immediately preceding paragraph.

We will ensure there is a sustainable business model for high-quality media online, to create a level playing field for our media and creative industries.

This is a dog whistle to the legacy film and recording industries about terrible copyright laws on the way. For a few years now, those industries have been whining about the need for a “level playing field” — which to them means no internet innovation in business models, but rather a government mandated business model that protects an old, legacy way of doing business. Promising a “sustainable business model” from the government makes no sense. That’s not how it works unless you’re giving companies monopolies… oh, wait, yeah, that’s what copyright is all about. So, basically, say goodbye to lots of innovation in the creative fields in the UK, because Theresa May wants to lock in the business model from 1998.

Our starting point is that online rules should reflect those that govern our lives offline. It should be as unacceptable to bully online as it is in the playground, as difficult to groom a young child on the internet as it is in a community, as hard for children to access violent and degrading pornography online as it is in the high street, and as difficult to commit a crime digitally as it is physically.

Again, these are the kinds of things that lots of people find reassuring… if they know absolutely fuck all about how the internet works and what it would actually take to do this. First off, the rules that govern offline do govern online. Second, it is just as socially unacceptable to bully on the playground as it is online — but (spoilers!) it still happens in both places. It’s sad and unfortunate, but history has yet to come up with a way to stop bullying on the playground, and most suggestions for how to do it online involve ridiculous surveillance and censorship, which creates a whole host of other problems. And the whole “grooming children on the internet” fear is an overblown moral panic about something that happens extremely rarely. As for running into pornography and violence — certainly an issue, but one that parents generally are supposed to handle, rather than the government seeking to censor the entire internet. And what the hell does it even mean to say it should be as difficult to commit a crime digitally as it is physically? In many cases, it’s more difficult. In some cases, it’s easier. But, given the long list of crimes, it’s difficult to argue that digital crime, as a whole, is somehow “easier” than offline crime. It’s a silly, meaningless statement that just plays on bogus fears about the “dangers” of the internet.

We will put a responsibility on industry not to direct users — even unintentionally — to hate speech, pornography, or other sources of harm. We will make clear the responsibility of platforms to enable the reporting of inappropriate, bullying, harmful or illegal content, with take-down on a comply-or-explain basis.

Basically: we will make private internet companies our internet censorship police, or we’ll fine them millions of dollars. This will create all sorts of unnecessary problems. First, to avoid liability, companies will massively over-censor. We see this happen all the time. All sorts of perfectly fine and legitimate content will be censored just to avoid the potential liability. Second, this will be massively expensive. Sure, Facebook and Google can probably handle the expense, but no one else will be able to. If you’re trying to start the next Facebook or Google in the UK, you’re fucked. You can’t afford to police all the content on your platform, nor can you afford the potential liability. Probably best to just move somewhere else. Third, does the UK government really want private platforms like Google and Facebook making these determinations? Why is it handing off the responsibility of what kind of speech is “illegal” to private, for-profit companies (foreign companies, at that)?

In addition, we do not believe that there should be a safe space for terrorists to be able to communicate online and will work to prevent them from having this capability.

And this may be the most terrifying line of all here. That’s the dog whistle for “we’ll outlaw encryption” because encryption — in the minds of foolish, scaredy-cat politicians — creates “safe spaces” for terrorists. Nevermind that the same encryption creates “safe” spaces for every other person and that undermining that makes absolutely everyone less safe. This is a dangerous plan that seems to echo the words of the UK’s Home Secretary, Amber Rudd, from a few months ago, where she wanted to find people who knew the necessary hashtags to silence terrorists online. This isn’t policy making. This is nonsense.

We will educate today’s young people in the harms of the internet and how best to combat them, introducing comprehensive Relationships and Sex Education in all primary and secondary schools to ensure that children learn about the risks of the internet, including cyberbullying and online grooming.

First of all, why is the education only on the “risks” of the internet, and not the benefits and opportunities? What an odd thing to focus on. Second, it’s 2017. Are there really still schools that don’t already teach this stuff? And, as mentioned earlier, the bogeymen of “cyberbullying” and “online grooming” are both overblown moral panics.

We will give people new rights to ensure they are in control of their own data, including the ability to require major social media platforms to delete information held about them at the age of 18, the ability to access and export personal data, and an expectation that personal data held should be stored in a secure way.

And… there’s the “right to be forgotten.” Apparently, the plan is a blanket right to be forgotten for anything about you from before you’re 18. Look, I did stupid things before I was 18. You probably did too. It’s kind of part of being a teenager. You do stupid things. Most people then grow up. They regret what they did, but most normal people recognize that when others did stupid stuff in their teens, it was because they were teenagers who then grew up as well. In other words, most people put that stuff into context. You don’t need to delete it. You just recognize it happened, that the person was a teenager when they did it, and you assume they probably grew up and matured.

We will continue with our £1.9 billion investment in cyber security and build on the successful establishment of the National Cyber Security Centre through our world-leading cyber security strategy. We will make sure that our public services, businesses, charities and individual users are protected from cyber risks. We will further strengthen cyber security standards for government and public services, requiring all public services to follow the most up to date cyber security techniques appropriate.

How the hell are you going to do that at the same time that you’re outlawing encryption?

Some people say that it is not for government to regulate when it comes to technology and the internet. We disagree.

Yeah, we got that from all the nonsense above.

Nor do we agree that the risks of such an approach outweigh the potential benefits.

Then you need to hire at least someone in your leadership who understands the internet, because it’s clear that that’s severely lacking.

We will introduce a sanctions regime to ensure compliance, giving regulators the ability to fine or prosecute those companies that fail in their legal duties, and to order the removal of content where it clearly breaches UK law. We will also create a power in law for government to introduce an industry-wide levy from social media companies and communication service providers to support awareness and preventative activity to counter internet harms, just as is already the case with the gambling industry.

There’s the censorship and taxation bit, all in the course of a couple of sentences. Sanctions to “ensure compliance” with the censorship regime and “levies” to tax Facebook and Google to pay up because of imaginary “internet harms.”

We believe that the United Kingdom can lead the world in providing answers. So we will open discussions with the leading tech companies and other like-minded democracies about the global rules of the digital economy, to develop an international legal framework that we have for so long benefited from in other areas like banking and trade.

So, not only will they tax, regulate and censor the internet, they want to get other countries to do the same thing.

There’s much more in the manifesto, but this is basically a joke, and would destroy the tech sector in the UK rather than help it. It shows an astounding level of ignorance about the internet and technology, and seems to be written by technically illiterate people who fall for internet hoaxes and now only think of the internet in terms of what they fear about it. It’s a bad look, and a rather stunning one from a Conservative Party that supposedly favors deregulation and free-market ideas. This plan is the exact opposite. It’s technically clueless, top-down paternalism.

Filed Under: censorship, copyright, internet, regulations, theresa may, uk