extremism – Techdirt
Stories filed under: "extremism"
ExTwitter Is Such Hot Garbage Even British Cops No Longer Want Anything To Do With It
from the just-not-the-kind-of-extremism-we-like dept
Of course, Elon Musk had to take Twitter private. If he had to answer to shareholders, he would have been ousted months ago for his systematic, single-minded destruction of the company’s value.
Pretty much every move he’s made has been some level of bad, ranging from confusing to annoying to infuriating to catastrophic. Alienating long-term users, chasing away advertisers, amplifying the voices of some of the worst people in the world, kowtowing to foreign dictators Donald Trump considers to be great leaders, turning verification into pay-to-play, and generally just being an all-around asshole, Musk has managed to turn a social media pioneer into a toxic dumpster fire in record time.
It’s not just advertisers fleeing the platform. It’s also the public sector. Government agencies use services like Twitter to reach constituents, and they’re beginning to see why it might be a bad idea to send out their messages via a service awash in a sea of hate, misinformation, extremism, and grifting.
Even entities that often align themselves with the sort of authoritarians Musk and his Trump-loving fanboys dig the most are finding ExTwitter to have moved a bit too far to the extremist side of the spectrum to continue doing (government) business with the platform. Here are Andy Bruce and Muvija M reporting for Reuters about the latest collateral damage produced by the Musk regime:
Reuters contacted all 45 territorial police forces and British Transport Police by email. Of the 33 to give details about their policy, 10 forces who collectively police nearly 13 million people said they were actively reviewing their presence on X, while 13 said they frequently reviewed all their social media platforms.
[…]
Yet of these 23 forces, six said they were cutting their presence to just one or two X accounts. One, North Wales Police, serving nearly 700,000 residents, stopped using X completely in August.
“We … felt that the platform was no longer consistent with our values and therefore we have withdrawn our use of it,” Chief Constable Amanda Blakeman said, adding that they would continue to monitor and review alternative platforms.
That quote could be applied to a large number of former Twitter users, many of whom began leaving the platform after Musk made it clear he’d prefer to be surrounded by conspiracy theorists, domestic extremists, far-right racists, and people willing to turn to violence rather than respect a peaceful transition of power. Then there are the Nazis. Lots of them. And all of this is surrounded by the non-stop gibbering of blue-checked asshats either trying to foist their bigotry on others or simply cluttering up threads with auto-generated replies pushing whatever crypto scam they happen to be participating in.
North Wales pulled the plug. It looks like another police agency may soon be headed for the exit door as well.
Gwent Police said they were reviewing X because of questions about “the tone of the platform and whether that is the right place to reach our communities”. All Gwent’s individual officer accounts have been removed.
It’s not just cops no longer seeing the value in maintaining ExTwitter accounts, although it’s always a surprise to see law enforcement agencies exit a platform that caters to so many of their biggest fans: bigots, white nationalists, and others who will lick any boot they see so long as it’s someone they hate being pinned under the heel.
Interacting with and informing the public is an important function of social media services. But other government agencies are now deciding it’s not worth wading through a cesspool just to hand out a few extra digital pamphlets.
Of 32 ambulance and fire services surveyed by Reuters, nine said they had actively reviewed their presence on X. England’s North East Ambulance Service announced in July that it had stopped posting there.
[…]
In recent months, some British charities and health and educational establishments have said they will no longer post to X.
Every day brings more news like this. Soon, ExTwitter will be of interest to no one but cultural anthropologists. Even those demanding the resurrection of banned accounts are bound to get bored with seeing nothing but their own bigotry and stupidity reflected back at them by a bunch of blue checks with similar interests and a similar lack of anything actually interesting to say.
For public services that seek to serve the entirety of their public, exiting X is the smart move, even if it means temporarily losing a little bit of reach. There’s nothing to be gained by being one of the last rational voices on a service that is pretty much just 4chan but with a more attractive UI.
Filed Under: content moderation, elon musk, extremism, extremist content, police
Companies: twitter, x
Ctrl-Alt-Speech: Between A Rock And A Hard Policy
from the ctrl-alt-speech dept
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- Stack Overflow bans users en masse for rebelling against OpenAI partnership (Tom’s Hardware)
- Tech firms must tame toxic algorithms to protect children online (Ofcom)
- Reddit Lays Out Content Policy While Seeking More Licensing Deals (Bloomberg)
- Extremist Militias Are Coordinating in More Than 100 Facebook Groups (Wired)
- Politicians Scapegoat Social Media While Ignoring Real Solutions (Techdirt)
- ‘Facebook Tries to Combat Russian Disinformation in Ukraine’ – FB Public Policy Manager (Kyiv Post)
- TikTok Sues U.S. Government Over Law Forcing Sale or Ban (New York Times)
- Swiss public broadcasters withdraw from X/Twitter (Swissinfo)
- Congressional Committee Threatens To Investigate Any Company Helping TikTok Defend Its Rights (Techdirt)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.
Filed Under: artificial intelligence, chatgpt, content moderation, disinformation, extremism, russia, ukraine
Companies: facebook, meta, openai, reddit, stack overflow, tiktok, twitter
Yet Another Study Debunks The ‘YouTube’s Algorithm Drives People To Extremism’ Argument
from the maybe-the-problem-is-us,-not-the-machines dept
A few weeks ago, we had director Alex Winter on the podcast to talk about his latest documentary, The YouTube Effect. In that film he spoke with a young man who talked about getting “radicalized” on YouTube and going down the “alt-right rabbit hole.” One thing that Alex talked about in the podcast, but was not in the documentary, was that, at one point, he asked the guy to go to YouTube and see if it would take him down that path again, and he couldn’t even get it to recommend sketchy videos no matter how hard he tried.
The story that’s made the rounds over the years was that YouTube’s algorithm was a “radicalization machine.” Indeed, that story has been at the heart of many recent attacks on recommendation algorithms from many different sites.
And yet, it’s not clear that the story holds up. It is possible that it was true at one point, but even that I’d call into question. Two years ago we wrote about a detailed study looking at YouTube’s recommendation algorithm from January 2016 through December of 2019, and try as they might, the researchers could find no evidence that the algorithm pushed people to more extreme content. As that study noted:
We find no evidence that engagement with far-right content is caused by YouTube recommendations systematically, nor do we find clear evidence that anti-woke channels serve as a gateway to the far right.
Anyway, the journal Science now has another study on this same topic that… more or less finds the same thing. This study was done in 2020 (so after the last study) and also finds little evidence of the algorithm driving people down rabbit holes of extremism.
Our findings suggest that YouTube’s algorithms were not sending people down “rabbit holes” during our observation window in 2020…
Indeed, this new research report cites the one we wrote about two years ago, saying that it replicated those findings, but also highlights that it did so during the election year of 2020.
We report two key findings. First, we replicate findings from Hosseinmardi et al. (20) concerning the overall size of the audience for alternative and extreme content and enhance their validity by examining participants’ attitudinal variables. Although almost all participants use YouTube, videos from alternative and extremist channels are overwhelmingly watched by a small minority of participants with high levels of gender and racial resentment. Within this group, total viewership is heavily concentrated among a few individuals, a common finding among studies examining potentially harmful online content (27). Similar to prior work (20), we observe that viewers often reach these videos via external links (e.g., from other social media platforms). In addition, we find that viewers are often subscribers to the channels in question. These findings demonstrate the scientific contribution made by our study. They also highlight that YouTube remains a key hosting provider for alternative and extremist channels, helping them continue to profit from their audience (28, 29) and reinforcing concerns about lax content moderation on the platform (30).
Second, we investigate the prevalence of rabbit holes in YouTube’s recommendations during the fall of 2020. We rarely observe recommendations to alternative or extremist channel videos being shown to, or followed by, nonsubscribers. During our study period, only 3% of participants who were not already subscribed to alternative or extremist channels viewed a video from one of these channels based on a recommendation. On one hand, this finding suggests that unsolicited exposure to potentially harmful content on YouTube in the post-2019 era is rare, in line with findings from prior work (24, 25).
What’s a little odd, though, is that this new study keeps suggesting that this was a result of changes YouTube made to the algorithm in 2019, and even posits that the possible reason for this finding was that YouTube had already radicalized all the people open to being radicalized.
But… that ignores that the other study that they directly cite found the same thing starting in 2016.
It’s also a little strange in that this new study seems to want to find something to be mad at YouTube about, and seems to focus on the fact that even though the algorithm isn’t driving new users to extremist content, that content is still on YouTube, and external links (often from nonsense peddlers) drive new traffic to it:
Our data indicate that many alternative and extremist channels remain on the platform and attract a small but active audience of individuals who expressed high levels of hostile sexism and racial resentment in survey data collected in 2018. These participants frequently subscribe to the channels in question, generating more frequent recommendations. By continuing to host these channels, YouTube facilitates the growth of problematic communities (many channel views originate in referrals from alternative social media platforms where users with high levels of gender and racial resentment may congregate)
Except, if that’s the case, then it doesn’t really matter what YouTube does here. Because even if YouTube took down that content, those content providers would post it elsewhere (hello Rumble!) and the same nonsense peddlers would just point there. So, it’s unclear what the YouTube problem is in this study.
Hell, the study even finds that in the rare cases where the recommendation algorithm does suggest some nonsense peddler to a non-jackass, most people know better than to click:
We also observe that people rarely follow recommendations to videos from alternative and extreme channels when they are watching videos from mainstream news and non-news channels.
Either way, I do think it’s fairly clear that the story you’ve heard about YouTube radicalizing people not only isn’t true today, but if it was ever true, it hasn’t been so in a long, long time.
The issue is not recommendations. It’s not social media. It’s that there is a subset of the population who seem primed and ready to embrace ridiculous, cult-like support for a bunch of grifting nonsense peddlers. I’m not sure how society fixes that, but YouTube isn’t magically going to fix it either.
Filed Under: algorithms, extremism, rabbit holes, radicalization, recommendation algorithms
Companies: youtube
Section 230 Continues To Not Mean Whatever You Want It To
from the 230-truthers dept
Fri, Jul 9th 2021 10:41am - Ari Cohn
In the annals of Section 230 crackpottery, the “publisher or platform” canard reigns supreme. Like the worst (or perhaps best) game of “Broken Telephone” ever, it has morphed into a series of increasingly bizarre theories about a law that is actually fairly short and straightforward.
Last week, this fanciful yarn took an even more absurd turn. It began on Friday, when Facebook began to roll out test warnings about extremism as part of its anti-radicalization efforts and in response to the Christchurch Call for Action campaign. There appear to be two iterations of the warnings: one asks the user whether they are concerned that someone they know is becoming an extremist; the second warns the user that they may have been exposed to extremist content (allegedly appearing while users were viewing specific types of content). Both warnings provide a link to support resources to combat extremism.
As it is wont to do, the Internet quickly erupted into an indiscriminate furor. Talking heads and politicians raged about the “Orwellian environment” and “snitch squads” that Facebook is creating, and the conservative media eagerly lapped it up (ignoring, of course, that nobody is forced to use Facebook or to pay any credence to their warnings). That’s not to say there is no valid criticism to be lodged; surely the propriety of the warnings and definition of “extremist” are matters on which people can reasonably disagree, and those are conversations worth having in a reasoned fashion.
But then someone went there. It was inevitable, really, given that Section 230 has become a proxy for “things social media platforms do that I don’t like.” And Section 230 Truthers never miss an opportunity to make something wrongly about the target of their eternal ire.
Notorious COVID (and all-around) crank Alex Berenson led the charge, boosted by the usual media crowd, tweeting:
Yeah, I’m becoming an extremist. An anti-@Facebook extremist. “Confidential help is available?” Who do they think they are?
Either they’re a publisher and a political platform legally liable for every bit of content they host, or they need to STAY OUT OF THE WAY. Zuck’s choice.
That is, to be diplomatic, deeply stupid.
Like decent toilet paper, the inanity of this tweet is two-ply. First (setting aside the question of what exactly “political platform” means) is the mundane reality, explained ad nauseam, that Facebook need not, in fact, make any such choice. It bears repeating: Section 230 provides that websites are not liable as the publishers of content provided by others. There are no conditions or requirements. Period. End of story. The law would make no sense otherwise; the entire point of Section 230 was to facilitate the ability for websites to engage in “publisher” activities (including deciding what content to carry or not carry) without the threat of innumerable lawsuits over every piece of content on their sites.
Of course, that’s exactly what grinds 230 Truthers’ gears: they don’t like that platforms can choose which content to permit or prohibit. But social media platforms would have a First Amendment right to do that even without Section 230, and thus what the anti-230 crowd really wants is to punish platforms for exercising their own First Amendment rights.
Which leads us to the second ply, where Berenson gives up this game in spectacular fashion because Section 230 isn’t even relevant. Facebook’s warnings are its own content, which is not immunized under Section 230 in the first place. Facebook is liable as the publisher of content it creates; always has been, always will be. If Facebook’s extremism warnings were somehow actionable (as rather nonspecific opinions, they aren’t) it would be forced to defend a lawsuit on the merits.
It simply makes no sense at all. Even if you (very wrongly) believe that Section 230 requires platforms to host all content without picking and choosing, that is entirely unrelated to a platform’s right to use its own speech to criticize or distance itself from certain content. And that’s all Facebook did. It didn’t remove or restrict access to content; Facebook simply added its own additional speech. If there’s a more explicit admission that the real goal is to curtail platforms’ own expression, it’s difficult to think of.
Punishing speakers for their expression is, of course, anathema to the First Amendment. In halting enforcement of Florida’s new social media law, U.S. District Judge Robert Hinkle noted that Florida would prohibit platforms from appending their own speech to users’ posts, compounding the statute’s constitutional infirmities. Conditioning Section 230 immunity on a platform’s forfeiture of its completely separate First Amendment right to use its own voice would fare no better.
Suppose Democrats introduced a bill that conditioned the immunity provided to the firearms industry by the PLCAA on industry members refraining from speaking out or lobbying against gun control legislation. Inevitably, and without a hint of irony, many of the people urging fundamentally the same thing for social media platforms would find newfound outrage at the brazen attack on First Amendment rights.
At the end of the day, despite all their protestations, what people like Berenson want is not freedom of speech. Quite the opposite. They want to dragoon private websites into service as their free publishing house and silence any criticism by those websites with the threat of financial ruin. It’s hard to think of anything less free speech-y, or intellectually honest, than that.
Ari Cohn is Free Speech Counsel at TechFreedom
Filed Under: 1st amendment, alex berenson, extremism, fact checking, free speech, section 230, speech
Companies: facebook
Governments And Internet Companies Agree On Questionable Voluntary Pact On Extremist Content Online
from the well-meaning-but-misguided dept
Yesterday, there was a big process, called the Christchurch Call, in which a bunch of governments and big social media companies basically agreed to take a more proactive role in dealing with terrorist and violent extremist content online. To its credit, the effort did include voices from civil society/public interest groups that raised issues about how these efforts might negatively impact freedom of expression and other human rights issues around the globe. However, it’s not clear that the “balance” they came to is a good one.
A free, open and secure internet is a powerful tool to promote connectivity, enhance social inclusiveness and foster economic growth.
The internet is, however, not immune from abuse by terrorist and violent extremist actors. This was tragically highlighted by the terrorist attacks of 15 March 2019 on the Muslim community of Christchurch, terrorist attacks that were designed to go viral.
The dissemination of such content online has adverse impacts on the human rights of the victims, on our collective security and on people all over the world.
The “Call” is not binding on anyone. It’s just a set of “voluntary commitments” to try to “address the issue of terrorist and violent extremist content online and to prevent the abuse of the internet….” There are a set of commitments from governments and a separate set from social media companies. On the government side the commitments are:
Counter the drivers of terrorism and violent extremism by strengthening the resilience and inclusiveness of our societies to enable them to resist terrorist and violent extremist ideologies, including through education, building media literacy to help counter distorted terrorist and violent extremist narratives, and the fight against inequality.
Ensure effective enforcement of applicable laws that prohibit the production or dissemination of terrorist and violent extremist content, in a manner consistent with the rule of law and international human rights law, including freedom of expression.
Encourage media outlets to apply ethical standards when depicting terrorist events online, to avoid amplifying terrorist and violent extremist content.
Support frameworks, such as industry standards, to ensure that reporting on terrorist attacks does not amplify terrorist and violent extremist content, without prejudice to responsible coverage of terrorism and violent extremism.
Consider appropriate action to prevent the use of online services to disseminate terrorist and violent extremist content, including through collaborative actions, such as:
* Awareness-raising and capacity-building activities aimed at smaller online service providers;
* Development of industry standards or voluntary frameworks;
* Regulatory or policy measures consistent with a free, open and secure internet and international human rights law.
That mostly seems to stop short of demanding content be taken down, though that last point teeters on the edge. On the social media side, there is the following list of commitments:
Take transparent, specific measures seeking to prevent the upload of terrorist and violent extremist content and to prevent its dissemination on social media and similar content-sharing services, including its immediate and permanent removal, without prejudice to law enforcement and user appeals requirements, in a manner consistent with human rights and fundamental freedoms. Cooperative measures to achieve these outcomes may include technology development, the expansion and use of shared databases of hashes and URLs, and effective notice and takedown procedures.
Provide greater transparency in the setting of community standards or terms of service, including by:
* Outlining and publishing the consequences of sharing terrorist and violent extremist content;
* Describing policies and putting in place procedures for detecting and removing terrorist and violent extremist content.
Enforce those community standards or terms of service in a manner consistent with human rights and fundamental freedoms, including by:
* Prioritising moderation of terrorist and violent extremist content, however identified;
* Closing accounts where appropriate;
* Providing an efficient complaints and appeals process for those wishing to contest the removal of their content or a decision to decline the upload of their content.
Implement immediate, effective measures to mitigate the specific risk that terrorist and violent extremist content is disseminated through livestreaming, including identification of content for real-time review.
Implement regular and transparent public reporting, in a way that is measurable and supported by clear methodology, on the quantity and nature of terrorist and violent extremist content being detected and removed.
Review the operation of algorithms and other processes that may drive users towards and/or amplify terrorist and violent extremist content to better understand possible intervention points and to implement changes where this occurs. This may include using algorithms and other processes to redirect users from such content or the promotion of credible, positive alternatives or counter-narratives. This may include building appropriate mechanisms for reporting, designed in a multi-stakeholder process and without compromising trade secrets or the effectiveness of service providers’ practices through unnecessary disclosure.
Work together to ensure cross-industry efforts are coordinated and robust, for instance by investing in and expanding the GIFCT, and by sharing knowledge and expertise.
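For a concrete sense of what the “shared databases of hashes and URLs” commitment looks like in practice, here is a minimal sketch of a hash-lookup check, assuming a GIFCT-style shared list of digests. The database contents, function names, and use of plain SHA-256 (real consortia generally exchange perceptual hashes so re-encoded copies still match) are illustrative assumptions, not a description of any platform’s actual pipeline.

```python
import hashlib

# Hypothetical shared set of hex digests contributed by participating platforms.
SHARED_HASH_DB = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # placeholder entry
}

def file_sha256(path: str) -> str:
    """Compute the SHA-256 digest of an uploaded file, reading in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def flag_for_review(path: str) -> bool:
    """Return True if the upload matches a known hash and should be routed to
    human review rather than removed automatically (preserving user appeals)."""
    return file_sha256(path) in SHARED_HASH_DB
```

Even in this toy form, the limitation the Call glosses over is visible: a hash match only says a file is a known copy, not whether the person posting it is celebrating an atrocity or documenting one.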
Facebook put up its own list of actions that it’s taking in response to this, but as CDT’s Emma Llanso points out, it’s missing some fairly important stuff about making sure these efforts don’t lead to censorship, especially of marginalized groups and individuals.
In response to all of this, the White House refused to join with the other countries who signed on to the voluntary commitments of the Christchurch Call, noting that it had concerns about whether it was appropriate and consistent with the First Amendment. That’s absolutely accurate and correct. Even if the effort is voluntary and non-binding, and even if it makes references to protecting freedom of expression, once a government gets involved in advocating for social media companies to take down content, it’s crossing a line. The Washington Post quoted law professor James Grimmelmann, who makes this point concisely:
“It’s hard to take seriously this administration’s criticism of extremist content, but it’s probably for the best that the United States didn’t sign,” said James Grimmelmann, a Cornell Tech law professor. “The government should not be in the business of ‘encouraging’ platforms to do more than they legally are required to, or than they could be required to under the First Amendment.”
“The government ought to do its ‘encouraging’ through laws that give platforms and users clear notice of what they’re allowed to do, not through vague exhortations that can easily turn into veiled threats,” Grimmelmann said.
And he’s also right that it’s difficult to take this administration’s position seriously, especially given that the very same day that it refused to join this effort, it was also pushing forward with its sketchy plan to force social media companies to deal with non-existent “conservative bias.” So, on the one hand, the White House says it believes in the First Amendment and doesn’t want governments to get involved, and at the very same time, it’s suggesting that it can pressure social media into acting in a way that it wants. And, of course, this is also the same White House that has made other efforts to get social media companies to remove content from governments they dislike, such as Iran’s.
So, yes, we should be wary of governments telling social media companies what content should and should not be allowed, so it’s good that the White House declined to support the Christchurch Call. But it’s difficult to believe it was doing so for any particularly principled reasons.
Filed Under: censorship, christchurch, christchurch call, extremism, free speech, human rights, social media, terrorist content, voluntary, white house
Companies: facebook, google, microsoft, twitter, youtube
FBI Acts Like It's Still 1960 With Its Report On 'Black Identity Extremists'
from the agency-eats-its-own-speculative-dog-food dept
We already knew Jeff Sessions was a throwback. The new head of the DOJ rolled back civil rights investigations by the agency while calling for harsher penalties and longer jail terms for drug-related crimes, and re-opened the door for asset forfeiture abuse with his rollback of Obama-era policy changes.
But it’s more than just the new old-school DOJ. The FBI is just as regressive. Under its new DOJ leadership, the FBI (inadvertently) published some speculative Blue Lives Matter fanfic [PDF] — an “Intelligence Assessment” entitled “Black Identity Extremists Likely Motivated to Target Police Officers.”
There’s no hedging in the title, despite what the word “likely” usually insinuates. According to the FBI, this means there’s an 80-95% chance it believes its own spin.
Here’s the opening sentence:
The FBI assesses it is very likely Black Identity Extremist (BIE) perceptions of police brutality against African Americans spurred an increase in premeditated, retaliatory lethal violence against law enforcement and will very likely serve as justification for such violence.
And here’s what the term “very likely” means when the FBI uses it: per the agency’s own scale, an 80 to 95 percent likelihood.
Beyond that, the FBI says this:
The FBI has high confidence in these assessments…
And here’s how the FBI defines “high confidence.”
High confidence generally indicates the FBI’s judgments are based on high quality information from multiple sources. High confidence in a judgment does not imply the assessment is a fact or a certainty; such judgments might be wrong. While additional reporting and information sources may change analytical judgments, such changes are most likely to be refinements and not substantial in nature.
What’s in this open-and-shut report? What key elements lead the FBI to believe “BIEs” will be killing cops in the future? Well, it appears to be nothing more than a recounting of recent cop killings, coupled with anecdotal evidence, like the expression of anti-white sentiment in social media posts. Beyond that, there’s little connecting those who have killed cops with the ethereal FBI BIE ideal. There’s certainly no organization behind the killings — only a few common factors. And those factors — if the FBI is allowed to continue to treat “BIE” as a threat to police officers — will do little to discourage violence against police officers.
What it will do is allow law enforcement to engage in racial profiling and to overreact to social media rants by angry black men. And it will allow the FBI to turn into the same FBI that targeted Martin Luther King Jr. and other civil rights activists during the 1960s. In fact, it almost acknowledges as much in the report.
BIEs have historically justified and perpetrated violence against law enforcement, which they perceived as representative of the institutionalized oppression of African Americans, but had not targeted law enforcement with premeditated violence for the nearly two decades leading up to the lethal incidents observed beginning in 2014. BIE violence peaked in the 1960s and 1970s in response to changing socioeconomic attitudes and treatment of blacks during the Civil Rights Movement.
The composers of this report may have a lot of confidence in their assumptions, but no one else seems to.
Daryl Johnson, a former Department of Homeland Security intelligence agent, when asked by Foreign Policy in October why the F.B.I. would create the term “B.I.E.,” said, “I have no idea” and “I’m at a loss.” Michael German, a former F.B.I. agent and fellow with the Brennan Center for Justice’s liberty and national security program, said the “Black Identity Extremists” label simply represents an F.B.I. effort to define a movement where none exists. “Basically, it’s black people who scare them,” he said.
“Could you name an African-American organization that has committed violence against police officers?” Representative Karen Bass asked Attorney General Jeff Sessions at Tuesday’s hearing. “Can you name one today that has targeted police officers in a violent manner?” It’s no surprise that he could not. Mr. Sessions, who confessed that he had not read the report, said he would need to “confirm” and would reply in writing at a later time. The F.B.I. itself admits in the report, that, even by its own definition, “B.I.E. violence has been rare over the past 20 years.”
If the report is acted on, it will be the 1960s all over again.
Although it’s unclear what actions the F.B.I. will take as a result of the report, the conclusions pave the way for it to gather data on, monitor and deploy informants to keep tabs on individuals and groups it believes to be B.I.E.s. This could chill and criminalize a wide array of nonviolent activism in ways that have terrifying echoes of its infamous Cointelpro program, which investigated and intimidated black civil rights groups and leaders, including Marcus Garvey and the Rev. Dr. Martin Luther King Jr. Under this program, F.B.I. agents concocted a false internal narrative connecting Dr. King to foreign enemies, allowing agents to justify threatening to publicize his private life and encouraging him to commit suicide. This is a reminder that while the “Black Identity Extremist” designation is new, the strategy of using a vague definition to justify broad law enforcement action is not.
This is what the report looks like from the outside. It’s unclear if those inside the agency feel the same way. The leaked report confirms many people’s suspicions about law enforcement agencies: they view minorities as threats and will concoct narratives to support these views. There’s no evidence any sort of BIE organization exists, much less the existence of a concerted effort to inflict violence on police officers. But this report is a gift to every police officer and FBI agent who really wants to believe African Americans are out to get them. Given the administration’s unqualified support for law enforcement, coupled with the Commander in Chief’s off-the-cuff encouragement of violence, this report is basically an invitation to start policing like it’s 1960, rather than 2017.
Filed Under: black lives matter, blm, doj, extremism, fbi
Repeal All UK Terrorism Laws, Says UK Government Adviser On Terrorism Laws
from the outbreak-of-sanity dept
It’s become a depressingly predictable spectacle over the years, as politicians, law enforcement officials and spy chiefs take turns to warn about the threat of “going dark“, and to call for yet more tough new laws, regardless of the fact that they won’t help. So it comes as something of a shock to read that the UK government’s own adviser on terrorism laws has just said the following in an interview:
The Government should consider abolishing all anti-terror laws as they are “unnecessary” in the fight against extremists, the barrister tasked with reviewing Britain’s terrorism legislation has said.
[…]
the Independent Reviewer of Terrorism Legislation, argued potential jihadis can be stopped with existing “general” laws that are not always being used effectively to take threats off the streets.
As the Independent reported, the UK government’s Independent Reviewer of Terrorism Legislation, Max Hill, went on:
“We should not legislate in haste, we should not use the mantra of ‘something has to be done’ as an excuse for creating new laws,” he added. “We should make use of what we have.”
Aside from the astonishingly sensible nature of Hill’s comments, the interview is also worth reading for the insight it provides into the changing nature of terrorism, at least in Europe:
Mr Hill noted that some of the perpetrators of the four recent terror attacks to hit the UK were previously “operating at a low level of criminality”, adding: “I think that people like that should be stopped wherever possible, indicted using whatever legislation, and brought to court.”
This emerging “crime-terror nexus” is one reason why anti-terrorism laws are unnecessary. Instead, non-terrorism legislation could be used to tackle what Hill termed “precursor criminality” — general criminal activity committed by individuals who could be stopped and prosecuted before they move into terrorism. Similarly, it would be possible to use laws against murder and making explosive devices to hand down sentences for terrorists, made harsher to reflect the seriousness of the crimes.
Even though Hill himself doubts that the UK’s terrorism laws will be repealed any time soon, his views are still important. Taken in conjunction with the former head of GCHQ saying recently that end-to-end encryption shouldn’t be weakened, they form a more rational counterpoint to the ill-informed calls for more laws and less crypto.
Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+
Filed Under: extremism, max hill, terrorism, terrorism laws, uk
Once Again, Rather Than Deleting Terrorist Propaganda, YouTube Deletes Evidence Of War Crimes
from the take-a-step-back-and-rethink dept
It really was just last week that we were discussing the problems of telling platforms like YouTube to remove videos concerning “violent extremism” because it’s often tough to tell the difference between videos that many people think are okay and ones that those same people think are not. But in that post, we also linked back to a story from 2013 in which — after getting pressure from then Senator Joe Lieberman — YouTube started removing “terrorist” videos, and in the process deleted a channel of people documenting atrocities in Syria.
It appears that history is now repeating itself, because YouTube is getting some grief because (you guessed it) its effort to keep extremist content off its platform has resulted in the deletion of a channel that was documenting evidence of war crimes in Syria.
YouTube is facing criticism after a new artificial intelligence program monitoring “extremist” content began flagging and removing masses of videos and blocking channels that document war crimes in the Middle East.
Middle East Eye, the monitoring organisation Airwars and the open-source investigations site Bellingcat are among a number of sites that have had videos removed for breaching YouTube’s Community Guidelines.
This comes just days after YouTube announced it was expanding its program to remove “terror content” from its platform — including better “accuracy.” Oops.
Again, there are no easy answers here. You can certainly understand why no platform wants to host actual terrorism propaganda. And platforms should have the right to host or decline to host whatever content they want. The real issue is that we have more and more people — including politicians — demanding that these platforms must regulate, filter and moderate the content on their platform to remove “bad” speech. But in the over 4 years I’ve been asking this question since the last time we wrote about the shutdown of the channel documenting atrocities, no one’s explained to me how these platforms can distinguish videos celebrating atrocities from those documenting atrocities. And this gets even more complicated when you realize: sometimes those are the same videos. And sometimes, when terrorists or others post evidence of what they’re doing, people are better able to stop that activity.
There is plenty of “bad” content out there, but the kneejerk reaction that we need to censor it and take it down ignores how frequently that is likely to backfire — as it clearly did in this case.
Filed Under: extremism, platforms
Companies: youtube
Should Social Media Sites Be Forced To Pull Pastor Calling For War With North Korea?
from the countering-violent-extremism dept
There’s been a lot of debate over the past few years about forcing internet platforms — YouTube, Facebook and Twitter, mainly — to respond to terrorists (oddly only Muslim terrorists) using those platforms for propaganda and agitation by taking down that content. It’s often been discussed under the banner of “countering violent extremism” or CVE. These days, those and other platforms tend to have large staffs reviewing videos, and especially quickly pulling down videos of ISIS promoters calling for attacks on America and Europe. And, in some countries it’s now required by law that internet platforms remove such content. And you can certainly understand the gut reaction here: someone calling you evil and encouraging attacks on you is seriously unnerving.
One of the points that we make about this, though, is that while many, many people think it’s “easy” to determine which content is “good” and which content is “bad,” it’s not. The areas of gray are vast and murky. One example we pointed to is that when YouTube was first pressured into taking down terrorist propaganda videos, it resulted in YouTube killing a channel that was documenting atrocities in Syria. Understanding the difference between promoting violence and documenting violence is not easy.
And here’s another example. You may have seen the following news clip floating around, involving a Trump-connected Pastor named Robert Jeffress explaining on a news program why the Bible says it’s okay to assassinate Kim Jong Un and go to war with North Korea.
That video clip is all over the news this week and can be found all over the internet. The copy I’m posting above is from Twitter, but I’m sure it can be found elsewhere as well. But what if, instead of an evangelical pastor, that statement were coming from a Muslim cleric, and instead of North Korea and Kim Jong Un it talked about America and Donald Trump? Would it still be all over social media, or would people be demanding that the internet take it down?
And this question applies no matter what you think of the video above. I’m not making a statement one way or the other on the content of it, even if I have an opinion about that. My point is simply that when we demand that platforms pull down “radical” content pushing for “violent extremism,” it’s really, really difficult to distinguish between the video above and some of what, say, ISIS releases.
This is a point that I think frequently gets lost in these discussions. People think that it’s easy to tell what’s “bad” because it’s easy for them to determine what is bad in their opinion or bad to them. But setting up general rules that scale across an entire platform is almost impossible. And even if you argue that the context of this video is different from my Muslim cleric example, you’re only helping to make my point. Because that would mean that anyone reviewing the video to determine if it stays up or down would have to become knowledgeable in the overall context — which in this case could require understanding centuries of global religious views and conflicts. I’m sorry, but Facebook, YouTube, Twitter and everyone else can’t hire thousands of PhDs in all related fields to review these videos (within hours) with the level of understanding and context necessary to make a judgment call on each and every one.
None of this is to say that the platforms need to leave everything up (or take everything down). But if you’re going to require platforms to police content, you need to at least recognize that any “rules” on this stuff will lead to rules you don’t like. Rules that say a Muslim cleric’s call for war on America is not allowed will almost certainly lead to the video above also not being allowed. Maybe some people are comfortable with neither being allowed, but the situation sure gets tricky quickly…
Filed Under: censorship, countering violent extremism, extremism, kim jong un, north korea, propaganda, robert jeffress, takedowns, threats, videos
Companies: facebook, twitter, youtube
AT&T, Verizon Feign Ethical Outrage, Pile On Google's 'Extremist' Ad Woes
from the manufactured-outrage dept
Fri, Mar 24th 2017 06:01am - Karl Bode
So you may have noticed that Google has been caught up in a bit of a stink in the UK over the company’s YouTube ads being presented near “extremist” content. The fracas began after a report by the Times pointed out that advertisements for a rotating crop of brands were appearing next to videos uploaded to YouTube by a variety of hateful extremists. It didn’t take long for the UK government — and a number of companies including McDonald’s, BBC, Channel 4, and Lloyd’s — to engage in some extended pearl-clutching, proclaiming they’d be suspending their ad buys until Google resolved the issue.
Of course, much like the conversation surrounding “fake news,” most of the news coverage was bizarrely superficial and frequently teetering toward the naive. Most outlets were quick to malign Google for purposely letting extremist content get posted, ignoring the fact that the sheer volume of video content uploaded to YouTube on a daily basis makes hateful-idiot policing a Sisyphean task. Most of the reports also severely understate the complexity of modern internet advertising, where real-time bidding and programmatic placement mean companies may not always know what brand ads show up where, or when.
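To make that programmatic-placement point concrete, here is a toy sketch of a real-time-bidding style auction; all function and field names are hypothetical, not any ad exchange’s actual API. The advertiser’s bidding logic only ever sees the coarse targeting signals the exchange passes along, so a brand can win a placement without ever learning which specific video its ad ran against.

```python
# Toy model of programmatic ad placement; names and fields are hypothetical.

def exchange_bid_request(video_id: str) -> dict:
    """The exchange reduces the impression to broad targeting signals;
    the specific video is never exposed to bidders."""
    return {"channel_category": "news_politics", "country": "GB", "device": "mobile"}

def brand_a_bid(request: dict) -> float:
    """The advertiser bids on audience categories, not individual videos."""
    return 2.50 if request["channel_category"] == "news_politics" else 0.0

def brand_b_bid(request: dict) -> float:
    return 1.75  # flat bid regardless of category

def run_auction(video_id: str, bidders: dict) -> str:
    """Return the winning brand; its ad is then shown next to video_id."""
    request = exchange_bid_request(video_id)
    bids = {name: bid(request) for name, bid in bidders.items()}
    return max(bids, key=bids.get)

# The winning brand never saw "some_extremist_upload_123", only the category bucket.
print(run_auction("some_extremist_upload_123",
                  {"Brand A": brand_a_bid, "Brand B": brand_b_bid}))  # -> Brand A
```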
Regardless, Google wound up issuing a mea culpa stating they’d try to do a better job at keeping ads for the McRib sandwich far away from hateful idiocy:
“We know advertisers don’t want their ads next to content that doesn’t align with their values. So starting today, we’re taking a tougher stance on hateful, offensive and derogatory content. This includes removing ads more effectively from content that is attacking or harassing people based on their race, religion, gender or similar categories. This change will enable us to take action, where appropriate, on a larger set of ads and sites.”
As we’ve noted countless times, policing hate speech is a complicated subject, where the well-intentioned often stumble down the rabbit hole into hysteria and overreach. Amusingly though, AT&T and Verizon — two U.S. brands not exactly synonymous with ethical behavior — were quick to take advantage of the situation, issuing statements that they too were simply outraged — and would be pulling their advertising from some Google properties post haste. This resulted in a large number of websites regurgitating said outrage with a decidedly straight face:
“We are deeply concerned that our ads may have appeared alongside YouTube content promoting terrorism and hate,” an AT&T spokesperson told Business Insider in a written statement. “Until Google can ensure this won’t happen again, we are removing our ads from Google’s non-search platforms.”
“Once we were notified that our ads were appearing on non-sanctioned websites, we took immediate action to suspend this type of ad placement and launched an investigation,” a Verizon spokesperson told Business Insider. “We are working with all of our digital advertising partners to understand the weak links so we can prevent this from happening in the future.”
Of course, if you know the history of either company, you should find this pearl-clutching a little entertaining. In just the last few years, AT&T has been busted for turning a blind eye to drug dealer directory assistance scams, ripping off programs for the hearing impaired, defrauding government programs designed to shore up low-income connectivity, and actively helping “crammers” by making scam charges on consumer bills harder to detect. Verizon, recently busted for covertly spying on internet users and defrauding cities via bogus broadband deployment promises, isn’t a whole lot better.
That’s not to say that all of the companies involved in the Google fracas are engaged in superficial posturing for competitive advantage. Nor is it to say that Google can’t do more to police the global hatred brigades. But as somebody who has spent twenty years writing about these two companies specifically, the idea that either gives much of a shit about their ads showing up next to hateful ignoramuses is laughable. And it was bizarre to see an ocean of news outlets just skip over the fact that both companies are pushing hard into advertising themselves with completed or looming acquisitions of Time Warner, AOL and Yahoo.
Again, policing hateful idiocy is absolutely important. But overreach historically doesn’t serve anybody. And neither does pretentious face fanning by companies looking to use the resulting hysteria to competitive advantage.
Filed Under: ads, extremism, moral panic, pearl clutching, search, video
Companies: at&t, google, verizon, youtube