isis – Techdirt

Australian Police Go Full FBI, Radicalize Autistic Teen Whom Officers Told Parents They Were Trying To Help

from the how-do-you-sleep dept

The FBI has done some heinous things in pursuit of its counter-terrorism objectives. While it's true the FBI has occasionally nabbed actual terrorists, it seems to prefer creating terrorists to going after those who are already avowed terrorists.

The FBI utilizes informants and undercover agents to perform this highly questionable work. Investigations border on entrapment. Internet loudmouths, petty criminals, or people with mental health issues are pushed and prodded to make their words a reality. In most cases, the targets of these investigations can't do it on their own. They don't have the money, the expertise, or even the will to follow through with violent acts. Informants provide the tools, weapons, plans, and constant pressure needed to turn otherwise harmless people into so-called terrorists, whom the FBI can swoop in and arrest the moment they start turning the informants' concocted plans into reality.

Apparently, the FBI is not alone in its willingness to radicalize people just so it can arrest them and hit them with charges that could result in decades of imprisonment. The counter-terrorist wing of Australian law enforcement does the same thing. This truly horrific story at least has a (partially) happy ending. But the events leading up to this conclusion are cruel and inhumane.

Thomas Carrick (the pseudonym given to him by the court) is a 13-year-old with autism. Thomas has an IQ of 71 and is a recipient of national disability insurance. He became fixated on the Islamic State, spending a lot of time watching ISIS videos and, apparently, asking his parents to purchase bomb-making ingredients for him. His parents, who are not native English speakers, asked the local police for help deterring his fascination with Islamic extremism.

They provided officers with access to Thomas, his home, his phone, his mother’s phone, his room, and to personal information gathered by his school and psychologist. At the start, the police actually did what they said they’d do: they sought help for Thomas. He was assigned to a case manager and met regularly with a psychologist. An officer who accessed the contents of Thomas’ phone noted he had downloaded a lot of stuff related to China and the Communist Party, but very little related to the Islamic State. They also set him up with an imam to discuss the religion of Islam in a more peaceful context.

Had things stayed this way, there would be nothing to report. But three months after this helpful path was opened up for Thomas, the country's War on Terror wing decided to insert itself into the mix. The Joint Counter-Terrorism Team [JCTT] (a mix of Australian federal officers, Victoria police, and ASIO members) opened a parallel investigation that actively worked to undo and undermine all the help Thomas was receiving from other law enforcement officers.

An online covert operative was tasked with communicating to Thomas using two personae: a 24-year-old Muslim man from NSW, and a more extreme person located overseas.

[…]

The operative chatted with Thomas on 55 of the next 71 days, including during breaks at school and late at night.

[…]

The first persona introduced Thomas to the second, more extreme, persona, who encouraged him to make a bomb or kill an AFP member.

But the operative gave evidence that Thomas was naive, and living a “fantasy life online”, including by asking questions like whether he could join the kids’ section of Islamic State.

On 8 August 2021, Thomas sent a photo to the operative which showed him wearing his school uniform, a hoodie and a face mask and holding a knife with “ISIS” written on it in marker.

His house was searched within days, and he was charged less than two months later.

The JCTT was well aware the therapeutic efforts authorized by police were still underway when it decided to turn this 13-year-old into a terrorist. When seeking authorization to arrest Thomas, the detective superintendent (apparently deliberately) failed to inform his supervisors that he had evidence the JCTT's undercover work was having a negative impact on Thomas' rehabilitation. And, of course, that's the point: the JCTT only wins when it arrests terrorists. If it has to do all the dirty work itself, it apparently will.

And that’s not the worst of it. There’s also this:

[Magistrate Lesley] Fleming found the JCTT also deliberately delayed charging Thomas with offences until after he turned 14, as it made it harder for him to use the defence of doli incapax, which refers to the concept that a child is not criminally responsible for their actions.

The JCTT also performed another search of Thomas' room for criminal evidence while maintaining the pretense that it was part of the parallel police effort to dissuade Thomas from fixating on the Islamic State.

Fortunately, Thomas has been freed and is no longer facing charges. Magistrate Fleming’s order rips the JCTT to shreds for its abominable actions.

“The community would not expect law enforcement officers to encourage a 13-14 year old child towards racial hatred, distrust of police and violent extremism, encouraging the child’s fixation on ISIS,” magistrate Lesley Fleming said in the decision.

“The community would not expect law enforcement to use the guise of a rehabilitation service to entice the parents of a troubled child to engage in a process that results in potential harm to the child.

“The conduct engaged in by the JCTT and the AFP falls so profoundly short of the minimum standards expected of law enforcement offices [sic] that to refuse this [stay] application would be to condone and encourage further instances of such conduct.”

Thomas had a chance to be rehabilitated. But the JCTT deliberately harmed a minor to serve its own ends.

“The rehabilitation of TC was doomed once the [operator] connected online…befriended TC and fed his fixation, providing him with a new terminology, new boundaries and an outlet for him to express, what was in part, his fantasy world.”

This is truly disgusting. One wonders how the operatives involved with the deliberate destruction of a child (and their childhood) live with themselves. What possibly justifiable ends could they have been serving with this effort? Thomas was already being closely observed by law enforcement, in the hope that such close supervision would encourage him to find healthier outlets for expression.

What happened here was evil. There's no other word for it. And the added cruelty of waiting a few months to deprive the minor of a courtroom defense is symptomatic of the sickness that seems to pervade counter-terrorism agencies. The need to win subsumes the need to serve the public's interests. And no one's interests were served here other than the pitiable counter-terrorism cops who can't get through the day without the brief ego boost of an unearned "win."

Filed Under: australia, entrapment, isis, jctt, own plot, terrorism, thomas carrick, victoria police

Some Initial Thoughts On The Supreme Court Oral Arguments In Gonzalez: Non-Experts Might Still Make A Mess Of Things, But Without Understanding Why

from the initial-thoughts dept

Despite the Supreme Court hearing what could be the most consequential case regarding the future of the internet in decades, I decided to log off for most of Tuesday and go do something fun, far away from any internet connection. I didn't listen live to the oral arguments, but rather chose to listen (at 3x speed) to the arguments Tuesday evening, while simultaneously reading the transcript. My colleague Cathy Gellis didn't just listen to the oral arguments, but attended them live at the Supreme Court (and is doing the same today with the associated Taamneh case), so I expect she'll have a more thorough analysis later.

Instead, here are a few quick thoughts:

On the whole… well, it could have been much worse. If I had to pick someone I hope writes the eventual opinion, it would actually be Justice Kavanaugh, who seemed to have the deepest understanding of the issues here.

Either way, I’m sure we’ll have much more on this, though we’ll have to wait until the opinion comes out, likely in June, to figure out what this all really means.

Filed Under: eric schnapper, gonzalez v. google, intermediary liability, isis, lisa blatt, recommendation algorithms, section 230, supreme court, terrorism
Companies: google, youtube

Supreme Court Takes Section 230 Cases… Just Not The Ones We Were Expecting

from the well,-this-is-not-great dept

So, plenty of Supreme Court watchers and Section 230 experts all knew that this term was going to be a big one for Section 230… it's just that we all expected the main issue to be the NetChoice cases regarding Florida and Texas's social media laws (those cases will likely still get to SCOTUS later in the term). There were also a few other possible Section 230 cases that I thought SCOTUS might take on, but still, the Court surprised me by agreeing to hear two slightly weird Section 230 cases. The cases are Gonzalez v. Google and Twitter v. Taamneh.

There are a bunch of similar cases, many of which were filed by two law firms together, 1-800-LAW-FIRM (really) and Excolo Law. Those two firms have been trying to claim that anyone injured by a terrorist group should be able to sue internet companies because those terrorist groups happened to use those social media sites. Technically, they’re arguing “material support for terrorism,” but the whole concept seems obviously ridiculous. It’s the equivalent of the family of a victim of ISIS suing Toyota after finding out that some ISIS members drove Toyotas.

Anyway, we’ve been writing about a bunch of these cases, including both of the cases at issue here (which were joined at the hip by the 9th Circuit). Most of them get tossed out pretty quickly, as the court recognizes just how disconnected the social media companies are from the underlying harm. But one of the reasons they seem to have filed so many such cases all around the country was to try to set up some kind of circuit split to interest the Supreme Court.

The first case (Gonzalez) dealt with ISIS terrorist attacks in Paris in 2015. The 9th Circuit rejected the claim that Google provided material support to terrorists because ISIS posted some videos to YouTube. To try to get around the obvious 230 issues, Gonzalez argued that YouTube recommended some of those videos via the algorithm, and those recommendations should not be covered by 230. The second case, Taamneh, was… weird. It has a somewhat similar fact pattern, but dealt with the family of someone who was killed by an ISIS attack at a nightclub in Istanbul in 2017.

The 9th Circuit tossed out the Gonzalez case, saying that 230 made the company immune even for recommended content (which is the correct outcome) but allowed the Taamneh case to move forward, for reasons that had nothing to do with Section 230. In Taamneh, the district court initially dismissed the case entirely without even getting to the Section 230 issue by noting that Taamneh didn’t even file a plausible aiding-and-abetting claim. The 9th Circuit disagreed, said that there was enough in the complaint to plead aiding-and-abetting, and sent it back to the district court (which could then, in all likelihood, dismiss under Section 230). Oddly (and unfortunately) some of the judges in that ruling issued concurrences which meandered aimlessly, talking about how Section 230 had gone too far and needed to be trimmed back.

Gonzalez appealed the issue regarding 230 and algorithmic promotion of content, while Twitter appealed the aiding and abetting ruling (noting that every other court to try similar cases found no aiding and abetting).

Either way, the Supreme Court is taking up both cases and… it might get messy. Technically, the question the Supreme Court is being asked to answer in the Gonzalez case is:

Whether Section 230(c)(1) of the Communications Decency Act immunizes interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limits the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information.

Basically: can we wipe out Section 230's key liability protections for any recommended content? This would be problematic. The whole point of Section 230 is to put the liability on the proper party: the one actually speaking. Making sites liable for recommendations creates all of the same problems that making them liable for hosting would — specifically, requiring them to take on liability for content they couldn't possibly thoroughly vet before recommending it. A ruling in favor of Gonzalez would create huge problems for anyone offering search on any website, because a "bad" content recommendation could lead to liability, not for the actual content provider, but for the search engine.

That can’t be the law, because that would make search next to impossible.

For what it’s worth, there were some other dangerously odd parts of the 9th Circuit’s Gonzalez rulings regarding Section 230 that are ripe for problematic future interpretation, but those parts appear not to have been included in the cert petition.

In Taamneh, the question is focused on the aiding and abetting question, but ties into Section 230, because it asks if you can hold a website liable for aiding and abetting if they try to remove terrorist content but a plaintiff argues they could have been more aggressive in weeding out such content. There’s also a second question of whether or not you can hold a website liable for an “act of intentional terrorism” when the actual act of terrorism had nothing whatsoever to do with the website, and was conducted off of the website entirely.

(1) Whether a defendant that provides generic, widely available services to all its numerous users and “regularly” works to detect and prevent terrorists from using those services “knowingly” provided substantial assistance under 18 U.S.C. § 2333 merely because it allegedly could have taken more “meaningful” or “aggressive” action to prevent such use; and (2) whether a defendant whose generic, widely available services were not used in connection with the specific “act of international terrorism” that injured the plaintiff may be liable for aiding and abetting under Section 2333.

These cases should worry everyone, especially if you like things like searching online. My biggest fear, honestly, is that this Supreme Court (as it’s been known to do) tries to split the baby (which, let us remember, kills the baby) and says that Section 230 doesn’t apply to recommended content, but that the websites still win because the things on the website are so far disconnected from the actual terrorist acts.

That really feels like the kind of solution that the Roberts court might like, thinking that it's super clever when really it's just dangerously confused. It would open up a huge Pandora's box of problems, leading to all sorts of lawsuits regarding any kind of recommended content, including search, recommendation algorithms, your social media feeds, and more.

A good ruling (if such a thing is possible) would be a clear statement that of course Section 230 protects algorithmically recommended content, because Section 230 is about properly putting liability on the creator of the content and not the intermediary. But we know that Justices Thomas and Alito are just itching to destroy 230, so we're already down two Justices to start.

Of course, given that this court is also likely to take up the NetChoice cases later this term, it is entirely possible that next year the Supreme Court may rule that (1) websites are liable for failing to remove certain content (in these two cases) and (2) websites can be forced to carry all content.

It’ll be a blast figuring out how to make all that work. Though, some of us will probably have to do that figuring out off the internet, since it’s not clear how the internet will actually work at that point.

Filed Under: aiding and abetting, algorithms, gonzalez, isis, recommendations, section 230, supreme court, taamneh, terrorism, terrorism act
Companies: google, twitter

Prosecutors Drop Criminal Charges Against Fake Terrorist Who Duped Canadian Gov't, NYT Podcasters

from the borat-without-the-punchlines dept

For a couple of years, a prominent terrorist remained untouched by Canadian law enforcement. Abu Huzayfah claimed to have traveled to Syria in 2014 to join the Islamic State. A series of Instagram posts detailed his violent acts, as did a prominent New York Times Peabody Award-winning podcast, “Caliphate.”

But Abu Huzayfah, ISIS killer, never existed, something the Royal Canadian Mounted Police verified a year before the podcast began. Despite that, Ontario resident Shehroze Chaudhry — who fabricated tales of ISIS terrorist acts — remained a concern for law enforcement and Canadian government officials, who believed his alter ego was not only real, but roaming the streets of Toronto.

All of this coalesced into Chaudhry's arrest for the crime of pretending to be a terrorist. Chaudhry was charged with violating the "terrorism hoax" law, which is a real thing, even though it's rarely used. Government prosecutors indicated they intended to argue Chaudhry's online fakery caused real-world damage, including the waste of law enforcement resources and the unquantifiable public fear that Ontario housed a dangerous terrorist.

Chaudhry was facing a possible sentence of five years in prison, which seems harsh for online bullshit, but is far, far less than charges of actual terrorism would bring. But it appears everything has settled down a bit and the hoaxer won't be going to jail for abusing the credulity of others, a list that includes Canadian government officials and New York Times podcasters.

A Canadian man admitted in court on Friday that he made up tales about serving as an Islamic State fighter and executioner in Syria. In exchange, Canadian authorities dropped criminal charges against him of perpetrating a hoax involving the threat of terrorism.

This is pretty much where this was always going to end up. Chaudhry's acts were stupid, not criminal. That others were taken in by his tales shouldn't elevate them from "potentially embarrassing" to "federally criminal." And the fact that the RCMP had already interviewed him and decided not to pursue him as a terrorism suspect nearly a year before he became the central figure in a New York Times podcast indicates the government knew his online persona was a hoax for years before suddenly deciding it should be treated as a criminal act.

Chaudhry is now (mostly) free. He’s no longer facing criminal charges but he will be treated like a criminal for at least the next couple of years.

Under the terms of the peace bond, which is reserved for people who the authorities fear may commit terrorist acts, Mr. Chaudhry must remain in Ontario for the next year and live with his parents. He is prohibited from owning any weapons, must continue to receive counseling and is required to report any changes in his virtual or physical addresses to the police.

Fool us once, shame on us. Fool us for multiple episodes, well… that’s on you, buddy. This has wrapped up one of the weirder tales of the War on Terror, a complete reversal of the expectations here in the US, where it’s the feds who are the fake terrorists.

Filed Under: abu huzayfah, canada, fake terrorist, isis, rcmp, shehroze chaudhry, terrorism, terrorism hoax

Social Network GETTR, Which Promised To Support 'Free Speech,' Now Full Of Islamic State Jihadi Propaganda

from the have-fun-with-it-folks dept

When last we checked in on GETTR, the latest in the Gab-Parler trend of very naive people setting up a new social network they hope will become "MAGA central" by claiming, ridiculously, that they "won't censor," it was overrun by furry porn and My Little Pony porn. The site, which is run by former Trump spokesperson Jason Miller, has struggled to understand how content moderation actually works, and is now facing yet another kind of content moderation challenge: jihadi propaganda from the Islamic State.

Politico has an article about how GETTR is now being flooded with such propaganda.

Islamic State "has been very quick to exploit GETTR," said Moustafa Ayad, executive director for Africa, the Middle East and Asia at the Institute for Strategic Dialogue, a think tank that tracks online extremism, who first discovered the jihadi accounts and shared his findings with POLITICO.

"On Facebook, there was one of these accounts that I follow that is known to be Islamic State, which said, 'Oh, Trump announced his new platform. Inshallah, all the mujahideen will exploit that platform,'" he added. "The next day, there were at least 15 accounts on GETTR that were Islamic State."

As the article notes, the Islamic State quickly urged its followers to build up a big GETTR presence:

"If this app reaches the expected success, which is mostly probable, it should be adopted by followers and occupied in order to regain the glory of Twitter, may God prevail," one Islamic State account on Facebook wrote on July 6.

And thus, we quickly learned that Jason Miller’s commitment to free speech on GETTR isn’t as absolute as he would have you believe:

Some of the jihadi posts on GETTR from early July were eventually taken down, highlighting that the pro-Trump platform had taken at least some steps to remove the harmful material.

You don’t say? And then this statement is pretty funny as well:

"ISIS is trying to attack the MAGA movement because President Trump wiped them off the face of the earth, destroying the Caliphate in less than 18 months, and the only ISIS members still alive are keyboard warriors hiding in caves and eating dirt cookies," Jason Miller, CEO of GETTR, said in a statement. "GETTR has a robust and proactive moderation system that removes prohibited content, maximizing both cutting-edge A.I. technology and human moderation."

Huh. So now you’re admitting that any social media site needs a combination of technology and human moderation in order to remove prohibited content? You mean, just like Facebook, Twitter, and basically every other site that has to set up policies for what’s allowed and what’s not and then enforce it? So very, very interesting.

I am curious, of course, whether or not our regular commenters, who still keep insisting that Twitter and Facebook must be forced to allow “all” speech on their platforms feel that GETTR is a problem as well. Or is that somehow different?

Filed Under: content moderation, donald trump, free speech, isis, islamic state, jason miller, propaganda, social media
Companies: gettr

FBI Turns A Man With Mental Health Issues Into A 'Terrorist,' Busts Him For Using The Internet

from the must-have-locked-up-all-the-real-terrorists-already dept

Another FBI counterterrorism "investigation" has turned someone with mental health issues into a potential long-term tenant of the federal prison system. The arrest happened in August, but the documents related to the arrest weren't unsealed until earlier this month.

This summation of events shows how little the FBI needs to do to get someone charged with a federal crime:

A Clarksville man has been arrested following an investigation by the FBI into ISIS-connected terrorist threats they say he made against the Clarksville Police Department and the Fort Campbell PX Exchange.

Jason Solomon Stokes, 41, was arrested Aug. 20 and charged with sending threatening communications interstate, a federal crime, according to documents unsealed today and obtained by Clarksville Now.

Using the internet for anything makes it "interstate," which gives the feds jurisdiction. Stokes spoke about attacks in internet chat groups but never obtained anything needed to carry them out, like explosives or weapons. More details make it clear Stokes isn't a dangerous terrorist, but rather someone who would have benefited from intervention by mental health professionals. The sad thing is the FBI agent who pursued the investigation knew this and just kept going until he could ring up Stokes on terrorism-related charges.

Agents accompanied by mental health professionals met with Stokes, who was being treated medically for schizoaffective disorder, according to the complaint. He was living with his mother in Summit Heights. Stokes admitted to making the posts but denied being violent and denied owning any weapons.

There was another visit later. And in that one, the mental health professional handed out a diagnosis… which was ignored by FBI agent Scot Sledd.

About a year and a half later, on about Nov. 5, 2019, the FBI was tipped that Stokes had made more terrorism-supporting statements. They again interviewed him and advised him to stop posting messages on social media. The mental health professionals advised that Stokes was not in crisis or a danger to himself or others, the documents said.

Five months later, Stokes was back in the ISIS-focused chat rooms talking about attacks again. And, again, he never showed interest in actually carrying them out. He backed out as the proposed attack date approached, saying he was worried about his elderly mother. He also failed to acquire weapons and ammunition to carry out the attack, offering to try again at some undetermined point in the future.

The criminal complaint makes it clear Stokes spent most of his time talking to FBI informants. And his "interviews" with Agent Sledd — accompanied by the mental health professional — were also part of the agency's subterfuge. Sledd never identified himself as an FBI agent — a disclosure that might have made Stokes aware of the potential consequences of his online activities and perhaps pushed him away from engaging in these conversations.

Stokes’ online posts were first seen by one FBI informant. A second FBI informant initiated conversations with Stokes, hoping to secure a recording of Stokes pledging allegiance to the Islamic State. This led to a third informant pushing Stokes towards acquiring weapons. When the third informant set a date for the weapons delivery, Stokes backed out.

After that, Stokes never spoke to any of the three informants the FBI set him up with. That was July 24, 2020. The FBI arrested him a month later, apparently deciding this month of silence was the perfect time to wrap up this pathetic "investigation" and pursue charges.

The FBI has not announced this arrest via its site. Neither has the DOJ. This suggests neither entity is especially proud of this takedown of a man suffering from mental health issues — issues the FBI ignored to rack up another cheap win. His online activities may have justified some additional scrutiny, but when a mental health professional says someone isn't a threat to himself or others, maybe the FBI should steer investigative resources towards people who actually are threats.

Filed Under: chatroom, fbi, internet, isis, jason stokes, mental health, own plots, terrorism

Ninth Circuit Dumps Sentencing Enhancement Handed To Defendant For Opening Social Media Accounts For ISIS Sympathizers

from the decades-in-jail-for-sock-puppet-construction dept

The FBI is still creating terrorists — finding loud-mouthed online randos to radicalize by hooking them up with undercover agents and informants seemingly far more interested in escalating things than defusing possibly volatile individuals.

The Ninth Circuit Court of Appeals has, fortunately, decided to roll back a ridiculous sentencing enhancement added to another one of the FBI's homegrown extremists. The terrorism enhancement in this case was triggered by this: the defendant's opening of six social media accounts for alleged ISIS sympathizers.

The details of the case — contained in the court’s reversal [PDF] of the sentencing enhancement — show Amer Alhaggagi was a bit of a troll. The son of Yemeni immigrants, Alhaggagi was born in California but spent a lot of his life traveling back and forth to Yemen to spend time with his mother. He was raised in a Muslim home but that upbringing didn’t really make him a Muslim. He didn’t seem to have much interest in adhering to the religion’s rules and instead drifted towards the internet, where he developed a “sarcastic and antagonistic persona.”

This is how things were going before the FBI got involved:

In 2016, at the age of 21, Alhaggagi began participating in chatrooms, and chatting on messaging apps like Telegram, which is known to be used by ISIS. He chatted both in Sunni group chats sympathetic to ISIS and Shia group chats that were anti-ISIS. He trolled users in both groups, attempting to start fights by claiming certain users were Shia if he was in a Sunni chatroom, or Sunni if he was in a Shia chatroom, to try to get other users to block them. He was expelled from chatrooms for inviting female users to chat, which was against the etiquette of these chatrooms, as participants in those chats followed the Islamic custom of gender segregation.

One can only imagine what would have happened to Alhaggagi if the FBI hadn’t decided to step in. Probably something way less interesting than what happened when it did. The internet is full of trolls, even the parts of the internet most people don’t access. But an FBI source happened to be hanging out in a chatroom when Alhaggagi attempted to stir up the crowd there with some extremist chatter.

In one Sunni chatroom, in late July 2016, Alhaggagi caught the attention of a confidential human source (CHS) for the FBI when he expressed interest in purchasing weapons. In chats with the CHS, Alhaggagi made many claims about his ability to procure weapons, explaining that he had friends in Las Vegas who would buy firearms and ship them to him via FedEx or UPS. Alhaggagi also made disturbing claims suggesting he had plans to carry out attacks against “10,000 ppl” in different parts of the Bay Area by detonating bombs in gay nightclubs in San Francisco, setting fire to residential areas of the Berkeley Hills, and lacing cocaine with the poison strychnine and distributing it on Halloween. He claimed to have ordered strychnine online using a fake credit card, of which he sent a screenshot to the CHS, bragging that he engaged in identity theft and had his own device-making equipment to make fake credit cards.

In isolation, this sounds horrifying. Given the context of Alhaggagi’s internet trolling, it was just more bullshit. As the court notes, his online persona was not especially well-crafted and prone to delivering contradictory claims during the same chatroom conversations.

One minute his persona was selling weapons, the next he claimed to need them, all in the same chatroom. His persona allegedly had associates in Mexican cartels who could get him grenades, bazookas, and RPGs, offered to join a user in Brazil to attack the Olympics, and was considering conducting attacks in Dubai.

This isolated braggadocio led to 24-hour surveillance by the feds and the insertion of an undercover agent into Alhaggagi’s life. The FBI’s confidential informant pushed Alhaggagi to meet with the undercover agent and things kind of took off from there. The pair discussed bomb-making and visited a storage space to supposedly be used to store bomb stuff. Alhaggagi said other ridiculous things, like detailing a plan to become a cop so he could obtain more weapons. On the third visit, the FBI stocked the storage space with fake explosives. On the drive back, Alhaggagi pointed out good locations for bombs.

Then he stopped meeting with the undercover agent. Alhaggagi claimed seeing the explosives in the storage space made it clear he’d taken this too far. From that point on, he never contacted the undercover agent again.

The FBI searched his home in November of 2016, finding evidence showing Alhaggagi was back to trolling, rolling into ISIS-related chatrooms to say bad things about the US government. It also found that he had opened up Twitter and Gmail accounts for some people in the chatroom, who were alleged ISIS sympathizers. Some of these accounts were later linked to ISIS’s propaganda organization. Agents also found online searches related to bomb-making, strychnine, and flammable devices/substances.

The “material support” alleged here was the opening of social media and email accounts. But the court doesn’t find the government’s arguments persuasive. The government had the burden of proving Alhaggagi knew these accounts would be used to “intimidate or coerce” the US government or “retaliate” against it via violent acts.

The lower court simply shrugged and said there was no other possible use for the accounts, given that they were created for people in a pro-ISIS chatroom. But that shrug of indifference added decades to Alhaggagi’s sentence. The probation office’s pre-sentencing report suggested 48 months. The government countered with its terrorism enhancement, asking for a 396-month sentence. That’s a difference of 29 years. The court settled on 188 months with 10 years of supervised release.

The Appeals Court reminds the government that the enhancement doesn’t automatically apply to material support for terrorism charges. It also notes the government fell short on the burden of proof.

The district court’s conclusion rests on the erroneous assumption that in opening the social media accounts for ISIS, Alhaggagi necessarily understood the purpose of the accounts was “to bolster support for ISIS’s terrorist attacks on government and to recruit adherents.” Unlike conspiring to bomb a federal facility, planning to blow up electrical sites, attempting to bomb a bridge, or firebombing a courthouse—all of which have triggered the enhancement— opening a social media account does not inherently or unequivocally constitute conduct motivated to “affect or influence” a “government by intimidation or coercion.” 18 U.S.C. § 2332b(g)(5)(A). In other words, one can open a social media account for a terrorist organization without knowing how that account will be used; whereas it is difficult to imagine someone bombing a government building without knowing that bombing would influence or affect government conduct.

The lower court stretched the definition beyond credulity to add more than a decade to the defendant’s sentence.

The district court’s “cause and effect” reasoning is insufficient because the cause—opening social media accounts—and the effect—influencing government conduct by intimidation or coercion—are much too attenuated to warrant the automatic triggering of the enhancement. Instead, to properly apply the enhancement, the district court had to determine that Alhaggagi knew the accounts were to be used to intimidate or coerce government conduct.

The FBI found a malcontent wandering the internet and, when the troll refused to keep participating in the "blow shit up" charade, raided his home and arrested him for material support. Then, for the crime of opening social media accounts, the government wanted to lock him up for more years than he'd been alive. And, for all the effort — the 24-hour surveillance, the undercover agent, and the confidential informant — the FBI came no closer to heading off a terrorist attack or taking down a terrorist organization.

Filed Under: 9th circuit, amer alhaggagi, fbi, isis, own plots, own terrorists, sentencing enhancements, social media, trolling, trolls

Canadian Man Arrested For Not Being A Terrorist

from the fake-it-til-you-make-it dept

Here in the United States, we’re used to the FBI radicalizing terrorists in order to arrest terrorists. If you don’t have any aspiring terrorist friends, the FBI can set you up with some. Don’t have a plan to do some terror stuff? No problem, the FBI has all kinds of ideas. Low on cash and unable to pick up your own terrorist supplies? Petty cash has you covered, my man. Just looking for a little acceptance? The FBI can fill that void in your life, just before it arrests you and takes that life away.

A string of open-net goals by the FBI's counterterrorism division has left us a bit jaded. We need something new to shake things up. Fortunately, the Royal Canadian Mounted Police have stepped up to provide a new twist: arresting and charging someone for [checks news report] not being a terrorist.

A Canadian whose widely-publicized account of conducting executions for ISIS fueled public outrage and debate in the House of Commons has been charged with allegedly making it up.

Shehroze Chaudhry, 25, who has portrayed himself as a former ISIS member living freely in Canada, was charged with faking his involvement in the terrorist group.

Not only is it a crime to be a terrorist in Canada, it's also a crime to not be one — at least if you portray yourself as a terrorist anyway. After invoking all sorts of small-t terror by pretending to be Jihadist Public Enemy No. 1, Chaudhry found himself arrested on the seldom-used charge of "terrorism hoax."

The RCMP apparently doesn't take kindly to being duped, although it seems any investigation would have discovered Chaudhry's lack of terrorism and allowed the agency to drop him as a suspect and quit wasting tax dollars on him. There's a hint of bitterness in this statement:

"Hoaxes can generate fear within our communities and create the illusion there is a potential threat to Canadians, while we have determined otherwise," said Superintendent Christopher deGale, who heads the Toronto INSET.

“As a result, the RCMP takes these allegations very seriously, particularly when individuals, by their actions, cause the police to enter into investigations in which human and financial resources are invested and diverted from other ongoing priorities.”

I understand things like hoax bomb threats and hoax 911 calls can be taxing on a system that often portrays itself as overstretched. But Chaudhry's faux terrorism was apparently limited to shitposting on a number of social media accounts, talking a good terrorist game while never actually being involved with any terrorist group.

The hoax charge hasn’t been used often, but it appears prosecutors think this time it will stick. After all, not many faux terrorists end up the subject of multiple news reports and podcasts reaching large audiences.

[T]he Crown may intend to argue that, because the hoax was so widespread and was featured on a popular podcast, it created fear that Canadian ISIS members were “returning and running around,” and that police were powerless to stop them.

Implicit in that argument is that Chaudhry is being punished for making law enforcement look inept. Moving forward with a prosecution on these charges, however, won't make them look any less inept. In fact, it will compound the perceived ineptness. First, the RCMP can't take down real terrorists. Second, the RCMP has to resort to arresting fake terrorists. Adding these two negatives together won't make a positive.

But it could be an easy win for the Crown. The best defense against charges of fake terrorism is evidence you're a real terrorist. Either way, Chaudhry is probably screwed. But fake terrorism carries only five years in prison. Actual terrorism usually nets a person a whole lot more time behind bars. The best choice may be to agree to be the guy who didn't actually do anything.

Filed Under: canada, hoax, isis, rcmp, shehroze chaudhry, terrorism, terrorism hoax

Virtual Reconstruction Of Ancient Temple Destroyed By ISIS Is Another Reason To Put Your Holiday Photos Into The Public Domain

from the fighting-terrorism-by-sharing dept

The Syrian civil war has led to great human suffering, with hundreds of thousands killed and millions displaced. Another victim has been the region's rich archaeological heritage. Many of the most important sites have been seriously and intentionally damaged by the Islamic State of Iraq and Syria (ISIS). For example, the Temple of Bel, regarded as among the best preserved at the ancient city of Palmyra, was almost completely razed to the ground. In the past, more than 150,000 tourists visited the site each year. Like most tourists, many of them took photos of the Temple of Bel. The UC San Diego Library's Digital Media Lab had the idea of taking some of those photos, with their many different viewpoints, and combining them using AI techniques into a detailed 3D reconstruction of the temple:

The digital photographs used to create the virtual rendering of the Temple of Bel were sourced from open access repositories such as the #NEWPALMYRA project, the Roman Society, Oxford University and many individual tourists, then populated into Pointcloud, which allows users to interactively explore the once massive temple compound. Additionally, artificial intelligence applications were used to isolate the temple’s important features from other elements that may have appeared in the images, such as tourists, weather conditions and foliage.
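The core of that pipeline, recovering 3D structure from a pile of overlapping 2D photos, is standard structure-from-motion photogrammetry. As a rough illustration of the idea, here is a minimal two-view sketch using OpenCV; this is not the Digital Media Lab's actual pipeline, and the image paths and camera intrinsics are hypothetical:

```python
# Minimal two-view reconstruction sketch (illustrative only): match
# features across two overlapping tourist photos, recover the relative
# camera pose, and triangulate a sparse 3D point cloud.
# Requires: pip install opencv-python numpy
import cv2
import numpy as np

def sparse_cloud_from_pair(path_a, path_b, focal_px, center):
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    # 1. Detect and describe local features in each photo.
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # 2. Match features between the photos (Lowe's ratio test).
    matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    good = [m for m, n in (p for p in matches if len(p) == 2)
            if m.distance < 0.75 * n.distance]
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])

    # 3. Estimate the relative camera pose via the essential matrix.
    K = np.array([[focal_px, 0, center[0]],
                  [0, focal_px, center[1]],
                  [0, 0, 1.0]])
    E, mask = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=mask)

    # 4. Triangulate the matched points into 3D space.
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P1 = K @ np.hstack([R, t])
    pts_h = cv2.triangulatePoints(P0, P1, pts_a.T, pts_b.T)
    return (pts_h[:3] / pts_h[3]).T  # N x 3 sparse point cloud

# Hypothetical usage with two tourist photos of the same facade:
# cloud = sparse_cloud_from_pair("bel_001.jpg", "bel_002.jpg",
#                                focal_px=2400, center=(2000, 1500))
```

A full reconstruction like the Temple of Bel model chains this across thousands of photos and refines the result with bundle adjustment, which is what dedicated structure-from-motion tools such as COLMAP automate.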

The New Palmyra site asks members of the public to upload their holiday photos of ancient Palmyra. The photos are sorted according to the monument: for example, the Temple of Bel collection currently has just over 1,000 images taken before the temple's destruction. Combining these with other images held by academic and research institutions has allowed a detailed point cloud representation of the temple to be created. The model can be tilted, rotated and zoomed from within a browser. Using AI to put together images is hardly cutting-edge these days. In many ways, the key idea is the following note on the New Palmyra home page:

Unless otherwise specified, by uploading your photos or models to #NEWPALMYRA, they will be publicly available under a CC0 license.

Putting the images into the public domain (CC0) is necessary to make combining them easy, without having to worry about attribution or the near-impossible task of licensing each one individually. As the newly resurrected Temple of Bel shows, once we ignore the copyright industry's obsession with people "owning" the things they create, and simply give them to the world for anyone to enjoy and build on, we all gain.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.

Filed Under: 3d model, isis, palmyra, public domain, syria

Boston Globe Posts Hilarious Fact-Challenged Interview About Regulating Google, Without Any Acknowledgement Of Errors

from the and-we-wonder-why-news-is-failing dept

Warning: this article will discuss a bunch of nonsense being said in a major American newspaper about Google. I fully expect that the usual people will claim that I am writing this because I always support Google — which would be an interesting point if true — but of course it is not. I regularly criticize Google for a variety of sketchy practices. However, what this story is really about is why the Boston Globe would publish, without fact checking, a bunch of complete and utter nonsense.

The Boston Globe recently put together an entire issue about “Big Tech” and what to do about it. I’d link to it, but for some reason when I click on it, the Boston Globe is now telling me it no longer exists — which, maybe, suggests that the Boston Globe should do a little more “tech” work itself. However, a few folks sent in this fun interview with noted Google/Facebook hater Jonathan Taplin. Now, we’ve had our run-ins with Taplin in the past — almost always to correct a whole bunch of factual errors that he makes in attacking internet companies. And, it appears that we need to do this again.

Of course, you would think that the Boston Globe might have done this for us, seeing as they’re a “newspaper” and all. Rather than just printing the words verbatim of someone who is going to say things that are both false and ridiculous, why not fact check your own damn interview? Instead, it appears that the Globe decided “let’s find someone to say mean things about Google” and turned up Taplin… and then no one at the esteemed Globe decided “gee, maybe we should check to see if he actually knows what he’s talking about or if he’s full of shit.” Instead, they just ran the interview, and people who read it without knowing that Taplin is laughably wrong won’t find out about it unless they come here. But… let’s dig in.

What would smart regulation look like?

You start with fairly rigorous privacy regulations where you have the ability to opt out of data collection from Google. Then you look at something like a modification of the part of the Digital Millennium Copyright Act, which is what is known as safe harbor. Google and Facebook and Twitter operate under a very unique set of legal regimes that no other company gets to benefit from, which is that no one can sue them for doing anything wrong.

Ability to opt out of data collection — fair enough. To some extent that's already possible if you know what you're doing, but it would be good if Google/Facebook made it easier. Honestly, though, that's not going to have much of an impact. I still think the real solution to the dominance of Google/Facebook is to enable more competition that can provide better services and help limit the power of those guys. But Taplin's suggestion really seems to go in the other direction, seeking to lock in their power while complaining about them.

The “modification” of the DMCA, for example, would almost certainly lock in Google and Facebook and make it nearly impossible for competitors to step up. Also, the DMCA is not “known as safe harbor.” The DMCA — a law that was almost universally pushed by the record labels — is a law that updated copyright law in a number of ways, including giving copyright holders the power to censor on the internet, without any due process or judicial review of whether or not infringement had taken place. There is a small part of it, within Section 512, that includes a very limited safe harbor, that says that while actual infringers are still liable for infringement, the internet platforms they use are not liable if they follow a bunch of rules, including removing the content expeditiously and kicking people off their platform for repeat infringement.

The idea that “Google and Facebook and Twitter operate under a very unique set of legal regimes that no other company gets to benefit from” is complete and utter nonsense, and the Boston Globe’s Alex Kingsbury should have pushed back on it. The Copyright Office’s database of DMCA registered agents includes nearly 9,000 companies (including ours!), because the DMCA’s 512 safe harbors apply to any internet platform who registers. Google, Facebook and Twitter don’t get special treatment.

Furthermore, as a new report recently showed, taking away such safe harbors would do more to entrench the power of Google, Facebook and Twitter since all three companies can deal with such liability, while lots of smaller companies and upstarts cannot. It boggles the mind that the Boston Globe let Taplin say something so obviously false without challenging him.

And, we haven't even gotten to the second half of that sentence, which is the bizarre and simply false claim that the DMCA's Section 512 means that "no one can sue them for doing anything wrong." Again, this is just factually incorrect, and a good journalist would challenge someone for making such a blatantly false claim. The DMCA's 512 does not, in any way, stop anyone from suing anyone "for doing anything wrong." That's ridiculous. The DMCA's 512 says that a copyright holder will be barred from suing a platform for copyright infringement if a user (not the platform) infringes on copyright and, when notified of that alleged infringement, the platform expeditiously removes that content. In addition to that, thanks to various court rulings, the DMCA's safe harbors are limited in other ways, including that the platforms cannot encourage their use for infringement and they must have implemented repeat infringer policies. Nowhere in any of that does it say that platforms can't be sued for doing anything wrong.

If the platform does something wrong, they absolutely can be sued. It’s simply a fantasy interpretation of the DMCA to pretend otherwise. Why didn’t the Boston Globe point out these errors? I have no idea, but they let the interview and its nonsense continue.

In other words, they have complete liability protection from being sued for any of the content that is on their services. That is totally unique. Obviously newspapers don't get that protection. And of course also [tech giants] have other advantages over all other corporations; all of the labor that users put in is basically free. Most of us work an hour a day for Google or Facebook improving their services, and we don't get anything for that other than just services.

Again, they do not have “complete liability protection from being sued for any content that is on their services.” Anything they post themselves, they are still liable for. Anything that a user posts on its platform, if the platform does not comply with DMCA 512, the platform can still be liable for. All DMCA 512 is saying is that they can be liable for a small sliver of content if they fail to follow the rules set out in the law that was pushed for heavily by the recording industry.

Next up, the claim that "obviously newspapers don't get that protection" is preposterous. Of course they do. A quick search of the Copyright Office database shows registrations by tons of newspaper companies, including the Chicago Tribune, the Daily News, USA Today, the Las Vegas Review-Journal, the LA Times, the Baltimore Sun, the Chicago Sun-Times, the Albany Times Union, the NY Times, the Times Herald, the Times Picayune, the Washington Times, the Post Standard, the Palm Beach Post, the Cincinnati Post, the Kentucky Post, the Seattle Post-Intelligencer, the NY Post, the St. Louis Post-Dispatch, the Washington Post, Ann Arbor News, the Albany Business News, Reno News & Review, the Dayton Daily News, Springfield News-Sun, the Des Moines Register, the Cincinnati Enquirer, the Branson News Leader, the Bergen News, the Pennysaver News, the News-Times, the New Canaan News, Orange County News, San Antonio News-Express, the National Law Journal, the Williamsburg Journal Tribune, the Wall Street Journal, the Jacksonville Journal-Courier, the Lafayette Journal-Courier, the Oregon Statesman Journal, the Daily Journal and on and on and on. Literally I just got tired of writing down names. There are a lot more.

Notably missing? As far as I can tell, the Boston Globe has not registered a DMCA agent. Odd that.

But, back to the point: yes, newspapers get the same damn protection. There is nothing special about Google, Facebook and Twitter. And by now Taplin must know this. So should the Boston Globe.

Ah, but perhaps — you’ll argue — he means that the paper versions don’t get the same protection, while the internet sites do. And, you’d still be wrong. All the DMCA 512 says is that you don’t get to put liability on a third party who had no say in the content posted. With your normal print newspaper that’s not an issue because a newspaper is not a user-generated content thing. It has an editor who is choosing what’s in there. That’s not true of online websites. And that’s why we need a safe harbor like the DMCA’s, otherwise people stupidly blame a platform for actions of their users.

And let's not forget — because this is important — anything a website does to directly encourage infringement would take away those safe harbors, a la the Grokster ruling in the Supreme Court, which said you lose them if you're inducing infringement. In other words, basically every claim made by Taplin here is wrong. Why does the Boston Globe challenge none of them? What kind of interview is this?

And we’re just on the first question. Let’s move on.

What would eliminating the "safe harbor" provision in the Digital Millennium Copyright Act mean?

YouTube wouldn't be able to post 44,000 ISIS videos and sell ads for them.

Wait, what? Once again, there's so much wrong in just this one sentence that it's almost criminal that the Boston Globe's reporter doesn't say something. Let's start with this one first: changing copyright law to get rid of a safe harbor will stop YouTube from posting ISIS videos? What about copyright law has any impact on ISIS videos one way or the other? Absolutely nothing. Even assuming that ISIS is somehow violating someone's copyright in their videos (which seems unlikely?), what does that have to do with anything?

Second, YouTube is not posting any ISIS videos. YouTube is not posting any videos. Users of YouTube are posting videos. That’s the whole point of the safe harbors. That it’s users doing the uploading and not the platform. And the point of the DMCA safe harbor is to clarify the common sense point that you don’t blame the tool for the user’s actions. You don’t blame Ford because someone drove a Ford as a getaway car in a bank robbery. You don’t blame AT&T when someone calls in a bomb threat.

Third, YouTube has banned ISIS videos (and any terrorist propaganda videos) going back a decade. Literally back to 2008. That's when YouTube stopped allowing videos from terrorist organizations. How could Taplin not know this? How could the Boston Globe not know this? Over the years, YouTube has even built new algorithms designed to automatically spot "extremist" content and block it (how well that works is another question). Indeed, YouTube is so aggressive in taking down such videos that it's been known to also take down the videos of humanitarian groups documenting war crimes by terrorists.

Finally, YouTube has long refused to put ads on anything deemed controversial content. Also, it won’t put ads on videos of channels without lots and lots of followers.

So basically this one short sentence — 14 words long — contains four major factual errors. Wow. And he's not done yet.

Or they wouldn't be able to put up any musician's work, whether they wanted it on the service or not, without having to bear some consequences. That would really change things.

Again, YouTube is not the one putting up works. Users of YouTube are. And if and when those people upload a video — that is not covered by fair use or other user rights — and it is infringing, then the copyright holder has every right under the DMCA that Taplin misstates earlier to force the video down. And if YouTube doesn’t take it down, then they face all the consequences of being an infringer.

So what would “really change” if we removed the DMCA’s safe harbors? Well, YouTube has already negotiated licenses with basically every record label and publisher at this point. So, basically nothing would change on YouTube. But, you know, for every other platform, they’d be screwed. So, Taplin’s plan to “break up” Google… is to lock the company in as the only platform. Great.

And this leaves aside the fact (whether we like it or not) that YouTube's ContentID system, which allows copyright holders to "monetize" infringing works, has actually opened up a (somewhat strange) new revenue stream for artists, who are now profiting greatly from letting people use their works without going through the hassle of negotiating a full license.

I also think it would change the whole fake news conversation completely, because, once Facebook or YouTube or Google had to take responsibility for what's on their services, they would have to be a lot more careful to monitor what goes on there.

Again… what? What in the “whole fake news conversation” has anything to do with copyright? This is just utter nonsense.

Second, if platforms are suddenly “responsible” for what’s on their service, then… Taplin is saying that the very companies he hates, that he thinks are the ruination of culture and society, should be the final arbiters of what speech is okay online. Is that really what he wants? He wants Google and Facebook and YouTube — three platforms he’s spent years attacking — determining if his own speech is fake news?

Really?

Because, let’s face it, as much as I hate the term, this interview is quintessential fake news. Nearly every sentence Taplin says includes some false statement — often multiple false statements. And the Boston Globe published it. Should the Boston Globe now be liable for Taplin’s deranged understanding of the law? Should we be able to sue the Boston Globe because it published utter nonsense uttered by Jonathan Taplin? Because that’s what he’s arguing for. Oh, but, I forgot, according to Taplin, the Boston Globe — as a newspaper — has no such safe harbor, so it’s already fair game. Sue away, people…

Wouldn't that approach subject these services to death by a thousand copyright-infringement lawsuits?

It would depend on how it was put into practice. When someone tries to upload pornography to YouTube, an artificial intelligence agent sees a bare breast and shunts it into a separate queue. Then a human looks at it and says, "Well, is this National Geographic, or is this porn?" If it's National Geographic it probably gets on the service, and if it's porn it goes in the trash. So, it's not like they're not doing this already. It's just they've chosen to filter porn off of Facebook and Google and YouTube but they haven't chosen to filter ISIS, hate speech, copyrighted material, fake news, that kind of stuff.

This is just a business decision on their part. They know every piece of content that's being uploaded because they used the ID to decide who gets the advertising. So they could do all of this very easily. It's just they don't want to do it.

First off, finally, the Boston Globe reporter pushes back slightly. Not by correcting any of the many, many false claims that Taplin has made so far, but in highlighting a broader point: that Taplin’s solution is completely idiotic and unworkable, because we already see the abuse that the DMCA takedown process gets. But… Taplin goes into spin mode and suggests there’s some magic way that this system wouldn’t be abused for censorship (even though the existing system is).

Then he offers his fantasy-land explanation of how YouTube moderation actually works. He's wrong. This is not how it works. Most content is never viewed by a human. But let's delve in deeper again. Taplin and some of his friends like to point to the automated filtering of porn. But porn is something that is much easier to teach a computer to spot. A naked breast is something you can teach a computer to identify pretty well. Fake news is not. Hate speech is not. Separately, notice that Taplin never once mentions ContentID in this entire interview? Even though it does the very thing he seems to insist YouTube refuses to do? ContentID does exactly what he claims this porn filter is doing. But he pretends it doesn't exist and hasn't existed for years.

And the Boston Globe just lets it slide.
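For a concrete sense of why porn is filterable in a way "fake news" is not, the workflow Taplin is gesturing at amounts to a classifier score plus a human review queue. A minimal sketch follows; the classifier and thresholds here are hypothetical, not YouTube's actual system:

```python
# Toy triage sketch: route uploads by classifier confidence.
# The nudity_score is assumed to come from an image classifier;
# the thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Upload:
    video_id: str
    nudity_score: float  # 0.0 (clean) to 1.0 (certain nudity)

def triage(upload, block_at=0.95, review_at=0.60):
    # High-confidence hits are blocked automatically; borderline
    # cases (National Geographic vs. porn) go to a human queue.
    if upload.nudity_score >= block_at:
        return "blocked"
    if upload.nudity_score >= review_at:
        return "human_review_queue"
    return "published"

print(triage(Upload("abc123", nudity_score=0.72)))  # human_review_queue
```

The whole scheme depends on a classifier that can emit a meaningful score for the category. That exists for bare skin; it simply doesn't for "fake news."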

Also, again, Taplin insists that YouTube and Facebook “haven’t chosen to filter ISIS” even though both companies have done so for years. How does Taplin not know this? How does the Boston Globe reporter not know this? How does the Boston Globe think that its ignorant reporter should interview this ignorant person? Why did they then decide to publish any of this? Does the Boston Globe employ fact checkers at all? The mind boggles.

Meanwhile, we really shouldn’t let it slide that Taplin — when asked specifically about copyright infringement — seems to argue that if copyright law was changed, it would somehow magically lead Google to stop ISIS videos, hate speech and fake news among other things. None of those things has anything to do with copyright law. Shouldn’t he know this? Shouldn’t the Boston Globe?

As for the second paragraph, it's also utter nonsense. YouTube "knows every piece of content that's being uploaded because they used the ID to decide who gets the advertising." What does that even mean? What is "the ID"? And, even in the cases where YouTube does decide to place ads on videos (which, again, is greatly restricted, and is not for all content), the fact that Google's algorithms can try to insert relevant ads does not mean that Google "knows" what's in the content. It just means that an algorithm does some matching. And, sure, Taplin might point out that if they can do that, why can't they also do it for copyright and ISIS? The answer is that THEY DO. That's the whole fucking point.
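For what it's worth, the "matching" at issue is fingerprinting: reduce a reference work to a compact perceptual signature, then compare uploads against an index of those signatures. Here is a toy version of the idea, in the spirit of ContentID but emphatically not Google's implementation; it uses the real imagehash and Pillow Python libraries, while the reference index, file paths, and distance threshold are hypothetical (production systems fingerprint audio and video far more robustly):

```python
# Toy fingerprint matching: perceptual hashes of near-duplicate images
# differ by only a few bits, so a small Hamming distance means a match.
# Requires: pip install ImageHash Pillow
from PIL import Image
import imagehash

def build_index(reference_frames):
    # reference_frames: iterable of (image_path, rights_holder) pairs.
    return {imagehash.phash(Image.open(path)): owner
            for path, owner in reference_frames}

def match_upload(frame_path, index, max_distance=8):
    h = imagehash.phash(Image.open(frame_path))
    for ref_hash, owner in index.items():
        if h - ref_hash <= max_distance:  # '-' is Hamming distance here
            return owner  # claimed: block, track, or monetize
    return None  # no claim

# Hypothetical usage:
# index = build_index([("label_video_frame.png", "Some Record Label")])
# print(match_upload("upload_frame.png", index))
```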

Again, why is the Boston Globe publishing utter nonsense?

Is Google trying to forestall this kind of regulation?

Ultimately YouTube is already moving towards being a service that pays content providers. They announced last month that they're going to put up a YouTube music channel. And that will look much more like Spotify than it looks like YouTube. In other words, they will license content from providers, they will charge $10 a month for the service, and you will then get curated lists of music. From the point of view of the artists and the record company, it'll be a lot better than the system that exists now, where essentially YouTube says to you, your content is going to be on YouTube whether you want it to or not, so check this box if you want us to give you a little bit of the advertising.

YouTube has been paying content providers for years. I mean, it's been years since the company announced that in one year alone, it had paid musicians, labels and publishers over a billion dollars. And Taplin claims they're "moving" to such a model? Is he stuck in 2005? And they already license content from providers. The $10/month thing, again, is not new (it's been available for years), but that's a separate service, which is not the same as regular YouTube. And it has nothing to do with any of this. If the DMCA changed, then… that wouldn't have any impact at all on any of this.

Still, let’s recap the logic here: So YouTube offering a music service, which it set up to compete with Spotify and Apple Music, and which has nothing to do with the regular YouTube platform, will somehow “forestall” taking away the DMCA’s safe harbors? How exactly does that work? I mean, wouldn’t the logic work the other way?

The whole interview is completely laughable. Taplin repeatedly makes claims that don't pass the laugh test for anyone with even the slightest knowledge of the space. And nowhere does the Boston Globe address the multiple outright factual errors. Sure, I can maybe (maybe?) understand not pushing back on Taplin in the moment of the interview. But why let this go to print without having someone (anyone?!?) with even the slightest understanding of the law, or of how YouTube actually works, check whether Taplin's claims were based in reality? Is that really so hard?

Apparently it is for the Boston Globe and its “deputy editor” Alex Kingsbury.

Filed Under: content moderation, copyright, dmca, dmca 512, intermediary liability, internet, interview, isis, jonathan taplin, journalism, regulation, videos
Companies: boston globe, facebook, google, twitter, youtube