appeals – Techdirt

An (Im)perfect Way Forward On Infrastructure Moderation?

from the infrastructure-moderation-appeals dept

Within every conversation about technology lies the moral question: is a technology good or bad? Or, is it neutral? In other words, are our values part of the technologies we create or is technology valueless until someone decides what to do?

This is the kind of dilemma Cloudflare, the Internet infrastructure company, found itself in earlier this year. Following increasing pressure to drop KiwiFarms, a troll site targeting women and minorities, especially LGBTQ people, Cloudflare’s CEO, Matthew Prince, and its VP for Public Policy, Alissa Starzak, posted a note stating that “the power to terminate security services for the sites was not a power Cloudflare should hold”. Cloudflare was the provider of those security services to KiwiFarms.

Cloudflare’s position was impossible. On the one hand, Cloudflare, as an infrastructure provider, should not be making any content moderation decisions; on the other, KiwiFarms’ existence was putting people’s lives in danger. Although Cloudflare is not like “the fire department” as it claims (fire departments are essential for societies to function and feel safe; Cloudflare is not essential for the functioning of the internet, though it does make it more secure), moving content moderation down the internet stack can still have a chilling effect on speech and on the internet itself. At the end of the day, it is services like Cloudflare’s that get to determine who is visible on the internet.

Cloudflare ended up terminating KiwiFarms as a customer even though it originally said it wouldn’t. In a way, Cloudflare’s decision to reverse its own intention placed content moderation at the infrastructure level front and center once again. Now, though, it feels like we are running out of time; I am not sure how much more of this unpredictability and inconsistency can be tolerated before regulators step in.

Personally, the idea of content moderation at the infrastructure level makes me uncomfortable, especially because content moderation would move somewhere that is invisible to most. Fundamentally, I still believe that moving content moderation down to the infrastructure level is dangerous in terms of scale and impact. The Internet should remain agnostic about the data that moves across it, and anyone who facilitates that movement should adhere to this principle. At the very least, this should be the rule. I don’t think it will be the priority in any potential regulation.

However, there is another reality that I’ve grown into: decisions like the one Cloudflare was asked to make have real consequences for real people. In cases like KiwiFarms, inaction feels like aiding and abetting. If there is something someone can do to prevent such reprehensible activity, shouldn’t they just go ahead and do it?

That something will be difficult to accept. If content moderation is messy and complex for Facebook and Twitter, imagine how much messier it is for companies like Cloudflare and AWS. The same problems with speech, human rights, and transparency will exist at the infrastructure level; just multiply them by a million. To be fair, infrastructure providers already remove websites and services from the internet, and they have policies for doing so. Cloudflare said so: “Thousands of times per day we receive calls that we terminate security services based on content that someone reports as offensive. Most of these don’t make news. Most of the time these decisions don’t conflict with our moral views.” Not all infrastructure providers have such policies, though, and, in general, decisions about content removal at the infrastructure level are opaque.

KiwiFarms will happen again. It might not be called that, but it’s a matter of time before a similarly disgusting case pops up. We need a way forward, and fast.

So, here’s a thought: an “Oversight Board”-type body for infrastructure. This body – let’s call it the “Infrastructure Appeals Panel” – would be funded by as many infrastructure providers as possible, and its role would be to scrutinize the decisions infrastructure providers make regarding content. The Panel would need a clear mandate and scope and would need to be global, which is important because the decisions infrastructure providers make affect both speech and the Internet itself. Its rules must be written by infrastructure providers and users, which is perhaps the single most difficult thing. As Evelyn Douek said, “writing speech rules is hard”; it becomes even harder when one considers the possible chilling effect. And the whole exercise becomes more difficult still if you need to add rules about the impact on the internet. Unlike the decisions social media companies make every day, decisions made at the internet’s infrastructure level can also create unintended consequences for the way it operates.

Building such an external body is not easy, and many things can go wrong. Finding the right answers to questions of board member selection, independence, process, and values is key to its success. And, although such systems can be arbitrary and abused, history shows they can also be effective. In the Middle Ages, for instance, as international trade was taking shape, itinerant merchants sought to establish a system of adjudication detached from local sovereign law and able to govern the practices and norms that were emerging at the time. The system of lex mercatoria grew out of the need for a structure that could address the needs of merchants efficiently and produce decisions carrying weight equivalent to those reached through traditional means. Currently, content moderation at the infrastructure level is an unchecked system in which players can exercise arbitrary power, a problem further exacerbated by the lack of interest in, or understanding of, what is happening at that level.

Most likely, this idea will not be enough to address all the content moderation issues at the infrastructure level. Additionally, if it is going to have any real chance of being useful, the Panel’s design, structure, and implementation, as well as its legitimacy, must be treated as priorities. An external panel that is not scoped appropriately or that lacks any authority risks creating false accountability; the result is that policymakers get distracted while systemic issues persist. Lessons can be learned from the similar exercise of creating the Oversight Board.

One last, immediate point: this Panel should not be seen as the answer to issues of speech or infrastructure. We should continue to discuss ways of addressing content moderation at the infrastructure level and try to institute the necessary safeguards and reforms around how best to moderate content. There is never going to be a way to create fully consistent policies or to agree on a single set of norms. But through the transparency such a panel can provide, we can reach a point where the conversation becomes more focused, driven more by facts and less by emotion.

Konstantinos Komaitis is an internet policy expert and author. His website is at komaitis.org.

Filed Under: appeals, content moderation, infrastructure, oversight
Companies: cloudflare

Just How Incredibly Fucked Up Is Texas’ Social Media Content Moderation Law?

from the let-us-count-the-ways dept

So, I already had a quick post on the bizarre decision by the 5th Circuit to reinstate Texas’ social media content moderation law just two days after a bizarrely stupid hearing on it. However, I don’t think most people actually understand just how truly fucked up and obviously unconstitutional the law is. Indeed, there are so many obvious problems with it, I’m not even sure I can do them adequate justice in a single post. I’ve seen some people say that it’s easy to comply with, but that’s wrong. There is no possible way to comply with this bill. You can read the full law here, but let’s go through the details.

The law declares social media platforms as “common carriers” and this was a big part of the hearing on Monday, even though it’s not at all clear what that actually means and whether or not a state can just magically declare a website a common carrier (as we’ve explained, that’s not how any of this works). But, it’s mainly weird because it doesn’t really seem to mean anything under Texas law. The law could have been written entirely without declaring them “common carriers” and I’m not sure how it would matter.

The law applies to “social media platforms” that have more than 50 million US monthly average users (based on who’s counting? Dunno. The law doesn’t say), and limits it to websites where the primary purpose is users posting content to the site, not ones where things like comments and such are a secondary feature. It also excludes email and chat apps (though it’s unclear why). Such companies with over 50 million users in the US probably include the following as of today (via Daphne Keller’s recent Senate testimony): Facebook, YouTube, TikTok, Snapchat, Wikipedia, and Pinterest are definitely covered. Likely, but not definitely, covered would be Twitter, LinkedIn, WordPress, Reddit, Yelp, TripAdvisor, and possibly Discord. Wouldn’t it be somewhat amusing if, after all of this, Twitter’s MAUs fell below the threshold?! Also possibly covered, though data is lacking: Glassdoor, Vimeo, Nextdoor, and Twitch.

And what would the law require of them? Well, mostly to get sued for every possible moderation decision. You only think I’m exaggerating. Litigator Ken White has a nice breakdown thread of how the law will encourage just an absolutely insane amount of wasteful litigation:

https://twitter.com/Popehat/status/1524535770425401344

As he notes, a key provision and the crux of the bill is this bizarre “anti-censorship” part:

CENSORSHIP PROHIBITED. (a) A social media platform may not censor a user, a user’s expression, or a user’s ability to receive the expression of another person based on: (1) the viewpoint of the user or another person; (2) the viewpoint represented in the user’s expression or another person’s expression; or (3) a user’s geographic location in this state or any part of this state. (b) This section applies regardless of whether the viewpoint is expressed on a social media platform or through any other medium.

So, let’s break this down. It says that a website cannot “censor” (by which it clearly means moderate) based on the user’s viewpoint or geographic location. And it applies even if that viewpoint doesn’t occur on the website.

What does that mean in practice? First, even if there is a good and justifiable reason for moderating the content — say it’s spam or harassment or inciting violence — that really doesn’t matter. The user can simply claim that it’s because of their viewpoints — even those expressed elsewhere — and force the company to fight it out in court. This is every spammer’s dream. Spammers would love to be able to force websites to accept their spam. And this law basically says that if you remove spam, the spammer can take you to court.

Indeed, nearly all of the moderation that websites like Twitter and Facebook do is, contrary to the opinion of ignorant ranters, not because of any “viewpoint” but because users are breaking actual rules around harassment, abuse, spam, or the like.

While the law does say that a site must clearly post its acceptable use policy (which lets supporters of this law flat out lie and claim that a site can still moderate as long as it follows its policies), that’s not true. Because, again, all any aggrieved user has to do is claim the real reason is viewpoint discrimination, and the litigation is on.

And let me tell you something about aggrieved users: they always insist that any moderation, no matter how reasonable, is because of their viewpoint. Always. And this is especially true of malicious actors and trolls, who are in the game of trolling just to annoy in the first place. If they can take that up a notch and drag companies into court as well? I mean, the only thing stopping them will be the cost, but you already know that a cottage industry is going to pop up of lawyers who will file these cases. I wouldn’t even be surprised if cases start getting filed today.

And, as Ken notes in his thread, the law seems deliberately designed to force as much frivolous litigation on these companies as possible. It says that even if one local court has rejected these lawsuits or blocked the Attorney General from enforcing the law, you can still sue in other districts. In other words, keep on forum shopping. It also bars nonmutual claim and issue preclusion, meaning that even if a court says these claims are bogus, each new claim must be judged anew. Again, this seems uniquely designed to force these companies into court over and over and over again.

I haven’t even gotten to the bit that says you can’t “censor” based on geographic location. That portion can basically be read as forcing social media companies to stay in Texas. Because if you block all of your Texas users, they can all sue you, claiming that you’re “censoring” them based on their geographic location.

So, yeah, here you have the “free market” GOP passing a law that effectively says that social media companies (1) have to operate in Texas and (2) have to be sued over every moderation decision they make, even if it’s in response to clear policy violations.

Making it even more fun, the law forbids any waivers, so social media companies can’t just put a new thing in their terms of service saying that you waive your rights to bring a claim under this law. They really, really, really just want to flood every major social media website with a ton of purely frivolous and vexatious litigation. The party that used to decry trial lawyers just made sure that Texas has full employment for trial lawyers.

And that’s not all that this law does. That’s just the part about “censorship.”

There is the whole transparency bit, requiring that a website “disclose accurate information regarding its content management, data management, and business practices.” That certainly raises some issues around trade secrets, general security, and more. But it is also effectively going to require that websites publish all the details that spammers, trolls, and others need to be more effective.

The covered companies will also have to keep a tally of every form of moderation and post it in their transparency reports. So, every time a spam posting is removed, it will need to be tracked and recorded. Even any time content is “deprioritized.” What does that mean? All of these companies recommend stuff based on algorithms, meaning that some stuff is prioritized and some stuff is not. I don’t care to see when people I follow tweet about football, because I don’t watch football. But it appears that if the algorithm learns that about me and chooses to deprioritize football tweets just for me, the company will need to include that in its transparency report.

Now, multiply that by every user, and every possible interaction. I think you could argue that these sites “deprioritize” content billions of times a day just by the natural functioning of the algorithm. How the hell do you track all the content you don’t show someone?!
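To make the scale concrete, here is a minimal, hypothetical sketch of what “record every deprioritization” implies at feed-ranking time. Nothing in it comes from the law or from any real platform’s code; the names and numbers are assumptions purely for illustration.

```python
# Hypothetical sketch: logging every "deprioritization" the way HB 20-style
# transparency reporting seems to demand. All names here are made up.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Post:
    post_id: str
    score: float  # whatever the ranking model outputs for this user

moderation_log: List[Dict[str, str]] = []  # the tally a transparency report would draw from

def rank_feed(user_id: str, candidates: List[Post], slots: int = 10) -> List[Post]:
    ranked = sorted(candidates, key=lambda p: p.score, reverse=True)
    shown, hidden = ranked[:slots], ranked[slots:]
    # Every candidate that is not surfaced is arguably "deprioritized" for this user,
    # so each ranking pass appends one record per hidden post.
    for post in hidden:
        moderation_log.append({"user": user_id, "post": post.post_id, "action": "deprioritized"})
    return shown

# Hundreds of millions of users, each with thousands of candidate posts, refreshed many
# times a day: the log grows by billions of rows daily just from normal ranking.
```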

The law also requires detailed, impossible complaint procedures, including a full tracking system if someone files a complaint. That’s required as of last night. So, best wishes to every single covered platform, none of which have this technology in place.

It also requires that if the website is alerted to illegal content, it has to determine whether or not the content is actually illegal within 48 hours. I’ll just note that, in most cases, even law enforcement isn’t that quick, and then there’s the whole judicial process that can take years to determine if something is illegal. Yet websites are given 48 hours?

Hilariously, the law says that you don’t have to give a user the opportunity to appeal if the platform “knows that the potentially policy-violating content relates to an ongoing law enforcement investigation.” Except, won’t this kind of tip people off? Your content gets taken down, but the site doesn’t give you the opportunity to appeal… Well, the only exemption there is if you’re subject to an ongoing law enforcement investigation, so I guess you now know there is one, because the law says that’s the only reason they can refuse to take your appeal. Great work there, Texas.

The appeal must be decided within 14 days, which sure sounds good if you have no fucking clue how long some of these investigations might take — especially once the system is flooded with the appeals required under this law.

And, that’s not all. Remember last week when I was joking about how Republicans wanted to make sure your inboxes were filled with spam? I had forgotten about the provision in this law that makes a lot of spam filtering a violation of the law. I only wish I was joking. For unclear reasons, the law also amends Texas’ existing anti-spam law. It added (and it’s already live in the law) a section saying the following:

Sec. 321.054. IMPEDING ELECTRONIC MAIL MESSAGES PROHIBITED. An electronic mail service provider may not intentionally impede the transmission of another person’s electronic mail message based on the content of the message unless:

(1) the provider is authorized to block the transmission under Section 321.114 or other applicable state or federal law; or

(2) the provider has a good faith, reasonable belief that the message contains malicious computer code, obscene material, material depicting sexual conduct, or material that violates other law.

So that literally says the only reasons you can “impede” email is if it contains malicious code, obscene material, sexual content, or violates other laws. Now the reference to 321.114 alleviates some of this, since that section gives services (I kid you not) “qualified immunity” for blocking certain commercial email messages, but only with certain conditions, including enabling a dispute resolution process for spammers.

There are many more problems with this law, but I am perplexed at how anyone could possibly think this is either workable or Constitutional. It’s neither. The only proper thing to do would be to shut down in Texas, but again the law treats that as a violation itself. What an utter monstrosity.

And, yes, I know, very very clueless people will comment here about how we’re just mad that we can’t “censor” people any more (even though it’s got nothing to do with me or censoring). But can you at least try to address some of the points raised above and explain how any of these services can actually operate without getting sued out of existence, or allowing all garbage all the time to fill the site?

Filed Under: 1st amendment, appeals, common carrier, content moderation, editorial discretion, email, free speech, hb20, litigation, social media, texas, transparency, viewpoint discrimination

Texas Says Its Unconstitutional Content Moderation Law Should Still Go Into Effect While We Wait For Appeal; Judge: 'No, That's Not How This Works'

from the good-judge dept

Last week, district court Judge Robert Pitman issued an excellent ruling tossing out Texas’ silly content moderation law as clearly unconstitutional under the 1st Amendment. As was widely expected, Texas has appealed the ruling to the 5th Circuit (undeniably the wackiest of the Circuits, so who knows what may happen). However, in the meantime, Texas Attorney General Ken Paxton also asked the lower court to have the law go into effect while waiting for the appeals court to rule!

A stay is also supported by the widely recognized principle that enjoining a state law inflicts irreparable harm on the state, and that the public’s interest is aligned with the state’s interest and harm. Plaintiffs, in contrast, will not be irreparably harmed if a stay is granted. This is evidenced by the fact that (1) their supportive members stated they either already comply with aspects of the law or could not explain how the law would be burdensome in practice; and (2) Plaintiffs’ other members, filing as amici in opposition to the Preliminary Injunction, have demonstrated no harm will occur by enforcement of H.B. 20. For all these reasons, as further set forth below, a temporary stay while the Fifth Circuit considers the merits of this Court’s Preliminary Injunction is warranted.

It is really incredible:

The Attorney General has also raised questions never considered by the Fifth Circuit or the Supreme Court as to common carriage and the First Amendment. Correspondingly, the Attorney General has demonstrated a likelihood of success on the merits regarding Plaintiffs’ claims. While this Court may have rejected the Attorney General’s arguments, it did so by relying on readily distinguishable First Amendment case law and giving dispositive weight to a novel “fact”: whether the entity “screen[s] and sometimes moderate[s] or curate[s]” user generated content.

Therefore, given the novel nature of Plaintiffs’ claims and the substantial support for the Attorney General’s arguments, the Court of Appeals should have an opportunity to consider these issues before the injunction is implemented.

Basically, “even though we lost easily, we really made the better arguments, so therefore you should let the law go into effect.” It’s nonsense.

Remember, the key reason the judge blocked the law from going into effect was that it so obviously violates the 1st Amendment, so letting it take effect would fundamentally violate 1st Amendment rights. Texas’ argument here that blocking the law from going into effect “inflicts irreparable harm on the state” is positively bizarre. “If we can’t violate the 1st Amendment rights of websites, then we’re irreparably harmed” is a dumb argument. The plaintiffs in the case, NetChoice and CCIA, fired back with the proper “LOL, wut?” opposition brief, though most of that focused on Paxton wanting the other parts of the case to continue to move forward in the district court while the appeal is happening (and basically to get into the intrusive discovery process).

The judge wasted little time in rejecting Paxton’s nonsense:

The State largely rehashes the same arguments this Court rejected in its Order. The State’s new argument – that the preliminary injunction is overbroad – also asserts, again, that HB 20 is not unconstitutional. (Id. at 13). However, the Court already found that Plaintiffs are likely to establish that Sections 2 and 7 of HB 20 are unconstitutional and, as a result, fashioned a narrow, preliminary injunction. The Court is also not persuaded by the State’s contention that preliminarily enjoining the enforcement of Section 2 – which contains disclosure requirements – was too broad a remedy because one of Plaintiffs’ members happens to already satisfy “several” disclosure requirements. (Id. at 13). Whether one of Plaintiffs’ members makes a business decision to publish certain disclosures, even if a few of those disclosures align with Section 2’s requirements, does not impact this Court’s decision that the State cannot constitutionally enforce Section 2’s many requirements imposed on social media platforms. Accordingly, the Court declines to stay its Order.

It also sides with NetChoice in staying the other parts of the case until after the appeal.

To preserve court resources and for judicial efficiency, whatever the posture of this case when it returns to this Court, the Court will exercise its discretion to stay this case and preserve its current posture.

In other words, no, Paxton, you’re not likely to succeed, and if you do, we can take up the issue then…

Filed Under: 1st amendment, appeals, content moderation, robert pitman, texas
Companies: ccia, netchoice

The Oversight Board's Decision On Facebook's Trump Ban Is Just Not That Important

from the undue-ado dept

Today is Facebook Oversight Board Hysteria Day, because today is the day that the Facebook Oversight Board has rendered its decision about Facebook’s suspension of Donald Trump. And it has met the moment with an appropriately dull decision, dripping in pedantic reasonableness, that is largely consistent with our Copia Institute recommendation.

If you remember, we were hesitant about submitting a comment at all. And the reaction to the Board’s decision bears out why. People keep reacting as though it is some big, monumental, important decision, when, in actual fact, it isn’t at all. In the big scheme of things, it’s still just a private company being advised by its private advisory board on how to run its business, nothing more. As it is, Trump himself is still on the Internet; it’s not like Facebook actually had the power to silence him. We need to be worried about when there actually is power to silence people, and undue concern about Facebook’s moderation practices only distracts us from those cases. Or, worse, it leads people to try to create actual law that will end up having the effect of giving others the legal power to suppress expressive freedom.

So our pride here is necessarily muted, because ultimately this decision just isn’t that big a deal. Still, as a purely internal advisory decision, one intended to help the company act more consistently in the interests of its potential user base, it does seem to be a good one given how it hews to our key points.

First, we made the observation that then-President Trump’s use of his Facebook account threatened real, imminent harm. We did, however, emphasize the point that it was generally better to try not to delete speech (or speakers). Nevertheless, sometimes it might need to be done, and in those cases it should be done “with reluctance and only limited, specific, identifiable, and objective criteria to justify the exception.” There might not ultimately be a single correct decision, we wrote, for whether speech should be left up or taken down. “[I]n the end the best decision may have little to do with the actual choice that results but rather the process used to get there.”

And this sort of reasoning is basically at the heart of the Board’s decision: Trump’s posts were serious enough to justify a sanction, including a suspension, but imposing the indefinite suspension appeared to be unacceptably arbitrary. Per the Board, Facebook needs to make these sorts of decisions consistently and transparently from here on out.

On January 6, Facebook’s decision to impose restrictions on Mr. Trump’s accounts was justified. The posts in question violated the rules of Facebook and Instagram that prohibit support or praise of violating events, including the riot that was then underway at the U.S. Capitol. Given the seriousness of the violations and the ongoing risk of violence, Facebook was justified in imposing account-level restrictions and extending those restrictions on January 7. However, it was not appropriate for Facebook to impose an indefinite suspension. Facebook did not follow a clear published procedure in this case. Facebook’s normal account-level penalties for violations of its rules are to impose either a time-limited suspension or to permanently disable the user’s account. The Board finds that it is not permissible for Facebook to keep a user off the platform for an undefined period, with no criteria for when or whether the account will be restored.

The Board has given Facebook six months to re-evaluate the suspension in accordance with clear rules.

If Facebook determines that Mr. Trump’s accounts should be restored, Facebook should apply its rules to that decision, including any modifications made pursuant to the policy recommendations below. Also, if Facebook determines to return him to the platform, it must address any further violations promptly and in accordance with its established content policies.

As for what those rules should be, the Board also made a few recommendations. First, it noted that “political leader” versus “influential user” is not always a meaningful distinction. Indeed, we had noted that Trump’s position cut both ways: as a political leader, there was public benefit to knowing what he had to say. On the other hand, that position also gave his posts greater ability to do harm. The Board for its part noted that context will matter; while the rules should ideally be the same for everyone, since the impact won’t be, it is ok for Facebook to take into account the specific probability of imminent harm in making its decisions.

The Board believes that it is not always useful to draw a firm distinction between political leaders and other influential users. It is important to recognize that other users with large audiences can also contribute to serious risks of harm. The same rules should apply to all users of the platform; but context matters when assessing issues of causality and the probability and imminence of harm. What is important is the degree of influence that a user has over other users.

In general, the Board cited to general principles of human rights law, and specifically the Rabat Plan of Action “to assess the capacity of speech to create a serious risk of inciting discrimination, violence, or other lawless action.” As for how long suspensions should generally last, they should be long enough to “deter misconduct and may, in appropriate cases, include account or page deletion.” Facebook is therefore free to re-impose Trump’s suspension as it re-evaluates it, if it feels it remains warranted. It just needs to do so in a more transparent way that would be scalable to other similar situations. As it summarized:

Facebook should publicly explain the rules that it uses when it imposes account-level sanctions against influential users. These rules should ensure that when Facebook imposes a time-limited suspension on the account of an influential user to reduce the risk of significant harm, it will assess whether the risk has receded before the suspension ends. If Facebook identifies that the user poses a serious risk of inciting imminent violence, discrimination or other lawless action at that time, another time-bound suspension should be imposed when such measures are necessary to protect public safety and proportionate to the risk. The Board noted that heads of state and other high officials of government can have a greater power to cause harm than other people. If a head of state or high government official has repeatedly posted messages that pose a risk of harm under international human rights norms, Facebook should suspend the account for a period sufficient to protect against imminent harm. Suspension periods should be long enough to deter misconduct and may, in appropriate cases, include account or page deletion.

As we suggested in our comment, the right policy choices for Facebook to make boil down to the ones that best make Facebook the community it wants to be. At its core, that’s what the Board’s decision is intended to help with: point out where it appears Facebook has fallen short of its own espoused ideals, and help it get back on track in the future.

Which is, overall, a good thing. It just isn’t, as so many critics keep complaining, everything. The Internet is far more than just Facebook, no matter what Trump or his friends think. And there are far more important things for those of us who care about preserving online expression to give our attention to than this.

Filed Under: appeals, donald trump, permanent suspension, policies, rules, suspension
Companies: facebook, oversight board

Oversight Board Tells Facebook It Needs To Shape Up And Be More Careful About Silencing Minorities Seeking To Criticize The Powerful

from the pay-attention-to-this dept

Tomorrow, the Oversight Board is set to reveal its opinion on whether Facebook made the right decision in banning former President Trump. And that will get tons of attention. But the Board came out with an interesting decision last week regarding a content takedown in India, that got almost no attention at all.

Just last week, we wrote about an ongoing issue in India, where the government of Prime Minister Narendra Modi has failed in almost every way possible in dealing with the COVID pandemic, but has decided the best thing to focus on right now is silencing critics on Twitter. That backdrop is pretty important considering that the very next day, the Oversight Board scolded Facebook for taking down content criticizing Modi’s government.

That takedown was somewhat different, and the context was very different. Also, it should be noted that as soon as the Oversight Board agreed to take the case, Facebook admitted it had made a mistake and reinstated the content. However, this case demonstrates something important that often gets lost in all of the evidence-free hand-wringing about “anti-conservative bias” from people who wrongly insist that Facebook and Twitter only moderate the accounts of their friends. The truth is that content all across the board gets moderated — and often the impact is strongest on the least powerful groups. But, of course, part of their lack of power is that they’re unable to rush onto Fox News and whine about how they’re being “censored.”

The details here are worth understanding, not because there was some difficult decision to make. Indeed, as noted already, Facebook realized it made a mistake almost immediately after the Oversight Board decided to look into this, and when asked why the content was taken down, basically admitted that it had no idea and that it was a complete and total mistake. Here was the content, as described by the Oversight Board ruling:

The content touched on allegations of discrimination against minorities and silencing of the opposition in India by “Rashtriya Swayamsevak Sangh” (RSS) and the Bharatiya Janata Party (BJP). RSS is a Hindu nationalist organization that has allegedly been involved in violence against religious minorities in India. “BJP” is India’s ruling party, to which the current Indian Prime Minister Narendra Modi belongs, and has close ties with RSS.

In November 2020, a user shared a video post from Punjabi-language online media Global Punjab TV and an accompanying text. The post featured a 17-minute interview with Professor Manjit Singh, described as “a social activist and supporter of the Punjabi culture.” In its post, Global Punjab TV included the caption “RSS is the new threat. Ram Naam Satya Hai. The BJP moved towards extremism.” The media company also included an additional description “New Threat. Ram Naam Satya Hai! The BJP has moved towards extremism. Scholars directly challenge Modi!” The content was posted during India’s mass farmer protests and briefly touched on the reasons behind the protests and praised them.

The user added accompanying text when sharing Global Punjab TV’s post in which they stated that the CIA designated the RSS a “fanatic Hindu terrorist organization” and that Indian Prime Minister Narendra Modi was once its president. The user wrote that the RSS was threatening to kill Sikhs, a minority religious group in India, and to repeat the “deadly saga” of 1984 when Hindu mobs attacked Sikhs. They stated that “The RSS used the Death Phrase ‘Ram naam sat hai’.” The Board understands the phrase “Ram Naam Satya Hai” to be a funeral chant that has allegedly been used as a threat by some Hindu nationalists. The user alleged that Prime Minister Modi himself is formulating the threat of “Genocide of the Sikhs” on advice of the RSS President, Mohan Bhagwat. The accompanying text ends with a claim that Sikhs in India should be on high alert and that Sikh regiments in the army have warned Prime Minister Modi of their willingness to die to protect the Sikh farmers and their land in Punjab.

The post was up for 14 days and viewed fewer than 500 times before it was reported by another user for “terrorism.” A human reviewer determined that the post violated the Community Standard on Dangerous Individuals and Organizations and took down the content, which also triggered an automatic restriction on the use of the account for a fixed period of time. In its notification to the user, Facebook noted that its decision was final and could not be reviewed due to a temporary reduction in its review capacity due to COVID-19. For this reason, the user appealed to the Oversight Board.

So, you had an ethnic minority — one who had been attacked in the past — warning about those currently in power. And Facebook took it down, refused to review the appeal… until the Oversight Board turned its eye on it, and then admitted it was a mistake, and basically threw its hands in the air and said it had no idea why it had been taken down in the first place.

According to Facebook, following a single report against the post, the person who reviewed the content wrongly found a violation of the Dangerous Individuals and Organizations Community Standard. Facebook informed the Board that the user’s post included no reference to individuals or organizations designated as dangerous. It followed that the post contained no violating praise.

Facebook explained that the error was due to the length of the video (17 minutes), the number of speakers (two), the complexity of the content, and its claims about various political groups. The company added that content reviewers look at thousands of pieces of content every day and mistakes happen during that process. Due to the volume of content, Facebook stated that content reviewers are not always able to watch videos in full. Facebook was unable to specify the part of the content the reviewer found to violate the company’s rules.

Got that? Facebook is basically saying “yeah, it was a mistake, but that was because it was a long video, and we just had one person reviewing who probably didn’t watch the whole video.”

Here’s the thing that the “oh no, Facebook is censoring people” crowd doesn’t get. This happens all the time. And none of us hear about it, because the people it happens to are often unable to make themselves heard. They don’t get to run to Fox News or Parler or some other place and yell and scream. And this kind of “accidental” moderation especially happens to the marginalized. Reviewers may not fully understand what’s going on, or may not really understand the overall context, and may take the “report” claim at face value rather than having the ability or time to fully investigate.

In the end, the Oversight Board told Facebook to put back the content, which was a no-brainer since Facebook had already done so. However, more interesting were its policy recommendations (which, again, are not binding on Facebook, but which the company promises to respond to). Here, the Oversight Board said that Facebook should make its community standards much more accessible and understandable, including translating the rules into more languages.

However, the more interesting bit was that it said that Facebook “should restore human review and access to a human appeals process to pre-pandemic levels as soon as possible while fully protecting the health of Facebook’s staff and contractors.” There were some concerns, early in the pandemic, about how well content moderation teams could work from home, since a lot of that job involves looking at fairly sensitive material. So, there may be reasons this is not really doable just yet.

Still, this case demonstrates a key point that we’ve tried to raise about the impossibility of doing content moderation at scale. So much of it is not about biases, or incompetence, or bad policies, or not wanting to do what’s right. A hell of a lot of it is just… when you’re trying to keep a website used by half the world operating, mistakes are going to be made.

Filed Under: appeals, content moderation, free speech, india, minorities, mistakes, review, takedowns
Companies: facebook, oversight board

The Copia Institute To The Oversight Board Regarding Facebook's Trump Suspension: There Was No Wrong Decision

from the context-driven-coin-flip dept

The following is the Copia Institute’s submission to the Oversight Board as it evaluates Facebook’s decision to remove some of Trump’s posts and his ability to post. While addressed to the Board, it’s written for everyone thinking about how platforms moderate content.

The Copia Institute has advocated for social media platforms to permit the greatest amount of speech possible, even when that speech is unpopular. At the same time, we have also defended the right of social media platforms to exercise editorial and associative discretion over the user expression they permit on their services. This case illustrates why we have done both. We therefore take no position on whether Facebook’s decision to remove former President Trump’s posts and disable his ability to make further posts was the right decision for Facebook to make, because either choosing to do so or choosing not to is defensible. Instead, our goal is to explain why.

Reasons to be wary of taking content down. We have long held the view that the reflex to remove online content, even odious content, is generally not a healthy one. Not only can it backfire and lead to the removal of content undeserving of deletion, but it can have the effect of preserving a false monoculture in online expression. Social media is richer and more valuable when it can reflect the full fabric of humanity, even when that means enabling speech that is provocative or threatening to hegemony. Perhaps especially then, because so much important, valid, and necessary speech can so easily be labeled that way. Preserving different ideas, even when controversial, ensures that there will be space for new and even better ones, whereas policing content for compliance with current norms only distorts those norms’ development.

Being too willing to remove content also has the effect of teaching the public that when it encounters speech that provokes, the way to respond is to demand its suppression. Instead of a marketplace of ideas, this burgeoning tendency means that discourse becomes a battlefield, where the view that prevails is the one that can amass enough censorial pressure to remove its opponent, even if the opponent is the view with the most merit. The more Facebook feeds this unfortunate instinct by removing user speech, the more vulnerable it will be to further pressure demanding still more removals, even of speech society would benefit from. The reality is that there will always be disagreements over the worth of certain speech. As long as Facebook assumes the role of an arbitrator, it will always find itself in the middle of an unwinnable tug-of-war between conflicting views. To break this cycle, removals should be made with reluctance and only with limited, specific, identifiable, and objective criteria to justify the exception. It may be hard to employ them consistently at scale, but more restraint will in the long run mean less error.

Reasons to be wary of leaving content up. The unique challenge presented in this case is that the Facebook user at the time of the posts in question was the President of the United States. This fact cuts in multiple ways: as the holder of the highest political office in the country, Trump’s speech was of particular relevance to the public, and thus particularly worth facilitating. After all, even if Trump’s posts were debauched, these were the views of the President, and it would not have served the public for him to be of this character and the public not to know.

On the other hand, as the then-President of the United States his words had greater impact than any other user’s. They could do, and did, more harm, thanks to the weight of authority they acquired from the imprimatur of his office. And those real-world effects provided a perfectly legitimate basis for Facebook to take steps to (a) mitigate that damage by removing posts and (b) end the association that had allowed him to leverage Facebook for those destructive ends.

If Facebook concludes that anyone’s use of its services is not in its interests, the interests of its user community, or the interests of the wider world Facebook and its users inhabit, it can absolutely decide to refuse that user continued access. And it can reach that conclusion based on wider context, beyond platform use. Facebook could for instance deny a confessed serial killer who only uses Facebook to publish poetry access to its service if it felt that the association ultimately served to enable the bad actor’s bad acts. As with speech removals, such decisions should be made with reluctance and based on limited, specific, identifiable, and objective criteria, given the impact of such terminations. Just as continued access to Facebook may be unduly empowering for users, denying it can be equally disempowering. But in the case of Trump, as President he did not need Facebook to communicate to the public. He had access to other channels and Facebook no obligation to be conscripted to enable his mischief. Facebook has no obligation to enable anyone’s mischief, whether they are a political leader or otherwise.

Potential middle-grounds. When it comes to deciding whether to continue to provide Facebook’s services to users and their expression, there is a certain amount of baby-splitting that can be done in response to the sorts of challenges raised by this case. For instance, Facebook does more than simply host speech that can be read by others; it provides tools for engagement such as comments and sharing and amplification through privileged display, and in some instances allows monetization. Withdrawing any or all of these additional user benefits is a viable option that may go a long way toward minimizing the problems of continuing to host problematic speech or a problematic user without the platform needing to resort to removing either entirely.

Conclusion. Whether removing Trump’s posts and further posting ability was the right decision or not depends on what sort of service Facebook wants to be and which choice it believes best serves that purpose. Facebook can make these decisions any way it wants, but to minimize public criticism and maximize public cooperation, how it makes them is what matters. These decisions should be transparent to the user community, scalable to apply to future situations, and, to the extent they can be, predictable in how they would apply, since circumstances and judgment will inevitably evolve. Every choice will have consequences, some good and some bad. The choice for Facebook is really to affirmatively choose which ones it wants to favor. There may not be any one right answer, or even any truly right answer. In fact, in the end the best decision may have little to do with the actual choice that results but rather the process used to get there.

Filed Under: appeals, content moderation, donald trump, facebook supreme court, free speech, oversight, review
Companies: facebook, oversight board

Facebook Oversight Board's First Decisions… Seem To Confirm Everyone's Opinions Of The Board

from the take-a-deep-breath dept

Last week, the Oversight Board — which is the official name that the former Facebook Oversight Board wants you to call it — announced decisions on the first five cases it has heard. It overturned four Facebook content moderation decisions and upheld one. Following the announcement, Facebook announced that (as it had promised) it followed all of the Oversight Board’s decisions and reinstated the content on the overturned cases (in one case, involving taking down a breast cancer ad that had been deemed to violate the “no nudity” policy, Facebook actually reinstated the content last year, after the Board announced it was reviewing that decision). If you don’t want to wade into the details, NPR’s write-up of the decisions and policy recommendations is quite well done and easily digestible.

If you want a more detailed and thoughtful analysis of the decisions and what this all means, I highly recommend Evelyn Douek’s detailed analysis of the key takeaways from the rulings.

What I’m going to discuss, however, is how the decisions seem to have only reinforced… absolutely everyone’s opinions of the Oversight Board. I’ve said before that I think the Oversight Board is a worthwhile experiment, and one worth watching, but it is just one experiment. And, as such, it is bound to make mistakes and adapt over time. I can understand the reasoning behind each of the five decisions, though I’m not sure I would have ruled the same way.

What’s more interesting to me, though, is how so many people are completely locked in to their original view of the board, and how insistent they are that the first decisions only confirm their position. It’s no secret that many people absolutely hate Facebook and view absolutely everything the company does as unquestionably evil. I’m certainly not a fan of many of the company’s practices, and don’t think that the Oversight Board is as important as some make it out to be, but that doesn’t mean it’s not worth paying attention to.

But I tended to see a few different responses to the first rulings, which struck me as amusing, since the positions are simply not disprovable:

1. The Oversight Board is just here to rubberstamp Facebook’s decisions and make it look like there’s some level of review.

This narrative is slightly contradicted by the fact that the Oversight Board overturned four decisions. However, people who believe this view retort that “well, of course the initial decisions have to do this to pretend to be independent.” Which… I guess? But seems like a lot of effort for no real purpose. To me, at least, the first five decisions are not enough to make a judgment call on this point either way. Let’s see what happens over a longer time frame.

2. The Oversight Board is just a way for Facebook and Zuckerberg not to take real responsibility

I don’t see how this one is supportable. It’s kind of a no-win situation either way. Every other company in the world that does content moderation has a final say on their decisions, because it’s their website. Facebook is basically the first and only site so far to hand off those decisions to a 3rd party — and it did so after a ton of people whined that Facebook had too much power. And the fact that this body is now pushing back on Facebook’s decisions suggests that there’s at least some initial evidence that the Board might force Zuckerberg to take more responsibility. Indeed, the policy recommendations (not just the decisions directly on content moderation) suggest that the Board is taking its role as being an independent watchdog over how Facebook operates somewhat seriously. But, again, it’s perhaps too early to tell, and this will be a point worth watching.

3. The Oversight Board has no real power, so it doesn’t matter what they do.

The thing is, while this may be technically true, I’m not sure it matters. If Facebook actually does follow through and agree to abide by the Board’s rulings, and the Board continues the initial path it’s set of being fairly critical of Facebook’s practices, then for all intents and purposes it does have real power. Sometimes, the power comes just from the fact that Facebook may feel generally committed to following through, rather than through any kind of actual enforcement mechanism.

4. The Oversight Board is only reviewing a tiny number of cases, so who cares?

This is clearly true, but again, the question is how it will matter in the long run. At least from the initial set of decisions, it’s clear that the Oversight Board is not just taking a look at the specific cases in front of it, but thinking through the larger principles at stake, and making recommendations back to Facebook about how to implement better policies. That could have a very big impact on how Facebook operates over time.

As for my take on all of this? As mentioned up top, I think this is a worthwhile experiment, though I’ve long doubted it would have that big of an impact on Facebook itself. I see no reason to change my opinion on that yet, but I am surprised at the thoroughness of these initial decisions and how far they go in pushing back on certain Facebook policies. I guess I’d update my opinion to say I’ve moved from thinking the Oversight Board had a 20% chance of having a meaningful impact, to now it being maybe 25 to 30% likely. Some will cynically argue that this is all for show, and the first cases had to be like that. And perhaps that’s true. I guess that’s why no one is forced to set their opinion in stone just yet, and we’ll have plenty of time to adjust as more decisions come out.

Filed Under: appeals, breast cancer, content moderation, free speech, myanmar, nudity, review
Companies: facebook, oversight board

Another Day, Another Bad Bill To Reform Section 230 That Will Do More Harm Than Good

from the no-bad dept

Last fall, when it first came out that Senator Brian Schatz was working on a bill to reform Section 230 of the Communications Decency Act, I raised questions publicly about the rumors concerning the bill. Schatz insisted to me that his staff was good, and when I highlighted that it was easy to mess this up, he said I should wait until the bill is written before trashing it:

Feel free to trash my bill. But maybe we should draft it, and then you should read it?

— Brian Schatz (@brianschatz) September 13, 2019

Well, now he’s released the bill and I am going to trash it. I will say that unlike most other bills we’ve seen attacking Section 230, I think that Schatz actually does mean well with this bill (entitled the “Platform Accountability and Consumer Transparency Act” or the “PACT Act” and co-authored with Senator John Thune). Most of the others are foolish Senators swinging wildly. Schatz’s bill is just confused. It has multiple parts, but let’s start with the dumbest part first: if you’re an internet service provider you not only need to publish an “acceptable use policy,” you have to set up a call center with live human beings to respond to anyone who is upset about user moderation choices. Seriously.

subject to subsection (e), making available a live company representative to take user complaints through a toll-free telephone number during regular business hours for not fewer than 8 hours per day and 5 days per week;

While there is a small site exemption, at Techdirt we’re right on the cusp of the definition of a small business (one million monthly unique visitors – and we have had many months over that, though sometimes we’re just under it as well). There’s no fucking way we can afford or staff a live call center to handle every troll who gets upset that users voted down his comment as trollish.

Again, I do think Schatz’s intentions here are good — they’re just not based in the real world of anyone who’s ever done any content moderation ever. They’re based in a fantasy world, which is not a good place from which to make policy. Yes, many people do get upset about the lack of transparency in content moderation decisions, but there are often reasons for that lack of transparency. If you detail out exactly why a piece of content was blocked or taken down, then you get people trying to (1) litigate the issue and (2) skirt the rules. As an example, if someone gets kicked off a site for using a racist slur, and you have to explain to them why, you’ll see them argue “that isn’t racist” even though it’s a judgment call. Or they’ll try to say the same thing using a euphemism. Merely assuming that explaining exactly why you’ve been removed will fix problems is silly.

And, of course, for most sites the call volume would be overwhelming. I guess Schatz could rebrand this as a “jobs” bill, but I don’t think that’s his intention. During a livestream discussion put on by Yale where this bill was first discussed, Dave Willner (who was the original content policy person at Facebook) said that this requirement for a live call center to answer complaints was (a) not possible and (b) it would be better to just hand out cash to people to burn for heating, because that’s how nonsensical this plan is. Large websites make millions of content moderation decisions every day. To have to answer phone calls with live humans about that is simply not possible.
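To put rough numbers on that claim, here is a back-of-envelope sketch. Every figure in it (decision volume, complaint rate, call length) is an assumption chosen purely for illustration, not anything taken from the bill or from any platform.

```python
# Back-of-envelope staffing estimate for a PACT Act-style live call center.
# All inputs below are illustrative assumptions, not figures from the bill.

daily_moderation_decisions = 3_000_000   # assumed: a large platform's actions per day
complaint_rate = 0.01                    # assumed: 1% of decisions trigger a phone complaint
minutes_per_call = 5                     # assumed: average handling time per call
agent_minutes_per_day = 8 * 60           # the bill's minimum 8-hour coverage window

calls_per_day = daily_moderation_decisions * complaint_rate
agent_minutes_needed = calls_per_day * minutes_per_call
agents_needed = agent_minutes_needed / agent_minutes_per_day

print(f"{calls_per_day:,.0f} calls/day -> roughly {agents_needed:,.0f} agents on the phones")
# 30,000 calls/day -> roughly 313 agents, before breaks, peak hours, or languages.
```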

And that’s not all that’s problematic. The bill also creates a 24-hour notice-and-takedown system for “illegal content.” It seems to be more or less modeled on copyright’s frequently abused notice-and-takedown provisions, but with a 24-hour ticking time bomb. This has some similarities to the French hate speech law that was just tossed out as unconstitutional, with a key difference: one element of a notification of “illegal content” here is a court ruling on the illegality.

Subject to subsection (e), if a provider of an interactive computer service receives notice of illegal content or illegal activity on the interactive computer service that substantially complies with the requirements under paragraph (3)(B)(ii) of section 230(c) of the Communications Act of 1934 (47 U.S.C. 230(c)), as added by section 6(a), the provider shall remove the content or stop the activity within 24 hours of receiving that notice, subject to reasonable exceptions based on concerns about the legitimacy of the notice.

The “notice requirements” then do include the following:

(I) A copy of the order of a Federal or State court under which the content or activity was determined to violate Federal law or State defamation law, and to the extent available, any references substantiating the validity of the order, such as the web addresses of public court docket information.
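To make the mechanics concrete, here is a minimal sketch (in Python) of what a provider’s intake pipeline for such notices might look like under the quoted language. Everything in it (the Notice fields, the function names, the idea of escalating dubious notices to a human) is a hypothetical illustration of the workflow, not anything the bill or any provider actually specifies.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# The bill's 24-hour removal deadline for compliant notices.
TAKEDOWN_WINDOW = timedelta(hours=24)

@dataclass
class Notice:
    """Hypothetical shape of a notice that 'substantially complies' with the bill."""
    content_url: str
    court_order_copy: Optional[str]  # copy of the federal/state court order
    docket_url: Optional[str]        # public docket info substantiating the order
    received_at: datetime

def handle_notice(notice: Notice) -> str:
    """Illustrative intake logic: remove within 24 hours unless the notice looks
    illegitimate, in which case it gets escalated for human review."""
    deadline = notice.received_at + TAKEDOWN_WINDOW

    # A court order finding the content illegal is a required element of the notice.
    if notice.court_order_copy is None:
        return "reject: notice does not substantially comply (no court order attached)"

    # The 'reasonable exceptions based on concerns about the legitimacy of the
    # notice' clause is where the hard judgment calls live: a forged or gamed
    # order looks exactly like a real one to an automated check.
    if notice.docket_url is None:
        return f"escalate for manual verification before {deadline.isoformat()}"

    return f"remove {notice.content_url} before {deadline.isoformat()}"
```

The point of the sketch is how little the happy path can actually verify: a court order obtained through the “John Doe” settlement scam described below would sail right through it, and the 24-hour clock discourages the provider from looking any closer.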

This is yet another one of those ideas that sounds good in theory, but runs into trouble in reality. After all, this was more or less the position that most large companies — including both Google and Facebook — took in the past. If you sent them a court ruling regarding defamation, they would take the content down. And it didn’t take long for people to start to game that system. Indeed, we wrote a whole series of posts about “reputation management” firms that would file sketchy lawsuits.

The scam worked as follows: file a real lawsuit against a “John or Jane Doe” claiming defamation. Days later, have some random (possibly made-up) person “admit” to being the Doe in question, admit to the “defamation,” and agree to a “settlement.” Then get the court to issue an order on the “settled” case, with the person admitting to defamation. Then send that court order to Google and Facebook to get the content taken down. And this happened a lot! There were also cases of people simply forging court documents.

In other words, these all sound like good ideas in theory, until they reach the real world, where people game the system mercilessly. And putting a 24-hour ticking clock on that seems… dangerous.

Again, I understand the thinking behind this bill, but contrary to Schatz’s promise of having his “good” staffers talk to lots of people who understand this stuff, it reads like it was written by someone who just discovered the challenges of content moderation and has no understanding of the tradeoffs involved. This is, unfortunately, not a serious proposal. But seeing as it’s bipartisan and attacks Section 230 at a time when everyone wants to attack Section 230, we need to take this silly proposal seriously.

Filed Under: appeals, brian schatz, call centers, censorship, john thune, notice and takedown, section 230, transparency

Jerks 'Reporting' Women Who Swipe Left On Them In Tinder, Once Again Highlighting How Content Moderation Gets Abused

from the always-another-thing dept

We keep trying to highlight (over and over and over again) how content moderation at scale is impossible to do well for a variety of reasons — and one big one is the fact that assholes and trolls will game whatever system you put in place — often in truly absurd ways. The latest example of this is that guys who are pissed off about women who reject them after meeting through Tinder are “reporting” the women in the app, trying to get their accounts shut down.

I had been banned from Tinder. It turns out, though, I’m far from the only woman to have been kicked off the app for no other reason than I rejected the wrong guy. Indeed, without the need for any apparent proof of wrongdoing, a new breed of scorned men have stumbled upon a particularly passive-aggressive way to say, “If I can’t have her, no one can”: tapping the report button.

Case in point: Last year, 33-year-old Amy declined to go out with a man she’d been messaging with when he started insulting her. The insults, of course, only intensified from there, with him telling her she was shaped like Slimer from Ghostbusters and that her fertility was declining. Stunned, she put her phone away. After taking a moment, she went to block him, but when she opened Tinder, her account had been banned.

Of course, as the article highlights, Tinder itself seems woefully (ridiculously) unprepared to deal with even the most basic instances of this kind of abuse. Tinder apparently bans accounts based on a single report, and the company states that it does “not offer an appeals process at this time.” The article highlights a bunch of tweets from women who all seem to have gone through a similar experience: they match with a dude on Tinder, the date doesn’t go well, she says she’d rather not go on another one… the guy flips out, acts like an asshole, and minutes later, she’s banned from the app.
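Tinder has not published how its reporting pipeline actually works, so the following Python sketch is purely hypothetical: it only illustrates why a single-report auto-ban is trivial to weaponize, and why even the obvious mitigations (requiring corroboration from several distinct reporters, leaving an appeal path open) raise the cost of abuse without eliminating it. The names, thresholds, and data structures are all made up for the example.

```python
from collections import defaultdict

# Hypothetical threshold: require several *distinct* reporters before acting.
REPORTS_REQUIRED = 3

# Maps reported_user -> set of users who reported them (deduplicates repeat reports).
reports = defaultdict(set)

def handle_report(reported_user: str, reporter: str) -> str:
    """Illustrative report handler. A single-report auto-ban would skip straight
    to the suspension branch; requiring corroboration at least blunts the lone
    rejected-date scenario described above."""
    reports[reported_user].add(reporter)

    if len(reports[reported_user]) < REPORTS_REQUIRED:
        return "recorded: not enough independent reports to act on yet"

    # Coordinated brigading, or one person with several accounts, can still
    # trip the threshold -- which is why an appeals process matters too.
    return f"suspend {reported_user} pending review; notify them of how to appeal"
```

Even this sketch just moves the judgment calls around: set the threshold too high and genuine abuse reports get ignored, set it too low and brigading still works.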

The author of the article, who herself was banned from Tinder right after such an experience, found a guy who admitted to doing this.

I did, though, find one man ? 26-year-old Brian ? who admitted to reporting women who were unresponsive to his messages. ?I?ve done this,? he confides. ?It?s a huge waste of time for girls to match with you and then not reply. Like what?s the point??

He then goes on to spout a bunch of misogynistic nonsense, apparently believing that women do this on purpose to be mean to men like him, which is apparently what he needs to convince himself that getting their accounts shut down is okay.

Of course, what’s left out of this discussion is a bit of the flipside. You can kind of understand why Tinder is so aggressive in banning people, because if people actually are violating its rules, the consequences could be a lot more serious, especially given that the entire point of the app is to get people to meet up in real life. If they mess that up, there will be all sorts of bad press about how Tinder failed to take down an account or something. Hell, as we’ve detailed, Grindr effectively got sued over this exact scenario and the plaintiff in that case recently asked the Supreme Court to hear his appeal.

That’s not to say the companies can’t do a better job — they can. Having an appeals process seems like a no-brainer. But an appeals process can be gamed as well. And this is the point that we keep trying to make: it’s literally impossible to do content moderation well at this kind of scale. There will always be problems and judgment calls people disagree with — and outright abuse. In both directions. People abusing the system to take down content they don’t like, and others abusing the system to keep up content or profiles that probably should be taken down. It’s easy for someone to say “oh, they shouldn’t do that,” but no one has yet come up with a system that always gets it right and stops any such abuse. Because it’s literally impossible.

Filed Under: appeals, assholes, content moderation, content moderation at scale, dating, reporting, trolls
Companies: tinder

Kim Dotcom Loses Latest Round In Extradition Fight, Will Try To Appeal Again

from the this-case-will-never-end dept

Kim Dotcom’s ongoing legal saga continues. The latest is that the New Zealand Court of Appeal has rejected his appeal of earlier rulings concerning whether or not he can be extradited to the US. Dotcom and his lawyers insist that they will appeal to the Supreme Court, though there seems to be some disagreement about whether or not that will even be possible. The full ruling is worth a read, though much of it is dry and procedural.

And, I know that many people’s opinion of this case is focused almost exclusively on whether they think Kim Dotcom and Megaupload were “good” or “bad,” but if you can get past all of that, there are some really important legal issues at play here, especially concerning the nature of intermediary liability protections in New Zealand, as well as the long-arm reach of US law enforcement around the globe. Unfortunately, for the most part the courts have appeared much more focused on the whole “but Dotcom is obviously a bad dude…” framing, and have used that to rationalize rulings against him, even when those rulings don’t seem to fit what the law says.

As Dotcom and his lawyers have noted, this has meant that, while there are now three rulings against him on whether or not he can be extradited, they all come to different conclusions as to why. A key issue, as we’ve discussed before, is that of “double criminality.” For there to be an extraditable offense, the person (or people) in question must have done something that is a crime in both the US and New Zealand. As Dotcom has argued over and over again, the “crime” he is charged with is effectively criminal secondary copyright infringement. And that’s a big problem, since there is no such thing as criminal secondary copyright infringement under US law. Because Megaupload was a platform, it should not be held liable for the actions of its users. But the US tries to wipe all of that away by playing up that Dotcom is a bad dude, and boy, a lot of people sure infringed copyright using Megaupload. All of that may be true, but it doesn’t change the fact that the US should still have to show that he actually broke a law in both countries.

Indeed, the lower court basically tossed out the copyright issue in evaluating extradition, but said he could still be extradited over “fraud” claims. Dotcom argued back that without the copyright infringement, there is no fraud, and thus the ruling didn’t make any sense.

The Court of Appeal comes to the same conclusion, but for somewhat different reasons. It appears that Dotcom’s lawyers focused heavily on what some might consider technical nitpicking in the reading of the law. Pulling on a tactic that has been tried (unsuccessfully) in the US, they argued that the text of the copyright statute only applies to “tangible” copies — i.e., content on physical media — rather than to digital-only files. In the US, at least, the Copyright Act is written in such a way that a plain reading says copyright only applies to physical goods rather than digital files, but US courts have been unwilling to accept that fairly plain statutory language because it would upend how the world views copyright. It’s no surprise that the New Zealand court came to the same end result: while it would be better if the law itself were fixed, the courts seem pretty united in refusing this plain reading of the statute, because accepting it would really muck things up. Unfortunately, focusing on that nitpicking may have obscured the larger issues for the court.

Over and over again in the ruling, the court seems to bend over backwards to effectively say, “look, Dotcom’s site was used for lots of infringement, so there’s enough evidence that he had ill intent, and therefore we can hand him over to the US.” That seems like a painfully weak argument — but, again, par for the course where Dotcom is concerned. So, basically, even though its reasons differ from the lower court’s, this court says there’s enough here to extradite:

We have departed from the Judge in our analysis of s 131. But the Judge’s conclusions on ss 249 and 228 (the latter of which we will turn to shortly) were not affected by his conclusion on s 131. Each of the ss 249 and 228 pathways depended on dishonesty, as defined, and the other elements of the actus rei of those offences. Inherent in the Judge’s finding was that dishonesty for the purpose of s 249 (and s 228) did not require proof of criminal conduct under s 131. With that conclusion we agree. It is plainly sufficient for the purposes of s 217 that the relevant acts are done without belief in the existence of consent or authority from the copyright owner. It does not need to amount to criminal conduct independently of s 249. Put another way, “dishonestly” as defined in s 217 is not contingent on having committed another offence, but is instead simply an element of the offence.

That may be a bit confusing, but basically the court is saying it doesn’t much matter whether there was actual criminal copyright infringement, because there was enough “dishonesty” to allow Dotcom to be extradited on other grounds.

Again, none of this is that surprising, but it does again feel like the courts reacting to how they perceive Dotcom himself, rather than following what the law actually says. That should worry people. At this point, it seems highly likely that Dotcom’s attempts to appeal to the Supreme Court will fail and that he will be extradited. Of course, then there would still need to be legal proceedings in the US — though the judge assigned to his case has already shown little interest in understanding the nuances of copyright and intermediary liability law, so it’s likely to be quite a mess here as well.

Whatever you think of Kim Dotcom, many of the legal arguments against him seem almost entirely based on the fact that people want to associate him with the actions of his users, and the fact that he didn’t seem to much care about what the legacy entertainment industry thought of him. Maybe he deserves to be locked up — but it’s hard to argue that the process has been fair and based on what the law actually says.

Filed Under: appeals, due process, extradition, intermediary liability, kim dotcom, new zealand
Companies: megaupload