internet regulation – Techdirt

The Messy Reality Behind Trying To Protect The Internet From Terrible Laws

from the an-impossible-task dept

The recent Supreme Court case, Moody v. NetChoice & CCIA, confronted a pivotal question: Do websites have the First Amendment right to curate content they present to their global audiences? While the opinion has been dissected by many, this post peeks behind the Silicon curtain to address the practical aftermath of tech litigation.

Well before this case, there was significant discord within the federal government about how to regulate the Internet. Democrats criticize the Silicon Valley elite for failing to shield Americans from harmful content, while Republicans decry “censorship” and revere the notion of a ‘digital public square,’ a concept lacking in both legal precision and technological reality. Despite a shared disdain for Section 230—a statute that protects websites and their users from liability for third-party content—the two sides can’t agree on a “solution,” producing a legislative deadlock.

This impasse empowered state legislators to act independently. Initially dismissed as ‘messaging bills’ designed merely to garner political favor, legislation in Texas and Florida soon crystallized into laws that significantly curtailed the editorial discretion of social media platforms. This prompted legal challenges from two trade associations, NetChoice and the Computer & Communications Industry Association, questioning the constitutional merits (or lack thereof) of these laws.

The prolonged conflict led to a perhaps anticlimactic Supreme Court decision last month, one focused more on the procedural nuances of facial challenges than on the merits. This outcome has led observers to question why the responsibility of defending Internet freedoms fell to trade associations instead of the platforms themselves. Having been involved with these cases from the outset and drawing on my experience in the tech industry, I may have some answers.

Gone are the days when tech companies stood united for the good of the industry and the underlying internet. Recall over a decade ago when some of the biggest tech companies in the world darkened their home pages in protest of the Stop Online Piracy Act (SOPA) and the Protect IP Act (PIPA). These bills posed a serious threat not just to individual companies, but to the entire tech industry and everyday internet users. Other notable examples of collective industry protest include the battles over net neutrality, SESTA-FOSTA, and the EARN IT Act. But despite a recent influx of legislative threats, tech companies are noticeably absent.

There are several reasons for the silence. First, the sheer volume of bad bills threatening the tech sector has outpaced the resources available to fight them. California alone has introduced a flurry of legislation targeting social media companies and AI in recent years. When you add in efforts from other states, fighting these laws becomes an internal numbers game. Each bill requires a dedicated team to analyze its impact, meet with lawmakers, organize grassroots campaigns, and, as a last resort, litigate. As a result, companies must make tough decisions about where to invest their resources, often prioritizing bills that directly impact their own products, services, and users.

When bad bills reach the governor’s desk, two strategies typically unfold. The first is the veto strategy, where companies and their lobbyists work tirelessly to secure the coveted governor’s veto. The second is litigation, once the governor inevitably signs the bill into law. Litigation is a significant decision, involving a long, costly process that directly affects company shareholders and therefore typically requires executive approval. And that’s just for one bad bill. Imagine making these decisions for multiple bills across several states all year long. Companies are understandably reluctant to rush into litigation, especially when other companies could take up the fight. Why should Meta challenge a law that also impacts Google, Amazon, or Apple?

This leads to a game of chicken, where companies wait, hoping another will take action. Of course, not all legislation impacts companies equally. A law targeting Facebook, for example, may not affect others enough to justify the expense of a legal challenge. If the most impacted company decides compliance is a cheaper and safer alternative, the law may just go unchallenged. This leaves smaller companies, for whom litigation was never a realistic option, to fend for themselves—and for some of the major players, that might just be an added bonus.

Litigation also incurs significant political and public costs. Companies and lawmakers navigate a complex interplay during the legislative season, where companies vie for a seat at closed-door meetings to influence bill drafting, while politicians attempt to manage the influence of these corporate lobbyists to achieve legislative gains for their constituents. Consequently, challenging a law—particularly one backed by politicians with deep corporate ties—could be perceived as a declaration of war, potentially alienating companies from future legislative discussions.

Beyond political capital, public perception is equally critical and increasingly fragile. Contemporary portrayals in the media often depict tech advocacy as self-serving or even harmful. This is particularly evident in the discourse surrounding new youth online safety laws, where tech companies face backlash for opposing measures like parental consent and age verification—mandates that many experts claim actually harm children.

This growing disdain towards the tech industry (“techlash”) also shapes how companies assess the risks of contesting contentious laws. Which brings us to trade associations, like NetChoice and CCIA.

Trade associations manage the interests of their industry members by engaging with lawmakers on key bills, testifying at hearings, submitting comments, and initiating legal challenges. These associations can vary in structure. For example, Chamber of Progress, where I previously worked, does not allow Partner companies to vote or veto, making it a relatively independent and agile organization compared to others. NetChoice operates under a similar model, facilitating quicker legal actions without the bureaucratic hurdles often encountered by other associations. The contrasting vote/veto structure was notably a factor in the dissolution of the Internet Association (IA).

However, trade associations are not a panacea for the complexities of litigation. For starters, cost remains a barrier. Beyond that, to initiate legal challenges, trade associations must establish standing to sue on behalf of their industry, a task complicated by recent judicial rulings like Murthy v. Missouri. Courts are reluctant to grant standing to trade associations without explicit declarations from member companies about the specific harms they would face under the law in question. But these declarations are public, compelling companies to openly oppose the law and thus exposing them to the same political and public scrutiny they might have sought to avoid by leaving the fight to their associations.

Moreover, filing a declaration exposes companies to legal risks. It enables state defendants to request discovery into the declaring company, potentially leading to invasive examinations of the company’s operations—a deterrent for many. Given the proliferation of problematic laws across the U.S., a company that files a declaration once may be hesitant to do so repeatedly, especially if other companies remain reluctant to expose themselves similarly. And while it may seem like NetChoice is everywhere when it comes to the laws they have successfully challenged, there still remain several unconstitutional tech laws on the books today that have yet to be challenged, like the New York SAFE For Kids Act, possibly due to many of these lingering concerns.

Even for non-declarant companies, the strategy of using trade associations like NetChoice to shield companies from public scrutiny is becoming less effective. Media coverage often portrays challenges brought by associations like NetChoice as if they were directly initiated by the member tech companies themselves, regardless of whether all of NetChoice’s members actually support the legal actions. That perception often leads to public backlash against the companies, which can then manifest as company dissatisfaction with their own trade associations—a risk that all successful trade associations must constantly weigh.

Another downside to litigation is the tremendous burden placed on third parties responsible for crafting amicus briefs. These briefs, written by entities wholly independent from the litigants, are not merely echoes of a plaintiff’s arguments; they provide courts with varied legal and policy perspectives that could be influenced by the law under challenge. Yet, crafting these briefs is an expensive and time-consuming endeavor. A single brief can cost between $20,000 and $50,000 or more, depending on the law firm and the depth required. The effort to rally additional signatories for a brief further multiplies these costs. For organizations like my previous employer, the investment in amicus briefs across multiple legal challenges and at various judicial levels, such as in the cases of NetChoice & CCIA v. Moody/Paxton, represents a significant strain. And though not obligatory (like company declarations), these briefs often play a crucial role in the success or failure of a legal challenge.

Furthermore, litigation may prove to be a flawed strategy simply because it arms lawmakers with insights on how to refine their legislation against future challenges. Each legal victory for groups like NetChoice reveals to state lawmakers how to craft more resilient laws. For example, the recent Moody v. NetChoice & CCIA decision detailed all the ways in which NetChoice’s facial challenge was deficient. The biggest was that NetChoice failed to articulate, for every requirement in the Texas and Florida legislation, how that requirement impacted each of the products and services offered by each of NetChoice’s tech company members. In many ways, this could be a nearly impossible task, especially considering the drafting limits for a party’s brief and the time allotted at oral argument. In turn, what this tells lawmakers is that their bills may just survive if they write laws with immense and convoluted requirements that make a facial challenge nearly impossible to thoroughly plead.

The protracted nature of these legal battles further underscores their inefficiency. Years after the initial filing, with one Supreme Court hearing behind us, the merits of the constitutional challenge by NetChoice and CCIA have yet to be addressed. Moving forward, just refining their challenge for appellate consideration might necessitate another Supreme Court review. With states continually enacting problematic laws, the prospect of reaching substantive judicial review seems ever more distant, potentially dragging on for decades.

All of this means that tech litigation is neither a reliable nor a sustainable method to address the rising hostility towards the tech industry and the degradation of our rights to access information and express ourselves online. To truly protect online expression—which, yes, means also preserving the technology companies that empower it—we must vigilantly monitor and respond to problematic legislation from its inception. If left unchecked, even seemingly innocuous messaging bills from states like Texas and Florida will gradually erode the foundations of our digital freedoms.

Jess Miers is currently a Visiting Assistant Professor of Law at the University of Akron School of Law. She has previously worked for Chamber of Progress, Google, TechFreedom, and Twitter.

Filed Under: 1st amendment, content moderation, internet regulation, litigation, section 230
Companies: ccia, chamber of progress, netchoice

Driven Mad By Its Hatred For Big US Internet Companies, French Government Implements EU Digital Services Act Before It Even Exists

from the to-hell-with-the-consequences dept

The future Digital Services Act (DSA), dealing with intermediary liability in the EU, is likely to be one of the region’s most important new laws for the online world. At the moment, the DSA exists only as a proposal from the European Commission. In due course, the European Parliament and the EU’s Member States will come up with their own texts, and the three versions will ultimately be reconciled to produce legislation that will apply across the whole of the EU. As Techdirt reported last month, the Commission’s ideas are something of a mess, and the hope has to be that the text will improve as the various arms of the EU start to work on it over the coming months.

The French government, however, is unwilling to wait before it can start imposing intermediary liability on the US Internet giants it seems to hate so much. It has decided to bring in key parts of the DSA immediately — even though it doesn’t formally exist — using what it calls a “pretranscription” of the proposed EU law. Next Inpact has the details (original in French), but what matters most is the way the “pretranscription” of the DSA clashes with an important existing EU law, the e-Commerce Directive. The European Commission explains:

While the e-Commerce Directive remains the cornerstone of digital regulation, much has changed since its adoption 20 years ago. The DSA will address these changes and the challenges that have come with them, particularly in relation to online intermediaries.

The DSA is intended to update and supersede the e-Commerce Directive. But until the DSA is passed — something that is years off — the e-Commerce Directive remains in force, and is incompatible with France’s local pretranscription. The best the politicians in France can come up with to justify this extraordinary course of action, which cuts across how the EU is supposed to work collectively to draw up laws that apply uniformly across the whole region, is that all-purpose excuse — the threat of terrorist attacks. Even the French Senate’s Law Committee warned of the “extreme legal fragility” of the government’s logic here. In reality, it’s just another case of the French government keen to bash Internet companies as soon as it can, and to hell with the political, economic or social consequences.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.

Filed Under: digital services act, dsa, eu, france, internet regulation

The Need For A Robust Critical Community In Content Policy

from the it's-coming-one-way-or-the-other dept

Over this series of policy posts, I’m exploring the evolution of internet regulation from my perspective as an advocate for constructive reform. It is my goal in these posts to unpack the movement towards regulatory change and to offer some creative ideas that may help to catalyze further substantive discussion. In that vein, this post focuses on the need for “critical community” in content policy — a loose network of civil society organizations, industry professionals, and policymakers with subject matter expertise and independence to opine on the policies and practices of platforms that serve as intermediaries for user communications and content online. And to feed and vitalize that community, we need better and more consistent transparency into those policies and practices, particularly intentional harm mitigation efforts.

The techlash dynamic is seen in both political parties in the United States as well as in a broad range of political viewpoints globally. One reason for the robustness of the response is that so much of the internet ecosystem feels like a black box, thus undermining trust and agency. One of my persistent refrains in the context of artificial intelligence, where the “black box” feeling is particularly strong, is that trust can’t be restored by any law or improved corporate practice operating in isolation. The concept of “critical community” itself has roots in community psychology and social justice contexts. For example, this talk by Professor Silvia Bettez offers a specific definition of critical community as “interconnected, porously bordered, shifting webs of people who through dialogue, active listening, and critical question posing, assist each other in critically thinking through issues of power, oppression, and privilege.” While in the field of internet policy the issues are different, the themes of power, oppression, and privilege strike me as resonant in the context of social media platform practices.

I wrote an early version of this community-centric theory of change in a piece last year focused specifically on recommendation engines. In that piece, I looked at the world of privacy, where, over the past few decades, a seed of transparency offered voluntarily in the form of privacy policies helped to fuel the growth of a professional community of privacy specialists who are now able to provide meaningful feedback to companies, both positive and critical. We have a rich ecosystem in privacy with institutions ranging from IAPP to the Future of Privacy Forum to EPIC.

The tech industry has a nascent ecosystem built specifically around content moderation practices, which I tend to think of as a (large) subset of content policy — policies regarding the permissible use of a platform and actions taken to enforce those policies for specific users or pieces of content. (The biggest part of content policy not included within my framing of content moderation is the work of recommendation engines to filter information and present users with an intentional experience.) The Santa Clara Principles and extensive academic research have helped to advance norms around moderation. The new Trust & Safety Professionals Association could evolve into an IAPP or FPF equivalent. Content moderation was the second Techdirt Greenhouse topic after privacy, reflecting the diversity of voices in this space. And plenty of interesting work is being done beyond the moderation space as well, such as Mozilla’s “YouTube Regrets” campaign, which illustrates online harm arising from recommendation engines steering permissible and legal content to poorly chosen audiences.

As the critical community around content policy grows, regulation races ahead. The Digital Services Act consultation submissions closed this month; here’s my former team’s post about that. The regulatory posture of the European Commission has advanced a great deal over the past couple of years, shifting toward a paradigm of accountability and a focus on processes and procedures. The DSA will prove to be a turning point on a global scale, just as the GDPR was for privacy. Going forward, platforms will expect to be held accountable. Just as it’s increasingly untenable to assume that an internet company can collect data and monetize it at will, so, too, will it be untenable to dismiss harms online through tropes like “more speech is a solution to bad speech.” While the First Amendment court challenges in the U.S. legal context will be serious and difficult to navigate, the normative reality will more and more be set: tech companies must confront and respond to the real harms of hate speech, as Brandi Collins-Dexter’s Greenhouse post so well illustrates.

The DSA has a few years left in its process. The European Commission must adopt a draft law, the Parliament will table hundreds of amendments and put together a final package for vote, the Council will produce its own version, trialogue will hash out a single document, and then, finally, Parliament will vote again — a vote that might not succeed, restarting some portions of the process. Yet, even at this early stage, it seems virtually certain that the DSA legislative process will produce a strong set of principles-based requirements without specific guidance for implementing practices. To many, such an outcome seems vague and hard to work with. But it’s preferable in many ways to specifying technical or business practices in law, which can easily result in outdated and insufficient guidance to address evolving harm, not to mention restrictions that are easier for large companies to comply with, at least facially, than smaller firms.

So, there’s a gap here. It’s the same gap seen in the PACT Act. As both a practical consideration in the context of American constitutional law and in the state of collective understanding of policy best practices, the PACT Act doesn’t specify exactly what practices need to be adopted. Rather, it requires transparency and accountability to those self-asserted practices. The internet polity needs something broader than just a statute to determine what “good” means in the context of intermediary management of user-generated content.

Ultimately, that gap will be filled by the critical community in content policy, working collectively to develop norms and provide answers to questions that often seem impossible to answer. Trust will be strongest, and the norms and decisions that emerge the most robust and sustainable, if that community is diverse, well resourced, and equipped with broad and deep expertise.

The impact of critical community on platform behavior will depend on two factors: first, the receptivity of powerful tech companies to outside pressure, and second, sufficient transparency into platform practices to enable timely and informed substantive criticism. Neither of these should be assumed, particularly with respect to harm occurring outside the United States. Two Techdirt Greenhouse pieces (by Aye Min Thant and Michael Karanicolas) and the recent Buzzfeed Facebook expose illustrate the limitations of both transparency and influence to shape international platform practices.

I expect legal developments to help strengthen both of these. Transparency is a key component of the developing frameworks for both the DSA and thoughtful Section 230 reform efforts like the PACT Act. While it may seem like low-hanging fruit, the ability of transparency to support critical community is of great long-term strategic importance. And the legal act of empowering a governmental agency to adopt and enforce rules going forward will, hopefully, help create incentives for companies to take outside input very seriously (the popular metaphor here is the “sword of Damocles”).

We built an effective critical community around privacy long ago. We’ve been building it on cybersecurity for 20+ years. We built it in telecom around net neutrality over the past ~15 years. The pieces of a critical community for content policy are there, and what seems most needed right now to complete the puzzle is regulatory ambition driving greater transparency by platforms, along with sufficient funding for coordinated, constructive, and sustained engagement.

Filed Under: civil society, content policy, critical community, internet regulation, policy, reform

The Silver Lining Of Internet Regulation: A Regulatory Impact Assessment

from the will-this-regulation-harm-the-internet? dept

To design better regulation for the Internet, it is important to understand two things: first, that today’s Internet, despite how much it has evolved, still depends on its original architecture; and, second, that preserving this design is important for drafting regulation that is fit for purpose. On top of this, the Internet invites a certain way of networking — let’s call it the Internet way of networking. There are many types of networking out there, but the Internet way ensures interoperability and global reach and operates on building blocks that are agile, while its decentralized management and general purpose further ensure its resilience and flexibility. Rationalizing this, however, can be daunting because the Internet is multifaceted, which makes its regulation complicated. The entire regulatory process involves the reconciliation of a complex mix of technology and social rules that can be incompatible and, in some cases, irreconcilable. Policy makers, therefore, are frequently required to make tough choices, which sometimes strike the desired balance and other times lead to a series of unintended consequences.

Europe’s General Data Protection Regulation (GDPR) is a good example. The purpose of the regulation was simple: fix privacy by providing a framework that would allow users to understand how their data is being used, while forcing businesses to alter the way they treat the data of their customers. The GDPR was set to create much-needed standards for privacy on the Internet and, despite continuous enforcement and compliance challenges, this has largely been achieved. But, when it comes to the effect it has had on the Internet, the GDPR has posed some challenges. Almost two months after going into effect, it was reported that more than 1,000 websites were affected, becoming unavailable to European users. And, even now, two years later, fragmentation continues to be an issue.
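To make that unavailability concrete: rather than attempt compliance, many US publishers simply began refusing European visitors outright. Below is a minimal sketch of that kind of crude geoblocking, assuming a hypothetical `country_of` helper in place of a real GeoIP service (this is an illustration, not any particular site's actual implementation):

```python
# Minimal sketch of post-GDPR geoblocking as many US news sites deployed it.
# country_of() is a hypothetical stand-in for a real IP-geolocation lookup.

EU_COUNTRIES = {
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR", "DE", "GR",
    "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL", "PL", "PT", "RO", "SK",
    "SI", "ES", "SE",
}

def country_of(ip_address: str) -> str:
    """Stand-in for a real GeoIP lookup (e.g., a MaxMind database)."""
    return "FR"  # pretend every visitor is in France for this demo

def handle_request(ip_address: str) -> tuple[int, str]:
    """Serve the page, unless the visitor appears to be in the EU."""
    if country_of(ip_address) in EU_COUNTRIES:
        # HTTP 451 "Unavailable For Legal Reasons": blocking an entire
        # continent was judged cheaper than GDPR compliance by many sites.
        return 451, "This content is not available in your region."
    return 200, "<html>the article</html>"

print(handle_request("203.0.113.7"))  # -> (451, 'This content is not available...')
```

The bluntness is the point: the site does not parse the GDPR's requirements at all, it just walls off a continent, which is precisely the fragmentation described above.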

So, what is there to do? How can policy makers strike a balance between addressing social harms online and policies that do not harm the Internet?

A starting point is to perform a regulatory impact assessment for the Internet. It is a tested method of policy analysis, intended to assist policy makers in the design, implementation and monitoring of improvements to the regulatory system; it provides the methodology for producing high quality regulation, which can, in turn, allow for sustainable development, market growth and constant innovation. A regulatory impact assessment constitutes a tool that ensures regulation is proportional (appropriate to the size of the problem it seeks to address), targeted (focused and without causing any unintended consequences), predictable (it creates legal certainty), accountable (in terms of actions and outcomes) and transparent (on how decisions are made).

This type of thinking can work to the advantage of the Internet. The Internet is an intricate system of interconnected networks that operates according to certain rules. It consists of a set of fundamental properties that contribute to its flexible and agile character, while ensuring its continuous relevance and constant ability to support emerging technologies; it is self-perpetuating in the sense that it systematically evolves while its foundation remains intact. Understanding and preserving the idiosyncrasy of the Internet should be key in understanding how best to approach regulation.

In general, determining the context, scope and breadth of Internet regulation is important to determine whether regulation is needed and the impact it may have. The first step is asking the questions that policy makers would normally contemplate when seeking to make informed choices. These include: Does the proposed new rule solve the problem and achieve the desired outcome? Does it balance problem reduction with other concerns, such as costs? Does it result in a fair distribution of the costs and benefits across segments of society? Is it legitimate, credible, and trustworthy? But there should be an additional question: Does the regulation create any consequences for the Internet?

Actively seeking answers to these questions is vital because regulation is generally risky, and risks arise from acting as well as from not acting. To appreciate this, imagine if the choices made in the early days of the Internet had dictated a high regulatory regime in the deployment of advanced telecommunications and information technologies. The Internet would, most certainly, not have been able to evolve the way it has and, equally, the quality of regulation would have suffered.

In this context, the scope of regulation is important. The fundamental problem with much of the current Internet regulation is that it seeks to fix social problems by interfering with the underlying technology of the Internet. Across a wide range of policymaking, we know that solely technical fixes rarely fix social problems. Governments should not try to solve societal problems by regulating aspects of the Internet in ways that compromise network interoperability. This is a “category error” or, more elaborately, a misunderstanding of the technical design and boundaries of the Internet. Such a misunderstanding confuses the problem with the place where the problem occurs; it not only fails to tackle the root of the problem but causes damage to the networks we all rely on. Take, for instance, data localization rules, which seek to force data to remain within certain geographical boundaries. Various countries, most recently India, are trying to forcibly localize data, and risk impeding the openness and accessibility of the global Internet. Data will not be able to flow uninterrupted on the basis of network efficiency; rather, special arrangements will need to be put in place in order for that data to stay within the confines of a jurisdiction. The result will be increased barriers to entry, to the detriment of users, businesses and governments seeking to access the Internet. Ultimately, forced data localization makes the Internet less resilient, less global, more costly, and less valuable.

This is where a regulatory risk impact analysis can come in handy. Introducing a risk impact analysis shows policy makers how to make informed choices about which regulatory claims can or cannot possibly be true. This would require a shift in the behavior of policy makers from solely focusing on process to a more performance-oriented and result-based approach.

This sounds more difficult than it actually is. Jurisdictions around the world are accustomed to performing regulatory impact assessments, which have been successfully integrated into many governments’ policy-making processes for more than 35 years. So, why can’t they be part of Internet regulation?

Dr. Konstantinos Komaitis is the Senior Director, Policy Strategy and Development at the Internet Society.

Filed Under: do no harm, impact assessment, internet regulation

A Conversation With EU Parliament Member Marietje Schaake About Digital Platforms And Regulation, Part I

from the the-view-from-the-eu dept

We are cross-posting the following interview conducted by Danish journalist, Cato Institute Senior Fellow, and author of The Tyranny of Silence, Flemming Rose, with European Parliament Member from the Netherlands, Marietje Schaake — who we’ve discussed on the site many times, and who has even contributed here as well. It’s an interesting look at how she views the question of regulating internet platforms. Since this is a relatively long interview, we have broken it up into two parts, with the second part running tomorrow. Update: Part II is now available.

Marietje Schaake is a leading and influential voice in Europe on digital platforms and the digital economy. She is the founder of the European Parliament Intergroup on the Digital Agenda for Europe and has been a member of the European Parliament since 2009 representing the Dutch party D66 that is part of the Alliance of Liberals and Democrats for Europe (ALDE) political group. Schaake is spokesperson for the center/right group in the European Parliament on transatlantic trade and digital trade, and she is Vice-President of the European Parliament’s US Delegation. She has for some time advocated more regulation and accountability of the digital platforms.

Recently, I sat down with Marietje Schaake in a café in the European Parliament in Brussels to talk about what’s on the agenda in Europe when it comes to digital platforms and possible regulation.

FR: Digital platforms like Facebook, Twitter and Google have had a consistent message for European lawmakers: Regulation will stifle innovation. You have said that this is a losing strategy in Brussels. What do you mean by that?

MS: I think it’s safe to say that American big tech companies across the board have pushed back against regulation, and this approach is in line with the quasi-libertarian culture and outlook that we know well from Silicon Valley. It has benefited these companies that they have been free from regulation. They have been free not only from new regulation but also have had explicit exemptions from liability in both European and American law (Section 230 in the US and the Intermediary Liability Exemption in the E-commerce Directive in the EU). At the same time they have benefited from regulations like net neutrality and other safeguards in the law. We have been discussing many new initiatives here in the European Parliament including measures against copyright violations, terrorist content, hate speech, child pornography and other problems. The digital platforms’ reaction to most of these initiatives has been, at best, an offer to regulate themselves. They in effect say, “We as a company will fix it, and please don’t stifle innovation.” This has been the consistent counter-argument against regulation. Another counter-argument has been that if Europe starts regulating digital platforms, then China will do the same.

FR: You don’t buy that argument?

MS: Well, China does what it wants anyway. I think we have made a big mistake in the democratic world. The EU, the US and other liberal democracies have been so slow to create a rule-based system for the internet and for digital platforms. Since World War II, we in the West have developed rules on trade, on human rights, on war and peace, and on the rule of law itself; not because we love rules in and of themselves, but because they have created a framework that protects our way of life. Rules mean fairness and a level playing field with regard to the things I just mentioned. But there has been a push-back against regulation and rules when it comes to digital platforms due to this libertarian spirit and argument about stifling innovation, this “move fast and break things” attitude that we know so well from Silicon Valley.

This is problematic for two reasons. First, we now see a global competition between authoritarian regimes with a closed internet and no rule of law and democracies with an open internet and the rule of law. We have stood by and watched as China, the leading authoritarian regime, has offered the world its model of a sovereign, fragmented internet. This alternative model stifles innovation, and if people are concerned about stifling innovation, they should take much more interest in fostering an internet governance model that beats the Chinese alternative. Second, under the current law of the jungle on the internet, liberal democracy and people’s democratic rights are suffering, because we have no accountability for the algorithms of digital platforms. At this point profit is much more important than the public good.

FR: But you said that emphasizing innovation is a losing strategy here in Brussels.

MS: I feel there is a big turning point happening as we speak. It is not only here in Brussels; even Americans are now advocating regulation.

FR: Why?

MS: They have seen the 2016 election in the US, they have seen conspiracy after conspiracy rising to the top ranks of searches, and it’s just not sustainable.

FR: What kind of regulation are you calling for and what regulation will there be political support for here in Brussels?

MS: I believe that the e-commerce directive with the liability exemptions in the EU and Section 230 with similar exemptions in the US will come under pressure. It will be a huge game changer.

FR: A game changer in what way?

MS: I think there will be forms of liability for content. You can already see more active regulation in the German law and in the agreements between the EU Commission and the companies to take down content (the code of conduct on hate speech and disinformation). These companies cannot credibly say that they are not editing content. They are offering to edit content in order not to be regulated, so they are involved in taking down content. And their business model involves promoting or demoting content, so the whole idea that they would not be able to edit is actually not credible and factually incorrect. So regulation is coming, and I think it will cause an earthquake in the digital economy. You can already see the issues being raised in the public debate about more forceful competition requirements, whether emerging data sets should also be scrutinized in different ways, and net neutrality. We have had an important discussion about the right to privacy and data protection here in Europe. Of course, in Europe we have a right to privacy. The United States does not recognize such a right, but I think they will start to think more about it as a basic principle as well.

FR: Why?

MS: Because of the backlash they have seen.

FR: Do you have scandals like Cambridge Analytica in mind?

MS: Yes, but not only that one. Americans are as concerned about the protection of children as Europeans are, if not more. I think we might see a backlash against smart toys. Think about dolls that listen to your baby, capture its entire learning process, its voice, its first words, and then use that data for AI to activate toys. I am not sure American parents are willing to accept this. The same with facial recognition. It’s a new kind of technology that is becoming more sophisticated. Should it be banned? I have seen proposals to that end coming from California of all places.

FR: Liability may involve a lot of things. What kind of liability is on the political menu of the European Union? Filtering technology or other tools?

MS: Filtering is on the menu, but I would like to see it off the menu because automatic filtering is a real risk to freedom of expression, and it’s not feasible for SMEs (Small and Medium Enterprises), so it only helps the big companies. We need to look at the accountability of algorithms. If we know how they are built, and what could be their flaws or unintended consequences, then we will be able to set deadlines for companies to solve these problems. I think we will look much more at compliance deadlines than just methods. We already have principles in our laws like non-discrimination, fair competition, freedom of expression and access to information. They are not disputed, but some of these platforms are in fact discriminating. It has been documented that Amazon, the biggest tech company and the front-runner in AI, had a gender bias in favor of men in its AI hiring algorithm. I think future efforts will be directed toward the question of designing technology and fostering accountability for its outcomes.
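To illustrate what “accountability of algorithms” might look like in practice, here is a toy audit in the spirit of the hiring-bias example Schaake cites. All numbers are invented, and the four-fifths threshold is a rule of thumb borrowed from US employment law, not an established legal standard for algorithms:

```python
# Toy disparate-impact audit of a hiring model's outcomes. Invented data.

# (group, model_decision) pairs; "hire" means the model advanced the candidate.
decisions = (
    [("m", "hire")] * 60 + [("m", "reject")] * 40 +
    [("f", "hire")] * 30 + [("f", "reject")] * 70
)

def selection_rate(group: str) -> float:
    """Fraction of a group's candidates that the model advanced."""
    hires = sum(1 for g, d in decisions if g == group and d == "hire")
    total = sum(1 for g, _ in decisions if g == group)
    return hires / total

ratio = selection_rate("f") / selection_rate("m")
print(f"female/male selection-rate ratio: {ratio:.2f}")  # 0.50 with this data

if ratio < 0.8:  # the "four-fifths" disparate-impact rule of thumb
    print("Potential disparate impact: audit the model's features and training data.")
```

The value of even a crude check like this is that it turns a vague complaint ("the algorithm is biased") into a number a regulator or auditor can set a compliance deadline against, which is the shift Schaake describes.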

FR: Do you think the governments in the US and Europe are converging on these issues?

MS: Yes. Liberal democracies need to protect themselves. Democracy has been in decline for the 13th year in a row (according to Freedom House). It’s a nightmare, and it’s not something we can take lightly. Democracy is the best system in spite of all its flaws; it guarantees the freedoms of our people. It can also be improved by holding the use of power accountable through checks and balances and other means.

FR: Shouldn’t we be careful not to throw out the baby with the bath water? We are only in the early stages of developing these technologies and businesses. Aren’t you concerned that too much regulation will have unintended consequences?

MS: I don’t think there is a risk of too much regulation. There is a risk of poorly drafted regulation. We can already see some very grave consequences, and I don’t want to wait until there are more. Instead, let’s double down on principles that should apply in the digital world as they do in the physical world. It doesn’t matter if we are talking about a truck company, a gas company or a tech company. I don’t think any technology or AI should be allowed to disrupt fundamental principles and we should begin to address it. I believe such regulation would be in the companies’ interest too because the trust of their customers is at stake. I don’t think regulation is a goal in and by itself, but everything around us is regulated: the battery in your recording device, the coffee we just drank, the light bulbs here, the sprinkler system, the router on the ceiling, the plastic plants behind you so that if a child happens to eat it, it will not kill them as fast as it might without regulation, and the glass in the doors over there, so if it breaks it does so in a less harmful way and so on and so forth. There are all kinds of ideas behind regulation, and regulation is not an injustice to technology. If done well, regulation works as a safeguard of our rights and freedoms. And if it is bad, we have a system to change it.

The status quo is unacceptable. We already have had manipulation of our democracies. We just learned that Facebook paid teenagers $20 to get to their most private information. I think that’s criminal, and there should be accountability for that. We have data breach after data breach, we have conspiracy theories still rising to the top searches at YouTube in spite of all their promises to do better. We have Facebook selling data without consent, we have absolutely incomprehensible terms of use and consent agreements, we have a lack of oversight over who is paying for which messages and how the algorithms are pushing certain things up and other things down. It’s not only about politics. Look at public health issues like anti-vaccination hoaxes. People hear online that vaccinations are dangerous and do not vaccinate their children, leading to new outbreaks of measles. My mother and sister are medical doctors, cancer specialists, and they have patients who have been online and studied what they should do to treat their cancer, and they get suggestions without any medical or scientific proof. People will not get the treatment that could save their lives. This touches upon many more issues than politics and democracy.

FR: So you see here a conflict between Big Tech and democracy and freedom?

MS: Between Big Tech with certain business models and democracy, yes.

FR: Do you see any changes in the attitudes and behaviour of the tech companies?

MS: Yes, it is changing, but it’s too little, too late. I think there is more apologizing, and there is still the terminology, “Oh we still have to learn everything, we are trying.” But the question is, is that good enough?

FR: It’s not good enough for you?

MS: It’s not convincing. You can make billions and billions tweaking your algorithm every day to sell ever more ads, yet you claim that you are unable to determine when conspiracies or anti-vaccination messages rise to the top of your search results. At one point I looked into search results on the Eurozone. I received 8 out of 10 results from one source, an English tabloid with a negative view of the Euro. How come?

FR: Yes, how come, why should that be in the interest of the tech companies?

MS: I don’t think it’s in their interest to change it, but it’s in the interest of democracy. Their goal is to keep you online as long as possible, basically to get you hooked. If you are trying to sell television, you want people to watch a lot of television. I am not surprised by this. It was to be expected. However, it becomes a problem when hundreds of millions of people use only a handful of these platforms for their information. It’s remarkably easy to influence people for commercial or political purposes, whether it’s about anti-vaccination or politics. I understand from experts that the reward mechanism of the algorithm means that sensation sells more, and once you click on the first sensational message it pulls you in a certain direction where it becomes more and more sensational, and one sensation after another is automatically presented to you.

I say to the platforms, you are automatically suggesting more of the same. They say no, no, no, we just changed our algorithm. What does that mean to me? Am I supposed to blindly believe them? Or do I have a way of finding out? At this point I have no way of finding out, and even AI machine learning coders tell me that they don’t know what the algorithms will churn out at the end of the day. One aspect of AI is that the people who code don’t know exactly what’s going to come out. I think it’s all too vague about safeguards, and clear that the impact is already quite significant.

I don’t pretend to know everything about how the systems work. We need to know more because it impacts so many people, and there is no precedent of any service or product that so many people use for such essential activities as accessing information about politics, public health and other things with no oversight. We need oversight to make sure that there are no excesses, that there is fairness, non-discrimination and free expression.
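As an illustrative aside, the engagement feedback loop Schaake describes above is easy to simulate. The toy model below is entirely hypothetical (no real platform is this simple) and merely assumes that click probability rises with a "sensationalism" score; a recommender that optimizes only for clicks then predictably drifts toward the most sensational item:

```python
# Toy simulation of an engagement-only recommender drifting toward
# sensational content. Purely illustrative; the assumption that clicks
# rise with sensationalism is the dynamic under test, not a measured fact.

import random

random.seed(0)

catalog = [s / 10 for s in range(1, 11)]  # items scored 0.1 .. 1.0 for sensationalism

def click_prob(sensationalism: float) -> float:
    return 0.1 + 0.8 * sensationalism  # assumed engagement model

estimates = {item: 1.0 for item in catalog}  # optimistic engagement estimates

for _ in range(5000):
    # Greedily recommend the item with the highest (noisy) estimate...
    item = max(estimates, key=lambda i: estimates[i] + random.gauss(0, 0.05))
    clicked = random.random() < click_prob(item)
    # ...and nudge that item's estimate toward its observed click rate.
    estimates[item] += 0.1 * (clicked - estimates[item])

winner = max(estimates, key=estimates.get)
print(f"recommender settles on sensationalism = {winner}")  # typically the most lurid item
```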

You can read Part II of this interview now.

Filed Under: eu, eu parliament, flemming rose, free speech, internet regulation, marietje schaake, platform liability, privacy
Companies: facebook, google

Pompous 'International Grand Committee' Signs Useless But Equally Pompous 'Declaration On Principles Of Law Governing The Internet'

from the they're-really-going-all-in-on-this dept

So just a few weeks after a bunch of countries (and companies and organizations) signed onto a weird and mostly empty Paris Call for Trust and Security in Cyberspace, a group of nine countries — Argentina, Belgium, Brazil, Canada, France, Ireland, Latvia, Singapore and the UK — have declared themselves the “International Grand Committee on Disinformation and Fake News” and signed onto a declaration of “Principles of the Law Governing the Internet.” If that list of countries sounds familiar, that’s because it’s the same list of countries that put on that grandstanding inquisition of Facebook that produced fake news in its own way, by falsely claiming that Facebook had discovered Russians extracting 3 billion data points via its API back in 2014 (it wasn’t Russia, it was Pinterest; it wasn’t 3 billion, it was 6 million; it wasn’t abuse of the API, but using it correctly).

The Declaration makes some grand pronouncements:

Noting that: the world in which the traditional institutions of democratic government operate is changing at an unprecedented pace; it is an urgent and critical priority for legislatures and governments to ensure that the fundamental rights and safeguards of their citizens are not violated or undermined by the unchecked march of technology; the democratic world order is suffering a crisis of trust from the growth of disinformation, the proliferation of online aggression and hate speech, concerted attacks on our common democratic values of tolerance and respect for the views of others, and the widespread misuse of data belonging to citizens to enable these attempts to sabotage open and democratic processes, including elections.

Affirming that: representative democracy is too important and too hard-won to be left undefended from online harms, in particular aggressive campaigns of disinformation launched from one country against citizens in another, and the co-ordinated activity of fake accounts using data-targeting methods to try to manipulate the information that people see on social media.

Believing that: it is incumbent on us to create a system of global internet governance that can serve to protect the fundamental rights and freedoms of generations to come, based on established codes of conduct for agencies working for nation states, and govern the major international tech platforms which have created the systems that serve online content to billions of users around the world.

Okay. So what does it all mean? Well, here are the details of the “declaration”:

i. The internet is global and law relating to it must derive from globally agreed principles;

ii. The deliberate spreading of disinformation and division is a credible threat to the continuation and growth of democracy and a civilising global dialogue;

iii. Global technology firms must recognise their great power and demonstrate their readiness to accept their great responsibility as holders of influence;

iv. Social Media companies should be held liable if they fail to comply with a judicial, statutory or regulatory order to remove harmful and misleading content from their platforms, and should be regulated to ensure they comply with this requirement;

v. Technology companies must demonstrate their accountability to users by making themselves fully answerable to national legislatures and other organs of representative democracy.

Of course, in the context of the committee that created this Declaration having now been revealed to have created “fake news” itself, this kind of comes off pretty… weak. But also, the whole thing is kind of meaningless. The companies do recognize their “power” and have been trying to deal with this issue. Yes, perhaps they didn’t grasp the severity of the issue in the past, but they certainly have more recently. But simple declarations and pronouncements don’t really do anything useful in “solving” those issues. That’s because much of it is a human nature issue, and expecting tech companies to “take responsibility” for human nature is… well… nonsense.

Filed Under: argentina, belgium, brazil, canada, content moderation, disinformation, fake news, france, internet regulation, ireland, latvia, paris call, singapore, uk
Companies: facebook

The US Refusing To Sign 'The Paris Call' Is Not As Big A Deal As Everyone Is Making It Out To Be

from the this-is-a-pointless-document dept

On Monday, a bunch of countries and companies officially announced and signed “The Paris Call,” or more officially, “the Paris Call for Trust and Security in Cyberspace.” It’s getting a fair bit of press coverage, with a lot of that coverage playing up the decision of the US not to sign the agreement, even as all of the EU countries and most of the major tech companies, including Google, Facebook, Microsoft, Cisco and many many more signed on.

But, most of those news stories don’t actually explain what’s in the agreement, beyond vague hand-waving around “creating international norms” concerning “cyberspace.” And the reports have been all over the place. Some talk about preventing election hacking while others talk about fighting both “online censorship and hate speech.” Of course, that’s fascinating, because most of the ways that countries (especially in the EU) have gone about fighting “hate speech” is through outright censorship. So I’m not quite sure how they propose to fight both of those at the same time…

Indeed, if the Paris Call really did require such silly contradictory things it would be good not to sign it. But, the reality is that it’s good not to sign it because it appears to be a mostly meaningless document of fluff. You can read the whole thing here, where it seems to just include a bunch of silly platitudes that most people already agree with and mean next to nothing. For example:

We reaffirm our support to an open, secure, stable, accessible and peaceful cyberspace, which has become an integral component of life in all its social, economic, cultural and political aspects.

We also reaffirm that international law, including the United Nations Charter in its entirety, international humanitarian law and customary international law is applicable to the use of information and communication technologies (ICT) by States.

I mean, great. But so what? The “measures” the agreement seeks to implement are almost equally meaningless. Here’s the entire list:

* Prevent and recover from malicious cyber activities that threaten or cause significant, indiscriminate or systemic harm to individuals and critical infrastructure;
* Prevent activity that intentionally and substantially damages the general availability or integrity of the public core of the Internet;
* Strengthen our capacity to prevent malign interference by foreign actors aimed at undermining electoral processes through malicious cyber activities;
* Prevent ICT-enabled theft of intellectual property, including trade secrets or other confidential business information, with the intent of providing competitive advantages to companies or commercial sector;
* Develop ways to prevent the proliferation of malicious ICT tools and practices intended to cause harm;
* Strengthen the security of digital processes, products and services, throughout their lifecycle and supply chain;
* Support efforts to strengthen an advanced cyber hygiene for all actors;
* Take steps to prevent non-State actors, including the private sector, from hacking-back, for their own purposes or those of other non-State actors;
* Promote the widespread acceptance and implementation of international norms of responsible behavior as well as confidence-building measures in cyberspace.

I mean, sure? Some of that is meaningless. Some of that is silly. Some of it is obvious. But none of it actually matters because it’s not binding. Could this lead to something that matters? Perhaps. But it seems silly to condemn the US for failing to sign onto a meaningless document of platitudes and meaningless fluff, rather than anything substantial. There’s no problem with those who did choose to sign on, but it’s hard to see how this is a meaningful document, rather than just an agreement among signatories to make them all feel like they’ve done something.

Filed Under: cybersecurity, eu, france, internet, internet regulation, paris call, security, us

ITU Boss Explains Why He Wants The UN To Start Regulating The Internet

from the not-good dept

We’ve written a few times about why we should be worried about the ITU (a part of the UN) and its attempts to regulate the internet, to which some have responded by arguing that the ITU/UN doesn’t really want to regulate the internet. However, the Secretary-General of the ITU, Hamadoun Toure, has now taken to the pages of Wired to explicitly state why he believes the UN needs to regulate the internet. And it appears that many of the initial fears are 100% accurate. We’ve already covered how the ITU seems to be hiding all sorts of awful scary things by claiming they all fall under the “cybersecurity” banner, and we’ve noted that the ITU’s mandate over cybersecurity is imaginary and its history with the subject is sketchy, at best. However, in the op-ed, Toure doubles down on why the UN should be there helping countries censor things like “porn and propaganda” on the internet as a part of its “cybersecurity” efforts.

Governments are looking for more effective frameworks to combat fraud and other crimes. Some commentators have suggested such frameworks could also legitimize censorship. However, Member States already have the right, as stated in Article 34 of the Constitution of ITU, to block any private telecommunications that appear “dangerous to the security of the State or contrary to its laws, to public order or to decency.” The treaty regulations cannot override the Constitution.

Many authorities around the world already intervene in communications for various reasons – such as preventing the circulation of pornography or extremist propaganda. So a balance must be found between protecting people’s privacy and their right to communicate; and between protecting individuals, institutions, and whole economies from criminal activities.

First, it should be made clear that Toure is being somewhat disingenuous here. The ITU’s mandate concerning such communications was written for a different time, when telecommunications meant limited communications systems — initially the telegraph (yes, that’s how far this goes back) and then the telephone. Toure claims that the ITU is “charged with coordinating global information and communication technology (ICT) resources,” but that’s only in his own mind. The “Constitution” he so proudly points to refers only to telecommunications — which in this context has a very, very different meaning than broader “information and communications technology.” The ITU’s charter is for telecommunications only. That is, old telephone networks (and telegraphs before that). In such cases, there was a need for a group like the ITU to help deal with standardization and interconnection among large companies. But, with the internet, its role is basically obsolete. There are other basic standards bodies — ones that are more open and have a better understanding of the internet. But Toure is focused on helping out authoritarian states like Russia and China that want to claim that “pornography or extremist propaganda” should be censored.

This is a serious problem for those who support an open and free internet that provides greater ability for free expression to occur. If people are doing things that violate local laws, go after them legally and prosecute them under those laws. To put it on telcos — often ones with close ties to state governments — to block and censor, all in the name of “cybersecurity,” is opening up a huge can of worms. There is no need for the ITU to get involved in this situation at all.

Then there’s the second big problem, and what this story is actually about. As we’ve noted in the past, the large, slow, lumbering legacy telcos (many of them either state owned or formerly state owned) haven’t innovated at all. They see big internet companies building fantastic services that consumers want, and getting rich doing so. In response, they get jealous and insist they deserve some of that money. That’s what this plan is really about: the ITU helping its “member” telcos divert money from the successful services out there to the big lumbering telcos that failed to innovate. Toure more or less admits as much in his op-ed, framing it as a more “fair” distribution of revenue:

An important and influential factor is network financing, so the conference may consider strategies around sharing revenues more fairly, stimulating investment, mainstreaming green ICTs, and expanding access as widely as possible to meet booming demand.

And that’s what this comes down to: diverting revenue from the companies that earned it in the market to the telcos that did nothing, often getting fat and lazy on the back of government subsidies, and that are now jealous. But since those telcos make up the core of the ITU and give it its purpose, suddenly it’s all about “sharing revenues more fairly.”

Thankfully, it appears that most of the commenters on the Wired piece see through this and are calling Toure out on it.

Filed Under: cybersecurity, free speech, hamadoun toure, internet regulation, itu, pornography, propaganda, un

The Hypocrisy Of Congress: As Big A Threat To The Internet As The UN They're Condemning

from the we-don't-regulate-the-internet,-except-when-we-do dept

While it’s great to see Congress continue to speak out against the UN’s dangerous efforts to tax and track the internet to help out governments and local telco monopolies, it’s pretty ridiculous for Congress to pretend it’s declaring “hands off the internet” when its own hands are all over the internet these days. As Jerry Brito and Adam Thierer write over at The Atlantic, if Congress is really serious about supporting a free and open internet, it should look in the mirror first:

The fear that the ITU might be looking to exert greater control over cyberspace at the conference has led to a rare Kumbaya moment in U.S. tech politics. Everyone — left, right, and center — is rallying around the flag in opposition to potential UN regulation of the Internet. At a recent congressional hearing, one lawmaker after another lined up and took a turn engaging in the UN-bashing. From the tone of the hearing, and the language of the House resolution, we are being asked to believe that “the position of the United States Government has been and is to advocate for the flow of information free from government control.”

If only it were true. The reality is that Congress increasingly has its paws all over the Internet. Lawmakers and regulators are busier than ever trying to expand the horizons of cyber-control across the board: copyright mandates, cybersecurity rules, privacy regulations, speech controls, and much more.

Earlier this year, Congress tried to meddle with the Internet’s addressing system in order to blacklist sites that allegedly infringe copyrights — a practice not unlike that employed by the Chinese to censor political speech. The Stop Online Piracy Act (SOPA) may have targeted pirates, but its collateral damage would have been the very “stable and secure” Internet Congress now wants “free from government control.” A wave of furious protests online forced Congress to abandon the issue, at least for the moment.

The piece goes on to discuss other proposals to regulate parts of the internet, including CISPA and other online security laws. Of course, in each of these cases, the politicians in Congress will offer a litany of reasons why it “makes sense” (or, more accurately, why “we have to do something!”) to pass these laws. But that presupposes that the countries Congress is now condemning, for wanting more ability to spy on and control their citizens, don’t have their own “reasons” as well. Given the increasing evidence that the US government, via the NSA, is already spying on wide swaths of the population, and Congress’ apparent total lack of concern about this, it’s incredibly hypocritical to pretend that the US government supports a free and open internet with privacy protections for citizens when its own actions reveal something very, very different.
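For readers who wonder what “meddling with the Internet’s addressing system,” as the excerpt above puts it, would have meant in practice, here is a minimal, purely illustrative sketch of the blocklist-based name resolution SOPA contemplated. The blocklisted domain below is a hypothetical placeholder, and an actual deployment would sit inside ISPs’ recursive resolvers rather than in application code like this:

# A minimal, purely illustrative sketch of DNS-based site blocking of
# the kind SOPA contemplated: a filtering resolver consults a blocklist
# and pretends blocked domains don't exist. The blocklist entry is
# hypothetical; real deployments would live inside recursive resolvers.

import socket

BLOCKLIST = {"example-infringing-site.com"}  # hypothetical entry

def resolve(hostname: str) -> str:
    """Resolve a hostname, refusing blocklisted domains the way a
    filtering resolver would (answering as if the name doesn't exist)."""
    if hostname.lower().rstrip(".") in BLOCKLIST:
        # A real filtering resolver would answer NXDOMAIN here, making
        # the site appear not to exist at all.
        raise socket.gaierror(f"{hostname}: blocked by resolver policy")
    return socket.gethostbyname(hostname)

if __name__ == "__main__":
    print(resolve("example.com"))  # resolves normally
    try:
        resolve("example-infringing-site.com")
    except socket.gaierror as err:
        print(err)  # blocked, as though the domain never existed

The point of the sketch is that users asking for a blocked name are simply told it doesn’t exist, which is exactly why, at the network level, this is indistinguishable from the political censorship the quote compares it to.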

Filed Under: cispa, congress, internet regulation, itu, sopa, united nations

Be Afraid: Russia And China Seek To Put In Place Top-Down Regulation Of The Internet

from the pay-attention dept

For all the talk of SOPA/PIPA/ACTA/TPP, there’s another, much bigger threat to “the internet as we know it”: a bunch of countries seeking to use the UN’s International Telecommunication Union (ITU) to create a top-down regulatory scheme for the internet. This process began a few months back, but FCC Commissioner Robert McDowell has a pretty good summary of the situation in the WSJ, and of why those who believe in internet freedom should be afraid. It is worth noting, of course, that bodies like ICANN and the IETF are far from perfect today, but handing many of their functions over to the ITU in pursuit of a broad top-down regulatory plan for the internet is not the solution. McDowell highlights a few of the key points in the plan (a quick, illustrative sketch of the proposed settlement economics follows the list):

* Subject cyber security and data privacy to international control;
* Allow foreign phone companies to charge fees for “international” Internet traffic, perhaps even on a “per-click” basis for certain Web destinations, with the goal of generating revenue for state-owned phone companies and government treasuries;
* Impose unprecedented economic regulations such as mandates for rates, terms and conditions for currently unregulated traffic-swapping agreements known as “peering”;
* Establish for the first time ITU dominion over important functions of multi-stakeholder Internet governance entities such as the Internet Corporation for Assigned Names and Numbers, the nonprofit entity that coordinates the .com and .org Web addresses of the world;
* Subsume under intergovernmental control many functions of the Internet Engineering Task Force, the Internet Society and other multi-stakeholder groups that establish the engineering and technical standards that allow the Internet to work;
* Regulate international mobile roaming rates and practices.
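To see what the “per-click” and sender-pays ideas in that list would mean in practice, here is a minimal, purely illustrative sketch of the settlement arithmetic. Every carrier name, rate, and traffic figure below is a hypothetical assumption; the actual proposals specify no such numbers:

# Purely illustrative sketch of a "sender-pays" settlement of the kind
# the ITU proposals contemplate. All rates and volumes are hypothetical.

# Hypothetical per-gigabyte termination fees demanded by three
# state-owned carriers (USD per GB).
TERMINATION_FEES = {
    "carrier_a": 0.05,
    "carrier_b": 0.12,
    "carrier_c": 0.08,
}

# Hypothetical monthly traffic (GB) a content provider sends into
# each carrier's network.
MONTHLY_TRAFFIC_GB = {
    "carrier_a": 40_000,
    "carrier_b": 15_000,
    "carrier_c": 25_000,
}

def settlement_invoice(fees: dict[str, float], traffic: dict[str, int]) -> dict[str, float]:
    """Compute what each carrier would bill the content provider:
    fee per GB times GB delivered, inverting today's settlement-free
    peering arrangements."""
    return {carrier: fees[carrier] * traffic[carrier] for carrier in fees}

if __name__ == "__main__":
    invoice = settlement_invoice(TERMINATION_FEES, MONTHLY_TRAFFIC_GB)
    for carrier, amount in invoice.items():
        print(f"{carrier}: ${amount:,.2f}")
    print(f"total owed to carriers: ${sum(invoice.values()):,.2f}")

The mechanics are the point: the bill scales with traffic delivered, so the most successful services would owe the most to the networks that innovated least.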

Again, this attempt to give the UN and certain governments unprecedented control over parts of the internet is not new. It’s actually been in the works for a few years, but it’s expected to heat up in the next few months, and most in the US don’t even seem to know it’s about to happen. While some of the proposals raise issues worth discussing, it’s been pretty transparent from the start that much of the plan is to give certain governments far more control over how the internet is used… and not in a good way. The internet thrives today in large part because it’s not controlled by governments, no matter how much they’ve slowly tried to encroach (and the US is particularly guilty of that lately).

The fact that this effort is mainly being led by Russia and China should give you a sense of the intentions here. Neither country is particularly well-known for supporting the principles of open communications or freedom of speech.

Unfortunately, as McDowell notes, the US doesn’t seem to be taking the issue particularly seriously, and hasn’t even assigned a negotiator to handle the discussions (though I’m afraid to find out who they eventually do assign to that role). McDowell also points out that simply saying “no” to any changes probably won’t go over well with many countries, and all Russia and China need to get this approved is for half of the member countries to side with them. Since doing nothing is often seen as ceding the internet to the US, that could be a problem. Of course, that doesn’t mean caving in. It means engaging, and getting enough people aware of these issues that they can make a reasonable case for why a top-down management system would have massive unintended (or, um, intended) consequences that the world doesn’t want:

As part of this conversation, we should underscore the tremendous benefits that the Internet has yielded for the developing world through the multi-stakeholder model.

Upending this model with a new regulatory treaty is likely to partition the Internet as some countries would inevitably choose to opt out. A balkanized Internet would be devastating to global free trade and national sovereignty. It would impair Internet growth most severely in the developing world, but also globally as technologists are forced to seek bureaucratic permission to innovate and invest. This would also undermine the proliferation of new cross-border technologies, such as cloud computing.

A top-down, centralized, international regulatory overlay is antithetical to the architecture of the Net, which is a global network of networks without borders. No government, let alone an intergovernmental body, can make engineering and economic decisions in lightning-fast Internet time. Productivity, rising living standards and the spread of freedom everywhere, but especially in the developing world, would grind to a halt as engineering and business decisions become politically paralyzed within a global regulatory body.

Any attempts to expand intergovernmental powers over the Internet—no matter how incremental or seemingly innocuous—should be turned back. Modernization and reform can be constructive, but not if the end result is a new global bureaucracy that departs from the multi-stakeholder model. Enlightened nations should draw a line in the sand against new regulations while welcoming reform that could include a nonregulatory role for the ITU.

This issue is going to pick up steam pretty quickly in the next few months, so educate yourselves now…

Filed Under: china, internet regulation, itu, robert mcdowell, russia, un