SB 1047: California’s Recipe For AI Stagnation
from the not-how-this-should-work dept
As California edges closer to enacting SB 1047, the state risks throwing the entire AI industry into turmoil. The bill has already cleared the legislative process and now sits on Governor Newsom’s desk, leaving him with a critical decision: veto this ill-conceived policy or sign away the U.S.’ future in AI. While Newsom appears skeptical of SB 1047, he still has not made clear whether he’ll actually veto the bill.
SB 1047 Summary
SB 1047 establishes an overly rigid regulatory framework that arbitrarily divides artificial intelligence systems into two categories: “covered models” and “derivative models.” Both are subject to extensive requirements, though at different stages of development. Developers of covered models face strict pre-training and pre-release requirements, while those developing derivative models are burdened with the responsibility of ensuring the model’s long-term safety, anticipating future hazards, and mitigating potential downstream abuses.
The bill also imposes a reasonableness standard on developers to demonstrate that they have exercised “reasonable care” in preventing their models from causing critical risks. This includes implementing and adhering to extensive safety protocols before and after development. In practice, the standard merely introduces legal ambiguity. The vague nature of what constitutes “reasonable care” opens the door to costly litigation, with developers potentially stuck in endless legal battles over whether they’ve done enough to comply not only with the ever-evolving standards of care and best practices for AI development, but also with their own extensive state-mandated safety protocols.
It’s no surprise that industry experts have raised serious concerns about SB 1047’s potential to stifle innovation, limit free expression through restrictions on coding, and undermine the future of U.S. AI development.
SB 1047 Will Cede U.S. AI Lead to Foreign Competitors
Under the bill, a covered model is defined as any advanced AI system that meets certain thresholds of computing power and cost. Models trained before January 1, 2027, are classified as covered if they use more than 10^26 integer or floating-point operations and cost more than $100 million to develop.
But these thresholds are inherently flawed. Even cutting-edge AI systems like GPT-4, which are among the most advanced in the world, were trained using significantly less computing power than the bill’s benchmark. For example, estimates suggest that GPT-3 required around 10^23 operations—far below the bill’s threshold. This highlights a key problem: the bill’s requirements for covered models primarily target large, resource-intensive AI labs today, but as AI technologies and hardware improve, even smaller developers could find themselves ensnared by these requirements.
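For a rough sense of that gap, here is a minimal back-of-the-envelope sketch (an illustration only, not anything defined in the bill) using the common heuristic that training compute is roughly 6 floating-point operations per parameter per training token, with widely cited ballpark figures for GPT-3:

```python
# Back-of-the-envelope training-compute check against SB 1047's threshold.
# The 6 * parameters * tokens rule of thumb is a rough approximation,
# not a formula that appears anywhere in the bill.

COVERED_MODEL_THRESHOLD_FLOPS = 1e26  # SB 1047's compute threshold

def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute: ~6 FLOPs per parameter per token."""
    return 6 * parameters * tokens

# Widely reported ballpark figures for GPT-3: ~175B parameters, ~300B training tokens.
gpt3_flops = estimated_training_flops(parameters=175e9, tokens=300e9)

print(f"Estimated GPT-3 training compute: {gpt3_flops:.1e} FLOPs")
print(f"Exceeds covered-model threshold: {gpt3_flops > COVERED_MODEL_THRESHOLD_FLOPS}")
# Roughly 3e23 FLOPs, about three orders of magnitude below the 1e26 line.
```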
However, there’s a deeper irony here: scaling laws in AI suggest larger AI models generally perform better. The more computational power used to train a model, the better it tends to handle complex tasks, reduce errors like hallucinations, and generate more reliable results. In fact, larger AI models could actually reduce societal harms, making AI systems safer and more accurate over time—a result for which the California Legislature is supposedly striving.
This is why larger AI firms, like OpenAI and Google, are pushing for more computationally intensive models. While it may seem that the covered model requirements exclude startup companies for now, current advancements in hardware—such as specialized AI chips and quantum computing—suggest that even smaller commercial AI developers could surpass this threshold within the next 5-10 years (think Moore’s Law). In other words, as time goes on, we can expect more market entrants to fall under the bill’s regulatory framework sooner than expected.
What’s worse, the threshold component seems to discourage companies from pushing the limits of AI. Instead of harnessing high-computing power to build truly transformative systems, businesses might deliberately scale down their models just to avoid falling under the bill’s scope. This short-sighted approach won’t just slow down AI innovation; it could stifle progress in computing power as a whole. If companies are reducing their need for cutting-edge processors and hardware, the broader tech ecosystem—everything from next-gen chips to data centers—will stagnate. The very innovation we need to lead the world in technology could grind to a halt, all because we’ve made it too risky for AI labs to aim big.
Pre-Training Requirements & Commercial Use Restrictions for Covered Models
Before training (i.e. developing) a covered model, developers must first decide whether they can make a “positive safety determination” about the model. Developers must also implement a detailed “safety and security protocol,” including cybersecurity protections, testing procedures to assess potential harms, and the ability to enact a full shutdown if needed. Developers are prohibited from releasing their models for any purpose beyond training unless they can certify that the models pose no unreasonable risks of harm, either now or in the future.
The bill’s vague language around “hazardous capabilities” opens a Pandora’s box of potential issues. While it primarily aims to address catastrophic risks like cyberattacks or mass casualties, it includes a broad catch-all provision for other risks to public safety or infrastructure. Given the many “black-box” aspects of AI model development, developers will struggle to confidently rule out any unforeseen hazards, especially those arising from third-party developed derivatives. The reality is that most developers will find themselves constantly worried about potential legal and regulatory risks, chilling progress at a time when the global AI race is in full throttle.
SB 1047’s Reporting Requirements Will Bring AI Innovation to A Grinding Halt
Developers must also maintain and regularly update their safety and security protocols for both covered models and derivative models. Several additional requirements follow:
- Model developers must conduct an annual review of their safety and security protocols to ensure that protocols are kept current with evolving risks and industry standards. This includes any rules adopted per the bill’s requirements after January 1, 2027. Developers must also update their protocols based on these reviews.
- Beginning in 2026, developers must hire a third-party auditor to independently verify compliance with the safety protocols. The auditor’s report must include an assessment of the steps taken by the developer to meet SB 1047’s requirements (and any additional guidelines post-enactment) and identify any areas of non-compliance. Developers are required to address any findings by updating their protocols to resolve issues identified during these audits.
- Model developers must retain an unredacted copy of the safety and security protocols for as long as the covered model is in commercial or public use, plus five years. They are also required to provide the Attorney General with an updated copy of the safety protocol upon request.
- A conspicuously redacted copy of the safety and security protocols must be made publicly available.
In practice, the process of releasing new or updated models will be bogged down with arbitrary bureaucratic delays. This will demand significant resource allocation well before companies can even gauge the success of their products.
Not only that, the mandatory assessments will effectively derail essential safety practices, especially when it comes to red teaming—where teams simulate attacks to uncover vulnerabilities. The reality is that red teaming works best behind closed doors and with minimal friction, empowering developers to quickly (and honestly) address security issues. Yet, with the added layers of auditing and mandatory updates, developers may avoid these rigorous safety checks, fearing that each vulnerability discovered could generate legal liability and trigger more scrutiny and further delays.
In the same vein, the mandatory reporting component adds a layer of government scrutiny that will discourage timely security updates and continued transparency about discovered vulnerabilities. Knowing that every security flaw might be scrutinized by regulators, developers may hesitate to disclose issues or rapidly iterate on their models for fear of legal or regulatory backlash. Worse, developers may simply try their hardest to not “know” about the vulnerabilities. Instead of fostering collaboration, the mandatory reporting requirement pits developers against the California AG.
As Eric Goldman observed, government-imposed reporting inherently chills expression, where companies become more conservative (i.e. even less transparent) to avoid regulatory scrutiny. The same applies to SB 1047.
SB 1047 Will Insulate Established AI Companies at the Expense of Startups
In contrast to covered models, derivative models—those fine-tuned or modified from existing covered models—are subject to safety assessments post-modification. Fine-tuning, a routine process where a model is adapted using new data, empowers AI to perform better on targeted tasks without requiring full retraining. But SB 1047 places undue burdens on developers of derivative models, forcing them to conduct safety assessments every time they make updates.
The lifeblood of AI innovation is this iterative, adaptive process. Yet, SB 1047 effectively punishes it, creating significant hurdles for developers looking to refine and improve their models. This not only flies in the face of software engineering principles—where constant iteration is key—but also discourages innovation in AI, where flexibility is essential to keeping pace with technological progress.
Worse, SB 1047 shifts liability for derivative models to the original developers. This means companies like Google or OpenAI could be held liable for risks introduced by third-party developers who modify or fine-tune their models. This liability doesn’t just extend to the original version of the model but also to all subsequent changes, imposing a continuous duty of oversight. Such a framework not only contradicts longstanding legal principles governing third-party liability for online platforms but also makes the AI marketplace unworkable for startups and independent developers.
Derivative Models Fuel the Current AI Marketplace
Derivative models are integral to the AI ecosystem. For example, Google’s BERT model—a covered model under SB 1047—has been fine-tuned by countless companies for specialized tasks like sentiment analysis and question answering. Similarly, OpenAI’s GPT-3 has been adapted for chatbots, writing tools, and automated customer service applications. OpenAI even operates a marketplace for third-party developers to customize GPT models for specific needs, similar to an app store for AI. While these derivative models serve legitimate purposes, there’s a real risk that third-party modifications could lead to abuse, potentially resulting in harmful or malicious applications anticipated by the bill.
Drawing on lessons learned from online platform regulation, SB 1047’s framework risks making the AI marketplace inaccessible to independent developers and startups. Companies like Google, Meta, and OpenAI, which develop powerful covered models, may become hesitant to allow any modifications, effectively dismantling a growing ecosystem that thrives on the ability to adapt and refine existing AI technologies. For venture capitalists, the message is clear: open models come with significant legal risk, turning them into liability-laden investments. The repercussions of this would be profound. Just as a diverse media landscape is crucial for maintaining a well-rounded flow of information, a variety of AI models is essential to ensuring the continued benefit of different methodologies, data sets, and fine-tuning strategies. Limiting innovation in this space would stifle the dynamic evolution of AI, reducing its potential to meet varied societal needs.
Ironically, for a state that has been increasingly hellbent on destroying “big tech,” California’s approach to AI will (once again) ensure that only the largest, most well-funded AI companies—those capable of developing their own powerful covered models—will not only dominate, but single-handedly shape the future of AI, while smaller applications that currently build on and refine models from the larger players evaporate.
SB 1047 Will Drag California Into More Costly Litigation Over Ill-Conceived Tech Regulations
California is already mired in legal battles over poorly crafted tech regulations. Now, with SB 1047, the state risks plunging into yet another costly, uphill legal fight. The bill’s restrictions on the development and release of AI models could infringe on the constitutional right to code, which courts have recognized as a form of protected expression. For instance, in Bernstein v. U.S. Department of State, export controls on encryption code were deemed to violate the First Amendment, affirming that code is a form of speech. More broadly, courts have consistently upheld the rights of developers to code, underscoring that limitations on innovation through code can encroach on constitutional protections.
This debate is strikingly similar to the legal battles over social media regulation. Just as social media platforms are fundamentally speech products, entitled to editorial discretion under the First Amendment, so too are today’s Generative AI services. Many of these AI systems center around the processing and production of expression, making them direct facilitators of speech. As with the algorithms that curate social media content, regulations targeting these models will inevitably raise serious First Amendment concerns, challenging the constitutionality of such measures.
SB 1047, far from being a model for “responsible” AI innovation, risks debilitating the U.S.’s leadership in AI, reinforcing the dominance of existing tech firms, and punishing developers for improving and iterating upon their models. Governor Newsom has a choice: veto this bill and support the growth of AI innovation, or sign it and watch California lead the charge in destroying the very industry it claims to protect.
Jess Miers is currently Visiting Assistant Professor of Law, University of Akron School of Law.
Filed Under: ai, california, competition, derivative models, foundational models, gavin newsom, generative ai, hazardous capabilities, reasonable care, regulations, sb 1047
ChatGPT Seems To Recognize That Internet Regulations Really Regulate Speech, No Matter What Politicians Say
from the when-gpt-speaks-the-truth dept
Over the last few years, we’ve seen a bunch of politicians trying to frame their regulation of the internet as not being about regulating speech, but about “design” or “process” or some such. But when you scratch beneath the surface, they’re always really about regulating speech. Whether it’s KOSA or California’s Age Appropriate Design Code (AADC) in the US, or the DSA in the EU, there is plenty of ink spilt to defend the claim that they’re not really about censorship.
Just recently we wrote about the Ninth Circuit seeing through California’s AADC. The politicians behind the law insisted it wasn’t about regulating content, only conduct. But the court recognized that was obviously not true. Then, over in the EU, we have the DSA, which European officials insist is never supposed to be used for moderating content, but where the guy in charge of enforcing it seems to think that of course he should be using it for that.
Daphne Keller, over at Stanford, recently came across a custom ChatGPT instance, designed to act as a “trust & safety regulation expert.” The custom tool was created by Inbal Goldberger, a long-time trust & safety executive. Whether or not the tool is any good is not the point. What’s really fascinating is that when Daphne used the tool to explore how websites should best comply with the various regulatory regimes they’re facing, over and over again, it suggested removing speech.
For platforms planning their compliance with laws like KOSA and the DSA, the most basic question is “what do I need to do?” ChatGPT has a lot of answers. One of the main answers is that platforms need to take down more lawful speech.
You can read the full transcripts that Daphne generated exploring both EU and US law and how companies should respond. They’re quite interesting, and I’m actually somewhat impressed by the ChatGPT responses.
For example, she asks the custom GPT how to comply with KOSA, and it immediately tells her that “the following categories of content must be prevented from being accessed by children,” and it includes plenty of First Amendment-protected speech including “violent content” and “cyberbullying.” Whether or not it’s good policy (or even possible) to block such content, just the fact that this is the first instinct of this custom GPT says something.
And yes, some people will dismiss this by saying that you can’t trust the GPT in the first place. But if it’s reading these laws and finding that’s the best way to comply, it’s pretty clearly saying something about them. Back in July, we had that guest post by Matthew Lane, rightly pointing out that companies are lazy and they will seek to take the path of least resistance for compliance. And that least resistance will often be just pulling down any content that might, conceivably, be deemed problematic under the law, just as the custom GPT recommends.
KOSA wasn’t the only law that the GPT interpreted this way:
ChatGPT gave very similar answers about California’s Age Appropriate Design Code (AADC), telling me that as a platform operator “you need to ensure that your platform protects children from encountering” a list of “key types of content” including depictions of violence, substance abuse, and misinformation. Coincidentally, the same day that ChatGPT said this, the Ninth Circuit Court of Appeals said pretty much the same thing, ruling that the law’s clearly content-related provisions violated the First Amendment.
The answers I got about European law were to my mind equally blunt, though they may require a little more translation for readers not steeped in EU policy debates. I asked a question that deliberately reused language from a recent Commission letter demanding that X mitigate risks arising from the interview it hosted with former President Trump. (That letter, from Commissioner Thierry Breton, prompted outcry from civil society groups and rare public censure from other Commission authorities.) The question was, “What must I do to mitigate risks of detrimental effects on civic discourse and public security under the DSA?”
ChatGPT’s answer went awry in a way that really matters for small and mid-sized platforms: it described obligations that won’t apply unless a platform has over 45 million users in the EU, without mentioning that these simply don’t affect everyone else.
Importantly for the rights of Internet users, ChatGPT’s advice also crossed a number of important red lines in EU law that exist to protect freedom of expression and information. First, it instructed me to act not only against illegal content but also “harmful content,” through changes such as “adjusting your content moderation policies.” Using the word “harmful” is a big deal. Throughout the DSA legislative process that term was used to refer to lawful but awful expression, or else to a mixed category that includes both legal and illegal material. For example, the Commission’s explanatory memorandum for the DSA said
There is a general agreement among stakeholders that ‘harmful’ (yet not, or at least not necessarily, illegal) content… should not be subject to removal obligations, as this is a delicate area with severe implications for the protection of freedom of expression.
ChatGPT’s advice to remove disinformation has a version of the same problem since in the EU, as in the US, not all disinformation is illegal.
Also, Daphne notes that if this is just AI “hallucinations,” it’s still notable that it always hallucinated in the same way (“censor more legal content”).
Several people suggested that the repeated and consistent answers I got were just AI hallucinations. It would be pretty odd for ChatGPT to happen to hallucinate the same interpretation of the DSA that Commissioner Breton did, or the same interpretation of the AADC that a federal court of appeals did, but… I guess it’s possible
But, as she notes, the mere plausibility of these results from the custom GPT also suggests that humans will find these recommendations plausible as well:
The real reason ChatGPT’s answers should ring alarm bells is that they were so plausible. These are the answers that humans may well arrive at—whether or not they ask an AI what to do—in parsing confusing legal obligations. They are answers we can find in sloppy secondary materials purporting to explain the laws, and in news coverage that uncritically repeat politicians’ exaggerated claims about their legal authority. ChatGPT’s answers may not correctly answer questions about the law. But its “take it all down” directives may be the ones that both humans and machines get trained to apply all the same.
These points are really key. When we talk about these laws, people keep pointing to little caveats and phrases in the law to say “but it’s not supposed to be interpreted that way.” But they are interpreted that way over and over again. This is why the Ninth Circuit was right to call out California’s AADC as inherently censorial in how it works. Because that’s exactly how most companies would interpret it.
And this impacts other things as well. One point that we’ve raised repeatedly in our articles about “age verification” is that when politicians try to hide the ball and call for “age estimation” or “age assurance” rather than outright age verification, it doesn’t matter. It will still lead to age verification as the tool that companies will feel most drawn to simply to avoid any risk of liability.
That also appears to be what the Custom GPT felt about some of these laws. The transcript Daphne posted regarding KOSA gives a really good example of this. Supporters of KOSA insist that it doesn’t require age verification, and technically it doesn’t. Not directly. But, as the Custom GPT explains, if you want to avoid a legal fight, you probably should use age verification anyway. So while it might not mandate it, it certainly is going to create pressure for companies to adopt it anyway, just to avoid legal liability.
PROMPT: Do I need to use age assurance or verification tools to comply with KOSA?
RESPONSE:
To comply with the Kids Online Safety Act (KOSA), you are not explicitly required to implement age assurance or age verification tools. However, it is strongly recommended to ensure that age-appropriate content is provided to users, particularly to prevent children from accessing harmful content.
The KOSA includes provisions for the Federal Trade Commission (FTC) and other bodies to conduct studies and provide guidance on effective methods for age verification. These may include systems at the device or operating system level, but KOSA does not mandate affirmative collection of personal data solely for age verification purposes.
If your platform already collects certain user data, you can utilize that for age-related purposes, but you are not obligated to implement new age gating or age verification functionalities if they aren’t part of your normal operations. However, if your platform includes sensitive content such as pornography or extreme violence, using robust age verification methods is strongly encouraged to ensure compliance and protect children from inappropriate content.
Daphne even pushed it further, asking if it violated the law to not use age verification, and the GPT suggested (reasonably under the law) that not using age verification might be deemed as taking insufficient measures:
PROMPT: I know it is recommended, but will I violate the law if I do not do age verification?
RESPONSE:
Under the Kids Online Safety Act (KOSA), while the use of age verification or age assurance tools is strongly recommended, failing to implement such tools does not automatically constitute a violation of the law. However, the key legal requirement is that you must take “reasonable measures” to protect children from harmful content. If you fail to implement any effective mechanism to prevent children from accessing such content, and this results in harm, you could be found non-compliant with KOSA.
The law emphasizes that platforms must make a good-faith effort to restrict access to harmful content by minors. Age verification is one of the methods to demonstrate compliance, but it is not the only method. If your platform is found to allow minors to access harmful content without taking sufficient steps to prevent this, it could lead to legal action under KOSA.
Thus, while not using age verification tools doesn’t directly violate the law, it might leave your platform vulnerable to being deemed non-compliant if other protective measures are not sufficiently robust.
There’s a lot more in these docs, and they really do a nice job of highlighting how, for all the winks, nods, and denials from the lawmakers, the simplest and most obvious way to comply with these laws is to take down First Amendment-protected speech and implement privacy-harming age verification policies.
This is one of the reasons we spend so much time digging into the nuances and details. People sometimes complain that all I do is criticize these laws, but it’s important to understand how they will actually be implemented and how that implementation could do more harm than good.
Filed Under: 1st amendment, aadc, california, compliance, design codes, dsa, eu, free speech, kosa, regulations
The Kids Online Safety Act And The Tyranny Of Laziness
from the it's-a-censorship-bill dept
There is some confusion about whether the Kids Online Safety Act (KOSA) regulates content or design on digital platforms like Instagram or TikTok. It’s easy to see why that is, because the bill’s authors claim they are attempting to make the bill about design. This is a good move on their part, as regulations on design can allow us to stop bad behavior from tech companies without endangering speech.
Unfortunately, KOSA didn’t nail it.
That’s because KOSA is trying to regulate the design of content recommendation systems, i.e. the digital chutes that all our online speech filters through, which are unique to each online platform. In KOSA’s case, it’s proven impossible so far to separate the design of content recommendation systems from the speech itself. The duty of care and insistence that it covers “personal recommendation systems” means the bill will inevitably impact the speech itself.
The reason is pretty simple: tech companies are inherently lazy — those with decision-making authority will want to comply with regulations in the cheapest and easiest way possible. This means they will take shortcuts wherever possible, including building censorship systems to simply make difficult-to-manage content go away. That will almost certainly include politically targeted content, like speech related to LGBTQ+ communities, abortion, and guns. And they will conduct this censorship with a lazily broad brush, likely sweeping up more nuanced content that would help minors with problems like eating disorders or suicide.
The difference between the aspirations of KOSA and its inevitable impacts works like this: KOSA wants systems engineers to design algorithms that put safety first, not user engagement. While some companies are already pivoting away from purely engagement-focused algorithms, doing so can be really hard and expensive because algorithms aren’t that smart. Purely engagement-focused algorithms only need to answer one question — did the user engage? By asking that one question, and testing different inferences, the algorithms can get very good at delivering content a user will engage with.
But when it comes to multi-purpose algorithms, like those that want to serve only positive content and avoid harmful content, the task is much harder and the algorithms are unreliable. Algorithms don’t understand the content they are ranking or excluding, or how it will impact the mental health and well-being of the user. Even human beings can struggle to predict what content will cause the kinds of harm described by KOSA.
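To make that concrete, here is a toy sketch (purely illustrative; the scores and field names are hypothetical, not any platform’s real ranking code). A single-objective ranker only needs an engagement estimate, while a “safety-aware” ranker also has to trust a harm score it cannot verify:

```python
# Toy illustration of single-objective vs. multi-objective feed ranking.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # learned from clicks, dwell time, etc.
    predicted_harm: float        # output of an unreliable harm classifier

def engagement_score(post: Post) -> float:
    # The only question a pure engagement ranker asks: will the user engage?
    return post.predicted_engagement

def safety_weighted_score(post: Post, harm_penalty: float = 5.0) -> float:
    # A multi-objective score is only as good as predicted_harm,
    # which is exactly the unreliable piece.
    return post.predicted_engagement - harm_penalty * post.predicted_harm

posts = [
    Post("eating-disorder recovery story", 0.7, 0.6),  # misclassified as harmful
    Post("bland but clearly safe post", 0.2, 0.05),
]

print(max(posts, key=engagement_score).text)       # the recovery story
print(max(posts, key=safety_weighted_score).text)  # the bland post wins; helpful content is buried
```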
To comply with KOSA, tech companies will have to show that they are taking reasonable steps to make sure their personalized recommendation systems aren’t causing harm to minors’ mental health and well-being. The only real way to do that is to test the algorithms to see if they are serving “harmful” content. But what is “harmful” content? KOSA leans on the FTC and a government-created Kids Online Safety Council to signal what that content might be. This means that Congress will have significant influence over categorizing harmful speech and platforms will use those categories to implement keywords, user tags, and algorithmically-estimated tags to flag this “harmful” content when it appears in personal recommendation feeds and results. This opens the door to government censorship.
But it gets even worse. The easiest and cheapest way to make sure a personal recommendation system doesn’t return “harmful” content is to simply exclude any content that resembles the “harmful” content. This means adding an additional content moderation layer that deranks or delists content containing certain keywords or tags, a practice known in popular online culture as “shadowbanning.”
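A minimal sketch of that cheapest-compliance layer might look like the following (the flagged terms and data shapes are hypothetical, not drawn from KOSA or any real platform):

```python
# Hypothetical "compliance" derank layer: push anything matching a
# government-signaled keyword or tag list to the bottom of the feed.

FLAGGED_TERMS = {"eating disorder", "self-harm"}  # illustrative category list

def is_flagged(post: dict) -> bool:
    text = post["text"].lower()
    tags = {t.lower() for t in post.get("tags", [])}
    return any(term in text for term in FLAGGED_TERMS) or bool(FLAGGED_TERMS & tags)

def derank_for_compliance(ranked_posts: list[dict]) -> list[dict]:
    """Stable sort: unflagged posts keep their order, flagged ones sink to the bottom."""
    return sorted(ranked_posts, key=is_flagged)

feed = [
    {"text": "How I recovered from an eating disorder", "tags": ["recovery"]},
    {"text": "Cute cat video", "tags": []},
]
print(derank_for_compliance(feed))
# The recovery story sinks along with genuinely harmful posts, which is the
# over-broad sweep described above.
```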
There are three major problems with this. The first is obviously that the covered platforms will have created a censorship machine that accepts inputs from the government. A rogue FTC could use KOSA explicitly for censorship, by claiming that any politically targeted content leads to the harms described in the bill. We cannot depend on big tech to fight back against this, because being targeted by an administration comes with a cost and making an administration happy might come with some benefits.
Big tech may even eventually benefit from this relationship because content moderation is impossible to do well: too often there are nuanced decisions where content moderators simply have to make the choice they estimate to be the least harmful. In some ways, KOSA allows tech companies to push the responsibility for these decisions onto the FTC and the Kids Online Safety Council. Additionally, tech companies are likely to over-correct and over-censor anything they think the government may take action against, and take zero responsibility for their laziness, just like they did after SESTA-FOSTA.
The second problem is that these systems will leak across the Internet. While they are targeted at minors, the only way to tell if a user is a minor is to use expensive and intrusive age verification systems. Covered platforms will want to err on the side of compliance unless they have explicit safe harbors, which aren’t exactly in the bill. So users may accidentally get flagged as minors when they aren’t. Worse, even the accounts that users proactively choose to “follow” aren’t safe from censorship under KOSA because the definition of “personal recommendation system” includes those that “suggest, promote, or rank content” based on personal data. Almost all feeds of content a user follows are ranked based on the algorithm’s estimation of how engaging that content will be to them. A user is less likely to read a post that says “I ate a cookie today” than one that says “I got married today” because users don’t like scrolling through boring content. And so even much of what we want to see online will more than likely be wrapped into the KOSA censorship system that tech companies create to comply.
The third problem is that these sorting systems will not be perfect and will lead to mistakes on both sides: “harmful” content will get through and “harmless” content will be shadowbanned. In some ways this could make the problems KOSA is explicitly attempting to solve worse. For example, imagine the scenario in which a young user posts a cry for help. This content could easily get flagged as suicide or other harmful content, and therefore get deranked across the feeds of all those that follow the person and care for them. No one may see it.
This example shows how challenging content moderation is. But KOSA is creating incentives to avoid solving these tricky issues, and instead do whatever the government says is legally safe. Again, KOSA will remove some degree of culpability for the tech companies by giving them someone else to blame so long as they are “complying” when bad things happen.
Content moderation is hard, and where KOSA touches it, the bill delivers the worst of all worlds: it neither tries to understand what it is regulating nor gives tech companies clear guidance that would do no harm. Congress would be better off stripping out the content moderation parts of the bill and leaving them for the future, after it has engaged in a good deal of fact finding and discussions with experts.
There are problems here that Congress assumes can be solved simply by telling tech companies to do better, when I’ve never met a content moderation professional who could tell me how they could actually solve these problems. We shouldn’t allow Big Tech to pass the buck to government boards at the expense of online speech — the next generation deserves a better internet than the one KOSA creates.
Matt Lane is Senior Policy Counsel at Fight for the Future.
Filed Under: censorship, content moderation, ftc, harmful content, kosa, laziness, regulations
Musk’s DSA Debacle: From ‘Exactly Aligned’ To Accused Of Violations
from the what-comes-around dept
Elon Musk’s declaration that the EU’s DSA regulation was “exactly aligned with my thinking,” and that he agreed with “everything” it mandates, is looking pretty hilarious at this point.
Elon Musk loves endorsing things he clearly doesn’t understand and then lashes out when they backfire. Last week, we had the story of how he was demanding criminal prosecution of the Global Alliance for Responsible Media (GARM) just one week after ExTwitter announced it had “excitedly” rejoined GARM. But now he’s really outdone himself.
Two years ago, soon after Elon announced his bid to take over Twitter because (he claimed) he was a “free speech absolutist,” he met with the EU’s Internal Market Commissioner, Thierry Breton, and gave a full-throated endorsement of the EU’s Digital Services Act (DSA). At the time, we pointed out how ridiculous this was, as the DSA, at its heart, is an attack on free speech and the rights of companies to moderate as they wish.
At the time, we pointed out how it showed just how incredibly naïve and easily played Elon was. He was endorsing a bill that clearly went against everything he had been saying about “free speech” on social media. Indeed, the previous management of Twitter — the one so many people mocked as being against free speech — had actually done important work pushing back on the worst aspects of the DSA when it was being negotiated. And then Musk came in and endorsed the damn thing.
So, of course, the EU has been on the attack ever since he’s taken over the company. Almost immediately Breton publicly started lashing out at Musk over his moderation decisions, and insisting that they violated the DSA. As we highlighted at the time, this seemed ridiculously censorial and extremely problematic regarding free expression.
But, of course, the whole thing was pretty much a foregone conclusion. And late last week, the EU formally charged ExTwitter with violating the DSA, the very law Elon originally praised and whose approach he said he agreed with.
The commission has three findings, and each of them seems problematic in the typical simplistic EU paternalistic manner, written by people who have never had any experience having to manage social media.
To be clear, in all three cases, I do wish that ExTwitter were doing what the EU is demanding, because I think it would be better for users and the public. But, I don’t see how it’s any business of EU bureaucrats to demand that ExTwitter do things the way they want.
First, they don’t like how Elon changed the setup of the “blue check” verified account system.
- First, X designs and operates its interface for the “verified accounts” with the “Blue checkmark” in a way that does not correspond to industry practice and deceives users. Since anyone can subscribe to obtain such a “verified” status, it negatively affects users’ ability to make free and informed decisions about the authenticity of the accounts and the content they interact with. There is evidence of motivated malicious actors abusing the “verified account” to deceive users.
And, I mean, I’ve written a ton of words about why Elon doesn’t understand verification, and why his various attempts to change the verification system have been absurd and counterproductive. But that doesn’t mean it “deceives users.” Nor does it mean that the government needs to step in. Let Elon fall flat on his face over and over again. This entire approach is based on Breton and EU technocrats assuming that the public is too stupid to realize how broken ExTwitter has become.
As stupid as I think Musk’s approach to verification is, the fact that it doesn’t “correspond to industry practice” shouldn’t matter. That’s how experimentation happens. Sometimes that experimentation is stupid (as we see with Musk’s constantly changing and confusing verification system), but sometimes it allows for something useful and new.
Here the complaint from the EU seems ridiculously elitist: how dare it be that “everyone” can get verified?
Are there better ways to handle verification? Absolutely. Do I trust EU technocrats to tell platforms the one true way to do so? Absolutely not.
Second, the EU is mad about ExTwitter’s apparent lack of advertising transparency:
- Second, X does not comply with the required transparency on advertising, as it does not provide a searchable and reliable advertisement repository, but instead put in place design features and access barriers that make the repository unfit for its transparency purpose towards users. In particular, the design does not allow for the required supervision and research into emerging risks brought about by the distribution of advertising online.
I wish there were more details on this because it’s not entirely clear what the issue is here. Transparency is a good thing, but as we’ve said over and over again, mandated transparency leads to very real problems.
There are serious tradeoffs with transparency, and having governments require it can lead to problematic outcomes regarding privacy and competition. It’s quite likely that ExTwitter’s lack of a searchable repository has more to do with (1) Elon having a barebones engineering staff that only focuses on the random things he’s interested in and that doesn’t include regulatory compliance, (2) Elon really, really hates it when the media is able to point out that ads are showing up next to awful content, and (3) a repository might give more of a view into how the quality of ads on the site has gone from top end luxury brands to vapes and crypto scams.
So, yes, in general, more transparency on ads is a good thing, but I don’t think it’s the kind of thing the government should be mandating, beyond the basic requirements that ads need to be disclosed.
Finally, the last item is similar to the second one in some ways, regarding researcher access to data:
- Third, X fails to provide access to its public data to researchers in line with the conditions set out in the DSA. In particular, X prohibits eligible researchers from independently accessing its public data, such as by scraping, as stated in its terms of service. In addition, X’s process to grant eligible researchers access to its application programming interface (API) appears to dissuade researchers from carrying out their research projects or leave them with no other choice than to pay disproportionally high fees.
And, again, in general, I do wish that ExTwitter was better at giving researchers access to data. I wish that they made it possible for researchers to have API access for free, rather than trying to charge them $42,000 per month.
But, again, there’s a lot of nuance here that the EU doesn’t understand or care about. Remember that Cambridge Analytica began as an “academic research project” using the Facebook API. Then it turned into one of the biggest (though, quite over-hyped) privacy scandals related to social media in the last decade.
I have no doubt that if ExTwitter opened up its API access to researchers and another Cambridge Analytica situation happened, the very same EU Commissioners issuing these charges would immediately condemn the company for the sin of making that data available.
Meanwhile, Elon is claiming in response to all of this that the Commission offered him an “illegal secret deal”: if ExTwitter “quietly censored speech without telling anyone, they would not fine” the company. Musk also claimed that other companies accepted that deal, while ExTwitter did not.
So, this is yet another situation in which both sides are being misleading and confusing. Again, the structure of the DSA is such that its very nature is censorial. This is what we’ve been pointing out for years, and why we were horrified that Elon so loudly endorsed the DSA two years ago.
But, it does not at all match how things actually work with the EU Commission to suggest that it would offer “secret deals” to companies to avoid fines. Thierry Breton’s explanation rings very true: there was no “secret deal” with anyone, and it was ExTwitter’s own staff that asked what terms might settle the complaint.
In the end, both sides are guilty of overblown dramatics. Elon Musk continues to flounder spectacularly at managing a social media platform, making a series of blunders that even his fiercest advocates can’t overlook. However, the EU’s role is equally questionable. Their enforcement of the DSA seems overly paternalistic and censorial, enforcing best practices that may not even be best and reeking of condescension.
The allegations of an “illegal secret deal” are just another smoke screen in this complex spectacle. It’s far more likely that the EU Commission pointed to the DSA and offered standard settlement terms that ExTwitter rebuffed, turning it into a grandiose narrative.
This debacle offers no real heroes — just inflated egos and problematic regulations. What we’re left with is an unending mess where no one truly wins. Musk’s mistaken endorsement of the DSA was a red flag from the beginning, showing that hasty alliances in the tech-policy arena often lead to chaos rather than clarity.
There are a ton of nuances and tradeoffs in the tech policy space, and neither Musk nor Breton seem to care about those details. It’s all about the grandstanding and the spectacle.
So, here we stand: a free speech absolutist who endorsed censorship regulations and a regulatory body enforcing broad and suspect mandates. It’s a circus of hypocrisy and heavy-handedness, proving that in the clash between tech giants and bureaucratic juggernauts, the rest of us become unwilling spectators.
Filed Under: ads, blue checks, dsa, elon musk, eu, regulations, research, thierry breton, transparency, verification
Companies: twitter, x
California Says AT&T Can’t Just Hang Up On Unwanted, Taxpayer Funded Copper DSL And Phone Connections
from the you-weren't-still-using-that,-were-you? dept
Wed, Jul 3rd 2024 05:26am - Karl Bode
Four years ago, AT&T, a company that for years cheaped out on upgrading its broadband lines to fiber, effectively stopped selling DSL. While that’s understandable given the limitations of the dated copper-based tech, the problem is that thanks to concentrated telecom monopolization, many of these customers were left without any replacement options due to a lack of competition.
There are other issues at play too. AT&T has, for decades, received countless billions in tax cuts, subsidies, merger approvals, and regulatory favors (remember how killing net neutrality, broadband privacy rules, or approving a wave of doomed mergers were all supposed to unleash untold innovation, job creation, and network expansion? Yeah, AT&T doesn’t either).
In many states, AT&T has managed to lobby lawmakers into removing any requirement that the company continue servicing these users, many of whom are elderly folks still relying on traditional landlines for 911 access. But in California those efforts aren’t going too well after the California Public Utilities Commission (CPUC) told the company it can’t just hang up on these unwanted (taxpayer subsidized) connections.
AT&T had tried to argue that it shouldn’t be held to the state’s “carrier of last resort” requirements because these users now have the option of numerous voice services and “outdated copper-based landline facilities are expensive to maintain.” But the PUC found there weren’t replacement options available for many rural users, and nudged AT&T to upgrade its network to fiber instead of complaining:
“Carrier of last resort rules are technology-neutral and do not distinguish between voice services offered… and do not prevent AT&T from retiring copper facilities or from investing in fiber or other facilities/technologies to improve its network.”
AT&T is responding to the demand by shifting more lobbying resources toward changing California state law and eliminating protections for folks left in the lurch by disconnected DSL and landline connections. In most cases this is justified by insisting that wireless is good enough for these impacted users (even if reliability may be worse and wireless provider coverage maps routinely overstate coverage).
You can understand superficially why AT&T doesn’t want to adhere to aging regulations governing technology it doesn’t want. But those arguments, again, tend to forget AT&T has been slathered in tax breaks, regulatory favors, and subsidies for 30+ years in exchange for network upgrades that are always, quite mysteriously, left only half-deployed. It also has a history of defrauding subsidy programs.
This hasn’t just been a problem with AT&T. Verizon has also routinely found itself under fire over the last fifteen years for letting aging phone and DSL networks fall into total disrepair despite billions in taxpayer subsidies. In most cases, well-lobbied lawmakers just forget telecom history and let these companies dictate all state telecom policy, making CA’s brief window of accountability a rare exception.
Telecom monopolies historically want to have their cake and eat it too. They want all of the taxpayer perks and subsidies that come along with being an essential utility, but none of the obligations. And with very, very limited exceptions, state and federal corruption usually nets them the outcome they’re looking for.
Filed Under: california PUC, cpuc, dsl, fiber, landline, regulations, subsidies, telecom
Companies: at&t
Dropshipping Creates Novel Online Business Opportunities For The Long-Term Unemployed In South Africa (And Others)
from the low-barrier-to-entry dept
Dropshipping – memorably described by Sirin Kale in Wired as people “selling products they’ve never handled, from countries they’ve never visited, to consumers they’ve never met” – is one of the best examples of “pure” Internet activity around. As Kale explains:
Dropshipping is a “fulfilment” method. At one end of the supply chain, an entrepreneur identifies a product – usually through Chinese e-commerce platform AliExpress – which they think they can sell to European or American consumers. They create a website using Shopify, and identify and target buyers, typically using Facebook ads, although you will find dropshippers on other platforms, including Instagram, or selling through marketplaces such as online homeware store Wayfair.
A key feature of dropshipping, and one reason it is attractive to its practitioners, is that products are sourced without creating an inventory. However, the opaque nature of dropshipping ecommerce means there is plenty of scope for customers to be ripped off by someone along the supply chain, with little hope of redress. In Kale’s 2020 Wired feature, there was already a sense that it was a fad whose moment had passed, and that the smart dropshippers were getting out and moving on. That makes a post on the Rest of the World blog about the rise of dropshipping in South Africa particularly interesting – and hopeful:
A marketing graduate from the University of Lilongwe in Malawi, [Saba] Mika moved to South Africa in 2018 with the hope of finding a job that matched his qualifications. After two years of a fruitless search, he decided to become a drop-shipping agent. “Drop-shipping is an easy way to make money,” Mika told Rest of World. “For those of us who cannot find work, it has become a way out of poverty.”
Since nearly 32% of South Africa’s population is unemployed, according to the Rest of the World post, dropshipping represents a rare opportunity to make money independently:
the 36-year-old [Saba Mika] spends around five hours each day taking screenshots of trendy clothes and shoes from Chinese e-commerce websites, sharing them on Facebook Marketplace and WhatsApp groups, and ordering the goods if he receives any orders. Over the past two years, Mika has made up to 20,000 rand ($1,074) in a good month.
Dropshipping in South Africa is not without problems. Unlike general dropshipping, in South Africa it seems to serve mainly the domestic market. That means a lot of Chinese-made products are flowing into South Africa, including plenty of counterfeit items:
In 2022, online retail in South Africa crossed 50 billion rand ($2.6 billion), fueling the popularity of drop-shipping. But there are concerns about Chinese counterfeits — which currently account for up to 10% of the South African economy, according to the Consumer Goods Council — and growing worries over Chinese brands making South Africa their dumping ground.
The South African Revenue Authority is concerned that these Chinese imports may not meet South Africa’s safety, regulatory and health standards. Another issue is that dropshippers aren’t paying custom duties on the goods they bring into the country. Siphithi Sibeko, head of communications and media for the South African Revenue Authority, pointed out to Rest of World that, as a result, “These goods also undercut the industry, causing unemployment and exacerbating poverty and inequality.”
Dropshipping is clearly not a panacea. But it is a good example of how the Internet has created novel business opportunities with extremely low barriers to entry. That’s important for people who, like the long-term unemployed in South Africa, have only minimal resources at their disposal.
Follow me @glynmoody on Mastodon and on Bluesky.
Filed Under: china, custom duties, dropshipping, dumping, ecommerce, regulations, south africa, unemployment
Companies: ali express, meta, shopify, wayfair
Cable Companies, Automakers Try To Derail FTC, FCC Quest To Kill Misleading Fees
from the this-is-why-we-can't-have-nice-things dept
Mon, Feb 5th 2024 05:28am - Karl Bode
For decades now, airlines, hotels, cable companies, banks and a long list of other companies have bilked U.S. consumers out of billions of dollars annually via bullshit fees that unfairly jack up the advertised price of service. More interesting perhaps is the fact that it took until 2023 for a U.S. federal regulator to even ponder the idea that this was perhaps bad and could or should be stopped.
Last year, the FTC announced it would be cracking down on such fees. That included a 126-page proposal aimed at the auto industry’s use of “administrative fees,” document fees, and other markups used to fatten up the price consumers pay for new or used cars.
Not surprisingly the auto industry didn’t much like that, and has been fighting the effort in court. While the rules were supposed to go into effect on July 30, the National Automobile Dealers Association (NADA) and Texas Automobile Dealers Association filed a challenge with the notoriously wacky (and corporate friendly) Fifth Circuit Court of Appeals, which has suspended the FTC’s plan upon review.
As with most such challenges, the automakers are trying to claim the FTC doesn’t have the authority to implement such rules, despite the agency’s authority to police “unfair and deceptive” behavior being very clear under the FTC Act:
“The dealership groups, in the petition to the Fifth Circuit, called it “an abuse of discretion” and seek the court to block its implementation. The FTC maintains that the rule “does not impose substantial costs, if any” on law-abiding dealerships, and instead simply guarantees a more even playing field for both dealerships and consumers by eliminating junk fees and hidden costs.”
Automakers aren’t the only well funded corporations fighting the FTC’s effort. The cable industry is also hard at work fighting FTC and FCC plans to crack down on the billions of dollars cable companies make off of junk fees, again by claiming the agencies don’t have the legal authority to do their jobs.
Here’s the thing: the FTC and FCC aren’t doing anything crazy here. They’re doing the absolute bare minimum when it comes to policing obnoxious, misleading fees, often used to help companies falsely advertise a lower price that doesn’t actually exist. And even here you can see how such efforts face an unrelenting legal and lobbying assault by companies with near-unlimited legal and lobbying budgets.
Now remember that the corrupt Supreme Court is on the precipice of dismantling Chevron, a cornerstone of regulatory law as we know it, effectively undermining most existing independent regulatory authority. Once Chevron is dead, every last regulatory decision corporations don’t like will be challenged in court, and it will be left to a (potentially corrupt) judge to determine what regulators can or can’t do.
Picture this fight over fees. Now apply it to pretty much any regulatory effort to do anything. Then apply the assumption that corporations will win most of the time thanks to corrupt, unelected judges (often with lifetime appointments), and you’ll begin to see the full picture.
As it stands, Congress passes an (often vague and badly written) law, and it’s up to regulators with very tailored knowledge (on subjects ranging from wireless spectrum management to emission controls) to implement useful rules within the confines of the law. The axing of Chevron eliminates much of that independent authority, meaning regulators can’t do anything not very specifically outlined by Congress.
The flimsy underlying justification for killing Chevron is that it rebalances the constitutional order, shifting power away from the Executive and back to Congress. In reality, corporations captured Congress long ago. They know it’s hard to get Congress to reform much of anything. Now they’re aiming to finish the job by all but lobotomizing what’s left of regulatory independence (net neutrality was a lovely example).
The goal is the final beheading of what’s left of the already undermanned and underfunded federal regulatory state. It’s the final killing blow of regulatory power at the hands of U.S. corruption, dressed up as a good faith effort to restore constitutional balance and free unfairly burdened corporations from the “tyranny” of already fairly feckless U.S. corporate oversight.
In reality, the axing of Chevron is going to result in gridlock in the courts, as every last regulatory decision corporations don’t like is challenged under the claim that regulators have no authority. That’s going to result not only in the dismantling of existing consumer protections, environmental reforms, and public safety initiatives, but will also chill any efforts to craft new reforms.
That will shift most consumer protection to the handful of states that actually still care about such things. At that point, with federal regulatory oversight dead, corporations can shift all of their attention and resources toward undermining state regulatory power. Again, the goal here isn’t a good faith concern about free markets or constitutional balance, it’s completely unchecked corporate power.
The goal is handcuffing regulators’ hands behind their backs as the U.S. is scrapped and sold for parts out the back door, by men who can retreat to their private islands once the girders give way and the roof begins to collapse on the rest of us. Consumer protection, environmental law, election law, and public safety are facing an historic existential threat.
I don’t think most people really understand the scope and scale of what’s coming down the road once Chevron is dismantled. And I don’t expect the broken “both sides,” billionaire-owned U.S. press will make that threat clear to readers.
But I do think that the full impact of the Supreme Court’s looming decision will be painfully obvious to U.S. residents in the decade to come. At which point current debates — like the auto industry’s attempt to stop the FTC from policing predatory fees — will seem downright adorable by comparison.
Filed Under: automakers, cable, cable tv, chevron, consumer protection, consumers, fcc, ftc, junk fees, regulations
Money Talks In The World Of Copyright Legislation; And That’s A Big Problem For Ordinary Internet Users
from the money-talks,-internet-users-lose dept
Copyright has always been about money. That’s why the copyright industry fights so hard to strengthen legal protections, in order to boost its profits. However, getting detailed information about how much money is involved, and who receives it, is hard, because there are so many small pieces to the overall copyright ecosystem. That makes a long post exploring money in the global music business by Will Page on the Pivotal Economics site particularly welcome.
One striking chart shows global music streaming revenues rising from negligible levels in 2005 to $5.5 billion in 2022. This underlines how foolish the music business was to resist the move to online music – it could probably have made billions more dollars had it started earlier. Instead, it sued the pioneering service Napster into the ground in 2001. It was a typically short-sighted move that impoverished not just the music industry, but society as a whole, for a reason Lawrence Lessig explains in his book Free Culture:
When Napster told the district court that it had developed a technology to block the transfer of 99.4 percent of identified infringing material, the district court told counsel for Napster 99.4 percent was not good enough. Napster had to push the infringements ‘down to zero.’
If 99.4 percent is not good enough, then this is a war on file-sharing technologies, not a war on copyright infringement. There is no way to assure that a p2p [peer-to-peer] system is used 100 percent of the time in compliance with the law, any more than there is a way to assure that 100 percent of VCRs or 100 percent of Xerox machines or 100 percent of handguns are used in compliance with the law. Zero tolerance means zero p2p. The court’s ruling means that we as a society must lose the benefits of p2p, even for the totally legal and beneficial uses they serve, simply to assure that there are zero copyright infringements caused by p2p.
It’s good news for the music industry that streaming now brings in $5.5 billion a year, an increase of $800 million over 2021. But it’s unlikely much of that boost reached the people who made it possible – the musicians. In 2021, the Digital, Culture, Media and Sport Committee of the UK Parliament published its second report on the Economics of Music Streaming, which noted:
the major music groups are disproportionately benefitting from music streaming relative to creators. This has resulted in record high levels of income and profit growth and historic levels of profitability for the major labels whilst performers’ incomes average less than the median wage.
Despite some tweaking of streaming payments, musicians still get a rough deal from the current system, as numerous posts on Walled Culture attest.
The Pivotal Economics post calculates that the global music industry overall is now a $40 billion business. At the beginning of his breakdown of this number, Page asks rhetorically:
Why do I go into a bat cave every October and calculate this figure? The primary answer is policy. There’s an axiom in lobbying circles that politicians are more likely to respond to bigger numbers than smaller ones.
This work gives us that number. Regardless of where you work in this complex music business, if you are trying to capture the attention of policy makers in Geneva, Washington DC, Brussels or London – this number is for you. Use it – not just one of its constituent parts.
That’s a really important point that exposes one of the key problems with copyright in the digital age. Politicians are impressed with big money, and tend to listen to those who generate it when they pass new copyright laws. But in today’s online world, there is a whole new class of creators – ordinary members of the public – who are being endlessly inventive with digital material as a routine part of their daily lives.
This new online activity, often given the bland name “user-generated content” (UGC), takes place on a vast scale, but typically generates zero revenue. As a result, politicians generally discount it, downplaying its cultural and social significance. That’s precisely what happened with the EU’s Copyright Directive, where the concerns of millions of ordinary users were ignored in the rush to keep a few deep-pocketed and influential copyright companies happy.
Follow me @glynmoody on Mastodon. Originally posted to Walled Culture.
Filed Under: copyright, lobbying, regulations
Kids Are Smart: Teach Them To Be Safe Online
from the don't-hide-the-ball dept
Last April, Utah Governor Spencer Cox noted that “Kids are smart, they’ll find ways around” Utah’s new social media bans. But that’s not the reason why these laws will fail teens in Utah, Arkansas, and Texas. These laws will fail teens because state leaders don’t believe kids are smart enough to learn to use social media appropriately.
Florida, on the other hand, had the good sense to turn to education as a tool for keeping teens safe and healthy online.
You read that correctly — Florida. The same state that launched an unconstitutional attack on Section 230 last year. The same state actively working against the First Amendment. The same state picking fights with Disney, creator of the only mouse welcome in US households. Clearly, the state has problems. But sometimes it gets things closer to correct.
Florida’s not alone either. California, the same state that passed the first problematic age-verification law, has a bill in its legislature that also leans into education as an alternative to bans. If only that’s where the desire to regulate the internet “for the children” ended.
It may be a bit too soon to say, given the bipolar attitudes of these states toward tech, but this year they both acknowledged the reality of the situation and moved in a more positive direction. When it comes to the “future of society,” as Jonathan Haidt put it, kids become adults and move away. If we don’t equip them with a basic understanding of the world online because we’ve hidden them away, we’ve failed them.
As retired Utah teacher Linda Bettinger put it, “Media literacy is education for individual empowerment.” But in the face of rising teen depression, which some attribute to teen social media use, legislators across the country looked for a shutoff valve in the form of legislative age-gating.
To be sure, the list of serious issues teens face online is a long one, including cyberbullying and child sexual exploitation. Some in Utah felt that bans were the most appropriate solution to teen “algorithm addiction.”
When all of the above are merged into a one-pager, these issues become paramount in the minds of legislators, even when the proposed fixes carry clear privacy and cybersecurity risks of their own.
But there’s also a long list of healthy and beneficial uses of social media. It can be useful for learning creative applications of physics and engineering, remembering the birthdays of faraway friends and relatives, discovering symptoms of difficult-to-diagnose diseases, finding jobs, and networking with professionals, just to name a few.
Each of these is contemplated in Florida’s law and notably absent from Utah’s. Florida’s approach also accounts for the efforts of social media companies to introduce tools and resources for parents.
Far from complacent, social media companies and internet service providers (ISPs) have responded to online threats by developing safety tools. The prevalent approach is giving parents a say in their children’s browsing experience through tools like time limits. At the user level, social media companies and smartphone makers alike have also begun offering tools that give parents access to child accounts and blacklist known harmful websites.
Of course, even though the trust and safety teams at social media companies, ISPs, and device manufacturers have unquestionably reduced the baseline level of risk out in the wild, some believe the danger to teens is still too great.
But where safety tools may fail, education and parental involvement have a major role to play in picking up the slack.
In this sense, states should look to update their educational curriculum instead of banning teens. Hefty civil penalties against social media companies may play well in headlines, but teens will pay the price if forced to grow up without learning to navigate the complex online world.
Florida’s Governor may have called the state the “Utah of the Southeast,” but when it comes to teen social media use, Florida leads the field.
Caden Rosenbaum is the technology and innovation policy analyst at Libertas Institute in Lehi, Utah.
Gavin Hickman is a technology and innovation policy intern at Libertas Institute in Lehi, Utah.
Filed Under: california, florida, media literacy, regulations, social media, social media addiction, utah
EU’s New AI Law Targets Big Tech Companies But Is Probably Only Going To Harm The Smallest Ones
from the ask-gpt-to-write-better-laws dept
The EU Parliament is looking to regulate AI. That, in itself, isn’t necessarily a bad idea. But the EU’s proposal — the AI Act — is pretty much bad all over, given that it’s vague, broad, and would allow pretty much any citizen of any EU nation to wield the government’s power to shut down services they personally don’t care for.
But let’s start with the positive aspects of the proposal. The EU does want to take steps to protect citizens from the sort of AI law enforcement tends to wield indiscriminately. The proposal would actually result in privacy protections in public spaces. This isn’t because the EU is creating new rights. It’s just placing enough limits on surveillance of public areas that privacy expectations will sort of naturally arise.
James Vincent’s report for The Verge highlights the better aspects of the AI Act, which is going to make a bunch of European cops upset if it passes intact:
The main changes to the act approved today are a series of bans on what the European Parliament describes as “intrusive and discriminatory uses of AI systems.” As per the Parliament, the prohibitions — expanded from an original list of four — affect the following use cases:
- “Real-time” remote biometric identification systems in publicly accessible spaces;
- “Post” remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorization;
- Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
- Predictive policing systems (based on profiling, location or past criminal behaviour);
- Emotion recognition systems in law enforcement, border management, workplace, and educational institutions; and
- Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and right to privacy).
That’s the good stuff: a near-complete ban on facial recognition tech in public areas. Even better, the sidelining of predictive policing programs which, as the EU Parliament already knows, produce little more than garbage “predictions” generated from bias-tainted garbage data supplied by law enforcement.
Here’s what’s terrible about the AI Act, which covers far more than the government’s own use of AI tech. This analysis by Technomancers roots out what’s incredibly wrong about the proposal. And there’s a lot to complain about, starting with the Act’s ability to solidly entrench tech incumbents.
In a bold stroke, the EU’s amended AI Act would ban American companies such as OpenAI, Amazon, Google, and IBM from providing API access to generative AI models. The amended act, voted out of committee on Thursday, would sanction American open-source developers and software distributors, such as GitHub, if unlicensed generative models became available in Europe. While the act includes open source exceptions for traditional machine learning models, it expressly forbids safe-harbor provisions for open source generative systems.
Any model made available in the EU, without first passing extensive, and expensive, licensing, would subject companies to massive fines of the greater of €20,000,000 or 4% of worldwide revenue. Opensource developers, and hosting services such as GitHub – as importers – would be liable for making unlicensed models available. The EU is, essentially, ordering large American tech companies to put American small businesses out of business – and threatening to sanction important parts of the American tech ecosystem.
When you make the fines big enough and the mandates restrictive enough, only the most well-funded companies will feel comfortable doing business in areas covered by the AI Act.
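To make that asymmetry concrete, here is a minimal, purely illustrative sketch of the penalty math described in the quoted analysis. The function name and the sample revenue figures are placeholders of my own, not anything drawn from the Act’s text:

```python
def ai_act_fine_eur(worldwide_revenue_eur: float) -> float:
    """Illustrative only: the penalty described above is the greater of a
    flat EUR 20,000,000 or 4% of worldwide revenue."""
    FLAT_FINE_EUR = 20_000_000
    return max(FLAT_FINE_EUR, 0.04 * worldwide_revenue_eur)

# A hypothetical small developer with EUR 2 million in revenue still faces
# the full EUR 20 million floor (ten times its annual revenue), while a
# giant with EUR 10 billion in revenue faces EUR 400 million, a rounding
# error by comparison.
print(ai_act_fine_eur(2_000_000))        # 20000000
print(ai_act_fine_eur(10_000_000_000))   # 400000000.0
```

Under that structure, the flat floor does the real damage: it is existential for a startup and an afterthought for an incumbent.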
In addition to making things miserable for AI developers in Europe, the Act is extraterritorial, potentially subjecting any developer located anywhere in the world to the same restrictions and fines as those actually located in the EU.
And good luck figuring out how to comply. The acts (or non-acts) capable of triggering fines and bans are equally vague. AI providers must engage in extensive risk-testing to ensure they comply with the law. But the list of “risks” they must foresee and prevent is little more than a stack of government buzzwords that can easily be converted into actionable claims against tech companies.
The list of risks includes risks to such things as the environment, democracy, and the rule of law. What’s a risk to democracy? Could this act itself be a risk to democracy?
In addition, the restrictions on API use by third parties would put US companies in direct conflict with US laws if they attempt to comply with the EU’s proposed restrictions.
The top problem is the API restrictions. Currently, many American cloud providers do not restrict access to API models, outside of waiting lists which providers are rushing to fill. A programmer at home, or an inventor in their garage, can access the latest technology at a reasonable price. Under the AI Act restrictions, API access becomes complicated enough that it would be restricted to enterprise-level customers.
What the EU wants runs contrary to what the FTC is demanding. For an American company to actually impose such restrictions in the US would bring up a host of anti-trust problems.
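For a sense of what’s at stake, here is a rough sketch of what that kind of open API access looks like today for a lone programmer. The endpoint, model name, credential, and response field below are hypothetical placeholders standing in for the sort of REST interface large providers currently expose, not any vendor’s documented API:

```python
# Illustrative only: the URL, model name, credential variable, and response
# shape are placeholders, not any real provider's documented API.
import os
import requests

API_KEY = os.environ["EXAMPLE_AI_API_KEY"]  # hypothetical credential

resp = requests.post(
    "https://api.example-ai-provider.com/v1/completions",  # placeholder URL
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "example-large-model",
        "prompt": "Draft a polite refund-request email for a late delivery.",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["text"])  # placeholder response field
```

A handful of lines and a cheap pay-as-you-go key is roughly the whole barrier to entry right now; the licensing regime described above would raise that barrier to something only enterprise legal departments can clear.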
While some US companies will welcome the opportunity to derail their smaller competitors and lock in large contracts with their wealthiest customers, one of the biggest tech companies in the world is signaling it wants no part of the EU Parliament’s AI proposal. The proposal may not be law yet, but as Morgan Meaker and Matt Burgess report for Wired, Google is already engaging in some very selective distribution of its AI products.
[G]oogle has made its generative AI services available in a small number of territories of European countries, including the Norwegian dependency of Bouvet Island, an uninhabited island in the South Atlantic Ocean that’s home to 50,000 penguins. Bard is also available in the Åland Islands, an autonomous region of Finland, as well as the Norwegian territories of Jan Mayen and Svalbard.
This looks like Google sending a subtle hint to EU lawmakers: if they want more than European penguins to have access to Google’s AI products, they’re going to have to rewrite a few things before passing the AI Act.
The EU Parliament is right to be concerned about the misuse of AI tech. But this isn’t the solution, at least not in this form. The proposal needs to be far less broad, way less vague, and more aware of the collateral damage this grab bag of good intentions might cause.
Filed Under: ai, antitrust, apis, competition, eu, facial recognition, regulations