innovation – Techdirt

The First Amendment: America’s Secret Sauce For Innovation

from the protect-it dept

The First Amendment is America’s secret sauce. Far from a relic of the 18th century, the right to speak freely and question authority is a cornerstone of innovation, which thrives on pushing boundaries and challenging the status quo. This freedom is why the United States consistently out-invents and out-performs other societies that do not prioritize the right to speech. That’s why it’s so disappointing to see attacks on America’s proud tradition of free speech coming from our own policymakers and leading thinkers.

Take, for example, a recent op-ed in the New York Times by former Biden Administration Special Assistant to the President for Technology and Competition Tim Wu. In “The First Amendment is Out of Control,” Wu makes the truly breathtaking argument that the First Amendment not only serves corporate interests, it actually undermines our government. This attack on the cornerstone amendment of our democracy ignores the vital role the First Amendment serves in fostering innovation by ensuring that Americans can criticize the government, try new business models, and lean into creativity and progress.

Consider the booming field of generative artificial intelligence, a technology that facilitates asking and answering questions. American companies lead the way in developing and deploying new generative AI platforms and tools – despite competition from China, which has the advantage of massive state-led investments into AI research, a huge population, and limited privacy protections that allow developers to tap into massive data sets. Our leadership is no coincidence. It stems from an American culture where questioning and debate are encouraged and legally protected.

Critics bash “big tech platforms” for controlling speech, but these platforms have democratized communication more than ever before, and many smaller platforms have risen up behind them to provide even more avenues for speech. For the first time, ordinary citizens can reach millions without needing to own a TV or radio station or convince a major newspaper to publish their opinions. These platforms have enabled vastly more speech, not less. They give voice to the voiceless and amplify diverse perspectives.

Wu and others have also suggested that the First Amendment undermines national security. This is the same rationale used by tyrants and dictators for quashing speech and jailing opponents and dissidents. It’s also patently false. The First Amendment is the disinfecting sunshine that helps hold the government accountable. It allows not only journalists, but everyday citizens to scrutinize and criticize government actions. It ensures transparency and prevents abuses of power. Fears of “foreign manipulation” through platforms, while certainly a concern, can and should be addressed through targeted laws and regulations rather than efforts to scrap the core principles that underpin free speech rights.

What’s more, the Supreme Court’s recent decisions protecting algorithmic curation by Internet platforms recognize that modern speech takes many forms. Whether it’s a human editor or an algorithm, the essence of expression remains. These decisions reflect an understanding that in the digital age, free speech must adapt to new technologies and modes of communication.

Entrepreneurs and startups thrive in an environment where they can challenge incumbents, propose radical solutions, and address societal needs without fear of censorship. The First Amendment fosters an ecosystem where diverse ideas can clash and collaborate, leading to scientific advancements, artistic expression, and cultural evolution. It ensures that innovations can emerge in a vibrant, grassroots ecosystem where everyone has the opportunity to contribute.

Paring back the First Amendment to make it easier to attack specific companies is a terrible idea. The First Amendment protects corporate speech, and it has been extended to new forms of expression. We should see this broad protection as a feature, not a bug. It ensures that innovation is not stifled by overreaching regulations and that new ideas can flourish.

The First Amendment as it’s applied today is doing exactly what it was intended to do: protecting the freedom to speak, to question, and to innovate. It ensures that power, whether governmental or corporate, can be held in check. This freedom is why America remains a leader in innovation and why we must continue to uphold and defend the First Amendment in its broadest sense.

Michael Petricone is the Senior VP of Government Affairs at the Consumer Technology Association.

Filed Under: 1st amendment, free speech, innovation, tim wu

Taxing The Internet To Bail Out Media Won’t Solve The Fundamental Problems Of The Media Business

from the not-the-way-to-do-this dept

Hey Google, can you spare a few hundred million to keep Rupert Murdoch’s yacht afloat? That’s essentially what some legislators are demanding with their harebrained schemes to force tech companies to fund journalism.

It is no secret that the journalism business is in trouble these days. News organizations are failing and journalists are being laid off in record numbers. There have been precious few attempts at carefully thinking through this situation and exploring alternative business models. The current state-of-the-art thinking seems to be either (1) a secretive hedge fund buying up newspapers, selling off the pieces and sucking out any remaining cash, (2) replacing competent journalists with terrible AI-written content or (3) putting all the good reporting behind a paywall so that disinformation peddlers get to spread nonsense to the vast majority of the public for free.

Then, there’s the legislative side. Some legislators have (rightly!) determined that the death of journalism isn’t great for the future of democracy. But, so far, their solutions have been incredibly problematic and dangerous. These efforts have been pushed by the likes of Rupert Murdoch, whose loud and proud support for “free market capitalism” crumbled to dust the second his own news business started failing, leading him to demand government handouts for his own failures in the market. The private equity folks buying up newspapers (mainly Alden Capital) jumped into the game as well, demanding that the government force Google and Meta to subsidize their strip-mining of the journalism field.

The end result has mostly been disastrous link taxes, which were pioneered in Europe a decade ago. They failed massively before being revived more recently in Australia and Canada, where they have also failed (despite people pretending they have succeeded).

For no good reason, the US Congress and California’s legislature are still considering their own versions of this disastrous policy that has proven (1) terrible for journalism and (2) even worse for the open web.

Recently, California Senator Steve Glazer offered up an alternative approach, SB 1327, that is getting a fair bit of attention. Instead of taxing links like all those other proposals, it would directly tax the digital advertising business model and use that tax to create a fund for journalism. Specifically, it would apply a tax on what it refers to (in a dystopian Orwellian way) as a “data extraction transaction.” It refers to the tax as a “data extraction mitigation fee,” and that tax would be used to provide credits for “qualified” media entities.

I’ve seen very mixed opinions on this. It’s not surprising that some folks are embracing this as a potential path to funding journalism. Casey Newton described it as a “better way for platforms to fund journalism.”

Unlike the bargaining codes, this bill starts with the right question, which is how to fund more jobs in journalism. Its answer is to use tax credits, a time-tested form of public-private partnership. It structures those credits to incentivize small publishers and even freelance journalism just as much as it helps to support large, existing media companies.

And it does all of that without breaking the principles of the open internet.

And, I mean, when compared to link taxes, it is potentially marginally better (but also, with some very scary potential side effects). The always thoughtful Brandon Silverman (who created CrowdTangle and has worked to increase transparency from tech companies) also endorses the bill as “a potential path forward.”

It’s a simple bill designed to help revive local journalism. Instead of complicated usage-based mechanisms, this approach is very straightforward. It’s an online advertising tax that funds tax credits to support education and journalism. In this case, it’s a 7.25% ad tax (matching the state’s sales and use tax rate) on companies with more than $2.5 billion in revenue.

And here’s the rub: it would raise more than $500 million.

That’s every year.

To put it in context, the single largest philanthropic commitment to local news in the U.S. was the MacArthur announcement I mentioned at the start of this post. That funding represents $100 million a year and is spread across the entire country. This would be 5x that number, would grow over time, has no end date, and is just for California. Of course, as I understand it, some of this money would have to go to the general fund and be directed towards education in the state…that’s also a great use of the funds and there would still be an enormous amount left for news (we’ll know more on these exact numbers as more official analysis is completed).

But that is a staggering amount of money and a game-changing amount of potential funding for news in the state. And it’s something that could easily be replicated across the country.
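To make the mechanics of the quoted projection concrete: the bill, as described, applies a flat 7.25% tax to in-scope advertising revenue, but only for companies above a $2.5 billion total-revenue threshold. Here is a minimal sketch of that math; the rate and threshold come from the description above, while the company names and revenue figures are purely hypothetical placeholders chosen to show how the pieces interact:

```python
# Illustrative sketch of the SB 1327 funding math as described above.
# The 7.25% rate and $2.5B threshold are from the bill as quoted;
# all company revenue figures below are hypothetical.
AD_TAX_RATE = 0.0725          # matches California's sales and use tax rate
REVENUE_THRESHOLD = 2.5e9     # only firms above $2.5B total revenue are taxed

def sb1327_tax(companies):
    """Sum the ad tax owed by companies over the revenue threshold.

    `companies` maps a name to (total_revenue, taxable_ad_revenue).
    """
    return sum(
        ad_rev * AD_TAX_RATE
        for total_rev, ad_rev in companies.values()
        if total_rev > REVENUE_THRESHOLD
    )

# Hypothetical numbers, chosen only to show the mechanics:
companies = {
    "BigAdCo":   (300e9, 5e9),   # large platform, $5B taxable ad revenue
    "MidAdCo":   (10e9,  2e9),
    "SmallAdCo": (1e9,   0.5e9), # under the $2.5B threshold: untaxed
}
print(f"${sb1327_tax(companies) / 1e6:.1f}M raised")
```

The point the sketch makes is that the revenue projection depends entirely on how much taxable ad revenue the handful of over-threshold companies report, which is also why the official analysis mentioned above matters for pinning down the real numbers.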

But I tend to agree much more with journalism professor Jeff Jarvis who highlights the fundamental problems of the bill and the framework it creates. As I’ve pointed out with link taxes, the oft-ignored point of a tax on something is to get less of it. You tax something bad because that tax decreases how much of it is out there. And, as Jarvis points out here, this is basically a tax on information:

Data are information and information is knowledge. To demonize and tax the collection of information should be abhorrent in an enlightened society. His rhetoric at moral-panic pitch sets a perilous precedent.

Furthermore, Jarvis rightly points out that Glazer’s bill treats it as something unique when users give their attention to internet companies, but explicitly carves out cases where users give their attention to other types of media companies. This sets up a problematic two-tiered system for when attention gets taxed and when it doesn’t:

He argues that he is taxing a barter exchange users make when they give data to internet platforms and receive free content in return. Well then, shouldn’t that tax apply to the exchange we all make when we give our valuable attention to TV and radio and much of the web in exchange for free content? But the bill exempts news media.

Indeed, the entire framing of the bill seems to suggest that data and advertising is a sort of “pollution,” that needs to be taxed in order to minimize it. And that seems particularly troublesome.

As Jarvis also notes, the true beneficiaries of a law like this would still be those rapacious hedge funds that have bought up a bunch of news orgs:

The hedge funds that now own 18 of the state’s top 25 newspapers — the hedge funds that are ruining journalism in California and across America — will benefit. They should not receive a penny. If anyone’s cash flow should be taxed, if anyone should be punished for the state of news today, it is them. Though the money is intended to go to supporting reporters, money is fungible and it will doubtless support hedge funds’ bottom lines more than journalists.

Indeed, the structure of the bill is one that will continue to benefit the failed news organizations, rather than incentivizing newer, better news organizations. That is the problem with all of these approaches, which assume that the answer must be to prop up the businesses that failed to innovate, rather than creating better incentives for more innovative approaches.

Google has warned that, if the bill passed, it would likely stop funding a bunch of other news initiatives that it has funded for years. This shouldn’t be surprising. If Google has already been funneling a ton of money into news initiatives, and then the California government is forcing them to direct hundreds of millions of dollars to its preferred news initiative, it would make sense that the company would drop its other programs and redirect the money to this one.

And, again, that highlights the problematic nature of this whole setup. It’s based on having the government decide who should be taxed and who gets funded. And when it comes to journalism, we should be pretty worried about the government picking and choosing winners and losers. Because that raises serious First Amendment issues and is very prone to just supporting news organizations that treat the deciding politicians nicely, rather than those that do deep investigative reporting and expose corruption and malfeasance.

Not surprisingly, Glazer did not take Google’s announcement well. He obnoxiously declared, “When people asks [sic] who is in charge of protecting our democracy and independent news— now you know.”


But, if the alternative is that the California legislature gets to pick and choose who “protects independent news,” I’m not sure that’s any better.

Honestly, if Glazer didn’t think that his plan would lead Google (and Meta) to pull the money they already put into funding journalists as duplicative, what was he even thinking?

And I say this as someone who could conceivably benefit from this bill. But I don’t trust the California legislature not to play favorites.

A few years back, I visited some elected California legislators to talk about a bunch of policy-related issues. My first meeting with a California state senator set the tone. He asked me if I had heard about a new committee he had set up, and I told him I had. He then said he noticed that I had not reported on that committee. I pointed out that the mere creation of a committee didn’t seem all that newsworthy, but when the committee did something that I thought was worth covering, I would then write about it.

His response was kind of chilling and has stuck with me for years: “well, if you’re not willing to write about what I’m doing, why should I even listen to you?”

It was a demand for political quid pro quo, which is not something we do here at Techdirt.

But I fear that a bill like Glazer’s effectively makes this mandatory. Journalism orgs will need to scratch the California government’s back to get access to these funds.

There are all sorts of reasons why tech companies should consider funding journalism. I think their desire for high-quality data for training AI is a good one, for example. But having the state step in and set the rules seems prone to all sorts of corruption.

Journalism needs new business models. We’re all experimenting all the time with different ideas (and if you’d like to help, there are lots of ways to support us). But we should be pretty wary of governments stepping in with half-baked solutions that could distort the overall world of journalism and the open internet.

Filed Under: business models, california, data extraction tax, data tax, innovation, journalism, sb 1327, steve glazer
Companies: google, meta, news corp

FTC Bans Non-Competes, Sparks Instant Lawsuit: The War For Worker Freedom

from the stopping-indentured-servitude-is-a-major-question dept

This is a frustrating article to write. The FTC has come out with a very good and important policy ruling, but I’m not sure it has the authority to do so. The legal challenge (that was filed basically seconds after the rule came out) could do way more damage not just to some fundamental parts of the administrative state, but to the very underlying policy that the FTC is trying to enact: protecting the rights of workers to switch jobs and not be effectively tied to an employer in modern-day indentured servitude with no realistic ability to leave.

All the way back in 2007, I wrote about how non-competes were the DRM of human capital. They were an artificial manner of restricting a basic freedom, and one that served no real purpose other than to make everything worse. As I discussed in that post, multiple studies done over the previous couple of decades had more or less shown that non-competes are a tremendous drag on innovation, to the point that some argue (strongly, with data) that Silicon Valley would not be Silicon Valley if not for the fact that California has deemed non-competes unenforceable.

The evidence of non-competes being harmful to the market, to consumers, and to innovation is overwhelming. It’s not difficult to understand why. Studies have shown that locking up information tends to be harmful to innovation. The big, important, innovative breakthroughs happen when information flows freely throughout an industry, allowing different perspectives to be brought into the process. Over and over again, it’s been shown that those big breakthroughs come when information is shared and multiple companies are trying to tackle the underlying problem.

But you don’t want companies colluding. Instead, it’s much better to simply ban non-competes, as it allows workers to switch jobs. This allows for more of a free flow of information between companies, which contributes to important innovations, rather than stagnant groupthink. The non-competes act as a barrier to the free flow of information, which holds back innovation.

They’re really bad. It’s why I’ve long supported states following California’s lead in making them unenforceable.

And, of course, once more companies realized the DRM-ish nature of non-competes, they started using them for more and more evil purposes. This included, somewhat infamously, fast food workers being forced to sign non-competes. Whatever (weak) justification there might be for higher-end knowledge workers to sign non-competes, the idea of using them for low-end jobs is pure nonsense.

Non-competes should be banned.

But, when the FTC proposed banning non-competes last year, I saw it as a mixed bag. I 100% support the policy goal. Non-competes are actively harmful and should not be allowed. But (1) I’m not convinced the FTC actually has the authority to ban them across the board. That should be Congress’ job. And, (2) with the courts the way they are today, there’s a very high likelihood that any case challenging such an FTC rule would not just get tossed, but that the FTC may have its existing authority trimmed back even further.

Yesterday, the FTC issued its final rule on non-competes. The rule bans all new non-competes and voids most existing non-competes, with the one exception being existing non-competes for senior executives (those making over $151,164 and who are in “policy-making positions”).

The rule is 570 pages long, with much of it trying to make the argument for why the FTC actually has this authority. And all those arguments are going to be put to the test. Very shortly after the new rule dropped (long before anyone could have possibly read the 570 pages), a Texas-based tax services company, Ryan LLC, filed a lawsuit.

The timing, the location, and the lawyers all suggest this was clearly planned out. The case was filed in Northern Texas. It was not, as many people assumed, assigned to Judge Matthew Kacsmaryk, the favorite of judge-shoppers seeking national injunctions. Instead, it went to Judge Ada Brown. The law firm filing the case is Gibson Dunn, which is one of the law firms you choose when you’re planning to go to the Supreme Court. One of the lawyers is Gene Scalia, son of late Supreme Court Justice Antonin Scalia.

Also notable, as pointed out by a lawyer on Bluesky, is that the General Counsel of Ryan LLC clerked for Samuel Alito (before Alito went to the Supreme Court) and is married to someone who clerked for both Justices Alito and Thomas. She also testified before the Senate in support of Justice Gorsuch during his nomination.

The actual lawsuit doesn’t just seek to block the rule. It is basically looking to destroy what limited authority the FTC has. The main crux of the argument is on more firm legal footing, claiming that this rule goes beyond the FTC’s rulemaking authority:

The Non-Compete Rule far exceeds the Commission’s authority under the FTC Act. The Commission’s claimed statutory authority—a provision allowing it “[f]rom time to time” to “classify corporations and . . . make rules and regulations,” 15 U.S.C. § 46(g)—authorizes only procedural rules, as the Commission itself recognized for decades. This is confirmed by, among other statutory features, Congress’s decision to adopt special procedures for the substantive rulemaking authority it did grant the Commission, for rules on “unfair or deceptive acts or practices.”

I wish this weren’t the case, because I do think non-competes should be banned, but this argument may be correct. Congress should make this decision, not the FTC.

However, the rest of the complaint is pretty far out there. It’s making a “major questions doctrine” argument here, which has become a recent favorite among the folks looking to tear down the administrative state. It’s not worth going deep on this, other than to say that this doctrine suggests that if an agency is claiming authority over “major questions,” it has to show that it has clear (and clearly articulated) authority to do so from Congress.

Is stopping the local Subway from banning sandwich makers from working at the McDonald’s down the street a “major question”? Well, the lawsuit insists that it is.

Moreover, even if Congress did grant the Commission authority to promulgate some substantive unfair-competition rules, it did not invest the Commission with authority to decide the major question of whether non-compete agreements are categorically unfair and anticompetitive, a matter affecting tens of millions of workers, millions of employers, and billions of dollars in economic productivity.

And then the complaint takes its big swing: the whole FTC is unconstitutionally structured.

Compounding the constitutional problems, the Commission itself is unconstitutionally structured because it is insulated from presidential oversight. The Constitution vests the Executive Power in the President, not the Commission or its Commissioners. Yet the FTC Act insulates the Commissioners from presidential control by restricting the President’s ability to remove them, shielding their actions from appropriate political accountability.

This is taking a direct shot at multiple parts of the administrative state, where Congress (for very good reasons!!!) set up some agencies to be independent agencies. They were set up to be independent to distance them from political pressure (and culture war nonsense). While the President can nominate commissioners or directors, they have limited power over how those independent agencies operate.

This lawsuit is basically attempting to say that all independent agencies are unconstitutional. This is one hell of a claim, and would do some pretty serious damage to the ability of the US government to function. Things that weren’t that political before would become political, and it would be a pretty big mess.

But that’s what Ryan LLC (or, really, the lawyers planning this all out) are gunning for.

The announcement that Ryan LLC put out is also… just ridiculous.

“For more than three decades, Ryan has served as a champion for empowering business leaders to reinvest the tax savings our firm has recovered to transform their businesses,” the firm’s Chairman and CEO, G. Brint Ryan, said in a statement. “Just as Ryan ensures companies pay only the tax they owe, we stand firm in our commitment to serve the rightful interest of every company to retain its proprietary formulas for success taught in good faith to its own employees.”

Um. That makes no sense. The FTC ruling does not outlaw NDAs or trade secret laws. Those are what protect “proprietary formulas.” So, the concern that Mr. Ryan is talking about here is wholly unrelated to the rule.

Last spring, Ryan “sought to dissuade” the FTC from imposing the new rule by submitting a 54-page public comment against it. In the comment, Ryan called non-compete agreements “an important tool for firms to protect their IP and foster innovation,” saying that without them, firms could hire away a competitor’s employees just to gain insights into their competitor’s intellectual property. Ryan added that the rule would inhibit firms from investing in that IP in the first place, “resulting in a less innovative economy.”

Again, almost everything said here is bullshit. They can still use NDAs (and IP laws) to protect their “IP.” That’s got nothing to do with non-competes.

As for the claim that it will result in “a less innovative economy,” I’ll just point to the fact that California remains the most innovative economy in the US and hasn’t allowed non-competes. Every single study on non-competes has shown that they hinder innovation. So Ryan LLC and its CEO are full of shit, but that shouldn’t be much of a surprise.

Anyway, this whole thing is a stupid mess. Non-competes should be banned because they’re awful and bad for innovation and employee freedom. But it should be Congress banning them, not the FTC. But, now that the FTC has moved forward with this rule, it’s facing an obviously planned out lawsuit, filed in the Northern District of Texas with friendly judges, and the 5th Circuit appeals court ready to bless any nonsense you can think of.

And, of course, it’s happening at a time when the Supreme Court majority has made it pretty clear that dismantling the entire administrative state is something it looks forward to doing. This means there’s a pretty clear path in the courts for the FTC to lose here, and lose big time. One hopes that if the courts are leaning in this direction, they would simply strike down this rule, rather than effectively striking down the FTC itself. But these days, who the fuck knows how these cases will go.

And even just on the issue of non-competes, my fear is that this effort sets back the entire momentum behind banning them. Assuming the courts strip the FTC rule, many will see it as open season on increasing non-competes, and the FTC would likely be stripped of any power to challenge the most egregious, anti-competitive ones.

Non-competes should be banned. But the end result of this rule could be that they end up being used more widely. And that would really suck.

Filed Under: administrative state, ftc, innovation, major questions doctrine, non-competes, section 5 authority, supreme court
Companies: ryan llc

Palworld Creator Loves That Others Are Trying To Clone The Game

from the best-pals dept

We’ve had several posts on the video game sensation that is Palworld in the past. Given that the game has been described by others as “Pokémon, but with guns”, we kicked things off both wondering if Nintendo was going to try to take some kind of misguided legal action on the game, while also pointing out that the game is an excellent case study in copyright’s idea/expression dichotomy. After all, the game does not do any direct copying of any Pokémon IP, but does draw obvious inspiration from some of the base ideas behind that IP. In fact, highlighting the dichotomy further was a mod that injected actual Pokémon IP into Palworld, which Nintendo then managed to get taken down.

One of the things writers of this sort of content like me tend to fret about, however, is how often rank hypocrisy suddenly shows up among subjects such as the creators behind Palworld. It’s not uncommon to see a content creator attempt to go after folks doing to them exactly what the creator did in drawing inspiration from others. If you were worried the people behind Palworld would fall into this category, however, it appears very much that you were worried for nothing.

With the success of the game, it was only a matter of time before someone, or many someones, tried to cash in on its success by making similar games, or “clones.” PocketPair CEO Takuro Mizobe noticed this was happening with Palworld and reacted thusly.

“Tencent is already making a Palworld clone game!” PocketPair CEO Takuro Mizobe recently tweeted, according to a translation by Automaton. He seemed happy about it. “These are incredible times,” he wrote. Some initially interpreted Mizobe as being critical of these moves. An IGN story described him as accusing other companies of ripping off Palworld, a framing the CEO rejected.

“To ‘accuse’ someone of something, means to say they are doing something wrong,” Mizobe wrote in a follow-up tweet responding to the IGN story. “I don’t think what Tencent is doing is wrong. I’m proud that other companies want to make games like Palworld. The industry historically innovates when we borrow ideas from games we love. I’m surprised that many high-quality mobile games are already in development.”

No going legal. No threats. Not even a hint of a complaint. Instead, Mizobe acknowledged what we all already know to be true: video games, like other forms of culture, are and have always been built on what came before them. If the success of Palworld spawns similar games after the fact, that’s not only not a problem, it’s a good thing for gaming culture. Hell, Mizobe even went so far as to praise some of these games’ quality.

Imagine Nintendo doing anything like this. You simply can’t. In fact, when Palworld was released, Nintendo made some vague comments about looking into the game to see if it wanted to pursue any legal action. You know, the exact opposite of the route Mizobe took.

Who knows if these new Palworld clones that Tencent and others are apparently developing will ever see the light of day. We won’t know if they’re actually rip-offs until they’re out, but Mizobe doesn’t seem to mind either way.

And why should he? I imagine he’s far too busy counting all the money his company is making by focusing on making a successful game rather than wringing his hands over some clones that may or may not ever gain any traction.

Filed Under: copying, innovation, inspiration, palworld, takuro mizobe, video games
Companies: nintendo, pocketpair, pokemon, tencent

NYC Officials Are Mad Because Journalists Pointed Out The City’s New ‘AI’ Chatbot Tells People To Break The Law

from the I'm-sorry-I-can't-do-that,-Dave dept

Fri, Apr 5th 2024 05:29am - Karl Bode

Countless sectors are rushing to implement “AI” (undercooked large language models) without understanding how they work — or making sure they work. The result has been an ugly comedy of errors stretching from journalism to mental health care thanks to greed, laziness, computer-generated errors, plagiarism, and fabulism.

NYC’s government is apparently no exception. The city recently unveiled a new “AI” powered chatbot to help answer questions about city governance. But an investigation by The Markup found that the automated assistant not only doled out incorrect information, it routinely advises city residents to break the law across a wide variety of subjects, from landlord agreements to labor issues:

“The bot said it was fine to take workers’ tips (wrong, although they sometimes can count tips toward minimum wage requirements) and that there were no regulations on informing staff about scheduling changes (also wrong). It didn’t do better with more specific industries, suggesting it was OK to conceal funeral service prices, for example, which the Federal Trade Commission has outlawed. Similar errors appeared when the questions were asked in other languages, The Markup found.”

Folks over on Bluesky had a lot of fun testing the bot out, and finding that it routinely provided bizarre, false, and sometimes illegal results:

There’s really no reality where this sloppily-implemented bullshit machine should remain operational, either ethically or legally. But when pressed about it, NYC Mayor Eric Adams stated the system will remain online, albeit with a warning that the system “may occasionally produce incorrect, harmful or biased content.”

But one administration official complained about the fact that journalists pointed out the whole error-prone mess in the first place, insisting they should have worked privately with the administration to fix the problems caused by the city:

If you can’t see the embedded post, it’s reporter Joshua Friedman reporting:

At NYC mayor Eric Adams’s press conference, top mayoral advisor Ingrid Lewis-Martin criticizes the media for publishing stories about the city’s new AI-powered chatbot that recommends illegal behavior. She says reporters could have approached the mayor’s office quietly and worked with them to fix it.

That’s not how journalism works. That’s not how anything works. Everybody’s so bedazzled by new tech (or keen on making money from the initial hype cycle) that they’re just rushing toward the trough without thinking. As a result, uncooked and dangerous automation is being layered on top of systems that weren’t working very well in the first place (see: journalism, health care, government).

The city is rushing to implement “AI” elsewhere as well, such as with a new weapon scanning system that tests have found has an 85 percent false positive rate. All of this is before you even touch on the fact that most early adopters of these systems see them as a wonderful way to cut corners and undermine already mistreated and underpaid labor (again see: journalism, health care).

There are lessons here you’d think would have been learned in the wake of previous tech hype and innovation cycles (cryptocurrency, NFTs, “full self driving,” etc.). Namely, innovation is great and all, but a rush to embrace innovation for innovation’s sake due to greed or incurious bedazzlement generally doesn’t work out well for anybody (except maybe early VC hype wave speculators).

Filed Under: automation, eric adams, hype, ingrid lewis-martin, innovation, language learning models, nyc, tech

As Walled Culture has often noted, the process of framing new copyright laws is tilted against the public in multiple ways. And on the rare occasions when a government makes some mild concession to anyone outside the copyright industry, the latter invariably rolls out its highly-effective lobbying machine to fight against such measures. It’s happening again in the world of AI. A post on the Knowledge Rights 21 site points to:

a U-turn by the British Government in February 2023, abandoning its prior commitment to introduce a broad copyright exception for text and data mining that would not have made an artificial distinction between non-commercial and commercial uses. Given that applied research so often bridges these two, treating them differently risks simply chilling innovative knowledge transfer and public institutions working with the private sector.

Unfortunately, and in the face of significant lobbying from the creative industries (something we see also in Washington, Tokyo and Brussels), the UK government moved away from clarifying language to support the development of AI in the UK.

In an attempt to undo some of the damage caused by the UK government’s retrograde move, a broad range of organizations, including Knowledge Rights 21, Creative Commons, and Wikimedia UK, have issued a public statement calling on the UK government to safeguard AI innovation as it draws up its new code of practice on copyright and AI. The statement points out that copyright is a serious threat to the development of AI in the UK, and that:

Whilst questions have arisen in the past which consider copyright implications in relation to new technologies, this is the first time that such debate risks entirely halting the development of a new technology.

The statement’s key point is as follows:

AI relies on analysing large amounts of data. Large-scale machine learning, in particular, must be trained on vast amounts of data in order to function correctly, safely and without bias. Safety is critical, as highlighted in the [recently agreed] Bletchley Declaration. In order to achieve the necessary scale, AI developers need to be able to use the data they have lawful access to, such as data that is made freely available to view on the open web or to which they already have access to by agreement.

Any restriction on the use of such data or disproportionate legal requirements will negatively impact on the development of AI, not only inhibiting the development of large-scale AI in the UK but exacerbating further pre-existing issues caused by unequal access to data.

The organizations behind the statement note that restrictions imposed by copyright would create barriers to entry and raise costs for new entrants. There would also be serious knock-on effects:

Text and data mining techniques are necessary to analyse large volumes of content, often using AI, to detect patterns and generate insights, without needing to manually read everything. Such analysis is regularly needed across all areas of our society and economy, from healthcare to marketing, climate research to finance.

The statement concludes by making a number of recommendations to the UK government in order to ensure that copyright does not stifle the development of AI in the UK. The key ones concern access to the data sets that are vital for training AI and carrying out text and data mining. The organizations ask that the UK’s Code of Practice:

Clarifies that access to broad and varied data sets that are publicly available online remain available for analysis, including text and data mining, without the need for licensing.

Recognises that even without an explicit commercial text and data mining exception, exceptions and limits on copyright law exist that would permit text and data mining for commercial purposes.

Those are pretty minimal demands, but we can be sure that the copyright industry will fight them tooth and nail. For the companies involved, keeping everything involving copyright under their tight control is far more important than nurturing an exciting new technology with potentially huge benefits for everyone.

Follow me @glynmoody on Mastodon. Originally posted to Walled Culture.

Filed Under: ai, copyright, copyright exceptions, innovation, text and data mining, training, uk

NLRB Files Complaint Against Ridiculously Overbroad Non-Compete As An Unfair Labor Practice

from the non-competes-are-human-drm dept

We’ve been on this soapbox for over 15 years now. There are reams upon reams of evidence that the single greatest reason California became the innovation hub it is today (in both Silicon Valley and Hollywood) was that it effectively outlawed non-compete agreements in the late 19th century. I have long been a vocal advocate for outlawing all non-compete agreements. The benefit is clear and the data is unquestionable. Non-competes are not just a tax on labor; they’re a huge and damaging tax on innovation.

I was cautiously happy earlier this year to see that the Biden administration seems to agree, with the FTC proposing to ban non-competes entirely. My concern, though, is that this might go beyond the authority of the FTC itself. I’d much rather that Congress do this and pass a law instead.

In the meantime, though, the National Labor Relations Board (NLRB) seems to be taking a different path. Back in May, the NLRB’s General Counsel released a memo saying that “overbroad” non-competes could be seen as an unfair labor practice.

“Non-compete provisions reasonably tend to chill employees in the exercise of Section 7 rights when the provisions could reasonably be construed by employees to deny them the ability to quit or change jobs by cutting off their access to other employment opportunities that they are qualified for based on their experience, aptitudes, and preferences as to type and location of work,” said General Counsel Abruzzo. “This denial of access to employment opportunities interferes with workers engaging in Section 7 activity in a number of ways—for example, workers know that they will have greater difficulty replacing their lost income if they are discharged for exercising their statutory rights to organize and act together to improve working conditions; their bargaining power is undermined in the context of lockouts, strikes and other labor disputes; and their social ties and solidarity leading to improvements in working conditions at workplaces are lost as they scatter to the four winds.”

And, now, the NLRB has acted on this. A new NLRB complaint against a spa in Ohio has charged the spa with unfair labor practices for its non-compete agreements. The complaint against Juvly Aesthetics quotes the company’s non-compete agreement extensively (and it’s quite a non-compete, as beyond just barring going to work for a competitor, it also includes a non-disparagement clause, and further bars an employee who does go somewhere else from “soliciting” clients to follow them, including barring them from even responding to client questions about where they’re working now). I mean… what the hell is this:

As part of your initial employment documentation, you signed a nonsolicitation and nondisparagement agreement that prevents you from communicating with the public, clients, or employees of Juvly/Contour Clinic beyond your termination, whether voluntary or involuntary. If you fail to comply with the following requirements, your actions will be considered a solicitation and/or tortious interference in which you will be liable pursuant to the above solicitation clause and any additional damages incurred by Juvly/Contour Clinic.

Do not contact any clients or notify them of your departure from Juvly/Contour Clinic.

Do not respond to any client questions regarding your employment status. You may only refer them to Juvly.com to book an appointment.

Should you choose to pursue work with any prior Juvly/Contour employee, retain all communications as this is considered a solicitation, their destruction is prohibited by law.

Do not discuss any information with any individual regarding your employment at Juvly/Contour Clinic.

Do not make any public statements to any party for any reason regarding Juvly employment, business practices, or treatment information.

That… pretty obviously goes way beyond even a typical non-compete, so I can see why the NLRB chose to go after such an egregious form of a non-compete agreement. It’s unclear if this will lead to further cases against more typical non-competes, as good as that would be for the economy.

Still, it’s good to see at least a little more recognition of how problematic non-compete agreements can be.

Filed Under: ftc, innovation, nlrb, non-compete agreements, non-disparagement, unfair labor practices
Companies: contour clinic, juvly

Move Over, Software Developers – In The Name Of Cybersecurity, The Government Wants To Drive

from the unconstitutional-camel-noses dept

Earlier this year the White House put out a document articulating a National Cybersecurity Strategy. It articulates five “pillars,” or high-level focus areas where the government should concentrate its efforts to strengthen the nation’s resilience and defense against cyberattacks: (1) Defend Critical Infrastructure, (2) Disrupt and Dismantle Threat Actors, (3) Shape Market Forces to Drive Security and Resilience, (4) Invest in a Resilient Future, and (5) Forge International Partnerships to Pursue Shared Goals. Each pillar also includes several sub-priorities and objectives.

It is a seminal document, and one that has and will continue to spawn much discussion. For the most part what it calls for is too high level to be particularly controversial. It may even be too high level to be all that useful, although there can be value in distilling any sort of policy priorities into words. After all, even if what the government calls for may seem obvious (like “defending critical infrastructure,” which of course we’d all expect it to do), going to the trouble to actually articulate it as a policy priority provides a roadmap for more constructive efforts to follow and may help to marshal resources. It can also help ensure that any more tangible policy efforts the government directly engages in are not at cross-purposes with what it wants to accomplish overall.

Which is important because what the rest of this post discusses is how the strategy document itself reveals that there may already be some incoherence among the government’s policy priorities. In particular, it lists as one of the sub-priorities an objective with troubling implications: imposing liability on software developers. This priority is described in a few paragraphs in the section entitled, “Strategic Objective 3.3: Shift Liability for Insecure Software Products and Services,” but the essence is mostly captured in this one:

The Administration will work with Congress and the private sector to develop legislation establishing liability for software products and services. Any such legislation should prevent manufacturers and software publishers with market power from fully disclaiming liability by contract, and establish higher standards of care for software in specific high-risk scenarios. To begin to shape standards of care for secure software development, the Administration will drive the development of an adaptable safe harbor framework to shield from liability companies that securely develop and maintain their software products and services. This safe harbor will draw from current best practices for secure software development, such as the NIST Secure Software Development Framework. It also must evolve over time, incorporating new tools for secure software development, software transparency, and vulnerability discovery.

Despite some equivocating language, at its essence it is no small thing that the White House proposes: legislation instructing people on how to code their software and requiring adherence to those instructions. And such a proposal raises a number of concerns, both in the method the government would use to prescribe how software is coded and in the dubious constitutionality of it being able to make such demands at all. While with this strategy document itself the government is not yet prescribing a specific way to code software, it contemplates that someday it could. And it does so apparently without recognizing how profoundly the ability to make such demands would shape software development – and not necessarily for the better.

In terms of method, while the government isn’t necessarily suggesting that a regulator enforce requirements for software code, what it does propose is far from a light touch: allowing enforcement of coding requirements via liability – or, in other words, the ability of people to sue if software turns out to be vulnerable. But regulation via liability is still profoundly heavy-handed, perhaps even more so than regulatory oversight would be. For instance, instead of a single regulator working from discrete criteria, there will be myriad plaintiffs and courts interpreting the language however they understand it. Furthermore, litigation is notoriously expensive, even for a single case, let alone with potentially all those myriad plaintiffs. We have seen all too many innovative companies obliterated by litigation, and seen how the mere threat of litigation can chill the investment needed to bring good new ideas into reality. This proposal seems to reflect a naïve expectation that litigation will only follow where truly deserved, but we know from history that such restraint is rarely the rule.

True, the government does contemplate there being some tuning to dull the edge of the regulatory knife, particularly through the use of safe harbors, such that there are defenses that could protect software developers from being drained dry by unmeritorious litigation threats. But while the concept of a safe harbor may be a nice idea, safe harbors are hardly a panacea, because we’ve also seen how, if you have to litigate whether they apply, there’s little point even if they do. In addition, even if it were possible to craft an adequately durable safe harbor, given the current appetite among policymakers to tear down the immunities and safe harbors we currently have, like Section 230 or the already porous DMCA, the assumption that policymakers will actually produce a sustainable liability regime with sufficiently strong defenses that is not prone to innovation-killing abuse is yet another unfortunately naïve expectation.

The way liability would attach under this proposal is also a big deal: through the creation of a duty of care for the software developer. (The cited paragraph refers to it as “standards of care,” but that phrasing implies a duty to adhere to them, and liability for when those standards are deviated from.) But concocting such a duty is problematic both practically and constitutionally, because at its core, what the government is threatening here is alarming: mandating how software is written. Not suggesting how software should ideally be written, nor enabling, encouraging, nor facilitating it to be written well, but instead using the force of law to demand how software be written.

It is so alarming because software is written, and it raises a significant First Amendment problem for the government to dictate how anything should be expressed, regardless of how correct or well-intentioned the government may be. Like a book or newspaper, software is expressed through language and expressive choices; there is not just one correct way to write a program that does something, but rather an infinite number of big and little structural and language decisions made along the way. But this proposal basically ignores the creative aspect of software development (indeed, software is even treated as eligible for copyright protection as an original work of authorship). Instead it treats software more like a defectively-made toaster than a book or newspaper, replacing the independent expressive judgment of the software developer with the government’s. Courts have also recognized the expressive quality of software, so it would be quite a sea change if the Constitution somehow did not apply to this particular form of expression. And such a change would have huge implications, because cybersecurity is not the only reason that the government keeps proposing to regulate software design. The White House proposal would seem to bless all these attempts, no matter how ill-advised or facially censorial, by not even contemplating the constitutional hurdles any legal regime to regulate software design would need to clear.

It would still need to clear those hurdles even if the government truly knew best, which is a big if, even here, and not just because the government may lack adequate or current expertise. The proposal does contemplate a multi-stakeholder process to develop best practices, and there is nothing wrong in general with the government taking on some sort of facilitating role to help illuminate what these practices are and making sure software developers are aware of them – it may even be a good idea. The issue is not that there may be no such thing as any best practices for software development – obviously there are. But they are not necessarily one-size-fits-all or static; a best practice may depend on context, and constantly need to evolve to address new vectors of attack. But a distant regulator, and one inherently in a reactive posture, may not understand the particular needs of a particular software program’s userbase, nor the evolving challenges facing the developer. Which is a big reason why requiring adherence to any particular practice through the force of law is problematic, because it can effectively require software developers to make their code the government’s way rather than what is ultimately the best way for them and their users. Or at least put them in the position of having to defend their choices, which up until now the Constitution had let them make freely. And which would amount to a huge, unprecedented burden that threatens to chill software development altogether.

Such chilling is not an outcome the government should want to invite, and indeed, according to the strategy document itself, does not want. The irony with the software liability proposal is that it is inherently out-of-step with the overall thrust of the rest of the document, and even with the third pillar it appears in, which proposes to foster better cybersecurity through the operation of more efficient markets. But imposing design liability would have the exact opposite effect on those markets. Even if well-resourced private entities (ex: large companies) might be able to find a way to persevere and navigate the regulatory requirements, small ones (including those potentially excluded from the stakeholder process establishing the requirements) may not be able to meet them, and individual people coding software are even less likely to.

The strategy document refers to liability only on developers with market power, but every software developer has market power, including those individuals who voluntarily contribute to open source software projects, which provide software users with more choices. But those continued contributions will be deterred if those who make them can be held liable for them. Ultimately software liability will result in fewer people writing code and consequently less software for the public to use. So far from making the software market work more efficiently through competitive pressure, imposing liability for software development will only remove options for consumers, and with them the competitive pressure the White House acknowledges is needed to prompt those who still produce software to do better. Meanwhile, those developers who remain will still be inhibited from innovating if that innovation can potentially put them out of compliance with whatever the law has so far managed to imagine.

Which raises another concern with the software liability proposal and how it undermines the rest of the otherwise reasonable strategy document. The fifth pillar the White House proposes is to “Forge International Partnerships to Pursue Shared Goals”:

The United States seeks a world where responsible state behavior in cyberspace is expected and rewarded and where irresponsible behavior is isolating and costly. To achieve this goal, we will continue to engage with countries working in opposition to our larger agenda on common problems while we build a broad coalition of nations working to maintain an open, free, global, interoperable, reliable, and secure Internet.

On its face, there is nothing wrong with this goal either, and it, too, may be a necessary one to effectively deal with what are generally global cybersecurity threats. But the EU is already moving ahead to empower bureaucratic agencies to decide how software should be written, yet without a First Amendment or equivalent understanding of the expressive interests such regulation might impact. Nor does there seem to be any meaningful understanding about how any such regulation will affect the entire software ecosystem, including open source, where authorship emerges from a community, rather than a private entity theoretically capable of accountability and compliance.

In fact, while the United States hasn’t yet actually specified requirements for design practices a software developer must comply with, the EU is already barreling down the path of prescriptive regulation over software, proposing a law that would task an agency to dictate what criteria software must comply with. (See this post by Bert Hubert for a helpful summary of its draft terms.) Like the White House, the EU confuses its stated goal of helping the software market work more efficiently with an attempt to control what can be in the market. For all the reasons that an attempt by the US stands to be counterproductive, so would EU efforts be, especially if born from a jurisdiction lacking a First Amendment or equivalent understanding of the expressive interests such regulation would impact. Thus it may turn out to be European bureaucrats that attempt to dictate the rules of the road for how software can be coded, but that means that it will be America’s job to try to prevent that damage, not double-down on it.

It is of course true that not everything software developers currently do is a good idea or even defensible. Some practices are dreadful and damaging. It isn’t wrong to be concerned about the collateral effects of ill-considered or sloppy coding practices or for the government to want to do something about it. But how regulators respond to these poor practices is just as important, if not more so, than that they respond, if they are going to make our digital environment better and more secure and not worse and less. There are a lot of good ideas in the strategy document for how to achieve this end, but imposing software design liability is not one of them.

Filed Under: 1st amendment, chilling effects, coding, computer security, cybersecurity, duty of care, innovation, liability, national cybersecurity strategy, software, standards of care, white house

Verizon Fails Again, Shutters Attempted Zoom Alternative BlueJeans After Paying $400 Million For It

from the heckuva-job,-brownie dept

Thu, Aug 10th 2023 05:29am - Karl Bode

Pretty much every time Verizon wanders outside of its core competencies (operating telecom networks, lobbying to hamstring competition, undermining the most basic of regulatory oversight), the telco amusingly falls flat on its face. It’s quite honestly starting to get a little weird.

Whether it’s the company’s Go90 video streaming platform, its video joint venture with RedBox, its news website Sugarstring (which you may recall tried to ban reporters from talking about surveillance or net neutrality), its app store, its “me too” VCAST apps, the billions wasted on Yahoo, the effort to run Tumblr into the ground, or any of a dozen other attempted pivots, Verizon has failed. Usually semi-spectacularly.

During peak COVID, Verizon spent somewhere around $400 million to acquire BlueJeans, which was pitched as a videoconferencing alternative to Zoom. But of course, in typical Verizon fashion, the app went nowhere, and in an email to users Verizon stated it will be shutting the service down August 31. In the email, Verizon paints the app nobody has heard of as “award winning”:

BlueJeans is an award-winning product that connects our customers around the world, but we have made this decision due to the changing market landscape.

These repeated failures by Verizon would be less of an issue if the company didn’t have such a long history of skimping on essential broadband network upgrades. Whether it’s New York, New Jersey, or Pennsylvania, the telco has a long history of taking tax breaks, subsidies, or regulatory favors in exchange for promised DSL to fiber network upgrades that somehow never fully materialize.

Meanwhile, with the other hand, Verizon adores simply setting vast swaths of money on fire to please Wall Street’s myopic lust for “growth for growth’s sake” projects, even if execs routinely lack the chops to manage any of the efforts. With Verizon now facing major financial remediation headaches due to all the lead in its cables, much of that cash would probably come in handy.

Despite endless pretense, telecoms can’t innovate. At least outside of finding creative new ways to over-charge captive customers or undermining government oversight. It’s not clear how many examples we need before Verizon and the folks pouring money into these doomed projects figure that out.

Filed Under: app, dsl, failed pivot, fiber, innovation, telecom, videoconferencing
Companies: bluejeans, verizon

Meta’s Threads Didn’t Launch In The EU: Is That Showing The Failure Or Success Of The Digital Markets Act?

from the permissioned-innovation dept

As you almost certainly know, earlier this month, Meta released Threads, its Twitter-like microblogging service. There are rumors that the company rushed the launch, pushing it up a few weeks to try to capitalize on the latest nonsense at Twitter. And, it seemed to work (to some extent) in that the company was able to quickly scale to 100 million signups in just a few days. Of course, it had help. This was all piggybacked on the Instagram social graph, which has over 2 billion users.

Still, one thing that likely held back even wider adoption was that Meta barred EU residents from using Threads. While many people assumed that this was due to a lack of GDPR compliance (since the GDPR is the EU law many Americans are most familiar with), it was pretty clear from the beginning that the actual culprit was the upcoming DMA, or Digital Markets Act.

While we’ve talked a lot about the DSA, or Digital Services Act, we haven’t talked quite as much about the DMA, which is a similar kind of law, but focused on online “marketplaces.” Whereas the DSA designates some platforms to face stricter rules by declaring them VLOPs (Very Large Online Platforms), under the DMA the similar designation is “Gatekeepers,” and as a July 4th present, the EU named basically all the US “big tech” companies as gatekeepers: Amazon, Apple, Google, Meta, and Microsoft. TikTok and Samsung were also named.

This means that additional rules regarding how those companies can launch new products will come into effect shortly, including blocking the ability to leverage data from one product to another, which seems to be what Meta is most concerned about.

I’ve spoken to a few EU legal and policy experts who say it’s not at all clear that launching Threads in the EU would, in any way, violate the DMA, yet Meta keeps harping on “regulatory uncertainty” as the reason why it hasn’t. One legal expert I spoke to on background noted that they thought this was really just a way for Meta to highlight some of the negative consequences of the DMA: letting the EU know that its citizens are now second class citizens for new services.

Of course, EU policy makers are trying to spin all this as a good thing.

“The fact that Threads is still not available for EU citizens shows that EU regulation works,” said Christel Schaldemose, a Danish lawmaker, according to Politico last week. “I hope Meta will make sure all rules are covered and complied with before opening up for EU citizens.”

And, I guess your view on whether or not it’s working for EU citizens, or punishing EU citizens, truly depends on (1) if you think having access to new services is important, and (2) if you think that complying with things like the DMA will actually do anything useful for folks in the EU.

Meanwhile, Meta is apparently going so far as to block EU users from accessing Threads while using a VPN to get around the geoblocking.

Don’t try to sign up for Threads through a virtual private network (VPN) if you live in Europe. Meta has confirmed that it’s blocking European Union users from accessing the new social network through a VPN. As consultant Matt Navarra explains, content, notifications and profiles won’t load properly. Some users say they can use Threads without a VPN if they’d previously signed up with one, but you may not want to count on that loophole working.

In a statement, Meta says it’s taking “further measures” to stop people from accessing Threads in European countries where the app is unavailable. The company nonetheless says Europe remains a “very important market” and that it hopes to expand availability in the future.

The fact that so many EU users were using VPNs to access Threads — at least enough of them to catch Meta’s attention — certainly suggests they felt that it was more important for them to be able to access this new service than to be “protected” by whatever requirements Meta is expected to put in place to comply with the DMA.

While I do think there are some interesting aspects to the DMA (especially around interoperability requirements, though it remains to be seen how well those will actually work), this seems to once again highlight the EU approach to tech regulation: restricting innovation until the bureaucrats say it’s okay. You can argue that leads to safer outcomes, but it’s hard to see how it will lead to better overall outcomes, as it will slow innovation down and leave many in the EU cut off from services and features the rest of the world enjoys.

Filed Under: competition, data sharing, dma, eu, gatekeepers, innovation
Companies: instagram, meta, threads