language learning models – Techdirt

Stories filed under: "language learning models"

Yet Another Company Caught Using ‘AI’ To Quietly Create Fake Journalists And Fake Journalism

from the the-'AI'-journalism-revolution-is-going-great,-thanks-for-asking dept

While “AI” (large language models) certainly could help journalism, the fail-upward brunchlords in charge of most modern media outlets instead see the technology as a way to cut corners, undermine labor, and badly automate low-quality, ultra-low effort, SEO-chasing clickbait.

As a result we’ve seen an endless number of scandals where companies use LLMs to create entirely fake journalists and hollow journalism, usually without informing their staff or their readership. When they’re caught (as we saw with CNET, Gannett, or Sports Illustrated), they usually pretend to be concerned, throw their AI partner under the bus, then get right back to doing it.

This week another such venture, Hoodline, is facing similar criticism. Hoodline was created in 2014 to fill the growing void left by the death of quality local news (aka “news deserts”). Executives there apparently thought the best way to do that was to create fake “AI” local journalists to write a lot of low-quality aggregated crap — without adequately informing readers about it:

“…until recently, the site had further blurred the line between reality and illusion. Screenshots captured last year by the Internet Archive and local outlet Gazetteer showed Hoodline had further embellished its AI author bylines with what appeared to be AI-generated headshots resembling real people and fake biographical information.

“Nina is a long-time writer and a Bay Area Native who writes about good food & delicious drink, tantalizing tech & bustling business,” one biography claimed.”

We’ve noted repeatedly how the death of local news is a real issue. Gone are most local newspapers, and in their place has been seeded a rotating crop of right-wing propagandists pretending to be local TV news (like Sinclair Broadcasting) and fake “pink slime” propaganda rags (also coming almost exclusively from the right wing) pretending to be local newspapers.

This has not only resulted in a more ignorant and divided public, but has had a measurable impact on electoral outcomes. It also results in far fewer real gumshoe journalists covering local courts and city hall proceedings, something corrupt officials adore.

And while trying to fix this problem is a noble and thankless calling, Hoodline is clearly going about it the wrong way. They’re not using LLMs to genuinely improve things; they’re using LLMs to create a sort of simulacrum of real local journalism and hoping nobody can tell the difference. Instead of helping journalism, that undermines the public’s already shaky trust in a struggling sector:

“Employing AI to help local journalists save time so they can focus on doing more in-depth investigations is qualitatively different from churning out a high amount of low-quality stories that do nothing to provide people with timely and relevant information about what is happening in their community, or that provides them with a better understanding of how the things happening around them will end up affecting them,” [Felix] Simon told CNN.

Again, there are a bunch of things LLMs can help journalists with: editing, transcription, digging up court documents, data analysis, easing administrative burdens, etc. But then, of course, that value proposition has to be weighed against the immense water and power suck of AI during a period when increased climate destabilization is putting unprecedented strain on long-neglected infrastructure.

And is that kind of a value proposition worth it if what’s being created is just derivative dreck?

If you look at what Hoodline is producing (here’s our local version for Seattle), the content exclusively aggregates press releases and reporting from elsewhere, without much if any original reporting or intelligent analysis. They’re effectively injecting themselves into the news bloodstream to redirect ad revenue that could go to real reporting outlets toward their shoddy, badly automated simulacrum.

And given that companies like Google aren’t willing to use their untold billions to actually maintain quality control over Google News and Google search, it’s easier than ever for these kinds of pseudo-news outlets (and far less ethical outright propaganda and spam merchants) to find success. In many cases, far easier than it is for genuine journalists, who can’t even get Google to index their websites.

Hoodline has since shifted things around slightly and now uses a small “AI” badge to indicate that an article was written with the help of LLMs. But how much help, and whether the authors are actually real people with any meaningful understanding of the local events they’re covering, remains decidedly unclear.

Filed Under: ai, automation, clickbait, journalism, language learning models, media, reporting, simulacrum, spam
Companies: hoodline

CEO: ‘AI’ Power Drain Could Cause Data Centers To Run Out Of Power Within Two Years

from the I'm-sorry-I-can't-do-that,-Dave dept

Fri, May 10th 2024 05:24am - Karl Bode

By now it’s been made fairly clear that the bedazzling wonderment that is “AI” doesn’t come cheap. Story after story has highlighted how the technology consumes massive amounts of electricity and water, and we’re not really adapting to keep pace. This is also occurring alongside a destabilizing climate crisis that’s already putting a capacity and financial strain on aging electrical infrastructure.

A new report from the International Energy Agency (IEA) indicates that data centers consumed 460 terawatt-hours (TWh) in 2022, roughly 2% of all global electricity usage, driven largely by computing and cooling loads. AI and crypto mining are expected to double that consumption by 2026.

Marc Ganzi, CEO of data center company DigitalBridge, isn’t really being subtle about his warnings. He claims that data centers are going to start running out of power within the next 18-24 months:

“We started talking about this over two years ago at the Berlin Infrastructure Conference when I told the investor world, we’re running out of power in five years. Well, I was wrong about that. We’re kind of running out of power in the next 18 to 24 months.”

Of course when these guys say “we” are going to run out of power, they really mean you (the plebs) will be running out of power. They’ll find solutions to address their need for unlimited power, and the strain will likely be shifted to areas, companies, and residents with far less robust lobbying budgets.

Data centers can move operations closer to natural gas, hydropower sources, or nuclear plants. Some are even using decommissioned Navy ships to exploit liquid cooling. But a report by the financial analysts at TD Cowen says there’s now a 3+ year lead time on bringing new power connections to data centers. It’s a 7-year wait in Silicon Valley, and 8 years in markets like Frankfurt, London, Amsterdam, Paris, and Dublin.

Network engineers have seen this problem coming for years. Yet crypto and AI power consumption, combined with the strain of climate dysregulation, still isn’t a problem the sector is prepared for. And when the blame comes, the VC hype bros who got out over their skis and the utilities that failed to modernize for modern demand and climate instability won’t blame themselves; they’ll blame regulation:

“[Cisco VP Denise] Lee said that, now, two major trends are getting ready to crash into each other: Cutting-edge AI is supercharging demand for power-hungry data center processing, while slow-moving power utilities are struggling to keep up with demand amid outdated technologies and voluminous regulations.”

While utilities and data centers certainly face some annoying regulations, the real problem rests on the back of technology hype cycles that don’t much care about the real-world impact of hyper-scaled profit seeking. As always, the fallout from the relentless pursuit of unlimited wealth and impossible scale is somebody else’s problem to figure out later, likely at significant taxpayer cost.

This story is playing out against a backdrop of a total breakdown in federal regulatory guidance. Bickering state partisans are struggling to coordinate vastly different and often incompatible visions of our energy future, while a corrupt Supreme Court prepares several pro-corporate rulings designed to dismantle what’s left of coherent federal regulatory independence.

I would suspect the crypto and AI-hyping VCs (and the data centers that profit off of the relentless demand for unlimited computational power and energy) will be fine. Not so sure about everybody else, though.

Filed Under: ai, climate change, electricity, environmental impact, language learning models, llm, sustainability

Logitech Launches An “AI” Mouse That’s Just A 2022 Mouse With A Mappable Button

from the because-we-can dept

Mon, May 6th 2024 05:29am - Karl Bode

“AI,” or semi-cooked large language models, are very cool. There’s a world of possibility there, from creativity and productivity tools to scientific research.

But early adoption of AI has been more of a rushed mess driven by speculative VC bros who are more interested in making money off of hype (see: pointless AI badges), cutting corners (see: journalism), badly automating already broken systems (see: health insurance), or using it as a bludgeon against labor (also see: journalism and media) than in any sort of serious beneficial application.

And a lot of these kinds of folks are absolutely obsessed with putting “AI” into products that don’t need it just to generate hype. Even if the actual use case makes no coherent sense.

We most recently saw this with the Humane AI Pin, which was hyped as some kind of game-changing revelation pre-release, only for reviewers to realize it doesn’t really work, and doesn’t really provide much not already accomplished by the supercomputer sitting in everybody’s pocket. But even that’s not as bad as companies that claim they’re integrating AI — despite doing nothing of the sort.

Like Logitech, which recently released a new M750 wireless mouse it has branded as a “signature AI edition.” But as Ars Technica notes, all they did was rebrand a mouse released in 2022 and add a customizable button:

“I was disappointed to learn that the most distinct feature of the Logitech Signature AI Edition M750 is a button located south of the scroll wheel. This button is preprogrammed to launch the ChatGPT prompt builder, which Logitech recently added to its peripherals configuration app Options+.

That’s pretty much it.”

Ars points to other, similarly pointless ventures, like earbuds with clunky ChatGPT gesture prompt integration or Microsoft’s Copilot button; stuff that only kind of works and nobody actually asked for. It’s basically just an attempt to seem futuristic and cash in on the hype wave without bothering to see if the actual functionality works, or works better than what already exists.

The AI hype cycle isn’t entirely unlike the 5G hype cycle, in that there certainly is interesting and beneficial technology under the hood, but the way it’s being presented or implemented by overzealous marketing types is so detached from reality as to not be entirely coherent.

That creates an association over time in the minds of consumers between the technology and empty bluster, undermining the tech itself and future, actually beneficial use cases.

When bankers and marketing departments took over Silicon Valley, the actual engineers (like Woz) got shoved into a corner, out of sight. We’re now seeing such a severe disconnect between hype and reality that it’s creating a golden age of bullshit artists and actively harming everybody in the chain, including the marketing folks absolutely convinced they’re being exceptionally clever.

Filed Under: ai, artificial intelligence, hardware, language learning models, marketing, mouse
Companies: logitech

NYC Officials Are Mad Because Journalists Pointed Out The City’s New ‘AI’ Chatbot Tells People To Break The Law

from the I'm-sorry-I-can't-do-that,-Dave dept

Fri, Apr 5th 2024 05:29am - Karl Bode

Countless sectors are rushing to implement “AI” (undercooked large language models) without understanding how they work — or making sure they work. The result has been an ugly comedy of errors stretching from journalism to mental health care, thanks to greed, laziness, computer-generated errors, plagiarism, and fabulism.

NYC’s government is apparently no exception. The city recently unveiled a new “AI”-powered chatbot to help answer questions about city governance. But an investigation by The Markup found that the automated assistant not only doled out incorrect information, but routinely advised city residents to break the law across a wide variety of subjects, from landlord agreements to labor issues:

“The bot said it was fine to take workers’ tips (wrong, although they sometimes can count tips toward minimum wage requirements) and that there were no regulations on informing staff about scheduling changes (also wrong). It didn’t do better with more specific industries, suggesting it was OK to conceal funeral service prices, for example, which the Federal Trade Commission has outlawed. Similar errors appeared when the questions were asked in other languages, The Markup found.”

Folks over on Bluesky had a lot of fun testing the bot out and finding that it routinely provided bizarre, false, and sometimes illegal advice.

There’s really no reality where this sloppily-implemented bullshit machine should remain operational, either ethically or legally. But when pressed about it, NYC Mayor Eric Adams stated the system will remain online, albeit with a warning that the system “may occasionally produce incorrect, harmful or biased content.”

But one administration official complained about the fact that journalists pointed out the whole error-prone mess in the first place, insisting they should have worked privately with the administration to fix the problems caused by the city:

If you can’t see the embedded post, it’s reporter Joshua Friedman reporting:

At NYC mayor Eric Adams’s press conference, top mayoral advisor Ingrid Lewis-Martin criticizes the media for publishing stories about the city’s new AI-powered chatbot that recommends illegal behavior. She says reporters could have approached the mayor’s office quietly and worked with them to fix it

That’s not how journalism works. That’s not how anything works. Everybody’s so bedazzled by new tech (or keen on making money from the initial hype cycle) that they’re just rushing toward the trough without thinking. As a result, uncooked and dangerous automation is being layered on top of systems that weren’t working very well in the first place (see: journalism, health care, government).

The city is rushing to implement “AI” elsewhere as well, such as with a new weapon-scanning system that tests have found has an 85 percent false positive rate. All of this is before you even touch on the fact that most early adopters of these systems see them as a wonderful way to cut corners and undermine already mistreated and underpaid labor (again see: journalism, health care).

There are lessons here you’d think would have been learned in the wake of previous tech hype and innovation cycles (cryptocurrency, NFTs, “full self driving,” etc.). Namely, innovation is great and all, but a rush to embrace innovation for innovation’s sake due to greed or incurious bedazzlement generally doesn’t work out well for anybody (except maybe early VC hype wave speculators).

Filed Under: automation, eric adams, hype, ingrid lewis-martin, innovation, language learning models, nyc, tech

‘AI’ Exposes Google News Quality Control Issues, Making Our Clickbait, Plagiarism, And Propaganda Problem Worse

from the broken-signal-to-noise-ratio dept

Mon, Jan 22nd 2024 11:56am - Karl Bode

Journalists have long used Google News to track news cycles. But for years users have documented a steady decline in product quality, paralleling similar complaints about the quality of Google’s broader search technology. Many stories and outlets are no longer indexed, low-quality clickbait and garbage are everywhere, and customization seems broken as Google shifted its priorities elsewhere.

Now the broader problem with Google News quality control seems to have gotten worse with the rise of “generative AI” (half-baked large language models). AI-crafted clickbait, garbage, and plagiarized articles now dominate the Google News feed, reducing the already shaky service’s utility even further:

“Google News is boosting sites that rip-off other outlets by using AI to rapidly churn out content, 404 Media has found. Google told 404 Media that although it tries to address spam on Google News, the company ultimately does not focus on whether a news article was written by an AI or a human, opening the way for more AI-generated content making its way onto Google News.”

As we’ve seen in the broader field of content moderation, moderating these massive systems at scale is no easy feat. That’s compounded by the fact that companies like Google (which feebly justified more layoffs last week despite sitting on mountains of cash) would much rather spend time and resources on things that make them more money than on ensuring that existing programs and systems actually work as advertised.

But the impact of Google’s cheap laziness cuts several ways. For one, sloppy moderation of Google News helps contribute to an increasingly lopsided signal-to-noise ratio as a dwindling number of underfunded actual journalists try to out-compete automated bullshit and well-funded propaganda mills across a broken infotainment and engagement economy. It’s already not a fair fight, and when a company like Google fails to invest in functional quality control, it actively makes the problem worse.

For example, many of these automated clickbait and plagiarism mills are getting the attention and funding that should be going to real journalism operating on shoestring budgets, as the gents at 404 Media (whose quality work ironically isn’t even making it into the Google News feed) explore in detail. For its part, Google reps had this to say:

“Our focus when ranking content is on the quality of the content, rather than how it was produced. Automatically-generated content produced primarily for ranking purposes is considered spam, and we take action as appropriate under our policies.”

Except they’re clearly not doing a good job at any part of that. And they’re not doing a good job because the financial incentives of the engagement economy are broadly perverse: aligned toward cranking out as much bullshit as possible to maximize impressions and end-user engagement at scale, and against spending the money and time to ensure quality control at that same scale.

It’s not entirely unlike problems we saw when AT&T would actively support (or turn a blind eye to) scammers and crammers on its telecom networks. AT&T made money from the volume of traffic regardless of whether the traffic was harmful, muting any financial incentive to do anything about it.

This isn’t exclusively an AI problem (LLMs could be used to improve quality control). And it certainly isn’t exclusively a Google problem. But it sure would be nice if Google took a more responsible lead on the issue before what’s left of U.S. journalism drowns in a sea of automated garbage and engagement bait.

Filed Under: ai, content moderation, google news, journalism, language learning models, media, news, plagiarism, reporting, spam
Companies: google

‘AI’ Is Supercharging Our Broken Healthcare System’s Worst Tendencies

from the I'm-sorry-I-can't-do-that,-Dave dept

Tue, Nov 21st 2023 05:26am - Karl Bode

“AI” (or, more accurately, large language models nowhere close to sentience or genuine awareness) has plenty of innovative potential. Unfortunately, most of the folks actually in charge of the technology’s deployment see it as a way to cut corners, attack labor, and double down on all of their very worst impulses.

Case in point: “AI’s” rushed deployment in journalism has been a keystone-cops-esque mess. The fail-upward brunchlord types in charge of most media companies were so excited to get to work undermining unionized labor and cutting corners that they immediately implemented the technology without making sure it actually works. The result: plagiarism, bullshit, a lower quality product, and chaos.

Not to be outdone, the U.S. healthcare industry is similarly layering half-baked AI systems on top of its own very broken system. Except here, human lives are at stake.

For example, UnitedHealthcare, the largest health insurance company in the US, has been using AI to determine whether elderly patients should be cut off from Medicare benefits. If you’ve ever navigated this system on behalf of an elderly loved one, you likely know what a preposterously heartless shitwhistle the whole process already is, long before automation gets involved.

But a recent investigation by STAT showed the AI consistently made major errors and prematurely cut elderly folks off from needed care, with little recourse for patients or families:

“UnitedHealth Group has repeatedly said its algorithm, which predicts how long patients will need to stay in rehab, is merely a guidepost for their recoveries. But inside the company, managers delivered a much different message: that the algorithm was to be followed precisely so payment could be cut off by the date it predicted.”

How bad is the AI? A recent lawsuit filed in the US District Court for the District of Minnesota alleges that the AI’s denials were reversed by human review roughly 90 percent of the time:

“Though few patients appeal coverage denials generally, when UnitedHealth members appeal denials based on nH Predict estimates—through internal appeals processes or through the federal Administrative Law Judge proceedings—over 90 percent of the denials are reversed, the lawsuit claims. This makes it obvious that the algorithm is wrongly denying coverage, it argues.”

Of course, the way the AI makes determinations isn’t particularly transparent. But what can be discerned is that the artificial intelligence in use here isn’t particularly intelligent:

“It’s unclear how nH Predict works exactly, but it reportedly estimates post-acute care by pulling information from a database containing medical cases from 6 million patients…But Lynch noted to Stat that the algorithm doesn’t account for many relevant factors in a patient’s health and recovery time, including comorbidities and things that occur during stays, like if they develop pneumonia while in the hospital or catch COVID-19 in a nursing home.”

Despite this obvious example of the AI making incorrect determinations, company employees were increasingly mandated to strictly adhere to its decisions. Even when patients successfully appeal these AI-generated determinations and win, they’re greeted with follow-up AI-dictated rejections just days later, starting the process all over again.

The company in question insists that the AI’s rulings are only used as a guide. But it seems pretty apparent that, as in most early applications of LLMs, the systems are primarily viewed by executives as a quick and easy way to cut costs and automate systems already rife with problems, frustrated consumers, and underpaid and overtaxed support employees.

There’s no real financial incentive to reform the very broken but profitable systems underpinning modern media, healthcare, or other industries. But there is plenty of financial incentive to use “AI” to speed up and automate these problematic systems. The only guard rails for now are competent government regulation (lol), or belated wrist slap penalties by class action lawyers.

In other words, expect to see a lot more stories exactly like this one in the decade to come.

Filed Under: ai, automation, chat-gpt, coverage denied, healthcare, language learning models, medicare
Companies: unitedhealthcare

Microsoft’s Use Of ‘AI’ In Journalism Has Been An Irresponsible Mess

from the I'm-sorry-I-can't-do-that,-Dave dept

Mon, Nov 6th 2023 05:20am - Karl Bode

We’ve noted repeatedly how early attempts to integrate “AI” into journalism have proven to be a comical mess, resulting in no shortage of shoddy product, dangerous falsehoods, and plagiarism. It’s thanks in large part to the incompetent executives at many large media companies, who see AI primarily as a way to cut corners, assault unionized labor, and automate lazy and mindless ad engagement clickbait.

The folks rushing to implement half-cooked AI at places like Red Ventures (CNET) and G/O Media (Gizmodo) aren’t competent managers to begin with. Now they’re integrating “AI” with zero interest in whether it actually works or if it undermines product quality. They’re also often doing it without telling staffers what’s happening, revealing a widespread disdain for their own employees.

Things aren’t much better over at Microsoft, where the company’s MSN website had already been drifting toward low-quality clickbait and engagement gibberish for years. They’re now busy automating a lot of the content at MSN with half-baked large language models, and it’s… not going great.

The company recently came under fire after MSN reprinted a Guardian story about the murder of a young Australian woman, including a tone-deaf AI-generated poll some felt made light of her death. But as CNN notes, MSN has also published a flood of “news” that’s either weirdly heartless or just false, even in instances where it’s simply republishing human-written content from other outlets:

“In August, MSN featured a story on its homepage that falsely claimed President Joe Biden had fallen asleep during a moment of silence for victims of the catastrophic Maui wildfire.

The next month, Microsoft republished a story about Brandon Hunter, a former NBA player who died unexpectedly at the age of 42, under the headline, “Brandon Hunter useless at 42.”

Then, in October, Microsoft republished an article that claimed that San Francisco Supervisor Dean Preston had resigned from his position after criticism from Elon Musk.”

It’s a pretty deep well of dysfunction. One of my personal favorites was when an automated article on Ottawa tourism recommended that tourists prioritize a trip to a local food bank. When caught, Microsoft often tries to pretend the problem isn’t lazily implemented automation, deletes the article, then just continues churning out automated clickbait gibberish.

While Microsoft executives have posted endlessly about the responsible use of AI, that apparently doesn’t include their own news website. MSN is routinely embedded as the unavoidable default launch page at a lot of enterprises and companies, ensuring this automated bullshit sees fairly widespread distribution even if users don’t actually want to read any of it.

Microsoft, for its part, says it will try to do better:

“As with any product or service, we continue to adjust our processes and are constantly updating our existing policies and defining new ones to handle emerging trends. We are committed to addressing the recent issue of low quality articles contributed to the feed and are working closely with our content partners to identify and address issues to ensure they are meeting our standards.”

Again though, MSN, like so many outlets, had been drifting toward garbage clickbait long before large language models came around. AI has just supercharged existing bad tendencies. Most of these execs see AI as a money-saving shortcut to creating automated ad-engagement machines that effectively shit money — without the pesky need to pay human editors or reporters a living wage.

With an army of well-funded authoritarian hacks keen on using propaganda to befuddle the masses at unprecedented scale, quality ethical journalism is more important than ever. But instead of fixing the sector’s key shortcomings or paying our best reporters and editors a living wage, we’re seemingly dead set on ignoring their input and doubling down on — and automating — all of the sector’s worst habits.

While the AI will certainly improve, there’s little indication the executives making key decisions will. U.S. journalism has been on a very unhealthy trajectory for a long while thanks to these same execs, who will dictate most of what happens next without really consulting (or in many instances even telling) any of the employees who actually understand how the industry works.

What could possibly go wrong?

Filed Under: ai, artificial intelligence, disinformation, failures, journalism, language learning models, misinformation, propaganda
Companies: microsoft

Silicon Valley Starts Hiring Poets To Fix Shitty Writing By Undercooked “AI”

from the I'm-sorry-I-can't-do-that,-Dave dept

Thu, Sep 28th 2023 05:27am - Karl Bode

When it comes to the early implementation of “AI,” it’s generally been the human beings that are the real problem.

Case in point: the fail-upward incompetents that run the U.S. media and journalism industries have rushed to use large language models (LLMs) to cut corners and attack labor. They’ve made it very clear they’re not at all concerned that these new systems are mistake- and plagiarism-prone, resulting in angry employees, a lower-quality product, and (further) eroded consumer trust.

While AI certainly has many genuine uses for productivity, many VC hustlebros see it as a way to create an automated ad engagement machine that effectively shits money and undermines already underpaid labor. The actual underlying technology is often presented as akin to science fiction or magic; the ballooning server costs, environmental impact, and $2-an-hour developing-world labor powering it are obscured from public view whenever possible.

But however much AI hype-men would like to pretend AI makes human beings irrelevant, humans remain essential for both the underlying illusion and the reality to function. As such, a growing number of Silicon Valley companies are hiring poets, English PhDs, and other writers to write short stories for LLMs to train on in a bid to improve the quality of their electro-mimics:

“A string of job postings from high-profile training data companies, such as Scale AI and Appen, are recruiting poets, novelists, playwrights, or writers with a PhD or master’s degree. Dozens more seek general annotators with humanities degrees, or years of work experience in literary fields. The listings aren’t limited to English: Some are looking specifically for poets and fiction writers in Hindi and Japanese, as well as writers in languages less represented on the internet.”

LLMs like ChatGPT have struggled to accurately replicate poetry. One study found that after being presented with 17 poem examples, the technology still couldn’t accurately write a poem in the style of Walt Whitman. While Whitman’s poems are often less structured, ChatGPT kept trying to produce poems in traditional stanzas, even when explicitly told not to. The problem got notably worse in languages other than English, driving up the value, for now, of non-English writers.

So it’s clear we still have a long way to go before these technologies get anywhere close to matching both the hype and the employment apocalypse many predicted. LLMs are effectively mimics that create from what already exists. Since they’re not real artificial intelligence, they’re still not actually capable of true creativity:

“They are trained to reproduce. They are not designed to be great, they try to be as close as possible to what exists,” Fabricio Goes, who teaches informatics at the University of Leicester, told Rest of World, explaining a popular stance among AI researchers. “So, by design, many people argue that those systems are not creative.”

That, for now, creates additional value for the employment of actual human beings with real expertise. You need humans to produce the writing these models train on, and you need editors to fix the numerous problems undercooked AI creates. The homogenized blandness of the resulting simulacrum also, for now, likely puts a premium on thinkers and writers who actually have something original to say.

The problem remains that while the underlying technology will continuously improve, the folks rushing to implement it without thinking likely won’t. Most seem dead set on using AI primarily as a bludgeon against labor, in the hopes that the public won’t notice the drop in quality, and that professional writers, editors, and creatives won’t mind increasingly lower pay and a more tenuous position in the food chain.

Filed Under: ai, journalism, language learning models, llm, media, poet