The AI Journalism ‘Revolution’ Continues To Go Poorly As Gannett Accused Of Making Up Fake Humans To Obscure Lazy AI Use
from the I'm-sorry-I-can't-do-that-Dave dept
While recent evolutions in “AI” have netted some profoundly interesting advancements in creativity and productivity, its early implementation in journalism has been a comically sloppy mess thanks to some decidedly human problems: namely greed, incompetence, and laziness.
If you remember, the cheapskates over at Red Ventures implemented AI over at CNET without telling anybody. The result: articles rife with accuracy problems and plagiarism. Of the 77 articles published, more than half had significant errors. It ultimately cost them more to have human editors come in and fix the mistakes than they’d actually saved. After the backlash, Red Ventures paused the effort.
Gannett, the giant media company that owns USA Today (and very likely whatever’s left of your local newspaper), was also forced to pause its use of AI earlier this year because the resulting product was laughably bad and full of obvious errors, even when used for the kind of basic writing LLMs are supposed to excel at, like box score journalism.
Fast forward to this week, and Gannett is once again under fire for allegedly making up writer bylines as cover for a different low-quality AI experiment. This time the problems bubbled up at Reviewed, a USA Today-owned product review website, where staffers noticed badly written reviews of products they had never seen popping up under the bylines of people who didn’t exist:
“Not only were Reviewed staffers unfamiliar with the bylines on the stories — names like “Breanna Miller” and “Avery Williamson” — they were unable to find evidence of writers by those names on LinkedIn or any professional websites.”
All of the articles in question were sterile, not particularly engaging, and notably similar to one another. Their review of scuba masks, for example, read almost exactly like their review of water bottles.
While “AI” can definitely improve journalism efficiency on everything from transcription to editing, the kind of fail-upward types at the top of the media industry food chain generally see the technology as a way to cut corners and assault already woefully mistreated and underpaid human labor, especially of the unionizing variety.
Unionized writers at Reviewed say that Gannett was trying to obscure a fresh effort to undermine human staff after its embarrassing face plant earlier this year:
Carrillo, a shop steward for the union, said the mysterious reviews — which appeared just weeks after staff staged a one-day walkout to demand management negotiate on a new contract — harm the reputations of actual employees.
“It’s gobbledygook compared to the stuff that we put out on a daily basis,” he said. “None of these robots tested any of these products.”
Amusingly, when approached for comment by the Washington Post, a Gannett spokesperson first tried to deny that the articles were AI generated, then implied that if they were, it was all the fault of a third-party marketing firm:
“In a statement to The Post, a spokesperson said the articles — many of which have now been deleted — were created through a deal with a marketing firm to generate paid search-engine traffic. While Gannett concedes the original articles “did not meet our affiliate standards,” officials deny they were written by AI.
“We expect all our vendors to comply with our ethical standards and have been assured by the marketing agency the content was NOT AI generated,” the spokesperson said in an email.”
The marketing firm in question redirected questions back to Gannett. WaPo reporters couldn’t find evidence that any of the writers exist. The site’s human writers say it’s obvious AI was used, noting that the firm openly advertises that it engages in “polishing AI generative text.”
Again, the problem here generally isn’t the technology itself. AI will ultimately improve and become increasingly useful in a myriad of ways. The problem is the kind of humans implementing it. And the way they’re implementing it without involving or even telling existing staffers.
The affluent hedge fund brunchlord types that dominate key positions across U.S. media “leadership” clearly see AI not as a path toward a better product or a more efficient workforce, but as a shortcut to building an automated ad engagement machine that effectively shits money. And, as an added bonus, a way to undermine staffers peskily demanding health insurance and a living wage.
Large U.S. media companies are filled to the brim with managers who are terrible at their jobs to begin with, making their failures on AI unsurprising. When it comes to the folks shaping the contours of modern journalism, ethics, product quality, accurately informing the public, staff happiness, and genuine human interest often never even enter the frame.
Filed Under: ai, content, journalism, llm, marketing, media, reporting, reviewed
Companies: gannett