guardian – Techdirt
While Social Media Was Quick To Highlight And Limit The Spread Of False Claims Of Election Victory, Traditional Media Just Let It Flow
from the guys,-seriously? dept
For four years, all we’ve been hearing about is how social media was this terrible source of disinformation that had to be regulated because they were destroying democracy and all that. And so what happened last night/early this morning when Donald Trump falsely tried to claim he had won prior to all the votes being counted? Twitter and Facebook both reacted pretty quickly to flag the information, and highlight that it was misleading or false (and Twitter limited the ability to share it).
Meanwhile, nearly every major TV station allowed Trump to give his speech directly, in which he falsely claimed that he had already won states where there were still many votes to be counted, insisted that the counting of votes must be stopped (and claimed he was going to ask the Supreme Court to stop the count), and suggested that there was fraud going on in a few states that still had significant mail-in ballots to count (most of which they hadn't been able to count prior to yesterday because of Republican legislatures blocking that ability). There was no attempt to delay what he was saying, to contextualize it, or to point out that it was wrong until well after it had been broadcast.
And then you had journalistic malpractice via the Associated Press. Two of its White House reporters, Zeke Miller and Jonathan Lemire, decided to do a "straight" tweet repeating what Trump had said, without any caveats or context, as if it were factual reporting.
It’s flabbergasting that the AP would take this view from nowhere approach to reporting on something so critical. And, even worse, since so many local newspapers just rerun AP newswire, that’s the take that many people are going to see.
Other sources got it correct. Buzzfeed — a site that old school journalists used to love to mock — did a hell of a lot more journalism than the AP:
The Guardian, a UK paper, got the story correct as well:
We’ve been noting in the past year how studies have shown that TV news is the key source for disinformation and how it doesn’t tend to go viral on social media until after it appears on TV.
So can someone explain to me why it is everyone wants to rush out and blame social media for disinformation?
Filed Under: content moderation, disinformation, donald trump, elections, fact checking, jonathan lemire, media, social media, zeke miller
Companies: associated press, buzzfeed, facebook, guardian, twitter
AI Writes Article About AI: Does The Newspaper Hold The Copyright?
from the the-monkey-gets-it dept
For many years, we wrote about the infamous monkey selfie copyright situation (and lawsuit) not just because it was hellishly entertaining, but also because the legal questions underlying the issue were likely to become a lot more important. Specifically, while I don’t think anyone is expecting a rush of monkey-authored works to enter the market any time soon, we certainly do expect that works created by computers will be all over the damn place in the very, very near future (and, uh, even the immediate past). Just recently, IBM displayed its “Project Debater” offering, doing an AI-powered realtime debate against a human on the “Intelligence Squared” debates program. A few days after that, the Guardian used OpenAI to write an article about itself, which the Guardian then published (it’s embedded about halfway down the fuller article which is written by a real life human, Alex Hern).
In both cases, the output is mostly coherent, with a few quirks. Here’s a snippet that shows… both:
This new, artificial intelligence approach could revolutionize machine learning by making it a far more effective tool to teach machines about the workings of the language. Deep-learning systems currently only have the ability to learn something specific; a particular sentence, set of words or even a word or phrase; or what certain types of input (for example, how words are written on a paper) cause certain behaviors on computer screens.
GPT2 learns by absorbing words and sentences like food does at a restaurant, said DeepFakes' lead researcher Chris Nicholson, and then the system has to take the text and analyze it to find more meaning and meaning by the next layer of training. Instead of learning about words by themselves, the system learns by understanding word combinations, a technique researchers can then apply to the system's work to teach its own language.
Almost… but not quite.
Anyway, in the ensuing discussion about all this on Twitter, James Green asked the “simple” question of who is the “author” of the piece in question. The answer, summed up by Parker Higgins is:
legally speaking: ¯\_(ツ)_/¯
there are a few proposed frameworks and a few theories of what happens if none of the proposals get taken up, but it will likely be settled in court
— Parker Higgins (@xor) February 15, 2019
This is why I think the monkey selfie case was so important. In determining, quite clearly, that creative works need a human author, it suggests that works created by a computer are squarely in the public domain. While this seems to lead some (mainly lawyers) to freak out, it shouldn't. There's an unfortunate assumption that many people (especially lawyers) seem to make: that every creative work must be "owned" under copyright. There is no legal or rational basis for such an argument. We lived for many years in a world where it was fine that many works entered life and went straight into the public domain, and we shouldn't fear going back to such a world.
This certainly isn’t a new question. Pam Samuelson wrote a seminal paper on allocating ownership rights in computer-generated works all the way back in 1985 (go Pam!), but it’s an issue that is going to be at the forefront of a number of copyright discussions over the next few years. If you think that various companies, publishers and the like are going to just let those works go into the public domain without a fight, you haven’t been paying attention to the copyright wars of the past few decades.
I fully expect that there will be a number of other legal fights, not unlike the monkey selfie case but around AI-generated works, coming in the very near future. Having the successful monkey case in the books is good to start with, as it establishes the (correct) baseline of requiring a human. However, I imagine that we’ll see ever more creative attempts to get around that in the courts, and if that fails, a strong push to get Congress to amend the law to magically create copyrights for AI-generated works.
Filed Under: ai, articles, copyright, monkey selfie, ownership, public domain
Companies: guardian, openai
Move Over Ed Snowden, Al Jazeera Has A Huge New Stack Of Spy Documents
from the and-the-revelations-just-keep-on-coming dept
There have been questions of when (not if) the next "Ed Snowden" situation would show up. There certainly have been a few recent leaks that appear to have been from folks other than Snowden, but they've mostly been one-off leaks. However, this morning, Al Jazeera is claiming that it got its hands on a huge trove of spy documents, in the form of cables from South Africa's spy agency, the State Security Agency (SSA), and it will begin reporting on what's in those documents, in collaboration with reporters at The Guardian:
Spanning a period from 2006 until December 2014, they include detailed briefings and internal analyses written by operatives of South Africa’s State Security Agency (SSA). They also reveal the South Africans’ secret correspondence with the US intelligence agency, the CIA, Britain’s MI6, Israel’s Mossad, Russia’s FSB and Iran’s operatives, as well as dozens of other services from Asia to the Middle East and Africa.
The files unveil details of how, as the post-apartheid South African state grappled with the challenges of forging new security services, the country became vulnerable to foreign espionage and inundated with warnings related to the US “War on Terror”.
As Al Jazeera points out, this is not “signals intelligence” (SIGINT) material, but rather “human intelligence” (HUMINT) of the kind normally done by the CIA, rather than the NSA. It’s about spies on the ground — and also, according to Al Jazeera, their humdrum daily office existence. Honestly, it almost sounds like the plot of a bad sitcom: come work at a premier national intelligence agency… and bitch about the lack of parking:
At times, the workplace resembles any other, with spies involved in form-filling, complaints about missing documents and personal squabbles…. One set of cables from the Algerian Embassy in South Africa relates to a more practical concern. It demands that “no parking” signs are placed in the street outside. The cable notes that the British and US embassies enjoy this privilege, and argues that it should be extended to Algeria as well.
Whether or not this latest leak turns up anything more interesting than parking disputes, it is worth noting that another trove of intelligence documents has leaked…
Filed Under: cia, fsb, leaks, mi6, mossad, south africa, ssa, surveillance, whistleblower
Companies: al jazeera, guardian
Guardian, Salon Show How Keeping And Fixing News Comments Isn't Hard If You Give Half A Damn
from the Walter-Cronkite-is-Dead dept
Fri, Feb 6th 2015 07:39pm - Karl Bode
We’ve been talking a lot lately about how the new school of website design (with ReCode, Bloomberg, and Vox at the vanguard) has involved a misguided war on the traditional comment section. Websites are gleefully eliminating the primary engagement mechanism with their community and then adding insult to injury by pretending it’s because they really, really love “conversation.” Of course the truth is many sites just don’t want to pay moderators, don’t think their community offers any valuable insight, or don’t like how it “looks” when thirty people simultaneously tell their writers they’ve got story facts completely and painfully wrong.
Many sites justify the move by claiming comments sections are just so packed with bile that they're beyond redemption, though studies show it doesn't actually take much work to raise the discourse bar and reclaim your comment section from the troll jungle if you just give half a damn (as in, just simple community engagement can change comment tone dramatically). Case in point is Salon, which decided to repair its awful comment section by hiring a full-time moderator, rewarding good community involvement, and treating commenters like actual human beings:
“You can measure engagement by raw number of comments or commenters. Using Google Analytics, Livefyre and Adobe, Salon looks at metrics like the number of replies they make as a share of overall comments, how frequently they share Salon articles, and how many pageviews they log per visit. (Users who log in, which is required if you want to comment, view seven pages per session on average, while non-registered users make it to only 1.7, according to Dooling.) After it identified these top commenters, Salon has solicited their feedback and invited them to lead discussions on posts and even help moderate threads.
…"Comments aren't awful," (said Salon community advisor Annemarie Dooling). "It's just the way we position them. The whole idea is not to give up on debate."
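The engagement math described above is simple enough to sketch. Below is a minimal, illustrative Python version of the two metrics Salon cites: replies as a share of all comments, and average pageviews per session for registered versus anonymous readers. The data structures and sample numbers are hypothetical, not Salon's actual analytics schema.

```python
# Illustrative engagement metrics, loosely following the ones Salon describes.

def reply_share(comments):
    """Fraction of comments that are replies to other comments."""
    if not comments:
        return 0.0
    replies = sum(1 for c in comments if c.get("parent_id") is not None)
    return replies / len(comments)

def pageviews_per_session(sessions):
    """Average pageviews per session across a list of session records."""
    if not sessions:
        return 0.0
    return sum(s["pageviews"] for s in sessions) / len(sessions)

# Hypothetical sample data: comments with optional parent_id, and
# per-session pageview counts for two reader groups.
comments = [
    {"id": 1, "parent_id": None},
    {"id": 2, "parent_id": 1},
    {"id": 3, "parent_id": 1},
    {"id": 4, "parent_id": None},
]
registered = [{"pageviews": 7}, {"pageviews": 8}, {"pageviews": 6}]
anonymous = [{"pageviews": 2}, {"pageviews": 1}, {"pageviews": 2}]

print(reply_share(comments))               # 0.5
print(pageviews_per_session(registered))   # 7.0
print(pageviews_per_session(anonymous))    # ~1.67
```

The interesting part isn't the arithmetic, of course; it's that Salon bothered to track the numbers at all, then used them to find and empower its best commenters.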
The idea that news is now a conversation and a community is something traditional news outlets have struggled to understand, so it's ironic that a major wave of websites proclaiming to be the next great iteration of media can't seem to figure this out either. For example, Verge co-founder Josh Topolsky, spearheading the freshly-redesigned Bloomberg, recently argued that disabling comments is OK because editors are still "listening" to reader feedback by watching analytics and the viewer response to wacky font changes. But that's not the same as engagement or facilitating engagement. Similarly, Reuters and ReCode editors have tried to argue that Facebook and Twitter are good enough substitutes for comments — ignoring that outsourcing engagement to Facebook dulls and homogenizes your brand.
Former managing editor for digital strategy at the New York Times Aron Pilhofer, now at The Guardian, seems to understand this point:
"I feel very strongly that digital journalism needs to be a conversation with readers. This is one, if not the most important area of emphasis that traditional newsrooms are actually ignoring. You see site after site killing comments and moving away from community — that's a monumental mistake. Any site that moves away from comments is a plus for sites like ours. Readers need and deserve a voice. They should be a core part of your journalism."
Now — can you quantify and prove that money spent on community engagement will come back to you in clear equal measure as cold, hard cash? Of course not. But all the same, it’s not really a choice. We’re well beyond the Walter Cronkite era of journalism where a talking head speaks at the audience from a bully pulpit. We’re supposed to have realized by now that news really is a malleable, fluid, conversational organism. Under this new paradigm, reporters talk to (and correct) other reporters, blogs and websites talk to (and correct) other blogs and websites, and readers talk to (and correct) the writers and news outlets. You’re swimming against the current if your website design culminates in little more than a stylish uni-directional bullhorn.
Filed Under: comments, community
Companies: guardian, salon
Google Alerts Press About Right To Be Forgotten Removals, Putting Those Stories Back In The News
from the institutionalized-streisanding dept
It’s 2014 and do we really need to be reminded that, when you seek to censor something by demanding that it be removed from view, it’s really only going to generate that much more attention to the original? I believe there’s even a term for that sort of thing. As you may have heard, thanks to a ridiculous ruling in the EU Court of Justice, Google is being forced to start removing links to content, based on submissions by people who wish their past embarrassments would just disappear down the memory hole. The company received tens of thousands of requests for removals based on the new ruling, and last week began removing such links from its index, following a review by the new team the company had to put together to review these requests.
It appears that, as part of its transparency efforts, Google is also telling the websites who are being delinked that they are being delinked over this, because both the BBC and the Guardian have stories up today about how they’ve had stories removed from Google thanks to the “right to be forgotten” efforts. And, guess what? Both articles dig into what original articles have been removed, making it fairly easy to determine just who was so embarrassed and is now seeking to have that embarrassing past deleted. And, of course, by asking for the content to be removed, these brilliant individuals with embarrassing histories have made both the removal attempt and the original story newsworthy all over again.
First up is the BBC, which received a notice about one of its articles being removed from search. That article is all about Merrill Lynch chairman Stan O'Neal losing his job. In fact, the only person named in the article is… Stan O'Neal. Take a wild guess which thin-skinned former top executive of a major US financial firm must have issued a "please forget me" request to Google? The BBC's Robert Peston — author of both articles — questions why this should be forgotten:
My column describes how O’Neal was forced out of Merrill after the investment bank suffered colossal losses on reckless investments it had made.
Is the data in it “inadequate, irrelevant or no longer relevant”?
Hmmm.
Most people would argue that it is highly relevant for the track record, good or bad, of a business leader to remain on the public record – especially someone widely seen as having played an important role in the worst financial crisis in living memory (Merrill went to the brink of collapse the following year, and was rescued by Bank of America).
In other words, welcome to the new world in Europe, where all sorts of important, truthful and relevant information gets deleted.
Over at the Guardian, they’ve found out that six articles from their website have been memory-holed by Google. And again, it quickly becomes clear who’s involved:
Three of the articles, dating from 2010, relate to a now-retired Scottish Premier League referee, Dougie McDonald, who was found to have lied about his reasons for granting a penalty in a Celtic v Dundee United match, the backlash to which prompted his resignation.
The Guardian does searches for McDonald on both the US and UK versions of Google and finds that McDonald’s lie is wiped from history over in the UK, while we Americans can still find it, no problem.
The other disappeared articles — the Guardian isn't given any reason for the deletions — are a 2011 piece on French office workers making post-it art, a 2002 piece about a solicitor facing a fraud trial standing for a seat on the Law Society's ruling body and an index of an entire week of pieces by Guardian media commentator Roy Greenslade.
It’s pretty likely that Paul Baxendale-Walker is the person complaining about that second article, since he’s the main subject of that article. The other two… are not clear at all. The Post-It wars story names three individuals: Julien Berissi, Stephane Heude and Emilie Cozette. But none of them are portrayed in any way that would seem negative. It just shows them having some fun by making giant post-it artwork. And the other one is just weird because it’s not an actual story, but an index page showing a week of story headlines and opening blurbs — but apparently whichever article in the list caused the request wasn’t directly included itself — suggesting whoever sent in the request did a pretty bad job of figuring out what to censor.
Either way, both the Guardian and the BBC point out how ridiculous this is. Peston, at the BBC, says this is "confirming the fears of many in the industry" that this will be used "to curb freedom of expression and to suppress legitimate journalism that is in the public interest." Meanwhile, James Ball at the Guardian notes how troubling this is, and starts to think of ways to deal with it, including highlighting every "deleted" article:
But this isn't enough. The Guardian, like the rest of the media, regularly writes about things people have done which might not be illegal but raise serious political, moral or ethical questions — tax avoidance, for example. These should not be allowed to disappear: to do so is a huge, if indirect, challenge to press freedom. The ruling has created a stopwatch on free expression — our journalism can be found only until someone asks for it to be hidden.
Publishers can and should do more to fight back. One route may be legal action. Others may be looking for search tools and engines outside the EU. Quicker than that is a direct innovation: how about any time a news outlet gets a notification, it tweets a link to the article that’s just been disappeared. Would you follow @GdnVanished?
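Ball's @GdnVanished idea is straightforward to prototype. Here is a minimal sketch, in Python, of the core of such a bot: take a removal notification, build a tweet that fits the 140-character limit, and send it. The notification format and the `post_tweet()` stub are hypothetical; a real bot would wire this to a notification inbox and an actual Twitter API client.

```python
# Sketch of a "vanished articles" bot: each right-to-be-forgotten
# notification becomes a tweet linking back to the delisted article.

def format_tweet(notification, limit=140):
    """Build a tweet announcing a delisted article, within the limit."""
    url = notification["url"]
    prefix = "Delisted from Google search results in the EU: "
    headline = notification.get("headline", "")
    # Budget the characters left for the headline after the prefix,
    # the URL, and the separating space.
    budget = limit - len(prefix) - len(url) - 1
    if len(headline) > budget:
        headline = headline[: max(budget - 1, 0)] + "…"
    return f"{prefix}{headline} {url}"

def post_tweet(text):
    # Placeholder: swap in a real Twitter client library here.
    print(text)

# Hypothetical notification, echoing the McDonald example above.
notification = {
    "url": "https://example.com/2010/referee-story",
    "headline": "Referee resigns after admitting he lied about penalty decision",
}
post_tweet(format_tweet(notification))
```

One wrinkle a real implementation would face: the notifications name only the URL, not the requester, so the bot can republicize the link but not say who asked for it to vanish.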
Peston has asked Google how the BBC can appeal, while Ball says the Guardian doesn't believe there's any official appeals process. Either way, it's safe to say that (1) this process is a mess and leading to the censorship of legitimate content and (2) people like Stan O'Neal and Dougie McDonald, who thought that they could hide their embarrassing pasts under this ruling, may not end up being very happy in the long term.
Filed Under: censorship, dougie mcdonald, eu, free press, free speech, news, paul baxendale-walker, press, right to be forgotten, stan o'neal, stories, uk
Companies: bbc, google, guardian, merrill lynch
Washington Post: Stop Us Before We Do Any More Real Journalism Like That Cute Little Guardian Paper
from the we-should-never-have-broken-watergate dept
Want to see how an out-of-touch editorial board works? The Washington Post — which continues to be a key player in publishing documents leaked by Ed Snowden — has written a bizarre and totally tone-deaf editorial about how the leaks need to stop before they cause any real damage.
In fact, the first U.S. priority should be to prevent Mr. Snowden from leaking information that harms efforts to fight terrorism and conduct legitimate intelligence operations. Documents published so far by news organizations have shed useful light on some NSA programs and raised questions that deserve debate, such as whether a government agency should build a database of Americans’ phone records. But Mr. Snowden is reported to have stolen many more documents, encrypted copies of which may have been given to allies such as the WikiLeaks organization.
It is not clear whether Russia or China has obtained the material, though U.S. officials would have to assume that Mr. Snowden would be obliged to hand over whatever he has to win asylum in Moscow. Such an exchange would belie his claim to be a patriotic American and a whistleblower. At the same time, stopping potentially damaging revelations or the dissemination of intelligence to adversaries should take precedence over U.S. prosecution of Mr. Snowden — which could enhance his status as a political martyr in the eyes of many both in and outside the United States.
Yes, this is an editorial board of a newspaper famous for breaking stories thanks to whistleblowers and leakers, including this very story, asking the government to stop them from being able to publish any more leaked documents. It’s as if the Editorial Board of the Washington Post doesn’t even realize that its own reporters have been key players in reporting on this story. Or, as Jack Shafer amusingly wrote: “Bart Gellman’s stories are coming from INSIDE YOUR BUILDING!”
And then, in a bizarre article by Paul Farhi, the Washington Post appears to mock The Guardian, the famed British newspaper, which has been around for almost two centuries and is well known around the globe, as if it’s some small upstart:
For a newspaper that’s small and underweight even by British standards, the Guardian has a knack for making some big noises, both in its home market and across the pond.
Of course, as plenty of folks are pointing out, the Guardian is larger than the Washington Post in terms of readership:
The Guardian's global monthly unique visitors: 41 million in May, per Guardian press officer Gennady Kolker
The Washington Post’s monthly unique visitors: 17.2 million
And, in terms of newsrooms, apparently, they have nearly identical staff sizes. Oh, and then there’s this: while the Washington Post has beaten the Guardian to a few of these stories, the Guardian is generally cleaning WaPo’s clock in terms of its overall coverage of the leaks. Perhaps the Washington Post shouldn’t let its jealousy show quite so much.
Filed Under: ed snowden, editorial board, journalism, leaks
Companies: guardian, washington post
And This Little Piggy Went Viral
from the advertising-is-content dept
BoingBoing recently highlighted this entertaining commercial for The Guardian, which neatly captures the state of modern news by having some fun with a fairy tale:
It’s a great little production, because not only does it effectively portray the potential of what is variously called open journalism, citizen journalism and participatory journalism, among other things, it also serves as a good example of a common mantra around these parts: advertising is content, and content is advertising.
Filed Under: advertising, content, journalism, viral
Companies: guardian
Building Company Realizes That Threatening A Blogger With Bogus Libel Suit Was A Bad Idea; Sincerely Apologizes
from the you-don't-see-this-very-often dept
For years, we’ve covered stories of companies reacting badly to finding something they don’t like about themselves online, and threatening to sue those who posted the content with libel. Many lawyers tend to go to extremes in threatening people, with the idea of scaring them into just taking content down. These days, of course, that’s quite likely to backfire, as the recipient can just go public with it, and shame the company. Even so, it’s rare for those companies to come out with a really sincere apology. Aaron DeOliveira points us to an interesting story involving a building company, Guardian Building Products, that freaked out over a blog that showed “a lousy installation” of their insulation. The company threatened to sue, saying that the blog post “constitute[s] libel, slander and commercial disparagement.” In response, the blogger, Dr. Allison Bailes, went public with the threat.
And here’s where Guardian realized that perhaps it was doing something really, really wrong. It sent a very apologetic letter:
While you can (reasonably) argue that the company should have known better than to send the original threat letter, there is something to be said for owning up to the fact that you made a big mistake, and (hopefully) actually learning from it.
Filed Under: apologies, defamation, streisand effect
Companies: guardian
Guardian Asks UK Gov't To Investigate Google News For Not Contributing To Journalism?
from the wrong-target dept
It had seemed like perhaps The Guardian newspaper in the UK understood how the internet worked. After all, execs there had been saying that they hoped the NYTimes would start charging, since it would just drive a lot more traffic their way. However, it seems like not everyone at The Guardian is on the same page. Similar to Feargal Sharkey's demand that the UK government investigate Google for not giving the recording industry money, The Guardian is now asking the UK government to investigate Google over its Google News product, specifically claiming that Google gets too much benefit from its content. Of course, there's a simple solution to this: take your news off of Google News (or take it offline altogether). But The Guardian doesn't want to do that.
The reasoning is a bit convoluted, but, basically The Guardian says that since the online ad market is tough right now, it can’t make enough money on the traffic that Google sends it. So stop accepting traffic from Google, right? No, it can’t do that, because then competitors like the BBC would sweep up all of the traffic.
Is it just me, or does this reasoning suggest that The Guardian should be asking the government to investigate not Google News, but the BBC, for unfair competition? The Guardian's reasoning here is a bit tortured. It seems to be saying it can't compete with other sources due to Google News… even though those other sources have the exact same issue (getting traffic from Google News). Its only real complaint is that the BBC offers its content for free online — and (though it doesn't appear to explicitly call this out) the BBC is publicly funded and doesn't have to focus on ad revenue like The Guardian does. So why isn't the complaint against the BBC instead of Google News?
The Guardian always struck me as a pretty good paper, but the logic here is hard to understand. If it doesn’t want the traffic, fine, don’t take it (though, most people recognize that would be a mistake). If the problem is that it can’t monetize the content effectively, then that’s a business model problem for The Guardian — not Google News. Finally, if the problem is (as it appears) competition from the BBC, then take it up with the BBC or those who fund it, but don’t misplace the blame on Google News.
Filed Under: competition, google news, journalism
Companies: bbc, google, guardian