deepfake – Techdirt

Dead Politicians Hit The Campaign Trail For India’s Billion-Voter Election

from the who-decides-what-you-can-do-after-your-death? dept

The 2024 elections in India are widely regarded as the largest in history, with nearly a billion people eligible to cast a vote. Alongside the sheer human scale, there’s another aspect of the Indian elections that is surprising for its magnitude: the use of millions of deepfakes by Indian politicians in an attempt to sway voters, a topic discussed on the most recent Ctrl-Alt-Speech podcast. As Mike noted during the discussion there, it’s a relatively benign kind of deepfake compared to some of the more nefarious uses that seek to deceive and trick people. But an article on the Rest of World site points out that the use of deepfakes by Indian politicians is pushing ethical boundaries in other ways:

In January this year, M. Karunanidhi, the patriarch of politics in the southern state of Tamil Nadu, first appeared in an AI video at a conference for his party’s youth wing. In the clip, he wore the look for which he is best remembered: a luminous yellow scarf and oversized dark glasses. Even his head was tilted, just slightly to one side, to replicate a familiar stance from real life. Two days later, he made another appearance at the book launch of a colleague’s memoirs.

Karunanidhi died in 2018.

“The idea is to enthuse party cadres,” Salem Dharanidharan, a spokesperson for the Dravida Munnetra Kazhagam (DMK) — the party that Karunanidhi led till his death — told me. “It excites older voters among whom Kalaignar [“Man of Letters,” as Karunanidhi was popularly called] already has a following. It spreads his ideals among younger voters who have not seen enough of him. And it also has an entertainment factor — to recreate a popular leader who is dead.”

A Wired article on the topic of political deepfakes, discussed on the Ctrl-Alt-Speech podcast, mentions another Tamil Nadu politician who was resurrected using AI technology:

In the southern Indian state of Tamil Nadu, a company called IndiaSpeaks Research Lab contacted voters with calls from dead politician J. Jayalalithaa, endorsing a candidate, and deployed 250,000 personalized AI calls in the voice of a former chief minister. (They had permission from Jayalalithaa’s party, but not from her family.)

That raises the issue of who is able to approve the use of audio and video deepfakes of dead people. In India, it seems that some political parties have no qualms about deploying the technology, regardless of what the politician’s family might think. Should the dead have rights here, perhaps laid down in their wills? If not, who should be in control of their post-death activities? As more political parties turn to deepfakes of the dead for campaigning and other purposes, these are questions that will be asked more often, and which need to be answered.

Follow me @glynmoody on Mastodon and on Bluesky.

Filed Under: ai, ctrl-alt-speech, death, deepfake, elections, india, podcast, political parties, rights, tamil nadu, will
Companies: IndiaSpeaks Research Lab

Freshly Indicted Biden Deepfaker Prompts Uncharacteristically Fast FCC Action On AI Political Robocalls

from the careful-what-you-wish-for dept

Fri, May 24th 2024 05:30am - Karl Bode

Earlier this year you probably saw the story about how a political consultant used a (sloppy) deepfake of Joe Biden in a bid to trick voters into staying home during the Presidential Primary. It wasn’t particularly well done; nor was it clear it reached all that many people or had much of an actual impact.

But it clearly spooked the government, which was already nervously watching AI get quickly integrated in global political propaganda and disinformation efforts.

The Biden deepfake quickly resulted in an uncharacteristically efficient joint investigation by the FCC and state AGs, which identified multiple culprits: Life Corp., a Texas telecom marketing company; a political consultant by the name of Steve Kramer; and a magician named Paul Carpenter, who apparently “holds a world record in straitjacket escapes.”

But Kramer was the “mastermind” of the effort, and when busted back in February, claimed to NBC News that he was secretly trying to prompt regulatory action on robocalls, likening himself to American Revolutionary heroes Paul Revere and Thomas Paine (seriously):

“This is a way for me to make a difference, and I have,” he said in the interview. “For $500, I got about $5 million worth of action, whether that be media attention or regulatory action.”

This week Kramer was indicted in New Hampshire, and now faces five counts that include bribery, intimidation and suppression. Now that he’s been formally indicted, Kramer, likely heeding the advice of counsel, is significantly less chatty than he was earlier this year.

Whether he’s telling the truth about his intentions or not, Kramer has gotten his wish. The whole mess has prompted numerous new AI-related efforts by the historically somewhat feckless FCC. Back in February, the FCC issued a ruling declaring such calls illegal under the Telephone Consumer Protection Act (TCPA), which it already uses to combat robocalls (often badly).

And this week, the FCC announced it would also be considering new rules requiring disclosure of the use of AI in political ads:

“As artificial intelligence tools become more accessible, the Commission wants to make sure consumers are fully informed when the technology is used,” [FCC boss Jessica] Rosenworcel said in a news release. “Today, I’ve shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see, and I hope they swiftly act on this issue.”

We’ve explored in great detail how the FCC has been a bit of a feckless mess when it comes to policing robocalls, in part because it’s had its legal authority chipped away by industry lobbying and dodgy court rulings for years, but also because big telecom giants (affixed to our domestic surveillance apparatus) and “legit” marketing companies lobby revolving-door regulators for rule loopholes.

Everything the FCC does, however wimpy, inevitably faces a major court challenge by corporations keen on making money off of whatever the FCC is trying to protect the public from. It’s why in the year 2024 scammers and scumbags have rendered our voice communications networks nearly unusable. Hopefully the FCC’s efforts to combat AI deepfake political robocalls result in a more productive outcome.

Filed Under: ai, automation, deepfake, disinformation, fcc, misinformation, politics, propaganda, robocalls

The Guy Behind That Biden AI Deepfake Robocall Is Going To Go Through Some Things

from the good-luck dept

Wed, Feb 28th 2024 05:30am - Karl Bode

Last month you probably saw the story about how somebody used a (sloppy) deepfake of Joe Biden in a bid to trick voters into staying home during the Presidential Primary. It wasn’t particularly well done; nor was it clear it reached all that many people or had much of an actual impact.

But it clearly spooked the government. FCC robocall enforcement is generally fairly feckless for reasons we’ve well discussed (short version: strict enforcement and rules might upset corporate America’s debt collectors and marketing departments, which use many of the same tactics as robocall scammers).

But in this case it took all of a week or two before the FCC, in cooperation with state AGs, had tracked down the culprit: a “veteran political consultant working for a rival candidate” by the name of Steve Kramer. In comments to NBC, Kramer made it rather clear that he doesn’t really quite understand the width and breadth of the tornado dumpster fire about to fall on his head:

“In a statement and interview with NBC News, Kramer expressed no remorse for creating the deepfake, in which an imitation of the president’s voice discouraged participation in New Hampshire’s Democratic presidential primary. The call launched several law enforcement investigations and provoked outcry from election officials and watchdogs.

“I’m not afraid to testify, I know why I did everything,” he said in an interview late Sunday, his first since coming forward. “If a House oversight committee wants me to testify, I’m going to demand they put it on TV because I know more than them.”

While U.S. regulators are pretty feckless about robocall enforcement (especially if you’re a large company that might prove difficult to defeat in court), they’re going to nail a small fry like this to a tree in the town square to make a point.

Kramer, a well-known player in Albany politics who helped the short-lived Ye campaign, appears to believe he’ll be able to tap dance around his coming legal woes by insisting that he was some kind of avant-garde revolutionary or activist:

“Kramer claimed he planned the fake robocall from the start as an act of civil disobedience to call attention to the dangers of AI in politics. He compared himself to American Revolutionary heroes Paul Revere and Thomas Paine. He said more enforcement is necessary to stop people like him from doing what he did.

“This is a way for me to make a difference, and I have,” he said in the interview. “For $500, I got about $5 million worth of action, whether that be media attention or regulatory action.”

Indeed.

How much of a kick to the crotch Kramer will experience is hard to parse out, but he’s not going to have fun. The usually fairly feckless FCC is making a precedent-shifting change in response to his “act of civil disobedience,” declaring AI-generated robocalls illegal under the Telephone Consumer Protection Act (TCPA), which it already uses to combat robocalls.

Usually the FCC (technically the FTC) sucks at collecting robocall fines because scammers (and legit companies) spoof their numbers and identities, making them hard to track down. In this case, Kramer is openly bragging about what he did, so I’d imagine the fine will be very large and hard to avoid.

For reference, right-wing propagandists Jacob Wohl and Jack Burkman were fined $5,134,500 for 1,141 illegal robocalls the duo made in a bid to confuse and mislead state voters. I’d suspect that this fine will be bigger. Kramer will also likely face a litany of lawsuits, and whatever additional charges the federal government can drum up to make an example of him. Which he claims is what he wanted, so enjoy.

Filed Under: consumer protection, deepfake, fcc, joe biden, politics, ratfuckery, robocalls, tcpa

Robocallers Used Sloppy Biden Deepfake To Try And Keep Voters From The Polls On Presidential Primary Day

from the the-check-is-coming-due dept

Wed, Jan 24th 2024 05:29am - Karl Bode

It’s equal parts annoying and bizarre that we’ve normalized the fact that scammers, scumbags, debt collectors, partisan operatives, and marketers have made the U.S.’ primary voice communication platform largely unusable. Americans received 3.8 billion robocalls last month, and despite some modest inroads in fighting the problem, it continues to grow for reasons we’ve already well established.

This week the problem popped up again, after some anonymous political ratfuckers tried to keep voters from heading to the polls on January 23rd’s presidential primary. The New Hampshire Attorney General’s office says it’s investigating the ploy, and in a statement noted the calls used a (badly) deepfaked version of President Biden’s voice to convince voters to stay home:

“The message, which was sent on January 21, 2024, stated “your vote makes a difference in November, not this Tuesday.” Although the voice in the robocall sounds like the voice of President Biden, this message appears to be artificially generated based on initial indications.”

Impressive sleuthing there, champs.

NBC has a recording of the clearly fake Biden voice call, and notes that the call spoofed the number of “a prominent New Hampshire Democrat.” While Biden’s name wasn’t actually on the New Hampshire ballot due to an intra-party scheduling feud, there was an active campaign by the spoofed caller in question to conduct a write-in campaign in the state. It’s unclear how many voters received the calls.

There are, of course, numerous problems here. Voter data can easily be purchased from data brokers because corrupt federal officials refuse to regulate data brokers or pass even a baseline privacy law for the internet era. At the same time, federal anti-robocalling efforts have been consistently undermined by lobbyists for legitimate companies that keep weakening consumer protections and boxing in regulators, and would prefer you think robocalling is a tactic exclusive to scammers.

So even if the perpetrators of this political ratfucking effort are identified, whether or not they’re meaningfully penalized (by state and federal regulators that can’t even competently find or fine robocallers most of the time) is an open question. And while this was by every indication a relatively minor scam of low quality, this kind of fusion of deepfakes and robocalling is only just heating up.

Filed Under: consumer protection, deepfake, fcc, political, politics, robocalls

Deepfake Of Tom Cruise Has Everyone Freaking Out Prematurely

from the not-deep-enough-yet dept

You may have heard that in recent days a series of deepfake videos appeared on TikTok of a fake Tom Cruise looking very Tom-Cruise-ish all while doing mostly non-Tom-Cruise-ish things. After that series of short videos came out, the parties responsible for producing them, Chris Ume and Cruise impersonator Miles Fisher, put out a compilation video sort of showing how this was all done.

As you can see, this was all done in the spirit of educating the public on what is possible with this kind of technology and, you know, fun. Unfortunately, some folks out there aren’t finding any fun in this at all. Instead, there is a certain amount of understandable fear of how this technology might disrupt our lives, and it is leading to less understandable conclusions about what we should do about it.

For instance, some folks apparently think that deepfake outputs should be considered the intellectual property of those who are the subjects of the deepfakes.

A recent deepfake of Hollywood star “Tom Cruise” sparked a bombshell after looking very close to real. Now it has been claimed they are on their way to becoming so good, that families of the dead should own the copyright of their loved ones in deepfakes.

Lilian Edwards, a professor of law and expert in the technology, says the law hasn’t been fully established yet. She believes many will claim they should own the rights, while some may not.

She told BBC: “For example, if a dead person is used, such as (the actor) Steve McQueen or (the rapper) Tupac, there is an ongoing debate about whether their family should own the rights (and make an income from it).”

Now, I want to be somewhat generous here, but this is still a terrible idea. Let’s just break this down practically. In the interest of being fair, it is understandable that people would be creeped out by deepfake creations of either their dead relatives or themselves. Let’s call that a given. But why is the response to that to try to inject some kind of strange intellectual property right into all of this? Why should Steve McQueen’s descendants have some right to control this kind of output? And why are we using McQueen and Tupac as the examples here, given that both are public figures? What problem does this solve?

The answer would be, I think: control over the likeness rights of a person. But such control is fraught with potential for overreach and over-protection, coupled with a history of a total lack of nuance about what should not be considered infringing behavior and what counts as fair use. Techdirt’s pages are littered with examples of this. Add to all of this that purveyors of deepfakes are quite often internationally located, anonymous, and unlikely to pay the slightest attention to the kind of image likeness rights being bandied about, and you really have to wonder why we’re even entertaining this subject.

And then there are the people who think this Tom Cruise deepfake means that soon we’ll simply have no functional legal system at all.

The CEO of Amber, a video verification site, believes deepfake evidence will raise reasonable doubt. Mr Allibhai told us: “Deepfakes are getting really good, really fast.

“I am worried about both aural/visual evidence being manipulated and also just the fact that when believable fake videos exist, they will delegitimise genuine evidence and defendants will raise reasonable doubt. When the former happens, innocent people will go to jail and when the latter happens, criminals will be set free. Due process will be compromised and a core foundation of democracy is undermined. Judges will drop cases, not necessarily because they believe jurors will be unable to tell the difference: they themselves, and most humans for that matter, will be unable to tell the difference between fact and fiction soon.”

Folks, we really need to slow our roll here. Deepfake technology is progressing. And it’s not progressing slowly, but nor is it making insane leaps heretofore unforeseen. The collapse of the legal system as a result of nobody being able to tell truth from fiction may well come one day, but it certainly won’t be heralded by a Tom Cruise deepfake.

In fact, you really have to dial in on how the Cruise videos were made to understand how unique they are.

The Tom Cruise fakes, though, show a much more beneficial use of the technology: as another part of the CGI toolkit. Ume says there are so many uses for deepfakes, from dubbing actors in film and TV, to restoring old footage, to animating CGI characters. What he stresses, though, is the incompleteness of the technology operating by itself. Creating the fakes took two months to train the base AI models (using a pair of NVIDIA RTX 8000 GPUs) on footage of Cruise, and days of further processing for each clip. After that, Ume had to go through each video, frame by frame, making small adjustments to sell the overall effect; smoothing a line here and covering up a glitch there. “The most difficult thing is making it look alive,” he says. “You can see it in the eyes when it’s not right.”

Ume says a huge amount of credit goes to Fisher; a TV and film actor who captured the exaggerated mannerisms of Cruise, from his manic laugh to his intense delivery. “He’s a really talented actor,” says Ume. “I just do the visual stuff.” Even then, if you look closely, you can still see moments where the illusion fails, as in the clip below where Fisher’s eyes and mouth glitch for a second as he puts the sunglasses on.

This isn’t something where we’re pushing a couple of buttons and next thing you know you’re seeing Tom Cruise committing a homicide. Instead, creating these kinds of deepfakes takes time, hardware, skill, and, in this case, a talented actor who already looked like the subject of the deepfake. It’s a good deepfake, don’t get me wrong. But it is neither easy to make nor terribly difficult to spot clues for what it is.
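For readers curious what “two months to train the base AI models” actually involves: classic face-swap deepfakes of this sort are typically built around a shared-encoder, dual-decoder autoencoder. The article doesn’t say which software Ume used, so the PyTorch sketch below is purely illustrative of the general technique, with every class name and hyperparameter invented for the example; real tools layer face detection, alignment, masking, and often GAN losses on top of this skeleton.

```python
# Minimal, illustrative sketch of the shared-encoder / dual-decoder
# autoencoder idea behind face-swap deepfakes. Hypothetical example only,
# not the actual pipeline used for the Cruise videos.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses an aligned 64x64 face crop into a shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent code; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a = Decoder()  # learns to rebuild the impersonator's face (Fisher)
decoder_b = Decoder()  # learns to rebuild the target's face (Cruise)

opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.MSELoss()

faces_a = torch.rand(8, 3, 64, 64)  # stand-in batches; real training uses
faces_b = torch.rand(8, 3, 64, 64)  # thousands of aligned video frames

# Training: each decoder learns to reconstruct *its own* person's faces
# from the shared latent space.
for step in range(100):
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The "swap": encode the impersonator's performance, decode it with the
# *target's* decoder, so the target's face follows the actor's movements.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

The last line is the whole trick: because both decoders share one encoder, the latent code ends up capturing pose, lighting, and expression, so decoding the impersonator’s performance with the target’s decoder renders the target’s face performing the actor’s movements. The months of training and the frame-by-frame cleanup Ume describes are what it takes to make that raw output look alive.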

All of which isn’t to say that deepfakes might not someday present problems. I actually have no doubt that they will. But as with every other kind of new technology, you’re quite likely to hear a great deal of exaggerated warnings and fears compared with what those challenges will actually be.

Filed Under: copyright, deepfake, publicity rights, tom cruise

Facebook Tested With Deepfake Of Mark Zuckerberg: Company Leaves It Up

from the as-it-should dept

Over the last few weeks there’s been a silly debate over whether or not Facebook made the right call in agreeing to leave up some manipulated videos of House Speaker Nancy Pelosi that were slowed down and/or edited, to make it appear like she was either confused or something less than sober. Some Pelosi-haters tried to push the video as an attack on Pelosi. Facebook (relatively quickly) recognized that the video was manipulated, and stopped it from being more widely promoted via its algorithm — and also added some “warning” text for anyone who tried to share it. However, many were disappointed that Facebook didn’t remove the video entirely, arguing that Facebook was enabling propaganda. Pelosi herself attacked Facebook’s decision, and (ridiculously) called the company a “willing enabler” of foreign election meddling. However, there were strong arguments that Facebook did the right thing. Also, it seems worth noting that Fox News played one of the same video clips (without any disclaimer) and somehow Pelosi and others didn’t seem to think it deserved the same level of criticism as Facebook.

Either way, Facebook defended its decision and even noted that it would do the same with a manipulated video of Mark Zuckerberg. It didn’t take long to put that to the test, as some artists and an advertising agency created a deepfake of Zuckerberg saying a bunch of stuff about controlling everyone’s data and secrets and whatnot, and posted it to Facebook-owned Instagram.

And… spoiler alert: Facebook left it up.

“We will treat this content the same way we treat all misinformation on Instagram,” a spokesperson for Instagram told Motherboard. “If third-party fact-checkers mark it as false, we will filter it from Instagram’s recommendation surfaces like Explore and hashtag pages.”

This actually is not a surprise (nor should it be). People keep wanting to infer nefarious intent in various content moderation choices, and we keep explaining why that’s almost never the real reason. Mistakes are made constantly, and some of those mistakes look bad. But these companies do have policies in place that they try to follow. Sometimes they’re more difficult to follow than other times, and they often involve a lot of judgment calls. But in cases like the Pelosi and Zuckerberg manipulated videos, the policies seem fairly clear: pull them from the automated algorithmic boost, and maybe flag them as misinformation, but allow the content to remain on the site.
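Stripped of the drama, the policy described here reduces to a simple decision rule. Here’s a toy Python sketch that just makes the “demote and label, don’t delete” logic explicit; the names are hypothetical, and this is not Facebook’s actual system or API:

```python
# Toy sketch of the manipulated-media policy described above.
# All names are hypothetical; this is not Facebook's actual system.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    marked_false_by_fact_checkers: bool


def moderate(post: Post) -> dict:
    """Demote and label flagged misinformation, but never remove it."""
    if post.marked_false_by_fact_checkers:
        return {
            "remove": False,             # the content stays on the site
            "algorithmic_boost": False,  # pulled from Explore/recommendations
            "warning_label": True,       # flagged as misinformation on share
        }
    return {"remove": False, "algorithmic_boost": True, "warning_label": False}


# The Pelosi and Zuckerberg videos both land in the first branch.
print(moderate(Post("zuck-deepfake", marked_false_by_fact_checkers=True)))
```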

So, once again, we end up with a “gotcha” story that isn’t.

Of course, now that Pelosi and Zuck have faced the same treatment, perhaps Pelosi could get around to returning Zuckerberg’s phone call. Or would that destroy the false narrative that Pelosi and her supporters have cooked up around this story?

Oh, and later on Tuesday, CBS decided to throw a bit of a wrench into this story. You see, the fake Zuckerberg footage is made to look as though it’s Zuck appearing on CBS News, and the company demanded the video be taken down as a violation of its trademark:

Perhaps complicating the situation for Facebook and Instagram was a call late Tuesday from CBS for the company to remove the video. The clip of Zuckerberg used to make the deepfake was taken from an online CBS News broadcast. “CBS has requested that Facebook take down this fake, unauthorized use of the CBSN trademark,” a CBS spokesperson told CNN Business.

Of course, if Facebook gives in to CBS over this request, it will inevitably (stupidly) be used by some to argue that Facebook used a different standard for disinformation about its own exec, when the reality would just be a very different kind of claim (trademark infringement, rather than just propaganda). Hopefully, Facebook doesn’t cave to CBS and points out to the company the rather obvious fair use arguments for why this is not infringing.

Filed Under: content moderation, deep fake, deepfake, mark zuckerberg, nancy pelosi
Companies: facebook, instagram