The Social Dilemma Manipulates You With Misinformation As It Tries To Warn You Of Manipulation By Misinformation
from the it's-not-good dept
There’s been a lot of buzz lately about the Netflix documentary The Social Dilemma, which we’re told reveals “the dangerous human impact of social networking, with tech experts sounding the alarm on their own creations.” I know that the documentary has generated widespread discussion, especially among those not in the tech space. But there’s a problem with the film: nearly everything it claims social media platforms do (manipulating people with misinformation) the film does itself. It is horribly one-sided, frequently misrepresents some fairly basic things, and then uses straight-up misinformation to argue that social media has some sort of godlike control over the people who use it. It’s nonsense.
Also, I should note that nowhere do they mention that Netflix, the company which funded, produced, distributed, and widely promoted the documentary, is also arguably the first big internet company to spend time, money, and resources on trying to perfect the “recommendation algorithm” that is at the heart of the film’s argument that these internet companies are evil. I guess some folks no longer remember, but a decade ago Netflix even held a huge $1 million prize contest asking anyone to try to build a better recommendation algorithm. (Update: it has been claimed that, despite this being a “Netflix original” widely promoted and distributed by Netflix as such, Netflix did not “fund” the film. That doesn’t really change anything here, given everything else Netflix has done to make this film widely seen.)
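For anyone who has forgotten what that contest was actually about: the Netflix Prize helped popularize latent factor (“matrix factorization”) recommenders, which do nothing more mystical than predict scores for items you haven’t rated based on patterns in what you, and people like you, have rated. Here’s a deliberately toy sketch of the idea in Python; the data, dimensions, and training loop are all invented for illustration and are nobody’s production code:

```python
import numpy as np

# A deliberately tiny matrix-factorization recommender, the kind of latent
# factor model the Netflix Prize era popularized. All data and dimensions
# here are made up for illustration.

rng = np.random.default_rng(0)

n_users, n_items, n_factors = 5, 4, 2

# Observed ratings; 0 means "not rated" in this toy setup.
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 0, 4],
    [0, 1, 5, 4],
], dtype=float)
observed = ratings > 0

# Learn low-dimensional "taste" vectors for users and items by gradient
# descent on the squared error over the observed entries only.
U = rng.normal(scale=0.1, size=(n_users, n_factors))
V = rng.normal(scale=0.1, size=(n_items, n_factors))
lr, reg = 0.01, 0.02

for _ in range(2000):
    err = np.where(observed, ratings - U @ V.T, 0.0)
    U += lr * (err @ V - reg * U)
    V += lr * (err.T @ U - reg * V)

# The output is just a table of predicted scores for unrated items;
# that, not mind control, is what a "recommendation algorithm" produces.
print(np.round(U @ V.T, 2))
```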
There are a number of reasons to complain about what is portrayed in the film, but I’ll highlight just a few key ones. One narrative device used throughout the film is a set of weird… not quite “re-enactments,” but odd “afterschool special”-style fictional clips of a family whose kids really use social media a lot. And, yes, there are plenty of kids out there who have trouble putting down their phones and tablets, and there are reasons to be concerned about that (or at least to investigate the larger ramifications of it). But the film not only exaggerates those concerns to a ridiculous degree, reminiscent of Reefer Madness-style moral panic propaganda, it also repeatedly suggests (1) that social media can ruin a kid’s life in like two days, and (2) that social media can, within a matter of a week or two, turn an ordinary teen into a radicalized hate monger who will join in-person mobs (leading to arrest).
Even worse, the fictional clips go a level deeper, trying to anthropomorphize the evil “algorithm” in the form of three white dudes standing… on the deck of the Starship Enterprise? Or some other weird sci-fi trope.
Throughout the film, these three guys, seated in front of weird computer-ish controls, are shown trying to “increase engagement” of the son in the family through any means necessary, including magically forcing some girl they think he likes to interact with him. And, I’m sorry, but that’s… not how any of this works.
It is literally emotionally engaging misinformation designed to impact our beliefs and actions. In other words, the same thing that the film claims social media companies are doing.
But it’s also the same thing any company has tried to do in the past through… advertising. One theme that runs throughout the film, and is dead wrong, is the idea that social media advertising can somehow “control” you. And… uh… no. Come on. Social media advertising is a joke. Can it better target some ads? Sure thing, and if those ads target stuff you actually find useful, then… that’s a good thing? But, most social media advertising is still garbage. It’s why so many of us block or ignore ads. Because they’re still just not that good.
The film is really designed to showcase Tristan Harris, who probably takes up 1/3 of the screen time. Tristan made his name by being the internal “ethicist” at Google for a little while before setting out on his own to become the high prophet of “internet companies are trying to manipulate us!” But, as others have pointed out, Tristan has a habit of vastly exaggerating things, or being misleading himself. Just one example, highlighted by Antonio Garcia-Martinez in his must-read dismantling of the film: Harris argues that we didn’t have these same problems with earlier technologies, like the bicycle. But as Antonio points out, there was, in fact, quite a large moral panic about the bicycle, and the Pessimist’s Archive makes the point quite clearly in a short video clip.
As we’ve discussed for years, pretty much every new form of technology or entertainment — including the waltz, chess, the telephone and more — has resulted in similar moral panics, almost none of which proved to be accurate.
That doesn’t mean that there aren’t important concerns and messages we ought to think about regarding the design of the internet and the various services we use. But the problem is that this film totally fails to adequately address any of those concerns, and uses exactly the wrong messengers to deliver the message. The vast majority of the talking heads in the film are former (and in some cases current) employees of the big tech companies who “regret” what happened with what they built. But they don’t seem to have any more of an idea of what to do other than “put down your phone,” which, like “just say no” drug campaigns and abstinence-only sex education programs, has long been proven to be absolutely useless.
Also, it should be noted that the guy who gets the second-most screen time, former Facebook and Pinterest executive Tim Kendall, is currently the CEO of a company that tries to help you limit the time you spend on your phone. Anyone think he has, perhaps, ulterior motives to play up how “addictive” he made Facebook and Pinterest?
Notably, in nearly every case, the film takes the most nefarious and extreme explanations for what is happening at social media companies. At no time does it present a single person who offers a counterpoint, or suggests that the descriptions in the film are exaggerated and misleading. Again, all it does is use misinformation and manipulation to warn you about other tech companies supposedly using misinformation to manipulate.
On top of that, as many people have noted, there are many, many activists and experts — though frequently not white male former tech bros in t-shirts — who have been working on actual ways to improve technology and services, and to provide real solutions. But the film ignores all of them as well.
The entire conceit of the film is that these few tech giants (again, notably not Netflix, despite it being the leader in recommendation algorithms) have some sort of “total control” over the minds and actions of people. There’s some nonsense in there from Harvard professor Shoshana Zuboff, coiner of the term “Surveillance Capitalism,” which always feels like a useful phrase until you dig in and realize that Zuboff has less than no clue about how the internet actually works. She insists that these companies are selling “human futures,” which… is… not… how any of this works.
As Antonio summarizes, Zuboff seems to think that Silicon Valley is doing magic that it is not doing. This is akin to Josh Hawley arguing last week that the tech platforms have “total control” over our brains and our voting abilities:
Less diplomatically, everything Zuboff says is a nonsensical non sequitur.
“This is a world of certainty.”
Then why am I, crusty ad tech veteran, building probabilistic models all day?
“This is a totally new world.”
No it isn’t. I was there, at Facebook when it happened. We copied it all from the direct-mail people who’ve done it for decades.
“They’re trading human futures like we do pork-belly futures.”
Lolwut?
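To make Antonio’s point concrete: what an ad system actually spits out is an estimated probability that a given person clicks a given ad, not anything resembling certainty about what they’ll do. Here’s a toy sketch of that kind of probabilistic model; every feature, weight, and number in it is invented for illustration:

```python
import numpy as np

# A toy probabilistic click model: ad systems estimate the *probability*
# that someone clicks, they don't deal in certainties. Every feature and
# weight below is invented for illustration.

def click_probability(features: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """Logistic regression: squash a weighted feature sum into a 0-1 probability."""
    return float(1.0 / (1.0 + np.exp(-(features @ weights + bias))))

# Hypothetical features: [clicked a similar ad before, time-of-day score, on mobile]
features = np.array([1.0, 0.6, 1.0])
weights = np.array([0.8, -0.3, 0.2])   # invented model parameters
bias = -2.0

p = click_probability(features, weights, bias)
print(f"Estimated click probability: {p:.1%}")  # an educated guess, never a certainty
```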
The CBC’s coverage of the film rightly points out that it greatly exaggerates reality, to the point of being misinformation.
One of the ways the documentary represents surveillance, Chun noted, is by using three human actors trying to entice someone to use their phone and stay on social media longer. Along with their presentation of social media as an addiction (“there’s a difference between a habit and an addiction” Chun said) is the fear that is created when people think that real humans have access to all of their information, instead of algorithms that predict human behavior.
Though Chun argued users should not be tracked, she said the idea that algorithms know “everything” about you isn’t correct. She argued the film itself is based on revealing “open secrets,” and the information these services use to present personalized ads doesn’t reflect a deep knowledge of users.
“The idea that somehow they control you is overblown,” she said. “At the same time, you can say that a lot of what they know about you is accurate. But then the question you have to ask yourself is: So what?”
Indeed, the claims about “addiction” are so overblown as to be laughable. The film repeatedly argues that once you’re addicted to social media, these companies can change your thoughts. But… that’s not what addiction is or how it works. It’s not brainwashing.
Anyway, there are many more things wrong with the film. And even if people can agree that there are some significant problems with today’s internet, I have a hard time believing that the way you fix manipulation and misinformation is by creating a documentary that is full of misinformation and designed to emotionally manipulate people into believing things that just aren’t true.
Filed Under: documentary, manipulation, misinformation, social dilemma, social media, surveillance capitalism, tristan harris
Companies: facebook, google, netflix, twitter, youtube