No, Google Isn't Hiding Elizabeth Warren's Emails To Promote Mayor Pete
from the not-how-it-works dept
Content moderation at scale is impossible. This time, it’s email content moderation. This week a new publication called The Markup launched. It’s a super smart group of folks who are doing deep data-driven investigative reporting of companies in and around the tech space — and I’m very excited to see what they do. I was going to write about the project overall and its goals, but instead I’m going to write about one of its first stories, done in partnership with the Guardian, entitled Swinging the Vote?, and which looks at Gmail’s filtering system, specifically as it regards political emails from Presidential candidates.
A few years back, Google added the “Promotions” tab to Gmail as a way of (hopefully) automagically sorting out emails that aren’t quite spam: the general promotional messages you probably don’t want cluttering up your inbox. Personally, I don’t use it, as I use a different filtering setup entirely that overrides Gmail’s defaults. However, for many people it’s proven to be quite useful. The reporters at The Markup conducted a worthwhile experiment:
The Markup set up a new Gmail account to find out how the company filters political email from candidates, think tanks, advocacy groups, and nonprofits.
We found that few of the emails we’d signed up to receive — 11 percent — made it to the primary inbox, the first one a user sees when opening Gmail and the one the company says is “for the mail you really, really want.”
Half of all emails landed in a tab called “promotions,” which Gmail says is for “deals, offers, and other marketing emails.” Gmail sent another 40 percent to spam.
Very interesting! What was perhaps even more interesting was the chart — which quickly rocketed around social media — showing that some candidates had their emails go into the Primary Inbox at a much, much higher rate than others:
You’ll notice a few standouts. 63% of Pete Buttigieg’s emails made it to the Inbox, as did 47% of Andrew Yang’s. Everyone else was much closer to 0% with quite a few — including both Elizabeth Warren and Joe Biden — at 0%.
The reporters at The Markup also published a companion piece that gives the details of how they went about doing this research and (yes!) they even provide the data and the code on Github. This is a fantastic and transparent way of doing such journalism — and I applaud them for that.
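For readers who want a concrete sense of what that kind of measurement involves, here is a minimal sketch of the general approach (this is not The Markup’s actual code, which lives in its GitHub repo): authenticate a test Gmail account, pull the messages from each campaign address, and tally which of Gmail’s labels each message landed under. The token.json credentials file and the sender address below are placeholder assumptions for illustration.

```python
# A rough, hypothetical sketch (not The Markup's pipeline) of tallying where a
# sender's emails land in Gmail, using Google's official API client libraries.
# Assumes OAuth credentials have already been saved to token.json; the sender
# address is a placeholder.
from collections import Counter

from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/gmail.readonly"]
SENDERS = ["info@example-campaign.com"]  # placeholder campaign addresses


def categorize(label_ids):
    """Map Gmail's label IDs onto the buckets discussed above."""
    if "SPAM" in label_ids:
        return "spam"
    if "CATEGORY_PROMOTIONS" in label_ids:
        return "promotions"
    if "INBOX" in label_ids and "CATEGORY_PERSONAL" in label_ids:
        return "primary"
    return "other"


def tally(service, sender):
    """Count how many messages from `sender` ended up in each bucket."""
    counts = Counter()
    request = service.users().messages().list(
        userId="me", q=f"from:{sender}", includeSpamTrash=True
    )
    while request is not None:
        response = request.execute()
        for stub in response.get("messages", []):
            msg = service.users().messages().get(
                userId="me", id=stub["id"], format="minimal"
            ).execute()
            counts[categorize(msg.get("labelIds", []))] += 1
        request = service.users().messages().list_next(request, response)
    return counts


if __name__ == "__main__":
    creds = Credentials.from_authorized_user_file("token.json", SCOPES)
    service = build("gmail", "v1", credentials=creds)
    for sender in SENDERS:
        counts = tally(service, sender)
        total = sum(counts.values()) or 1
        shares = {bucket: f"{100 * n / total:.0f}%" for bucket, n in counts.items()}
        print(sender, shares)
```

The useful detail is that Gmail exposes its tab routing as labels: a message in the Promotions tab carries CATEGORY_PROMOTIONS, one in the primary inbox carries INBOX plus CATEGORY_PERSONAL, and spam carries SPAM, so the sorting decisions are directly measurable.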
However, the very framing of the original story itself… is problematic. It’s one thing to be open about how you conducted the research. But starting with a title like “Swinging the vote” and highlighting the chart above almost immediately resulted in lots of people on Twitter assuming (or suggesting) that Google was doing this deliberately, and that they were purposely making the decision to tilt the playing field towards Buttigieg. This includes vocal big tech critic Roger McNamee, who declared this was evidence that “Gmail has its thumb in the scale.” Another Google critic, who is fond of misleading conspiracy theories about the company, called it “election meddling” and claims that Google was giving certain candidates “special treatment.”
Except… that’s almost certainly not the case. No one at Google on the Gmail spam team is thinking about promoting one Presidential candidate over another. Instead, this is just yet another example of Masnick’s Impossibility Theorem, but applied to email moderation, rather than social media. Content moderation at scale is impossible to do well and will always piss off some people.
Indeed, looking over the data, the most obvious and most likely explanation is simply this: Buttigieg and Yang hired competent email marketers who know how to craft emails that are (1) less likely to trigger the algorithm, and (2) less likely to be marked as spam by users (an important signal that feeds back into the algorithm). The rest of the candidates… did not. And thus, their emails went to the promotions and spam folders because they had characteristics that are more closely associated with promotions and spam. And, yet, The Markup story doesn’t bother to get into any of that — and thus leaves the speculation wide open, allowing plenty of folks to leap in.
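Nobody outside Google knows the exact signals its filter uses; it is a proprietary, machine-learned system. But the sorts of things deliverability-conscious email teams lint for before hitting send are well known, and a hedged sketch of that kind of pre-send check looks something like the following. The specific phrases and thresholds here are illustrative assumptions, not Gmail’s rules.

```python
# Illustrative only: Gmail's real classifier is a proprietary, machine-learned
# system. These are generic deliverability heuristics of the sort email teams
# lint for before sending, not Gmail's actual rules.
import email
import re
from email import policy

SPAMMY_PHRASES = ["act now", "free money", "100% free", "last chance", "winner"]


def deliverability_warnings(raw_message: bytes) -> list:
    """Return human-readable warnings for a raw email message (e.g. a .eml file)."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    warnings = []

    subject = msg.get("Subject", "")
    if subject.isupper():
        warnings.append("subject line is all caps")
    if subject.count("!") > 1:
        warnings.append("multiple exclamation points in subject")

    body = msg.get_body(preferencelist=("plain", "html"))
    text = body.get_content() if body else ""
    lowered = text.lower()
    for phrase in SPAMMY_PHRASES:
        if phrase in lowered:
            warnings.append(f"spam-associated phrase: {phrase!r}")

    # Lots of links relative to actual text is a classic promotional tell.
    links = re.findall(r"https?://\S+", text)
    words = len(text.split()) or 1
    if len(links) / words > 0.1:
        warnings.append("high link-to-text ratio")

    # Bulk senders are expected to offer recipients a clear way out.
    if "unsubscribe" not in lowered and "List-Unsubscribe" not in msg:
        warnings.append("no visible unsubscribe mechanism")

    return warnings
```

Something like deliverability_warnings(open("draft.eml", "rb").read()) would flag a draft before it goes out. A campaign whose messages routinely trip checks like these, and whose recipients then hit “report spam,” effectively trains the filter against itself. No conspiracy required.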
Again, I’m super excited about The Markup as a project and believe it will put out plenty of important and impactful journalism in the days, weeks, months and years to come. I recommend people read over The President’s Letter from the site’s President Nabiha Syed (a past podcast guest) and Editor’s Letter from Julia Angwin — both of which present a compelling vision of what The Markup will be.
But this story shows how important context is in presenting a story. This is a data-driven story — which is great. But if the necessary context is not provided, especially on a topic so fraught with speculation, people are going to rush in and jump to conclusions. The Markup itself did not directly say that Google was doing this deliberately, but its total failure to suggest why this might be happening, along with a cringe-worthy headline, opened the door for others to jump in and assume as much — and that’s a shame.
Filed Under: andrew yang, content moderation, content moderation at scale, elizabeth warren, email, gmail, pete buttigieg, political emails, promotions, spam
Companies: google
Andrew Yang's Horrible, No Good, Very Bad Tech Policy
from the did-you-hire-josh-hawley-to-write-your-policy? dept
Andrew Yang has been a bit of a surprise Presidential candidate this year, and is often described as a former “tech exec” or “Silicon Valley’s presidential candidate.” The “tech exec” claim seems a bit exaggerated: he was a lawyer, then ran a test prep company, and then a non-profit. Still, he got lots of attention for being a bit wonky and at least speaking the language of tech. His main claim to fame has been his support for a Universal Basic Income of $1,000/month, which is a popular idea here in Silicon Valley.
However, the more we hear from Yang about his tech policy ideas, the more ridiculous and completely disconnected from the actual tech world he seems. He got a lot of flak a couple of months back when he advocated for voting from your mobile device using blockchain, which he declared to be “fraud proof.” This was universally mocked by security professionals and cryptocurrency experts, including one who described the proposal as “unbelievably dumb.”
So, his pro-tech campaign had already hit some choppy waters, and things got much, much worse last week when he introduced his official policy for regulating technology firms, a policy so filled with bad ideas that I initially thought it was a parody. It may be the single worst tech policy proposal of any current or former candidate for President (and, frankly, nearly all of them are pretty bad). It’s as if he took all the terrible ideas that Senator Josh Hawley has been proposing over the last year or so and said “Oh, I can top all of those with worse proposals.”
Let’s go through the details one by one.
Regulate the use of data and privacy by establishing data as a property right. The associated rights will enable individuals to retain ownership and share in the economic value generated by their data.
No, no, no, no. I’d been meaning to write a separate blog post about this for a while, but there are a few folks out there pushing for the idea that “data” should now be considered a form of “intellectual property,” with the originator holding some sort of property right over it. It’s a horrible idea. Take two horribly misunderstood and abused areas — intellectual property law and privacy — and awkwardly mash them together and pretend it will actually help? Come on. If we’ve learned anything about trying to build property rights over information, it’s that it creates all sorts of awful unintended consequences. Extending that model to personal data will only make those consequences worse.
I mean, given how many copyright, patent, and trademark trolls already exist, aren’t folks super excited about the ability to soon deal with data or privacy trolls as well? It’ll be a real blast. But what it won’t do is actually protect anyone’s privacy. Nor will it allow them to “share in the economic value generated by their data.” That’s not how any of this actually works. In fact, you already share in the economic value generated by your data by getting to use all the amazing services you already use. Charging for the data doesn’t open that up. In fact, it’s likely to close it down.
Minimize health impacts of modern tech on our people, particularly our children. I will create a Department of the Attention Economy that focuses on smartphones, social media, gaming, and apps, and how to responsibly design and use them, including age restrictions and guidelines.
A “Department of the Attention Economy”? What the fuck is that I don’t even…. And, again, this sounds exactly like Josh Hawley’s desire to appoint himself the product manager for the internet, by determining exactly how various apps and services must be designed. What makes Yang think that some bureaucrats are going to know how to do this well? Also, while I get that there are potentially reasonable concerns about how various apps and services are used, much of it still smacks of moral panics. Would Yang have created a special new regulatory agency to regulate rock and roll back in the 60s, determining age restrictions and guidelines for what kinds of songs were okay?
Stop the spread of misinformation that is eroding trust in our institutions and fanning the flames of polarization in our society. I will scale up VAT on digital ads to hasten a shift away from ad-driven business models, require disclosures on all ads, regulate bot activity, and regulate algorithms, addressing the grey area between publishers and platforms.
How do you stop the spread of misinformation without violating the 1st Amendment? Can someone ask Yang that? And if he thinks that “digital ads” are the problem, or that a VAT will somehow stop such misinformation, he’s even more disconnected from the reality of how tech works than I had previously imagined. Finally, that line “addressing the grey area between publishers and platforms” is a nod to the made-up nonsense pushed by a bunch of conservative trolls pretending that there’s some legal distinction between a publisher and a platform. There isn’t. There isn’t a legal grey area. The law is quite clear and the courts have had no problem in understanding the issues. It’s just a bunch of trolls — usually the folks spreading misinformation — who like to pretend there’s some grey area.
In the details to this plan, Yang dumps on Section 230, because of course he does:
Section 230 of the Communications Decency Act14 absolves platforms from all responsibility for any content published on them. However, given the role of recommendation algorithms — which push negative, polarizing, and false content to maximize engagement — there needs to be some accountability.
That “14” actually is a footnote to the EFF’s page about 230 that debunks Yang’s blatantly false claim that Section 230 “absolves platforms from all responsibility for any content published on them.” It does not and never has. It simply places the legal liability on whoever created the content. Which is a sensible and reasonable thing. And, even then, it only immunizes platforms from certain types of liability on certain types of content, not “any content published on them.”
Indeed, it’s once again ironic that someone pointing to Section 230 as being a problem because of misinformation being spread online… is spreading blatantly false information about Section 230. Yang’s actual “proposal” for 230 is also pretty much a 404 error:
Amend the Communications Decency Act to reflect the reality of the 21st century — that large tech companies are using tools to act as publishers without any of the responsibility.
What does that even mean? If anyone thinks that the large tech companies have no responsibility, they haven’t been paying attention and literally have no clue what they’re talking about. Apparently Yang has no clue what he’s talking about.
Adopt a 21st century approach to regulation that increases the knowledge and capacity of government while using new metrics to determine competitiveness and quickly identifies emerging tech in need of regulation.
Out of the four key “prongs” of his proposal, this is the only one that touches on an idea that makes sense. We’ve talked quite a lot about increasing the knowledge and capacity of government, but the latter half of the prong should raise eyebrows. He acts as if any emerging area of tech is “in need of regulation.” How ’bout we use this new knowledge and capacity to actually explore whether regulation is needed before rushing in to regulate every new area of tech? Given how many emerging areas of innovation tend to be threatened or stifled by overregulation, the fact that Yang’s approach seems to be to regulate everything as soon as it emerges should be disqualifying.
And even here, nearly all the details are odd. He does support bringing back the Office of Technology Assessment, which is good, but then he goes into a weird and nonsensical idea of creating a “Department of Technology” and making it a Cabinet-level agency. This smacks of the idiotic days in the early 2000s when every company would appoint a “Chief Digital Officer,” as if “digital” were its own silo like marketing or finance or human resources. As we explained back when Yang was still tutoring people to take their GMATs, thinking digitally isn’t a separate job function. These days it permeates all jobs, all government, and all society. Siloing it into a special new department makes no sense. Sure, get everyone in government more knowledgeable and tech literate, but that doesn’t require a whole new Department, which would imply that only it needs to understand tech.
Hopefully we can now get everyone to admit that Yang is not quite the tech-savvy “former tech exec” that the media likes to pretend he is. For what it’s worth, I sent Yang’s proposal to a close friend in the tech industry who had been a vocal Yang supporter, and got back a text saying “WTF? Not even one point is good. It’s like he doesn’t know tech at all.” Yup.
Filed Under: andrew yang, attention economy, data, intellectual property, privacy, section 230, tech policy