Content Moderation Case Study: Nextdoor Faces Criticism From Volunteer Moderators Over Its Support Of Black Lives Matter (June 2020)
from the everything-is-politics dept
Summary: Nextdoor is the local “neighborhood-focused” social network, which allows for hyper-local communication within a neighborhood. The system works by having volunteer moderators from each neighborhood, known as “leads.” For many years, Nextdoor has faced accusations of perpetuating racial stereotyping from people using the platform to report sightings of black men and women in their neighborhood as somehow “suspicious.”
With the various Black Lives Matter protests spreading around the country following the killing of George Floyd in Minnesota, many companies have put out messages of support for the Black Lives Matter movement, including Nextdoor, which put up a short blog post entitled Black Lives Matter, with a few links to various groups supporting the movement.
This happened around the same time the site began facing criticism because users posting in support of the Black Lives Matter movement found their posts removed by leads who claimed that posts about the protests violated guidelines against discussing “national and state” political issues (even when the posts were about local protests).
Meanwhile, many of the “leads” were using their own forum to complain about the company’s public support for Black Lives Matter at the same time that they believed discussing such issues on the platform violated rules. The ensuing discussion (which in many ways mimicked the wider national discussion) highlighted how frequently local leads bring their own political viewpoints into their moderation decisions.
When the company also added posts to local Nextdoor communities that highlighted black-owned businesses, as part of its support for Black Lives Matter, it again angered some leads who felt that such posts violated the rules they had been told to enforce.
Decisions to be made by Nextdoor:
- When there are national conversations around movements like Black Lives Matter, when is it appropriate to take a public stand? How will that stand be perceived by users and by local moderators?
- If the company is taking a stand on an issue like Black Lives Matter, should it then make it clear that related content should be kept on the platform — even if some moderators believe it violates other guidelines?
- How much leeway and power should local, volunteer moderators have regarding what content is on the platform?
- How much communication and transparency should there be with those local moderators?
- How involved should the company get with regards to implicit or unconscious bias that may come from non-employee, volunteer moderators?
- Is it feasible to have a rule that suggests that local communities should not be a platform for discussing state or national political issues? How does that rule play out when those “national or state” issues also involve local activism?
Questions and policy implications to consider:
- When issues of national importance, such as civil rights, come to the forefront of the public discussion, there is often the likelihood of them becoming politically divisive. When is it important to take a stand despite this, and how should any messaging be handled — especially in cases where some staff or volunteers may feel otherwise?
- Issues of race are particularly controversial to some, and yet vitally important. How should companies handle these questions and debates?
- Using volunteer moderators to help moderate targeted niche communities has obvious benefits, but how might historical bias and prejudice manifest itself in doing so?
Resolution: Nextdoor has continued to support the Black Lives Matter movement. Gordon Strause, the company’s director of community, went onto the forum where some leads were complaining to explain the company’s position and why it was supporting Black Lives Matter, to push back against those who argued that the movement itself was discriminatory, and to highlight that there were a variety of perspectives and value in learning about other viewpoints:
In an attempt to quell the furor, Gordon Strause, the company’s director of community, wrote on the leads forum on Monday from his “own perspective” and not “on behalf of Nextdoor.” Noting that “it’s of course absolutely true all live [sic] matters, whether they are black, white, brown, blue, or any other color,” he explained his views on Black Lives Matter.
“The goal of the BLM movement, at least as I understand it, is simply to make the point that black lives matter as much as any other lives but too often in America that isn’t actually what happens in practice and this dynamic needs to change,” he wrote.
“While no one that I know or respect believes that looting helps anything, there are folks that I respect (including people in my own family) who believe that riots may be a necessary step to help the country finally understand the scale of injustice that has been happening,” he wrote, “while other folks I respect believe that the riots will be counterproductive and will only undermine the goals they are meant to achieve.” Strause then went on to recommend a book from psychologist Jonathan Haidt and urged leads “to listen and not to judge.”
“While Nextdoor is generally not the place for discussions of national issues, I think it’s going to [sic] hard to restrain those discussions in the coming days without being perceived as taking sides. So rather than trying to do so, I would recommend that Leads instead focus on a different goal: keeping the discussions as civil and issue focused (rather than personality focused) as possible,” he wrote.
Filed Under: black lives matter, case studies, content moderation, local, politics, racism, social media
Companies: nextdoor
Moderate Globally, Impact Locally
from the monumental-balancing-act dept
Every minute, more than 500 hours of video are uploaded to YouTube, 350,000 tweets are sent, and 510,000 comments are posted on Facebook.
Managing and curating this fire hose of content is an enormous task, and one which grants the platforms enormous power over the contours of online speech. This includes not just decisions around whether a particular post should be deleted, but also more minute and subtle interventions that determine its virality. From deciding how far to allow quack ideas about COVID-19 to take root, to the degree of flexibility that is granted to the President of the United States to break the rules, content moderation raises difficult challenges that lie at the core of debates around freedom of expression.
But while plenty of ink has been spilled on the impact of social media on America’s democracy, these decisions can have an even greater impact around the world. This is particularly true in places where access to traditional media is limited, giving the platforms a virtual monopoly in shaping the public discourse. A platform which fails to take action against hate speech might find itself instrumental in triggering a local pogrom, or even genocide. A platform which acts too aggressively to remove suspected “terrorist propaganda” may find itself destroying evidence of war crimes.
Platforms’ power over the public discourse is partly the result of a conscious decision by global governments to outsource online moderation functions to these private sector actors. Around the world, governments are making increasingly aggressive demands for platforms to police content which they find objectionable. The targeted material can range from risqué photos of the King of Thailand, to material deemed to insult Turkey’s founding president. In some instances, these requests are grounded in local legal standards, placing platforms in the difficult position of having to decide how to enforce a law from Pakistan, for example, which would be manifestly unconstitutional in the United States.
In most instances, however, moderation decisions are not based on any legal standard at all, but on the platforms’ own privately drafted community guidelines, which are notoriously vague and difficult to understand. All of this leads to a critical lack of accountability in the mechanisms which govern freedom of expression online. And while the perceived opacity, inconsistency and hypocrisy of online content moderation structures may seem frustrating to Americans, for users in the developing world it is vastly worse.
Nearly all of the biggest platforms are based in the United States. This means not only that their decision-makers are more accessible and receptive to their American user-base than they are to frustrated netizens in Myanmar or Uganda, but also that their global policies are still heavily influenced by American cultural norms, particularly the First Amendment.
Even though the biggest platforms have made efforts to globalize their operations, there is still a massive imbalance in the ability of journalists, human rights activists, and other vulnerable communities to get through to the U.S.-based staff who decide what they can and cannot say. When platforms do branch out globally, they tend to recruit staff who are connected to existing power structures, rather than those who depend on the platforms as a lifeline away from repressive restrictions on speech.
For example, the pressure to crackdown on “terrorist content” inevitably leads to collateral damage against journalism or legitimate political speech, particularly in the Arab world. In setting this calculus, governments and ex-government officials are vastly more likely to have a seat at the table than journalists or human rights activists. Likewise, the Israeli government has an easier time communicating their wants and needs to Facebook than, say, Palestinian journalists and NGOs.
None of this is meant to minimize the scope and scale of the challenge that the platforms face. It is not easy to develop and enforce content policies which account for the wildly different needs of their global user base. Platforms generally aim to provide everyone with an approximately identical experience, including similar expectations with regard to the boundaries of permitted speech. There is a clear tension between this goal and the conflicting legal, cultural and moral standards in force across the many countries where they operate.
But the importance and weight of these decisions demands that platforms get this balancing right, and develop and enforce policies which adequately reflect their role at the heart of political debates from Russia to South Africa. Even as the platforms have grown and spread around the world, the center of gravity of these debates remains in D.C. and San Francisco.
This is the first in a series of articles developed by the Wikimedia/Yale Law School Initiative on Intermediaries and Information appearing here at Techdirt Policy Greenhouse and elsewhere around the internet — intended to bridge the divide between the ongoing policy debates around content moderation, and the people who are most impacted by them, particularly across the global south. The authors are academics, civil society activists and journalists whose work lies on the sharp edge of content decisions. In asking for their contributions, we offered them a relatively free hand to prioritize the issues they saw as the most serious and important with regard to content moderation, and asked them to point to areas where improvement was needed, particularly with regard to the moderation process, community engagement, and transparency.
The issues that they flag include a common frustration with the distant and opaque nature of platforms’ decision-making processes, a desire for platforms to work towards a better understanding of local socio-cultural dynamics underlying the online discourse, and a feeling that platforms’ approach to moderation often does not reflect the importance of their role in facilitating the exercise of core human rights. Although the different voices each offer a unique perspective, they paint a common picture of how platforms’ decision making impacts their lives, and of the need to do better, in line with the power that platforms have in defining the contours of global speech.
Ultimately, our hope with this project is to shed light on the impacts of platforms’ decisions around the world, and to provide guidance on how social media platforms might do a better job of developing and applying moderation structures which reflect the needs and values of their diverse global users.
Michael Karanicolas is a Resident Fellow at Yale Law School, where he leads the Wikimedia Initiative on Intermediaries and Information as part of the Information Society Project. You can find him on twitter at @M_Karanicolas.
Filed Under: content moderation, global, greenhouse, impact, local
Here Comes The Waterfall: 15 MLB Teams To Lift Streaming Blackout For Fox Broadcasts
from the half-way dept
And away we go. Techdirt (myself specifically) has been talking for some time about the impending expansion of major sports streaming options as the cord-cutting trend has continued. It only makes sense: leagues and marketers will go where the audience is. The most recent trend started slowly with the FCC voting to end its blackout rule. That decision was important for streaming, because one of the dumbest ideas that migrated over from broadcast and cable television was that local blackouts of broadcasts and streams were in any way a good thing. Even as the NFL, NBA, NHL and MLB have all incrementally increased their streaming options, those efforts have continued to be hampered by local blackout restrictions.
Well, Major League Baseball just took a giant step over the blackout line and is now effectively straddling it, announcing that local streaming will be available in fifteen markets in the 2016 season.
There is no specific timetable for a potential announcement of a deal between FOX and MLB. The two sides hope to complete the agreement around the end of this season, which would give the league and RSNs a full offseason to market the availability of the new local streams before Opening Day 2016. MLB Commissioner Rob Manfred, working with the league’s president of business and media, Bob Bowman, has made in-market baseball streaming a key league priority, including personally participating in several negotiating sessions.
Per the above, this specific deal is going to be done with MLB teams that have broadcasting deals with Fox. But don’t think for a single moment that that’s where it ends. Even if MLB can’t get similar deals in place for the other half of the teams in the league, which would fully free up the fantastic MLB.TV product for local streaming, any modicum of success that Fox has with this program will be immediately copied by the other broadcasters. They really don’t have a choice. Cord-cutting isn’t going away, and it’s professional and college sports that have long kept subscribers tethered. The trickle of streaming options in sports has been turning into more of a deluge, and the cable industry should be expecting some tough times ahead in the next, oh, say three to five years. Because if Manfred has this on his priority list for MLB, please believe that the commissioners in the other leagues have it on theirs as well.
And when sports streaming really gets going, it’s the end of cable as we currently know it.
Filed Under: baseball, blackouts, local, mlb, online, sports, streaming
Forget Laundering Unauthorized Music Via Music Match, What About AirDrop Darknets?
from the slipped-that-one-right-by-the-goalie dept
In my initial post on Apple’s iTunes-in-the-cloud Music Match offering, I noted the ability to effectively “launder” unauthorized tracks through the service. That’s because it will scan your drive for all tracks — those from iTunes and elsewhere — and make authorized high quality, DRM-free versions of all of those songs available to you on any device “forever.” In theory, this means if you have a lot of unauthorized music, if you pay your $25 and join up, all of those unauthorized tunes become “authorized” via iTunes. Not surprisingly, it’s this aspect of so-called “legitimizing” unauthorized files that seems to be getting so much attention.
To be honest, I don’t think it’s a big deal, beyond the simple note of surprise that the major labels actually allowed this to happen. Beyond that, all the buzz about “legitimizing piracy” is a bunch of hot air. The simple fact of the matter is that once people had these songs on their hard drive, they were effectively legitimized. The only lawsuits were really over distribution. And while there may have been some efforts (such as in the Jammie Thomas case and the Joel Tenenbaum case) to establish where certain files came from, those were minor points and wouldn’t be impacted by Music Match. Basically, this whole focus on “legitimizing” those works is a red herring. No one was getting in trouble for those works on their hard drives, and just because they move into the iTunes cloud doesn’t mean that anything changes. At all.
What may be a much bigger copyright issue is the one raised by James Grimmelmann, who points out the much-less-press-generating announcement of AirDrop, and how it creates local, encrypted, peer-to-peer networks over WiFi. As Grimmelmann notes:
This is going to be yet another darknet vector. Imagine walking into a cafe, browsing someone else’s iTunes library, asking them for one of their albums, and getting it via AirDrop — all without knowing whose computer yours is interacting with. Sony’s rule on dual use technologies almost certainly absolves Apple of liability from any resulting infringement. Instead, this is yet another example of how technological changes are increasing the velocity with which media circulate, regardless of what copyright law may have to say about it.
Kind of makes you wonder if the labels knew about that as part of their agreement over Music Match…
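AirDrop’s internals are not public, but the general pattern Grimmelmann describes — peers on the same WiFi network announcing themselves and finding each other without any central server — can be sketched in a few lines. To be clear, this is an illustrative toy, not Apple’s protocol: the port number, message format, and function names here are invented for the sketch, and real AirDrop layers encryption and Apple’s proprietary discovery on top of this kind of local broadcast.

```python
import json
import socket

DISCOVERY_PORT = 52828  # arbitrary port chosen for this sketch

def make_announcement(peer_name, shared_items):
    """Serialize a tiny announcement a peer could broadcast on the LAN."""
    return json.dumps({"peer": peer_name, "items": shared_items}).encode("utf-8")

def parse_announcement(payload):
    """Decode an announcement received from another peer."""
    msg = json.loads(payload.decode("utf-8"))
    return msg["peer"], msg["items"]

def broadcast_presence(peer_name, shared_items):
    """Send one announcement to every host on the local subnet.

    Any peer listening on DISCOVERY_PORT will learn this peer's name and
    what it is offering -- no server, no account, no global identity.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(make_announcement(peer_name, shared_items),
                    ("255.255.255.255", DISCOVERY_PORT))
```

The point of the sketch is how little infrastructure local discovery needs: anyone on the same network segment can hear the broadcast, which is exactly what makes this kind of sharing invisible to anyone outside the room.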
Filed Under: airdrop, darknets, encrypted, file sharing, local, p2p
Companies: apple
How Newark Mayor Cory Booker Made All Politics Super Local With Twitter Following The Blizzard
from the one-to-one dept
I recognize that there are many people out there who simply don’t understand the appeal of Twitter. Every time we mention the service, we get comments from people saying things like “why would I want to know what some random person had for lunch.” Of course, the answer to that is that if all you’re getting out of Twitter is what people you don’t care about are having for lunch, you’re not following the right people. Twitter really is what you make of it, and for many people, it’s a really useful communication tool. I use it in a variety of ways, including to keep up with news, but also as a way to stay in contact with friends and family. It’s also — somewhat surprisingly — enabled new friendships and even business opportunities, by allowing me to build stronger relationships with people I’d probably not communicate with otherwise.
However, there are certain moments when you realize just how powerful Twitter can be as a communication platform, and those tend to be cases when previously impenetrable walls are broken down. I’ve told the story in the past about how the first time I realized Twitter was powerful was during the Iowa Presidential caucuses in early 2008, when I started following a user who was aggregating tweets directly from within caucus rooms about what was happening in those rooms. What became fascinating to me was that the information that was coming out got me detailed (and extraordinarily accurate) information well before (as in hours) mainstream media had the results. In fact, in comparing the Twitter results with CNN’s reporting, what became clear was that if you were watching Twitter you would have a much better understanding of what was happening in Iowa.
I’m getting a similar feeling after reading about Newark Mayor Cory Booker’s use of Twitter in response to the big blizzard that hit the northeast this past weekend. He’s been tweeting up a storm, as he travels around Newark helping to plow streets and dig out cars and help people in trouble. As you look down the thread, he’s specifically responding to different people calling out for help — either sending people to help or showing up himself, such as the case of the woman who was stuck in her home and needed diapers, which the mayor brought himself.
In another, somewhat epic, stream of tweets, one guy complained that he was stuck. Mayor Booker responded, asking for the guy’s phone number, and shortly thereafter tweeted that he was there to help. At the same time, though, the original tweeter was complaining on Twitter with curses, and wondering if the mayor would really show up. In response, Mayor Booker called him out:
Wow u shud b ashamed of yourself. U tweet vulgarities & then I come out here to help & its ur mom & sis digging. Where r u?
Eventually, the guy came out and apparently they talked and worked it out, with the mayor thanking him and the guy apologizing.
Now, I’m sure that there are cynical people out there who will mock all of this as just a publicity stunt. And, to some extent, it is a publicity stunt, but it’s an incredibly effective one. Paying attention to his account, you realize that even if he knows he’s getting attention for all of this, he really is using Twitter to find out where there are problems and responding quickly. Some will, of course, point out that all of this provides cover for the fact that the city didn’t seem to do a good job plowing in the first place — but the storm was not an ordinary storm. Also, a key characteristic of what makes a leader is how they respond when things go wrong, and this reaction is quite interesting.
But what’s more telling to me, is how this is yet another case of barriers being broken down. Traditionally, folks who were stuck in certain areas of Newark might — at best — call some government agency where they’d probably get a run around. The likelihood of them actually being able to contact the mayor directly and have him respond and do something was nil.
The famous saying, of course, is that all politics is local, but this story shows how Mayor Booker took that to another level, and really opened a channel for direct communication in a time when it really mattered.
Filed Under: blizzard, cory booker, local, politics, twitter
Yelp Adds A Tiny Bit Of Transparency… And Inches Away From Pay For Placement
from the extortion2.1 dept
Over the past few years, the review site Yelp has been no stranger to controversies regarding its treatment of comments and criticisms aimed at local businesses. Negative reviews on Yelp have spurred various lawsuits, accusing Yelp of unfair business practices that have been called “Extortion 2.0” — referring to the accusation that Yelp salespeople put pressure on companies to pay up for better ratings to appear more prominently on Yelp (and to remove the bad reviews that coincidentally seem to appear on the site when these salespeople allegedly suggest that better ratings could be bought).
In response, Yelp has explained (over and over again) that its algorithms are optimized to display the most “trustworthy” reviews of local businesses — in a way that’s completely unrelated to its sales efforts. Trying to put a friendly wrapping around its umpteenth explanation, Yelp has even created a cartoon to help educate everyone on its methods:
However, no matter how simply these explanations are conveyed, they have not been particularly convincing to small businesses who feel punished by bad reviews and see Yelp’s services as a veiled threat to their livelihood. So Yelp has taken another step by announcing some changes to its services to avoid further confusion:
- Businesses can no longer buy a “Favorite Review” like they could before — so that there’s no confusion over businesses being able to influence reviews by paying Yelp. This sounds like a pretty big step towards making it clear that companies can’t just buy better reviews, but what does this mean for companies that formerly bought “Favorite Reviews”? Those companies are being penalized with the unexpected removal of this service, and there’s still no guarantee that ratings can’t be manipulated by cunning business owners or competitors. Still, conspiracy theorists may never actually be satisfied on this point, and gaming online rating systems will likely always be a nagging concern.
- Yelp is still keeping its review filtering algorithms a secret, but it will now display reviews that have been removed by its automated filters in an effort to allow users to see a bit of the reviews that Yelp deems suspicious or untrustworthy. However, Yelp is not exactly highlighting these filtered-out reviews — just making them available to be viewed in case anyone is curious to see what kind of reviews are tossed out on a regular basis.
- Yelp is adding video ads as a service for businesses — presumably to offset the loss of its “Favorite Review” feature.
- Yelp says it’s created a Small Business Advisory Council for companies to give feedback to Yelp management. This is an interesting development, but it’s not exactly easy to find out more information on how this council works. Granted, it was just announced, but its announcement seems to lack a bit of commitment when there aren’t any obvious links about it on yelp.com (yet?).
Yelp proudly states that it’s increasing transparency with these changes, allowing businesses and users to peek into what its algorithms are filtering out behind the scenes. But it’s not clear that anyone really asked for that feature — and getting that look at the filtered reviews isn’t going to ease the concerns that Yelp’s algorithms are inherently weighted against small businesses who don’t pay up for advertising space on Yelp.
The more significant change seems to be that Yelp is shifting away from a “Pay for Placement” business model with its reviews. Replacing its “Favorite Reviews” with video ads seems a bit odd, though — but apparently video ads were a top request from merchants. So at least Yelp is listening to its customers and responding — and if Yelp really wants to increase transparency, maybe we’ll see how Yelp actually handles feedback someday. But since Yelp doesn’t allow commenting on its own blog, chime in here and tell us what you think Yelp is doing wrong or right with its approach.
Filed Under: local, pay for placement, ratings, reviews, yelp
Companies: yelp
NBC Once Again Overvaluing Content, Undervaluing Community
from the their-own-loss dept
NBC Universal, like so many big media companies, seems to view everything through a top-down broadcast media lens. For example, while it may seem like a good idea that the company is finally (finally!) recognizing that people may be craving local content that is sometimes difficult to find, there’s something missing in its announcement of plans to create “locals only” websites targeted at specific geographic regions. You can read the entire press release and see if you notice what’s missing.
Every single part of the description of the site is about delivering content to people. Nowhere is there any sense of actually building a community around that content. The only time “community” is mentioned is as a “target.” The press release claims that these sites are aimed at “social capitalists” who are the leading influencers in their communities, but the company seems to have missed out on the fact that the reason those folks are influencers isn’t because they sit back and just consume the content shoveled to them, but because they take part in the process. They share the news, they comment on it, they write it, they annotate it, they build on it and they help create it. But all that NBC Universal is talking about is taking the same old, old model of simply shoveling content to people.
Filed Under: community, content, influencers, local, news, social capitalists
Companies: nbc universal
Why Should The Government Force Local Restrictions On Media?
from the not-needed dept
Ed Felten has a great post questioning various government regulations forcing “local” ownership, advisory committees and content for certain types of broadcast media. Felten points out that local content makes sense for local communities, but communities aren’t just defined by locality anymore. In fact, he points out how such “local” broadcasting rules made it more difficult for him to keep in touch with his “local” community back in Princeton, New Jersey, when he spent a year on sabbatical in California. Due to those local restrictions, he couldn’t get the local television stations from back in Princeton that he enjoyed.
But, perhaps an even bigger question (which Felten doesn’t touch on) is why there need to be regulatory mandates for local content in the first place. As we were just pointing out, in the newspaper business, newspaper chains that have aggressively focused on producing local content have found that it’s quite profitable while the newspapers that focus on more national news are struggling. In other words, the market itself seems to reward local content without any government mandate. So why is a government mandate necessary at all?
Filed Under: community, local, mandates, regulations
Not All Newspaper Chains Are Facing Doom And Gloom Scenarios
from the local-local-local dept
The common refrain about the newspaper industry is that it’s dying out, being replaced by online news sources. Of course, that leaves out some important facts, such as the fact that there are newspaper chains that are actually doing quite well, even when they mostly focus on the old dinosaur of actual newspapers. Romenesko points us to a story about David Black, a Canadian newspaper publisher who owns a ton of local newspapers around Canada and the US that are actually doing quite well. But that’s because they’re not competing with the big dailies; they focus on having a lean staff that writes almost exclusively about local news. His papers are all local papers that don’t try to provide all that other news you can already get online, but rather the news that is only going to be interesting to a small group of folks. And, as Black has learned over many years, those small groups of folks are a lucrative audience. While other newspapers have trouble competing, Black’s are full of ads — many from local businesses which find those papers an effective (and cost-effective) way to cut through the clutter.
Filed Under: local, news, newspapers