Introducing The Techdirt Insider Discord
from the join-us dept
Join the Insider Discord with a Watercooler or Behind The Curtain membership!
Techdirt has been around for nearly 25 years at this point, and we have an unfortunate habit of being just slightly too far ahead of the technology curve. The site was launched before the word blog even existed, and certainly before there were readily available and easy to use tools for creating a blog (more on that soon!), so we cobbled together our own solution. We’ve done that with unfortunate frequency. In the early 2000s, we even built our own internal RSS reader in order to find stories (I always thought it was better than Google Reader). And, a while back, we launched the Techdirt “Insider Chat” long before Discord or Slack or other such tools were popular.
The Techdirt Insider Chat was a widget on our site: if you supported us at certain levels in our own Insider Shop (or on Patreon), you got access to a chat that only those supporters could use — but which was still displayed in the sidebar for anyone to see. Because there weren’t widespread tools to make this possible, we built our own. But it was a bit clunky and limited, and honestly wasn’t receiving that much use beyond a handful of dedicated users.
Over the last few months, we’ve moved the Insider Chat over to Discord, which has become the standard these days for community chats. However, we did want to still include the feature of displaying the chat publicly — but only allowing actual supporters to participate. So while we are now using Discord as the basis of the chat (which is much easier for many people to use, has many more features, and allows for things like accessing the chat on mobile devices), we built our own embeddable widget that reflects the chat in the sidebar (which you can see if you look over to the right).
If you’re interested in (1) supporting Techdirt and (2) joining in on the conversations now happening in the chat and (3) connecting with others in the Techdirt community, please consider supporting us at a level that includes the Insider Chat.
As you’ll recall, earlier this year we removed all the ads (and Google tracking code) from Techdirt. We are relying more and more on our community supporting us going forward, and we’re working hard to provide those supporters with more useful and fun features, including this new Discord community.
Filed Under: chat, insider chat, techdirt
Companies: discord, techdirt
Content Moderation Case Study: Telegram Gains Users But Struggles To Remove Violent Content (2021)
from the influx-of-insurrectionists dept
Summary: After Amazon refused to continue hosting Parler, the Twitter competitor favored by the American far-right, former Parler users looking to communicate with each other — but dodge strict moderation — adopted Telegram as their go-to service. Following the attack on the Capitol building in Washington, DC, chat app Telegram added 25 million users in a little over 72 hours.
Telegram has long been home to far-right groups, who often find their communications options limited by moderation policies that, unsurprisingly, remove violent or hateful content. Telegram’s moderation is more lax than that of several of its social media competitors, making it the app of choice for far-right personalities.
But Telegram appears to be attempting to handle the influx of users — along with an influx of disturbing content. Some channels broadcasting extremist content have been removed by Telegram as the increasingly-popular chat service flexes its (until now rarely used) moderation muscle. According to the service, at least fifteen channels were removed by Telegram moderators, some of which were filled with white supremacist content.
Unfortunately, policing the service remains difficult. While Telegram claims to have blocked “dozens” of channels containing “calls to violence,” journalists have had little trouble finding similarly violent content on the service, which either has eluded moderation or is being ignored by Telegram. While Telegram appears responsive to some notifications of potentially-illegal content, it also appears to be inconsistent in applying its own rule against inciting violence.
Decisions to be made by Telegram:
- Should content contained in private chats (rather than public channels) be subjected to the same rules concerning violent content?
- Given that many of its users migrated to Telegram after being banned elsewhere for posting extremist content, should the platform increase its moderation efforts targeting calls for violence?
- Should a process be put in place to help prevent banned users/channels from resurfacing on Telegram under new names?
Questions and policy implications to consider:
- Does Telegram’s promise of user security and privacy dissuade it from engaging in more active content moderation?
- Is context considered when engaging in moderation to avoid accidentally blocking people sharing content they feel is concerning, rather than promoting the content or endorsing its message?
- Do reports of mass content violations (and lax moderation) draw extremists to Telegram? Does this increase the chance of the moderation problem “snowballing” into something that can no longer be managed effectively?
Resolution: Telegram continues to take a mostly hands-off approach to moderation but appears to be more responsive to complaints about calls to violence than it has been in the past. As it continues to draw more users — many of whom have been kicked off other platforms — its existing moderation issues are only going to increase.
Originally posted to the Trust & Safety Foundation website.
Filed Under: chat, content moderation, violence
Companies: telegram
Content Moderation Case Study: Scammers Targeting Scrabble Chat (2020)
from the it's-always-scrabble dept
Summary: In the spring of 2020, Mattel and Hasbro announced that the official mobile version of the game Scrabble would no longer be the game produced by Electronic Arts, but rather a new game called Scrabble Go, created by a company called Scopely. The change drew the ire of fans (who have even started a petition for the old game to be brought back) for taking what had been a fairly standard mobile version of the popular word game and replacing it with a new, flashier version that added some “gamification” incentives and put the focus on playing against other people, rather than against the computer as was typical in the previous game.
This also introduced a new feature: chat. Since players are playing against other human beings, Scopely decided to add a chat feature, but apparently did not consider how such features may be regularly abused. In the months since Scrabble Go launched, there have been many reports of so-called “romance scammers” trying to reach out to people via Scrabble Go’s chat feature.
Multiple reports of these kinds of approaches started appearing in various forums, with some examples of the scammers being quite persistent. Consumer protection officials in Australia noted that they had received multiple complaints about romance scammers making contact via Scrabble Go, and one woman in the UK said she was approached by such scammers two to three times every week.
After three months of complaints, Scopely announced that it was rolling out an update that would allow players to “mute” the chat function.
Decisions to be made by Scopely:
- Does a mobile Scrabble game need a chat feature?
- If scammers are bothering players so often, is the game better off without it?
- How will chat be monitored? Is there a program in place to catch and stop scammers?
- Are there other tools to limit the abuse of the chat feature?
- Should the default be that chat is open to all or should it be opt-in?
Questions and policy implications to consider:
- Any system that allows for person-to-person communication can be abused. How should companies looking to add useful features take this into account?
- How do you weigh the pros and cons of features like chat when comparing their usefulness for engagement against their trust-and-safety risks?
Resolution: A few months after launch, Scopely updated the app to allow players to mute the chat entirely. As complaints remained, it has also added an ability to only connect to friends you already know on Facebook or via your contacts (if you agree to upload your contacts to the service), effectively sandboxing the chat to only users the player has some connection with.
The company has also added the ability to “report” a chat if the user feels it is inappropriate.
Finally, to address the broader complaints about the game, Scopely introduced a “classic mode” to focus more on the traditional game, rather than all the bells and whistles of the full Scrabble Go.
Originally posted at the Trust & Safety Foundation website.
Filed Under: chat, content moderation, scammers, scams, scrabble
Creating Family Friendly Chat More Difficult Than Imagined (1996)
from the the-kids-will-find-a-way dept
Summary: Creating family friendly environments on the internet presents some interesting challenges that highlight the trade-offs in content moderation. One of the founders of Electric Communities, a pioneer in early online communities, gave a detailed overview of the difficulties in trying to build such a virtual world for Disney that included chat functionality. He described being brought in by Disney alongside someone from a kids’ software company, Knowledge Adventure, who had built an online community in the mid-90s called “KA-Worlds.” Disney wanted to build a virtual community space, HercWorld, to go along with the movie Hercules. After reviewing Disney’s requirements for an online community, they realized chat would be next to impossible:
Even in 1996, we knew that text-filters are no good at solving this kind of problem, so I asked for a clarification: “I’m confused. What standard should we use to decide if a message would be a problem for Disney?”
The response was one I will never forget: “Disney’s standard is quite clear:
No kid will be harassed, even if they don’t know they are being harassed.”…
“OK. That means Chat Is Out of HercWorld, there is absolutely no way to meet your standard without exorbitantly high moderation costs,” we replied.
One of their guys piped up: “Couldn’t we do some kind of sentence constructor, with a limited vocabulary of safe words?”
Before we could give it any serious thought, their own project manager interrupted, “That won’t work. We tried it for KA-Worlds.”
“We spent several weeks building a UI that used pop-downs to construct sentences, and only had completely harmless words — the standard parts of grammar and safe nouns like cars, animals, and objects in the world.”
“We thought it was the perfect solution, until we set our first 14-year old boy down in front of it. Within minutes he’d created the following sentence:
> I want to stick my long-necked Giraffe up your fluffy white bunny.
In that initial 1996 project, chat was abandoned, but as they continued to develop HercWorld, they quickly realized that they still had to worry about chat, even without a chat feature:
It was standard fare: Collect stuff, ride stuff, shoot at stuff, build stuff… Oops, what was that last thing again?
“…kids can push around Roman columns and blocks to solve puzzles, make custom shapes, and buildings,” one of the designers said.
I couldn’t resist, “Umm. Doesn’t that violate the Disney standard? In this chat-free world, people will push the stones around until they spell Hi! or F-U-C-K or their phone number or whatever. You’ve just invented Block-Chat™. If you can put down objects, you’ve got chat. We learned this in Habitat and WorldsAway, where people would turn 100 Afro-Heads into a waterbed.” We all laughed, but it was that kind of awkward laugh that you know means that we’re all probably just wasting our time.
Decisions for family-friendly community designers:
- Is there a way to build a chat that will not be abused by clever kids to reference forbidden content (e.g., swearing, innuendo, harassment, abuse)?
- Can you build a chat that does not require universal moderation and pre-approval of everything that users will say?
- Are there ways in which kids will still be able to communicate with others even without an actual chat feature?
- How much of a “community” do you have with no chat or extremely limited chat?
Questions and policy implications to consider:
- Is it possible to create an online family friendly environment that will work?
- If so, how do you prevent abuse?
- If not, how do you handle the fact that kids will get online whether they are allowed to or not?
- How do you incentivize companies to create spaces that actually remain as child-friendly as possible?
- If ?the kids will always find a way? to get around limitations, does it make sense to hold the companies themselves responsible?
- Should family friendly environments require full-time monitoring, or pre-vetting of any usage?
Resolution: Disney eventually abandoned the idea of HercWorld due to all of the issues raised. However, the interview highlights the fact that they tried again a couple of years later, with an online chat where users could only pull from a pre-selected list of sentences, but it did not have much success:
“The Disney Standard” (now a legend amongst our employees) still held. No harassment, detectable or not, and no heavy moderation overhead.
Brian had an idea though: Fully pre-constructed sentences — dozens of them, easy to access. Specialize them for the activities available in the world. Vaz Douglas, our project manager working with Zoog, liked to call this feature “Chatless Chat.” So, we built and launched it for them. Disney was still very tentative about the genre, so they only ran it for about six months; I doubt it was ever very popular.
The same interview notes that Disney tried once again in 2002 with a new world called “ToonTown”, with pulldown menus that allowed you to construct very narrowly tailored speech within the chat to try to avoid anything that violated the rules.
As the story goes, Disney still had problems with this. To make sure people were only communicating with people they knew in real life, one of the restrictions in this new world was that you had to have a secret code from any user you wished to chat with. The thinking was that parents would print these out for kids who could then share them with their friends in real life, and they could link up and “chat” in the online world.
And yet, once again, people figured out how to get around the restrictions:
Sure enough, chatters figured out a few simple protocols to pass their secret code; several variants take this general form:
User A: “Please be my friend.”
User A: “Come to my house?”
User B: “Okay.”
A: [Move the picture frames on your wall, or move your furniture on the floor, to make the number 4.]
A: “Okay”
B: [Writes down 4 on a piece of paper and says] “Okay.”
A: [Move objects to make the next letter/number in the code] “Okay”
B: [Writes…] “Okay”
A: [Remove objects to represent a “space” in the code] “Okay”
[Repeat steps as needed, until…]
A: “Okay”
B: [Enters secret code into Toontown software.]
B: “There, that worked. Hi! I’m Jim 15/M/CA, what’s your A/S/L?”
Incredibly, there was an entire Wiki page on the Disney Online Worlds domain that included a variety of other descriptions on how to exchange your secret number within the game, even as users were not supposed to be doing so:
For example, let’s say you have a secret code (1hh 5rj) which you would like to give to a toon named Bob.
First, you should make clear that you want to become their SF.
You: Please be my friend!
You: (random SF chat)
You: I can't understand you
You: Let's work on that
Bob: Yes
Now, start the secret.
You: (Jump 1 time and say OK. Jump 1 time because that is the first thing in your code. Say OK to confirm that was part of your secret.)
Bob: OK (Wait for this, as this means he has written down or otherwise recorded the 1)
You: Hello! OK (Say hello because the first letter of hello is h, which is the second part of your secret.)
Bob: OK (again, wait for confirmation)
Repeat above step, as you have the same letter for the third part of your secret.
Bob: OK (by now you should know to wait for this)
You: (Jump 5 times and say OK. Jump 5 times as this is the 4th part of your secret)
Bob: OK
You: Run! OK (The 5th part of your secret is r, and "Run!" starts with r)
Bob: OK
You: Jump! OK (Say this because j is the last part of your secret.)
Bob: OK
At this point, you have successfully transmitted the code to Bob. Most likely, Bob will understand, and within seconds, you will be Secret Friends!
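The workaround the players invented is, at bottom, an improvised character encoding: digits become jumps, letters become approved words that start with the right letter, and spaces become an object gesture. A minimal sketch of that idea (purely illustrative; the action names and word list here are hypothetical, not Toontown's actual vocabulary):

```python
# Illustrative sketch of the players' improvised "Block-Chat" encoding:
# each character of a secret code is smuggled through allowed actions.
# SAFE_WORDS and the action names are hypothetical examples.
SAFE_WORDS = {"h": "Hello!", "r": "Run!", "j": "Jump!"}

def encode(code):
    """Turn a secret code like '1hh 5rj' into a list of in-game actions."""
    actions = []
    for ch in code:
        if ch.isdigit():
            actions.append(("jump", int(ch)))        # jump N times
        elif ch == " ":
            actions.append(("remove_object", None))  # gesture meaning "space"
        else:
            actions.append(("say", SAFE_WORDS[ch]))  # safe word starting with ch
    return actions

def decode(actions):
    """What the receiving player writes down while watching the actions."""
    out = []
    for kind, value in actions:
        if kind == "jump":
            out.append(str(value))
        elif kind == "remove_object":
            out.append(" ")
        else:
            out.append(value[0].lower())
    return "".join(out)

print(decode(encode("1hh 5rj")))  # prints "1hh 5rj"
```

Any channel expressive enough to distinguish a few states per turn is enough to rebuild an alphabet, which is why removing the chat box never removes the chat.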
So even though Disney eventually did enable a very limited chat, with strict rules to keep people safe, it still left open many challenges for early trust & safety work.
Images from HabitChronicles
Filed Under: chat, content moderation, family friendly
Companies: disney
Researchers Say Chinese Government Now Censoring Images In One-To-One Chat
from the shot-spotters dept
It looks like China is continuing to set the gold standard for internet censorship. For a long time, the Great Firewall has been actively censoring content based on keywords. Activists and dissidents have worked around this filtering by placing text in images, but that doesn’t appear to be working nearly as well as it used to.
Toronto’s Citizen Lab noticed some unusual things happening in days surrounding the death of China’s only Nobel Peace Prize winner (and longtime political prisoner), Liu Xiaobo.
On WeChat, we collected keywords that trigger message censorship related to Liu Xiaobo before and after his death. Before his death, messages were blocked that contained his name in combination with other words, for example those related to his medical treatment or requests to receive care abroad. However, after his death, we found that simply including his name was enough to trigger blocking of messages, in English and both simplified and traditional Chinese. In other words, WeChat issued a blanket ban on his name after his death, greatly expanding the scope of censorship.
We documented censorship of images related to Liu on WeChat after his death, and for the first time found images blocked in one-to-one chat. We also found images blocked in group chat and WeChat Moments (a feature that resembles Facebook’s Timeline where users can share updates, upload images, and short videos or articles with their friends), before and after his death.
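Citizen Lab's observations describe two distinct keyword-filtering modes, which can be sketched roughly as follows (a simplified illustration; the keyword lists here are examples, not WeChat's actual blocklists):

```python
# Rough sketch of the two filtering modes Citizen Lab describes:
# combination-based blocking (name plus a related term) versus a
# blanket ban (the name alone triggers blocking). Keyword lists
# are illustrative, not WeChat's real rules.
NAME = "liu xiaobo"
RELATED_TERMS = {"medical treatment", "care abroad"}

def blocked_before_death(message: str) -> bool:
    """Block only when the name appears alongside a related term."""
    m = message.lower()
    return NAME in m and any(term in m for term in RELATED_TERMS)

def blocked_after_death(message: str) -> bool:
    """Blanket ban: the name alone is enough."""
    return NAME in message.lower()
```

The shift from the first function to the second is what "greatly expanding the scope of censorship" means in practice: every message mentioning the name is suppressed, regardless of context.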
China has tackled image censorship before, but it hasn’t been able to achieve this in one-to-one chat until now. And it’s being done stealthily to prevent senders or receivers from knowing their images have been blocked.
Similar to keyword-based filtering, censorship of images is only enabled for users with accounts registered to mainland China phone numbers. The filtering is also not transparent. No notice is given to a user if the picture they sent is blocked. Censorship of an image is concealed from the user who posted the censored image.
The censorship is only apparent to international users without registered Chinese phone numbers. And, like most blanket censorship efforts, it’s far from perfect.
The exact mechanism that WeChat uses to determine which images to filter is unclear and in our testing sample we found unexpected results. Blocked images included screenshots of official government statements on Liu Xiaobo’s death, which we did not expect to be censored. We also found images that were not blocked that could be seen as sensitive, such as an image of book covers of “Charter 08” and a Biography of Liu Xiaobo, which are both banned in mainland China.
As Citizen Lab points out, this censorship effort is especially concerning, as it indicates the Chinese government is possibly in the business of internet-enabled retroactive amnesia. If it leaves the filtering in place long enough and censors enough websites and personal chats, the history of Liu Xiaobo will be slowly rewritten with narratives approved by the Chinese government.
Filed Under: censorship, chat, china, free speech, images, messaging, one on one
Thai Government Demands Popular Chat App Reveal Any Time Any User Insults The King
from the encrypted-chats-are-important dept
I spent some time in Asia earlier this year, and while people in the US focus on Facebook Messenger, Snapchat, Google Hangouts and a variety of other chat apps, the chat app of choice over there was Line. Basically everyone used it. A year ago, Line moved towards true end-to-end encryption. Earlier this year, the company made end-to-end encrypted chats the default, rather than as a user option (Thank you Snowden!).
Good timing. The company has apparently now refused to obey a Thai government demand that it alert the government to anyone insulting the Thai royal family on the messaging app. For years, we’ve written about Thailand’s ridiculous lese majeste laws, which make it a crime to insult the king. As we’ve noted, the law is used as a way to censor and crack down on political opponents. And, of course, with the death of the Thai king last month, there’s been a sudden uptick in Thai officials going after people for supposed lese majeste violations.
But Line is telling the government that it just can’t help out here.
“We do not monitor or block user content. User content is also encrypted, and cannot be viewed by LINE,” the statement sent to DPA said.
Of course, there’s been some controversy in the past over this. Back in 2014, Thailand announced that it was instituting a broad surveillance program to snoop on basically all internet communications for the sake of seeking out and punishing lese majeste violators. A few months later, Thai government officials flat out claimed that this included monitoring Line messages, something the company flatly denied (though that may have also inspired the move to encryption). While Thai officials have, at times, even claimed the ability to read encrypted messages, it seemed like that was just idle boasting, rather than a legitimate revelation of surveillance capabilities.
There is one oddity about Line’s response to the Thai government, though:
“We ask the authorities seeking to obtain user data to make official requests through diplomatic channels and have so advised the Thai authorities,” LINE added.
So, uh, if the messages are all end-to-end encrypted and there’s no way for Line to access them to share with any government, why is it asking the Thai government to use diplomatic channels to make an official request?
Filed Under: chat, encryption, insults, king, lese majeste, messaging, thailand
Companies: line
The Rise Of More Secure Alternatives To Everyone's Favorite Chat App, Slack
from the well-this-could-get-interesting dept
Like a ton of people and companies, we’ve been using Slack here. While we saw some folks claim it was revolutionary, we found it to be a nice, but somewhat marginal, upgrade to our previous use of Skype chat rooms. But, over time, it has certainly gotten comfortable, and there have been some nice feature add-ons and integrations that have made it a pretty cool service overall — though if you really want to use it to its fullest extent and switch to the paid version, it can get pretty pricey, pretty quickly. I also am in a bunch of other group Slack chats, as it’s basically become the platform of choice for group discussions.
However, in these days where hacked emails are in the headlines, I can see why some might get nervous about using a tool like Slack. Not that there have been any known breaches of Slack that I’m aware of, and I’m sure the company takes security very seriously (it would undermine its entire business if it failed on that front…). Still, it’s been interesting to see other options start to pop up, which might be more appetizing for those who are extra security conscious.
Just as we’ve been encouraged to see greater use of encryption on mobile phones, email and on websites, it’s good to see new entrants trying to take on Slack with a focus on security and privacy. The most recent, and perhaps most interesting, player in the space is SpiderOak, which recently launched its Semaphor Slack competitor on the market. I’ve been playing around with it — and while it’s early on, it certainly has potential. SpiderOak is the company you should already know of that provides an encrypted “zero knowledge” cloud backup solution. Since you keep the keys, even though it’s hosted in the cloud, SpiderOak has no way to decrypt your files should anyone hack in, or should the government come calling. It’s now taken that approach to Semaphor, which obviously takes its inspiration from Slack (and feels quite similar), but with the same zero knowledge encrypted setup. You get a key and that encrypts all of the data in your group messaging.
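The core of the "zero knowledge" design is that key derivation and encryption happen entirely on the client, so the host only ever stores ciphertext. A toy sketch of that architecture (the XOR keystream below is for illustration only and is not a real cipher; SpiderOak's actual protocol is more involved and would use an authenticated cipher like AES-GCM):

```python
# Toy sketch of a client-side ("zero knowledge") encryption setup:
# the key is derived locally and never sent to the server, so the
# server holds only ciphertext. NOT real crypto -- illustration only.
import hashlib

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # Key derivation happens on the client, from a secret the server never sees.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

def keystream(key: bytes, length: int) -> bytes:
    # Deterministic pseudorandom bytes from the key (toy construction).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR is its own inverse

# Client side: only the ciphertext ever leaves the machine.
key = derive_key("team passphrase", salt=b"per-team-salt")
server_copy = encrypt(key, b"quarterly numbers look good")
```

Because the server stores `server_copy` but never `key`, a breach of the host (or a government demand served on it) yields ciphertext alone; the trade-off, as noted below, is that anything requiring the server to read messages, like rich third-party integrations, becomes harder.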
There are some limitations there — of course — because any team member might leak their key (though whoever gets in would just have access to whatever that team member can see). And, because of this setup, it’s not as easy to do “integrations” with third-party apps and services, which is a key selling point of Slack. Semaphor is apparently trying to work its way around this limitation by creating bots that act as their own users within Semaphor (something Slack also offers), but where the bots themselves become the key to integrations. It’s a bit more clumsy, but if it helps keep things secure, that seems promising.
SpiderOak also, kindly, makes the Semaphor client source code available for anyone to audit, which is necessary if anyone’s going to take their encryption seriously. Of course, Semaphor is, like Slack, working off a Freemium model, where additional features require per user fees, which can add up. One nice feature of Semaphor that Slack doesn’t have: the ability for individuals to pay their own way. That is, there are lots of Slack groups that are general interest groups around certain topics, and not a company’s own internal group. Those groups are never going to use a paid option, because there’s no “company” to pay for all users. Semaphor offers an alternative, where each user can just pay their own way — which might be appealing to some user groups.
The other alternatives that have been getting some attention lately are a couple of attempts to basically create a truly open source Slack clone that can be self-hosted. The two big players here are Mattermost and RocketChat. Both have built open source, self-hosted Slack clones (and both try to make money by offering paid hosting for those who want it). Mattermost is quite upfront that it’s building a Slack alternative — it’s all over its website — though it also points out that it’s tried to improve on some things in Slack. RocketChat doesn’t seem to mention Slack, and, frankly, feels a bit behind Mattermost in development (though it also announced that it’s about to run a Kickstarter campaign to jumpstart more development).
Now, whether or not a self-hosted open source alternative is more secure than Slack… may depend. If you’re doing the self-hosted version then you’re basically relying on your own ability to keep the implementation secure. That might work. Or, whoever you have securing your installation might not be as good or as responsive as, say, the security team at Slack. But, using an open source solution that you host obviously does provide you with a lot more control and the ability to make any changes you think are necessary.
As someone who talks quite frequently about how competition drives innovation, it’s great to see all of this happening. I don’t think any of them will harm Slack’s place in the market, which has become pretty standard in a lot of companies, but as more and more companies are realizing that they need to really think through security of their communications tools, it’s a very good thing to see competition popping up. Hopefully, these competitors get stronger as well, and help drive more overall innovation — including the focus on security and encryption — across the entire market.
Filed Under: chat, cloud services, encryption, messaging, open source, security, work
Companies: mattermost, rocketchat, slack, spideroak
To Avoid Government Surveillance, South Koreans Abandon Local Software And Flock To German Chat App
from the loss-of-trust dept
South Korea seems to have a rather complicated relationship with the Internet. On the one hand, the country is well-known for having the fastest Internet connection speeds in the world; on the other, its online users are subject to high levels of surveillance and control, as the site Bandwidth Place explains:
> Under the watchful eye of the Korea Communications Standards Commission (KCSC), Internet use, web page creation, and even mapping data are all regulated. As noted recently by the Malaysian Digest, children under 16 are not permitted to participate in online gaming between midnight and 6 a.m. — accessing the Internet requires users to enter their government-issued ID numbers. In addition, South Korean map data isn’t allowed to leave the country, meaning Google Maps can’t provide driving directions, and last year the KCSC blocked users from accessing 63,000 web pages. While it’s possible to get around these restrictions using a virtual private network (VPN), those found violating the nation’s Internet rules are subject to large fines or even jail time.
A story on the site of the Japanese broadcaster NHK shows how this is playing out in the world of social networks. Online criticism of the behavior of the President of South Korea following the sinking of the ferry MV Sewol prompted the government to set up a team to monitor online activity. That, in its turn, has led people to seek what the NHK article calls “cyber-asylum” — online safety through the use of foreign mobile messaging services, which aren’t spied on so easily by the South Korean authorities. According to the NHK article:
> Many users have switched [from the hugely-popular home-grown product KakaoTalk] to a German chat app called Telegram. It had 50,000 users in early September. Now 2 million people have signed up.
That’s a useful reminder that fast Internet speeds on their own are not enough to keep people happy, and that even companies holding 90% of a market, as Kakao does in South Korea, can suffer badly once they lose the trust of their users by seeming too pliable to government demands for private information about their customers.
This seems like the type of lesson that the giant US internet companies and the NSA (along with its defenders) should be learning.
Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+
Filed Under: chat, kcsc, south korea, surveillance
Companies: kakaotalk, telegram
NSA Collects Email Contact Lists, Instant Messaging Chat Buddy Lists From Overseas With No Oversight At All
from the well,-there's-that dept
The Washington Post is out with the latest revelations from the Snowden leaks and it shows that the NSA relies on foreign telcos and “allied” intelligence agencies to scoop up data on email contact lists and instant messaging buddy lists to help build its giant database of connections. Remember a few weeks ago how it was reported that the NSA was basically building a secret shadow social network? It seems like this might be one of the ways it’s able to tell who your friends are.
There are a variety of important points here. First off, this information is not coming directly from the tech companies (which, again, suggests that earlier claims that the NSA had direct access to all their servers were mistaken). Rather, they’re picking this information up off the backbone connections in foreign countries. It also explains why they get so much data from Yahoo — because, for no good reason at all, Yahoo hadn’t forced encryption on its webmail users until… the news of this started to come out.
And here’s the big problem: because all of this information is collected overseas, rather than at home, it’s not subject to “oversight” (and I use that term loosely) by the FISA court or Congress. Those two only cover oversight for domestic intelligence. The fact that the NSA can scoop up all this data overseas is just a bonus.
Also, while the program is ostensibly targeted at “metadata” concerning connections between individuals, the fact that it collects “inboxes” and “buddy lists” appears to reveal content at times. With buddy lists, it can often collect content that was sent while one participant was offline (where a server holds the message until the recipient is back online), and with inboxes, they often display the beginning of messages, which the NSA collects.
Separately, because this program allows them to gather so much data, it apparently overwhelmed the NSA’s datacenters. At times, this is because they get inundated with… spam. For example, one of the documents revealed shows that a target they had been following in Iran had his Yahoo email address hacked for spamming, and that presented a problem:
In fall 2011, according to an NSA presentation, the Yahoo account of an Iranian target was “hacked by an unknown actor,” who used it to send spam. The Iranian had “a number of Yahoo groups in his/her contact list, some with many hundreds or thousands of members.”
The cascading effects of repeated spam messages, compounded by the automatic addition of the Iranian’s contacts to other people’s address books, led to a massive spike in the volume of traffic collected by the Australian intelligence service on the NSA’s behalf.
After nine days of data-bombing, the Iranian’s contact book and contact books for several people within it were “emergency detasked.”
Because of this mess, the NSA has tried to stop collecting certain types of information, doing “emergency detasks” of certain collections. This, yet again, shows how ridiculous Keith Alexander’s “collect it all” mantra is. When you collect it all, you get inundated with a ton of bogus data, and the information presented here seems to support that.
Filed Under: buddy lists, chat, contacts, email, information, nsa, nsa spying, nsa surveillance, telcos