
Twitter Admits It Messed Up In Suspending Accounts Under Its New Policy, But Policies Like This Will ALWAYS Lead To Overblocking

from the who-didn't-see-that-coming dept

Last week, we called out a questionable move by Twitter to put in place a new policy that banned images or videos that included someone without their permission. The company claimed that this was to prevent harassment and also (for reasons I still don’t understand) buried the fact that this was not supposed to apply to public figures or newsworthy media. However, as we pointed out at the time, the standard here was incredibly subjective, and wide open to abuse. Indeed, we pointed out examples in that article of the policy clearly being abused. And even as Fox News talking heads insisted the policy would only be used against conservatives, in actuality, a bunch of alt right/white nationalists/white supremacists immediately saw this as the perfect way to get back at activists who had been calling out their behavior, leading to a mass brigading effort to “report” those activists for taking pictures and videos at white nationalist rallies and events.

In other words, exactly what we and tons of others expected.

And, a few days later, Twitter admitted it messed up the implementation of the policy — though it doesn’t appear to be rolling back the policy itself.

Twitter said Friday that it had mistakenly suspended accounts under a new policy following a flood of “coordinated and malicious reports” targeting anti-extremism researchers and journalists first reported Thursday by The Washington Post.

The company said it had corrected the errors and launched an internal review to ensure that its new rule, which allows someone whose photo or video was tweeted without their consent to request its removal, was “used as intended.”

[…]

In a statement Friday, Twitter spokesman Trenton Kennedy said that the company had been overwhelmed with a “significant amount” of malicious reports and that its “enforcement teams made several errors” in the aftermath.

What I am perplexed about, however, is that I know for a fact that Twitter has a bunch of smart and thoughtful people who work on trust and safety issues and who clearly would know how this policy would play out in practice, and yet for whatever reason the policy was still rolled out the way it was. I don’t know if this is because people underestimated how quickly and wildly it would be abused, or if others at the company simply overruled concerns raised by experts, or if it was something else entirely. Whichever it was… it’s a weird and surprising misstep for Twitter.

That said, it’s also illustrative of a really important point that we’ve been trying to raise for ages, going back to the DMCA takedown process, and covering all sorts of other policy debates regarding content moderation: if you give people tools to take down some content, those tools will be abused. Always. That’s not to say you should never moderate or never take down content, because that’s impossible. There is always going to be some content that sites need to take down, whether for legal purposes or because it’s harming the integrity of the site (things like spam or harassment).

But any such policy always opens itself up to abuse and dishonest reporting. And any company that has such policies (i.e., every company with third party content) needs to have a plan in place not just to deal with the abusive/problematic content on the site, but also the abuse of the moderation process to silence voices that shouldn’t actually be silenced.

Again, from my interactions with Twitter trust & safety people, I know they know this. And this is part of why I find the rollout of this policy so perplexing.

However, it’s also an important lesson for policymakers in various state legislatures, in DC, and around the globe. There is so much effort these days to pressure (or require!) internet companies to remove “bad” content, but almost none of those policy plans take into account the ways in which those policies will be abused to silence reporting, silence marginalized voices, and silence those calling out abuses of power. Twitter’s rollout of this policy has been a disaster (and one that could have been prevented) but at the very least it should be a warning to policymakers who seem to think that they can design policy requirements to moderate certain content, without bothering to explore the likelihood that those mandates will be abused to silence important speech.

Filed Under: content moderation, overblocking, photos, private information, private media, videos
Companies: twitter

If MLB Thought Its Website Shenanigans Would Intimidate MLB Players, That Plan Has Backfired

from the swing-and-a-miss dept

We had just discussed some actions Major League Baseball has taken on its MLB.com website, which are either fallout from the labor lockout currently going on or MLB playing leverage games with players, depending on your perspective. Essentially, MLB scrubbed most of its website, particularly the home and “news” pages, of references to any current players. Instead, those pages are full of stories about retired players, candidates for the Hall of Fame, and that sort of thing. In the tabs for the current rosters, the site still has all of the names of players listed, but has replaced each and every player headshot with a stock image of a silhouette. MLB says it was doing this to ensure that no player “likenesses or images” are considered in use for commerce or advertising… but that doesn’t make much sense. The names are still there and this specific section is a factual representation of current team rosters.

Instead, this appears to be a small part of a strong-arming tactic, in which MLB is flexing its ability to scrub its own and individual team sites of information and, in this case, pictures of players. But if MLB thought that it was going to cause the players any real pain by removing those headshots from the site, many players went ahead and proved on Twitter that, well, not so much.

A bunch of players, including [Noah] Syndergaard, joined in on the fun by using their new headshot as a Twitter avatar.

It’s way more widespread than that. Players all over Twitter and elsewhere took to replacing their own social media avatars with the silhouette “headshot”. It became very clear that the players were simply poking MLB in the eye, despite the league trying to punish players over these labor negotiations.

Which is yet another PR hit to the league. It’s worth keeping in mind that this is not a player strike; it is an owners’ lockout. That becomes very important in the wake of the last labor stoppage MLB had, which was the disastrous players’ strike in 1994. Because that was a player strike, the public very much blamed the players for the loss of an MLB season. That’s not the case here, where the owners are crying poor to the players union while also spending millions and millions of dollars to gobble up free agents just before the previous CBA expired.

With labor issues like this in professional sports, optics are everything. MLB only recovered from the last stoppage thanks to a steroid-driven home run race between Sammy Sosa and Mark McGwire, among others. You can damn well bet that the league doesn’t want anything remotely like that to happen again, which means it can’t let the public’s anger get out of control.

And a few days in, having the players publicly mocking MLB’s tactics on a platform designed to engage directly with the public and fans is not a good start if the league expects to have any of the sentiment out there falling in its favor.

Filed Under: baseball, cba, labor, lockout, photos, rosters, website
Companies: mlb

Twitter's New 'Private Information' Policy Takes Impossible Content Moderation Challenges To New, Ridiculous Levels

from the the-end-of-everything dept

I’ve been working on a post about a whole bunch of interesting (and good!) things that Twitter has been doing over the last few months. I’m not sure when that post will be ready, but I need to interrupt that process to note a truly head-scratching change in Twitter’s moderation policy announced this week. Specifically, Twitter announced that its “private information policy” has now been expanded to include “media.” Twitter says that it will remove photos and videos that are posted “without the permission of the person(s) depicted.” In other words, Twitter has turned the old, meme-ified “I’m in this photo and I don’t like it” into official policy for taking down content.

Buried deeper in the rules is a very subjective conditional:

This policy is not applicable to media featuring public figures or individuals when media and accompanying text are shared in the public interest or add value to public discourse.

But that’s going to lead to some very, very big judgment calls about (a) who is a “public figure” and (b) what is “in the public interest.” And early examples suggest that Twitter’s Trust & Safety team are failing this test.

I can understand the high-level, first-pass thinking that leads to this policy: there are situations in which photos or videos that are taken surreptitiously and then used to mock or harass someone can certainly raise questions, and there are perfectly reasonable policy choices to be made on how to handle those scenarios. But how do you distinguish those rare circumstances from a much wider set of cases where people may not have given permission to be in photos or videos, but keeping that content online is clearly beneficial? This can range from the obvious incidental background images of people walking by in a crowded place to — much more concerning — situations of journalism being done by individuals, recording important events. Twitter’s insistence that it won’t apply to “public interest” issues is hardly reassuring. We’ve seen those claims before and they rarely work in practice.

The most obvious example of this is one of the biggest stories of 2020: the videotaping of police officer Derek Chauvin kneeling on George Floyd’s neck until he died. In theory, under the broadness of this policy, that video would be taken down off of Twitter. There are lots of other situations as well, including things like Amy Cooper, who was filmed in Central Park calling the police on Christian Cooper (no relation to Amy), who was in the park bird-watching. There are plenty of other examples where people are filmed in public, without their permission, but it’s done to reveal important things that have happened in the world. For example, law enforcement relied on help from social media to identify people who stormed the Capitol on January 6th. It seems that under this new policy, all those photos of January 6th insurrectionists could be removed from Twitter. Is sharing them in the public interest? Depends on who you ask, I imagine…

For years we’ve seen tons of people abusing other systems to take down content they didn’t like. For example, there was the part owner of the Miami Heat who literally sued over an unflattering photo by first obtaining the copyright for it. Or the revenge porn extortionist who tried to force stories about him offline with copyright notices. In Europe, we’ve seen something similar with abuses of the “right to be forgotten” to memory hole news stories.

And here, Twitter is setting up to just take down any such photo or video upon request? This seems wide open for massive levels of abuse. Indeed, there are already a number of reports about the policy being used to silence activists and researchers:

Predictably, @Twitter's new "private media" policy is being used to protect white nationalists from public scrutiny.

Twitter has locked Atlanta Antifascists out of their account, over a 2018 tweet about a White Student Union racist organizer. @afainatl are appealing. Disgusting. pic.twitter.com/eUS4P2bBHU

— Atlanta Anti-Racist News (@ATLantiracism) December 1, 2021

URGENT: As we feared, @TwitterSafety is already locking and suspending the accounts of extremism researchers under its new "Private Media" policy.

The video is from September (predating the policy) and shows two right-wing extremists IN PUBLIC, planning violent assaults. pic.twitter.com/dp7zlt1u4r

— Chad Loder (they/them) (@chadloder) November 30, 2021

NEW: A Minneapolis activist has been targeted under @TwitterSafety's new Private Media policy for posting a screenshot of a public Facebook post by a prominent local landlord who runs a public, 25,000-member crime watch group.

The "private media" is a post linking to a GoFundMe. pic.twitter.com/ZuJ4KTthUg

— Chad Loder (they/them) (@chadloder) December 1, 2021

Yes, even some of the examples above may be considered edge cases with more nuance than is presented by the people posting them, but as we’ve seen with copyright and the right to be forgotten, give people a tool to get any information removed from social media, and it will be massively and widely abused to try to hide bad behavior.

I’m honestly perplexed at why Twitter implemented such a broad policy, so difficult to enforce, and so open to abuse. It seems extremely unlike the more thoughtful trust & safety moves the company has made over the past few years.

Filed Under: content moderation, media, newsworthy, photos, private information, public figure, trust & safety, videos
Companies: twitter

Georgia School District Inadvertently Begins Teaching Lessons In First Amendment Protections After Viral Photo

from the not-how-this-works dept

There’s this dumb but persistent meme in American culture that somehow the First Amendment simply doesn’t exist within the walls of a public school district. This is patently false. What is true is that there have been very famous court cases determining that speech rights for students at school may be slightly curtailed, but only where the school can show the speech in question causes a “substantial disruption.” Named after the plaintiff in the key case, Tinker v. Des Moines, the “Tinker test” essentially demands that schools not simply dislike a student’s speech or the discomfort that comes from it, but instead must be able to demonstrate that such speech is disruptive to the school and students broadly. The facts of that case, for instance, dealt with students being suspended for wearing anti-war armbands. Those suspensions were seen as a violation of the students’ First Amendment rights, because obviously.

Subsequent cases, such as Morse v. Frederick, have very slightly and narrowly expanded the limitations on speech within schools. In this case, for instance, a student’s speech encouraging the use of illegal drugs was found to be a valid target for school punishment. But, narrow or not, some analysis has worried that cases like this could be used to expand the curtailing of student speech:

By contrast, the Eleventh Circuit extended Morse’s rationale about illegal drugs to the context of student speech that is “construed as a threat of school violence”. Boim, 494 F.3d at 984 (upholding the suspension of a high school student for a story labeled as a “dream” in which she described shooting her math teacher). Moreover, the court concluded that Morse supports the idea that student speech can be regulated where “[in] a school administrator’s professional observation … certain expressions [of student speech] have led to, and therefore could lead to, an unhealthy and potentially unsafe learning environment”.

Disallowing student speech that amounts to threats of violence indeed seems to make sense. That being said, speaking of “an unhealthy and potentially unsafe learning environment”:

You’d be forgiven if you thought that picture was taken at the Paulding County high school six months ago, with so few masks. But it wasn’t. Instead, it was taken on August 4th, the first day back to school for Paulding County. Whatever your thoughts on whether and how schools should be opening, you really need to go read that entire article from BuzzFeed. The overwhelming impression left is that Paulding County appears to have reopened its schools in as callous and cavalier a manner as possible while still staying just inside government guidelines. Masks? Sure, if you want, but they’re optional. Distancing? Of course, but we can’t really enforce it in any meaningful way. And overall safety?

North Paulding teachers said they too felt they had no choice but to show up to work, even after a staff member texted colleagues saying she had tested positive for the virus. The staffer had attended planning sessions while exhibiting symptoms, one teacher said.

She did not attend school after testing positive. But teachers have heard nothing from the school, they said, which won’t confirm that staff members have tested positive, citing privacy concerns.

The Paulding County School Superintendent, Brian Otott, began reaching out to parents to reassure them that what they saw in the viral photo going around Twitter was fine, just fine. It lacked context, you see. Context, one presumes, is another word for safety. Or, if we are to believe Otott, the context is essentially: yes, this is totally happening, but the state said we can operate this way.

Otott claimed in his letter that the pictures were taken out of context to criticize the school’s reopening, saying that the school of more than 2,000 students will look like the images that circulated for brief periods during the day. The conditions were permissible under the Georgia Department of Education’s health recommendations, he said.

This from the same state that has the 6th highest number of total COVID-19 cases, the 11th most total cases per capita, the 4th most total new cases in the last week, and the 6th most new cases per capita in the last week. So, you know, not the state doing the best job in the country by a long shot at containing outbreaks of this virus.

Which perhaps makes sense, actually, since Otott seems chiefly interested in containing not the virus in his school halls, but rather any criticism of his district. Remember that viral photo that kicked off this discussion? Well…

At least two students say they have been suspended at North Paulding High School in Georgia for posting photos of crowded hallways that went viral on Twitter.

The photos show students packed into hallways between classes, not appearing to practice social distancing and with few masks visible, amid the coronavirus panic. They went viral after being shared by the account @Freeyourmindkid.

Those suspensions being handed out are five-day suspensions, levied for violations of school rules around using cell phone cameras without permission. A couple of things to say about that.

First, the removal of a student from a school-sanctioned petri dish of a novel coronavirus feels odd as a punishment. Were it not for the intentions of the superintendent, it would be damn near heroic as an attempt to save these kids from getting sick.

Second, refer back to my two-paragraph throat-clearing above. This isn’t constitutional. Nothing about the students sharing their concerns amounts to a disruption of school, or anything else that would qualify this protected speech for scholastic punishment. Taking a fearful 15-year-old student and punishing them for their fear is beyond the pale. And, about those school rules for cell phones:

On Wednesday, an intercom announcement at the school from principal Gabe Carmona said any student found criticizing the school on social media could face discipline.

Again, plainly unconstitutional. One wonders why anyone should have faith in a school administration that isn’t even educated enough on the rights of its own students to keep from ignorantly broadcasting its idiocy over school intercoms. Why are these people even allowed to teach children in the best of times, never mind during a pandemic as these kids get herded like cattle to the slaughter through school halls?

While I guess we’ll all get to see what happens in this idiotic school district now, and maybe even learn some lessons from what occurs, I’m generally not of the opinion that we should treat our own children like they were the subjects of some kind of bizarre modern-day Tuskegee test.

Filed Under: 1st amendment, free speech, georgia, paulding county, photos, school reopenings, students, suspensions
Companies: north paulding hs

Court Tells Grandma To Delete Photos Of Grandkids On Facebook For Violating The GDPR

from the what-the-what? dept

We’ve talked for many years now about the overreach of the GDPR and how its concepts of “data protection” often conflict with both free expression and very common everyday activities. The latest example, first highlighted by Neil Brown, is that a Dutch court has said that a grandmother must delete photos of her grandkids that she posted to Facebook and Pinterest, because it violates the GDPR. There is, obviously, a bit more to the case, and it involves a family dispute between the parents and the grandmother, but, still, the end result should raise all sorts of questions.

And while many EU data protection folks are saying this was to be expected based on earlier EU rulings regarding the GDPR, it doesn’t make the result any less ridiculous. As the BBC summarizes:

The case went to court after the woman refused to delete photographs of her grandchildren which she had posted on social media.

The mother of the children had asked several times for the pictures to be deleted.

The GDPR does not apply to the “purely personal” or “household” processing of data.

However, that exemption did not apply because posting photographs on social media made them available to a wider audience, the ruling said.

There are a few interesting elements in the actual ruling. First, the court notes that since no one made a copyright claim, it doesn’t sound like the parents hold the copyright on the images — which is notable only in that the court seems to think it’s natural to use copyright to censor a grandma proudly posting photos of her grandkids.

But on the GDPR question, it notes that the lack of evidence regarding the privacy settings the grandmother used leads the court to assume they were posted publicly:

The General Data Protection Regulation (hereinafter: AVG) protects the fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data. However, this Regulation does not apply to the processing of personal data by a natural person in the exercise of a purely personal or household activity. Although it cannot be excluded that the placing of a photo on a personal Facebook page falls under a purely personal or household activity, in the preliminary opinion of the Court in preliminary relief proceedings, it has not been sufficiently established how [defendant] set up or protected her Facebook account or her Pinterest account. It is also unclear whether the photographs can be found through a search engine such as Google. In addition, with Facebook it cannot be ruled out that placed photos may be distributed and may end up in the hands of third parties. In view of these circumstances it has not appeared in the scope of these preliminary relief proceedings that there is a purely personal or domestic activity of [defendant]. This means that the provisions of the General Data Protection Act (AVG) and the General Data Protection Implementation Act (hereinafter: UAVG) apply to the present dispute.

And, then you combine that with the fact that children are involved, and the court says, yup, GDPR requires takedown:

The UAVG stipulates that the permission of their legal representative(s) is required for the posting of photographs of minors who have not yet reached the age of 16. It has been established that the minor children of [plaintiff] are under the age of 16 and that [plaintiff], as legal representative, has not given permission to [defendant] to post photographs of her children on social media. In the case of [child 1], his father did not give [defendant] permission either. In view of this the Court in preliminary relief proceedings will order [defendant] to remove the photo of [child 1] on Facebook and the photo of [plaintiff] and her children on Pinterest. In addition, [defendant] will be prohibited from posting pictures of the minor children of [plaintiff] on social media without permission (as referred to in the AVG and UAVG). The emotional importance of [defendant] to be allowed to place photographs on social media cannot lead to a different judgment in this respect.

Neil Brown, who highlighted this situation in the first place, has pondered that even if grandparents posting pictures of their grandkids is normal behavior, that doesn’t mean it’s good, since it removes “autonomy” over our own data. I have a ton of respect for Brown, but this is a very European view that includes an assumption that we should have “autonomy” over anything about ourselves — which, when judged against the harsh light of reality, seems incredibly silly.

Yes, there are cases where people will have things posted online about themselves that they’d rather weren’t there. And I understand that this is an even more fraught area when it comes to children. But there are very real free expression concerns as well, and the ability to use this as a tool of blatant censorship seems way too likely.

Filed Under: data protection, family disputes, gdpr, grandchildren, grandmother, photos, privacy
Companies: facebook, pinterest

Elon Musk And SpaceX Just Backed Down From Earlier Promise To Release SpaceX Photos To The Public Domain

from the this-is-disappointing dept

Well, this is very disappointing. Back in 2015, you may recall that there was an effort to get SpaceX to put its photos into the public domain. As you hopefully know, all NASA photos, as works of the US government, are in the public domain — which let us post photos like this one:

But as more and more spaceflight gets privatized, there were concerns that future space photos may increasingly get locked up behind copyright.

After an initial outcry, SpaceX agreed to use a Creative Commons license, but one that restricted usage to non-commercial efforts. As we pointed out at the time, that really wasn’t good enough. Why does SpaceX need copyright as incentive to take photographs?

After people pointed this out to Elon Musk, he said that they had a good point and that he changed SpaceX’s policy to dedicate all the photos to the public domain. And that’s how it’s been for over four and a half years.

Until now. As Vice’s Motherboard reports, SpaceX has now gone back to a more restrictive Creative Commons license, one that says no commercial use is allowed. While using CC is better than going all out with full restrictions, this is still a very disappointing move. The company has told reporters that news organizations can still use the images, and many will have to rely on that promise. While Creative Commons has put a lot of effort into “clarifying” what is meant by “non-commercial” in recent years, including highlighting that for profit news orgs should still be able to make use of such works, that’s not really been tested in court.

And, considering that Elon Musk has an occasionally antagonistic relationship with the press, you could see an unfortunate situation in which he decides to go after a journalism organization that upsets him by claiming that they were misusing the “NC” part of the license on a SpaceX photo.

So, once again, we have to ask: why is SpaceX doing this? Why is it going back on Musk’s earlier promise that all SpaceX photos would be in the public domain? Why does SpaceX need the restrictions of copyright as an incentive to take photos? Isn’t just being able to get to space enough incentive to take some photos?

Filed Under: cc licenses, copyright, elon musk, photos, public domain, space, space images
Companies: spacex

Securing The Nation With Insecure Databases: CBP Vendor Hacked, Exposing Thousands Of License Plate, Car Passenger Photos

from the guess-you-have-to-give-up-some-security-to-gain-some-security? dept

US Customs and Border Protection has suffered an inevitability in the data collection business. The breach was first reported by the Washington Post. It first appeared to affect the DHS’s airport facial recognition system, but further details revealed it was actually a border crossing database that was compromised.

The breach involved photos of travelers and their vehicles, which shows the CBP is linking people to vehicles with this database, most likely to make it easier to tie the two together with the billions of records ICE has access to through Vigilant’s ALPR database.

The breach involved a contractor not following the rules of its agreement with the CBP. According to the vendor agreement, all harvested data was supposed to remain on the government’s servers. This breach targeted the vendor, which means the contractor had exfiltrated photos and plate images it was specifically forbidden from moving to its own servers.

According to reports from other news agencies, the breach likely involved Perceptics, a Tennessee-based manufacturer of stationary license plate readers. The Register first reported a breach there on May 23, after being contacted by a hacker possibly involved with the attack on the company’s servers. The CBP claims it was not aware of this breach until May 31. But this piece of info from the Register’s article seems to indicate Perceptics may be the vendor the agency has refused to name.

Perceptics recently announced, in a pact with Unisys Federal Systems, it had landed “a key contract by US Customs and Border Protection to replace existing LPR technology, and to install Perceptics next generation License Plate Readers (LPRs) at 43 US Border Patrol check point lanes in Texas, New Mexico, Arizona, and California.”

This is all but confirmed in the Washington Post’s report, which contains another link to Perceptics the CBP has refused to officially confirm.

CBP would not say which subcontractor was involved. But a Microsoft Word document of CBP’s public statement, sent Monday to Washington Post reporters, included the name “Perceptics” in the title: “CBP Perceptics Public Statement.”

No personal info was included in the breach, which the CBP said affected about 100,000 travelers entering and exiting the US through a single point of entry. It also claims it hasn’t seen any of the data surface on the light or dark web, so there’s that, if that statement is actually true.

This news has prompted many reactions, including some very obvious ones: first and foremost, the easiest way to minimize the damage of inevitable data breaches is to not harvest so much damn data. Unfortunately, the DHS’s plans only involve expansion of its existing collection programs, including a larger rollout of its airport biometric scanning and its new mandatory collection of social media info from incoming foreigners.

It’s pretty tough to secure a nation when you can’t secure a database. This breach may have been the result of a vendor breaking the rules, but the Office of Personnel Management breach proves the US government isn’t immune from these attacks. The more you gather and store in one place, the more often you’ll be targeted by enemies foreign and domestic.

Finally, the incident has angered a handful of Congressional reps.

House Homeland Security Committee Chairman Bennie Thompson (D-Miss.) announced on Monday that his committee would hold hearings next month to examine the collection of biometric information by the Department of Homeland Security (DHS), which includes CBP.

Thompson also noted that he wants to ensure “we are not expanding the use of biometrics at the expense of the privacy of the American public.”

Homeland Security Committee ranking member Mike Rogers (R-Ala.), used the breach to criticize DHS’s handling of cybersecurity challenges, saying in a statement to The Hill that “the agency is ill-equipped to handle emerging cyberthreats.”

“The data breach resulted from a contractor acting improperly and against agency policy,” Rogers said. “We need to take steps to ensure this does not happen again.”

Ensuring contractors follow the rules isn’t really a solution. It may reduce the number of attack vectors, but it doesn’t address the underlying issue: we’re collecting more data on people than ever before and breaches are not a matter of “if,” but “when.” Until Congress gets serious about scaling back these massive collections, these will remain popular targets with the potential to cause a tremendous amount of harm to the millions of people who pass through our borders and airports.

Filed Under: alpr, cbp, hacked, license plates, photos, privacy
Companies: perceptics

Another Day, Another Company Scraping Photos To Train Facial Recognition AI

from the ALL-YOUR-FACE-ARE-BELONG-TO-US dept

If your face can be found online, chances are it’s now part of a facial recognition database. These aren’t the ones being utilized by law enforcement, although those are bad enough. The ones used by law enforcement are littered with millions of noncriminals, all part of a system that works worse than advertised 100% of the time.

The faces aren’t in those databases (yet!), but they’re being used to train facial recognition AI with an eye on selling it to law enforcement and other government agencies. Another photo storage company has been caught using users’ photos to fine-tune facial recognition software… all without obtaining consent from those whose faces became fodder for the tech mill.

“Make memories”: That’s the slogan on the website for the photo storage app Ever, accompanied by a cursive logo and an example album titled “Weekend with Grandpa.”

Everything about Ever’s branding is warm and fuzzy, about sharing your “best moments” while freeing up space on your phone.

What isn’t obvious on Ever’s website or app — except for a brief reference that was added to the privacy policy after NBC News reached out to the company in April — is that the photos people share are used to train the company’s facial recognition system, and that Ever then offers to sell that technology to private companies, law enforcement and the military.

This has been 2019’s theme for the first five months of the year. Users of popular photo apps and services are being notified belatedly — and not by the companies performing the harvesting — that their faces are an integral part of law enforcement machinery and/or the military-industrial complex.

Ever’s oh-shit-we-got-caught statement doesn’t offer much mollification.

Doug Aley, Ever’s CEO, told NBC News that Ever AI does not share the photos or any identifying information about users with its facial recognition customers.

Lots of people would rather not be participants in creating surveillance tech. Most never seek employment at companies crafting products for law enforcement, intelligence agencies, and the US military. Yet without their knowledge, the photos they thought they were harmlessly sharing with family and friends have been used to make surveillance easier and more pervasive, if not actually any better.

Ever is just the latest. Prior to this, Flickr photos were swept up in a facial recognition data set compiled by IBM.

The photo is undeniably cute: a mom and a dad — he with a stubbly beard and rimless glasses, she with choppy brown hair and a wide grin — goofing around and eating ice cream with their two toddler daughters.

The picture, which was uploaded to photo-sharing site Flickr in 2013, isn’t just adorable; with a bunch of different faces in various positions, it’s also useful for training facial-recognition systems, which use artificial intelligence to identify people in photos and videos. It was among a million images that IBM harnessed for a new project that aims to help researchers study fairness and accuracy in facial recognition, called Diversity in Faces.

IBM also apologized for using people’s photos for its data set without their permission. It said users were welcome to opt out at any time, but did not give users tools to find out whether their photos had been used. Nor is there any way to expeditiously remove found photos other than by handing over your Flickr ID to IBM.

And if it’s not a tech company harvesting photos to run AI tests, it’s random internet users showing just how easy it is to compile a data set using other people’s photos.

Tinder users have many motives for uploading their likeness to the dating app. But contributing a facial biometric to a downloadable data set for training convolutional neural networks probably wasn’t top of their list when they signed up to swipe.

A user of Kaggle, a platform for machine learning and data science competitions which was recently acquired by Google, has uploaded a facial data set he says was created by exploiting Tinder’s API to scrape 40,000 profile photos from Bay Area users of the dating app — 20,000 apiece from profiles of each gender.

The data set, called People of Tinder, consists of six downloadable zip files, with four containing around 10,000 profile photos each and two files with sample sets of around 500 images per gender.

Tinder’s reaction was to call this a violation of its Terms of Service. But this determination doesn’t undo the damage nor make it impossible for someone else to do the same thing. Tinder users spoken to by TechCrunch weren’t happy their photos — some of which have never been uploaded outside of the app — are being used by a person they don’t know to perform research.

It’s not that there are no legitimate uses for publicly available photos. But transparency is key, and no one harvesting photos to train AI systems or perform research seems too concerned about being upfront with the people whose photos they’re using. It’s even worse in the case of Ever, where the app company itself is the one developing facial recognition software on the side, which should make users question the intent of the app developers. Did they really want to offer another photo service or were they just using this to gather faces for their real moneymaker?

Filed Under: ai, facial recognition, photos, social media
Companies: ever

Why Your Holiday Photos And Videos Of The Restored Notre Dame Cathedral Could Be Blocked By The EU's Upload Filters

Although the terrible fire at Notre Dame cathedral in Paris destroyed the roof and spire, the main structure escaped relatively unscathed. Attention has now turned to repairing the damage and rebuilding the missing parts. France has announced that it will hold an international competition to redesign the roofline. As the Guardian points out, the roof was ancient, but the spire was not:

Notre Dame was built over a period of nearly 200 years, starting in the middle of the 12th century, but the lead-covered spire, which reached a height of 93 metres from the ground, was only added in the mid-19th century, during a major restoration project completed by the architect Eugène Viollet-le-Duc.

That fact has stimulated a lively debate about whether the roof should be restored to how it was, using Viollet-le-Duc’s design for the spire, or rebuilt with a completely new, contemporary appearance. The French Prime Minister, Édouard Philippe, acknowledged this issue when he announced the competition:

“The international competition will allow us to ask the question of whether we should even recreate the spire as it was conceived by Viollet-le-Duc,” Philippe told reporters after a cabinet meeting dedicated to the fire.

“Or, as is often the case in the evolution of heritage, whether we should endow Notre Dame with a new spire. This is obviously a huge challenge, a historic responsibility.”

Techdirt readers may be interested in what might otherwise seem a rather rarefied architectural discussion because of how French law implements EU copyright exceptions. The site copyrightexceptions.eu explains:

In the European Copyright framework the rights of users and public interest organisations are codified as exceptions and limitations to the exclusive rights of authors and other rightsholders. As such, they form one side of the balance between the rights of creators to exercise control of their works and the rights of the public to access culture and information. While the exclusive rights of creators and other rightsholders have been largely harmonised across the 28 member states of the European Union, exceptions and limitations are far from harmonised. Article 5 of the 2001 Copyright in the Information Society (InfoSoc) Directive (2001/29/EC) contains a list of 20 optional and one harmonised exceptions. In 2012 the Orphan Works Directive (2012/28/EU EC) added another mandatory exception. This has created a situation where user rights across Europe are a patchwork.

One of the optional copyright exceptions in EU law lets member states choose whether to protect works of architecture and sculptures in public places, or to allow “freedom of panorama”. France chose the latter, but imposed a key condition:

The implemented exception authorises “reproductions and representations of works of architecture and sculpture, placed permanently in public places (voie publique), and created by natural persons, with the exception of any usage of a commercial character”

This is why pictures of the Eiffel Tower at night taken for commercial purposes require a license: although the copyright of the tower itself has expired, the copyright on the lights that were installed in 1989 has not. And it’s not just about the Eiffel Tower. As the credits at the end of this time-lapse video show (at 2 minutes 10 seconds), other famous Parisian landmarks that require copyright permission to film include the Louvre’s Pyramid and the Grande Arche in the French capital’s business district.

It is not clear whether taking photos or videos of these landmarks and then posting them online counts as commercial use. They may be for personal use, and thus exempt in themselves, but they are generally being posted to commercial Internet services like Facebook, which might require a license. That lack of clarity is just the sort of thing that is likely to cause the EU Copyright Directive’s upload filters to block images of modern buildings in France — including the re-built spire of Notre Dame cathedral, if it is a new design.

A key proposal that the Pirate MEP Julia Reda put forward in her copyright evaluation report, which fed into the Copyright Directive, was to implement a full freedom of panorama right across the EU. The European Parliament backed the idea, as did all the EU nations except one — France, as Politico later revealed — so the idea was dropped. That lack of an EU-wide freedom of panorama is yet another example of how the Copyright Directive failed to throw even a tiny crumb to citizens, while handing out even more power for the copyright industry to use and abuse. So if one day your holiday pictures and videos of the re-built Notre Dame cathedral get blocked in the EU, you will know who to blame.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.

Filed Under: architecture, article 13, copyright, france, freedom of panorama, notre dame, photos

Miami City Attorney Tries To Erase Photos Of Fired Firefighters From The Internet

from the this-isn't-how-this-problem-gets-fixed dept

Six firefighters fired over a racist incident are the possible, but unlikely, beneficiaries of Florida public records law. Here’s how they ended up fired, via the Miami Herald, which broke the story. (h/t Boing Boing)

Miami’s fire chief on Thursday blasted six fired firefighters accused of draping a noose over a black colleague’s family photos, and released images of the “egregious and hateful” vandalism.

Photos of the scene at Fire Station 12, located on Northwest 46th Street near Charles Hadley Park, show that someone took a black lieutenant’s family photos out of their picture frames, drew penises onto the pictures, then reinserted them in their frames and placed them on a wood shelf next to a teddy bear figurine. Someone also hung a noose made of thin, white rope over one of the photos.

Five more firefighters are still under investigation. The six firefighters — Capt. William W. Bryson, Lt. Alejandro Sese, David Rivera, Harold Santana, Justin Rumbaugh and Kevin Meizoso — were all terminated after the completion of a Miami police investigation. We know their names and what they look like, thanks to the Miami Herald’s reporting and an apparent misstep by a Miami government agency.

On Thursday, ahead of a press conference scheduled for Friday morning with Miami’s mayor, Miami Fire Rescue also released the fired firefighters’ department photos even though Florida law exempts pictures of current and former firefighters from disclosure under the state’s broad public records laws.

Now, the city — facing a possible lawsuit from the firefighters union — is throwing CTRL-Z notices at local news agencies.

Just after midnight Friday morning, an assistant city attorney wrote an email to multiple news outlets demanding that the media “cease and desist from further showing the firefighters pictures in your coverage of this event.” Jones said the photos of the six men had been released accidentally.

“As former first responders, their photos are confidential and exempt under Florida’s public disclosure law and should not have been released,” wrote Kevin R. Jones.

Too bad. That’s a problem for the city, not journalists. The Miami Herald will be keeping the photos up. So will WPLG, which interviewed the victim of the racist acts. It’s been relegated to a sidebar, but the photos are still there.

ABC News has also kept the photos up, albeit as an image that lasts only as long as it takes for the autoplaying video to load. Those looking for a longer-lasting image will have to make do with the sidebar thumbnail.

The images are already out there. Telling the media to unpublish the photos is a ridiculous move. The union plans to sue the city for releasing the photos, but that’s not going to do anything to return the internet to the state it was in prior to the accidental photo dump.

As for the firefighters inadvertently left unprotected by this “violation” of Florida’s open records law, it would seem the best way to keep your photo from being displayed in stories about racist acts by public servants is to refrain from engaging in bigoted acts while employed as a public servant. Trying to turn online media sources into self-serving time machines only ensures maximum visibility.

Filed Under: alejandro sese, cease and desist, david rivera, firefighters, harold santana, justin rumbaugh, kevin meizoso, miami, photos, racists, reporting, streisand effect, william bryson