identification – Techdirt
Eighth Circuit Says Cops Can Come Up With Probable Cause For An Arrest AFTER They’ve Already Arrested Someone
from the do-what-now dept
Well, this is a bit of a doozy. This case — via the Institute for Justice — involves a possible First Amendment violation but somehow ends with a judicial blessing of cops who make things up after the fact to justify an arrest that has already taken place.
That’s literally what happened here. Mason Murphy was walking down a Missouri road when he was accosted by Officer Michael Schmitt. From the opening of this very unfortunate decision [PDF]:
Schmitt stopped his car, approached Murphy, and asked Murphy to identify himself. Murphy refused to identify himself, and Schmitt put Murphy in handcuffs after nine minutes of argument. Murphy asked why Schmitt arrested him, and Schmitt refused to answer.
So far, it would appear no criminal act was committed and that the cuffing of Murphy by Schmitt was in retaliation for Murphy’s refusal to identify himself and, First Amendment-wise, his refusal to shut up.
I said “so far,” but nothing really changed following this first nine minutes of unjustified detention. It continued. And it got worse.
On the drive to the sheriff’s department, Murphy again asked Schmitt why he was being arrested. Schmitt responded that the arrest was for “failure to identify.”
Now, that could have been a legitimate charge. State law does allow officers to demand identification in certain cases.
They shall also have the power to stop any person abroad whenever there is reasonable ground to suspect that he is committing, has committed or is about to commit a crime and demand of him his name, address, business abroad and whither he is going.
Note the qualifying language, though. To demand identification from Murphy, Officer Schmitt would have needed reasonable grounds to suspect his walking on the side of the street was a criminal act. But Schmitt apparently didn’t consider this to be a criminal act. Nor did he seem to have any idea whether any criminal act had been committed that would justify (1) his demand Murphy identify himself, and (2) the subsequent arrest for “failure to identify.”
None of that happened. Officer Schmitt arrested first and asked rather desperate questions later.
Once at the station, Schmitt can be heard making a call to an unknown individual and saying he “saw the dip shit walking down the highway and [he] would not identify himself.” Schmitt then asked the unknown individual: “What can I charge him with?”
Without any predicate suspected offense, Officer Schmitt could not demand Murphy identify himself. Murphy could not have possibly violated the law Schmitt first thought he could arrest him for. Schmitt appears to have recognized this fact, hence his call for charging advice that might allow him to reverse engineer the probable cause to support his actions.
Meanwhile, Murphy sat in a cell for two hours until officers identified him via a credit card in his wallet and released him.
Murphy sued, claiming his First Amendment right to mouth off to a law enforcement officer was violated by this obviously retaliatory arrest that was completely unsupported by probable cause.
Both the lower court and the appeals court say there was probable cause, even if Officer Schmitt didn’t appear to know it at the time he accosted and cuffed Murphy.
A Missouri statute requires pedestrians to “walk only on the left side of the roadway or its shoulder facing traffic which may approach from the opposite direction.” Mo. Rev. Stat. § 300.405.
Murphy agreed there was probable cause to arrest him under this statute, but pointed out this was much like the Supreme Court’s Nieves case, where the court decided any probable cause for an arrest automatically defeats First Amendment retaliation claims. The Nieves justices noted there are exceptions to this rule, like the sudden enforcement of laws law enforcement officers had never bothered to enforce prior to the retaliatory arrest.
The parties agree Schmitt had probable cause to arrest Murphy because Murphy was in violation of Missouri Revised Statute § 300.405. Murphy argues the facts in this case fit into the possible Nieves exception because, like the hypothetical in Nieves, this is a situation where “officers have probable cause to make arrests, but typically exercise their discretion not to do so.”
[…]
The Supreme Court in Nieves gave an example of an individual who is arrested for jaywalking in an intersection where “jaywalking is endemic but rarely results in arrest” while the individual is “vocally complaining about police conduct[.]” Nieves 139 S. Ct. at 1727. Murphy relies heavily on the similarities between jaywalking and walking on the wrong side of the road to prove his point.
The Eighth Circuit says the cases aren’t comparable. Murphy submitted no evidence showing this law enforcement agency routinely saw people violating this pedestrian law but chose not to enforce it. And, as far as the court’s counterargument goes in terms of case specifics, it’s correct.
But that ignores the bigger issue: Officer Schmitt — as captured on his own recordings — never once mentioned anything about this law or Murphy’s violation of it. Instead, he opted for “failure to identify” and only released Murphy once his identification had been forcibly obtained and he could find no other reason — including this particular law — to charge him with a crime.
So, while the court may see this as a straight-up exercise of the Supreme Court’s Nieves precedent (probable cause beats First Amendment claims), it ignores the fact the officer’s own statements and actions showed he did not actually have probable cause to effect the arrest and that all justifications for the stop, detention, and arrest of Murphy were obtained after the fact. That’s the bigger problem. By focusing on the law that went unmentioned by the arresting officer, the court is giving its blessing to cops who arrest first and seek justification later.
The dissenting opinion lays it all out in all of its ugliness:
Later events indicate Officer Schmitt was scrambling to justify the arrest. While in the police car, Officer Schmitt told Murphy he was arrested for “[f]ailure to identify.” He then changed his tune when he told someone via his police radio that Murphy was stumbling and walking on the wrong side of the road. Yet Murphy was not stumbling or acting impaired. When Officer Schmitt arrived at the jail with Murphy, he made a phone call in which he described Murphy as a “dip shit walking down the highway” who “would not identify himself” and “ran his mouth off.” He then asked, “What can I charge him with?” Later, Officer Schmitt falsely claimed that Murphy was drunk. Officer Schmitt even admitted on multiple occasions that he did not “smell anything” on Murphy. Despite all this, Officer Schmitt insisted Murphy “sit here for being an asshole.” Roughly two hours later, Murphy was released.
Sure looks like retaliation from here. It was “contempt of cop,” which isn’t a crime, but every cop somehow believes it is and will seek out any law at all to justify their decision to make innocent people sit in jail for “being an asshole.” If being an asshole were a crime, most cops would violate this law multiple times a day.
It’s that chain of events that matters. The post facto attempt to justify the arrest under a law that is rarely, if ever, enforced makes it clear this was plain old retaliation.
Under these factual allegations, I cannot join the majority’s conclusion that Murphy failed to state a plausible claim. If the Sunrise Beach Police Department regularly enforces the Missouri statute prohibiting a person from walking on the wrong side of the road, one would suspect Officer Schmitt and the other officers he spoke with would have had little trouble identifying that law as the basis for the arrest. Instead, viewing the factual allegations in the complaint in a light most favorable to Murphy, Officer Schmitt arrested Murphy for challenging and criticizing him before later exploring various legal justifications for the arrest. Indeed, the allegations of post hoc decision-making indicate pretext, which supports application of the Nieves exception.
Schmitt retains the qualified immunity he really didn’t earn. He hassled somebody who wasn’t receptive to being hassled and turned his inability to walk away from a confrontation he created into two hours of misery for someone who was doing nothing more than walking down a road. And because of this ruling, cops like Schmitt will continue to engage in this sort of behavior because the courts are telling them they’ll just keep getting away with it.
Filed Under: 1st amendment, 8th circuit, identification, mason murphy, michael schmitt, missouri, probable cause
Elon Musk Finally Realizes That Verification Requires More Than A Credit Card, Planning To Make Users Upload Gov’t ID
from the id-this dept
As you’ll surely recall, Elon’s first big brilliant idea upon taking over Twitter was to conflate two separate offerings that Twitter had: Twitter Blue, a premium upsell with extra features (some of which were useful), and Twitter’s blue check verification program, which was created to help more well-known users avoid impersonation. The original blue check system was far from perfect, but it was actually a verification program, in which Twitter went through something of a process to make sure an account actually belonged to the person who claimed to be behind it.
Elon merged the two, took away all the “legacy” blue checks, and basically gave them to anyone willing to pay $8/month. It hasn’t gone well. There have been multiple stories of impersonation, while the blue check now seems to symbolize foolish Elon Musk fans with poor decision-making ability. Or, you know, neo-Nazis.
Now, many months later, it seems that Musk is finally coming around to the realization that maybe “verification” requires more than a functioning credit card.
Of course, as per how things work with Elon, he’s choosing to do it in a sketchy manner. Engadget reports that ExTwitter is now experimenting with a new “ID Verified” option, powered by Au10tix, one of the many third-party services that provide ID validation. It requires users to upload a government-issued ID and a selfie.
Owji, who often uncovers unreleased features in X, first spotted an “ID verified” badge on Musk’s profile earlier this month. Now, he’s discovered an in-app message detailing how it works, suggesting that it may be getting closer to an official launch. “Verify your account by providing government-issued ID,” it says. “This usually takes about 5 minutes.” It explains that users will need to provide a photo of their ID and a selfie.
It seems X is partnering with a third-party “identity intelligence” company Au10tix on the feature. The fine print notes that information shared for verification will be seen by Au10tix as well as X. X will keep “ID images, including biometric data, for up to 30 days” and will use the information “for the purposes of safety and security, including preventing impersonation.”
There are a few of these services out there and… they’re not exactly known for being particularly reliable.
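For the curious, here’s a minimal sketch of what this kind of government-ID-plus-selfie handoff generally looks like on the platform side. Everything below is hypothetical: neither X nor Au10tix publishes this integration, so the field names and helper function are invented for illustration. Only the 30-day retention figure comes from the fine print quoted above.

```python
# Hypothetical sketch of a government-ID-plus-selfie verification handoff.
# No real X or Au10tix API is used here; names and fields are invented.

import json
from dataclasses import dataclass

@dataclass
class VerificationRequest:
    user_id: str
    id_document_jpeg: bytes   # photo of the government-issued ID
    selfie_jpeg: bytes        # live selfie to match against the ID photo

def submit_for_verification(req: VerificationRequest) -> dict:
    """Pretend handoff to a third-party ID-verification vendor."""
    payload = {
        "user_id": req.user_id,
        "id_document_bytes": len(req.id_document_jpeg),
        "selfie_bytes": len(req.selfie_jpeg),
        "retention_days": 30,  # per the quoted fine print
    }
    # A real integration would POST the images to the vendor and get back
    # a match score; here we just echo the metadata.
    return {"status": "pending_review", "submitted": payload}

print(json.dumps(submit_for_verification(
    VerificationRequest("user-1", b"\xff\xd8...", b"\xff\xd8...")), indent=2))
```

The privacy point is that both the platform and the vendor end up holding copies of the ID images and derived biometric data for that retention window, which is exactly what the next paragraphs worry about.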
Still, it will be interesting to see how many of Elon’s groupies will be thrilled about having to upload a government issued ID to the company. They seem to trust him implicitly, but they also seem to be the sorts of folks who often don’t really like to give up their real identities.
Meanwhile, if a user who uploaded his or her ID to exTwitter does something to piss off Elon, does anyone really feel comfortable that he wouldn’t use information regarding their identity against them?
So, once again, Elon seems to have realized that his way of doing things doesn’t really work, which brings him all the way back around to the way that Twitter used to do things, but with an extra layer of stupidity/danger involved. It’s not the first time that’s happened, nor will it be the last.
Filed Under: elon musk, government issued id, identification, privacy, selfie, twitter blue, verification
Companies: au10tix, twitter, x
Michigan Supreme Court Says Photographing, Fingerprinting People Without Probable Cause Is Unconstitutional
from the because-of-course-it-is dept
The only surprise in this decision isn’t the way the court ruled. It’s that the Grand Rapids, Michigan police department apparently believed the practice wasn’t a violation of rights.
Here are the origins of the case, as summarized by ABC affiliate WZZM. (WZZM apparently feels no one needs to read the actual opinion and did not include it, despite it being freely available at the Michigan Supreme Court’s website. This is bad journalism and is inexcusable.)
The incidents involved two Black teenagers in 2011 and 2012, though the American Civil Liberties Union said photos and fingerprints were taken from thousands of people in Grand Rapids.
Denishio Johnson was stopped after cutting through the parking lot of a fitness club where there had been vehicle thefts.
Keyon Harrison was stopped after handing a model train engine to someone. He said it was part of a school project. Johnson and Harrison were photographed and fingerprinted but not charged with crimes.
They subsequently sued Grand Rapids police.
Here’s the coda, which arrived before this decision but not before the above residents had their rights violated.
Grand Rapids has dropped the practice.
And there’s this caveat, which shows the PD still believes it should be able to do this and is now being held back by The (Judicial) Man.
But it had defended fingerprinting as a way to determine someone’s identity when they had no identification.
Come on, man. The police have no right to identify everyone. They only need to identify certain people. And that need is subject to the Constitution, which means probable cause, which certainly was in short supply in these cases, not to mention the thousands of incidents where cops got away with it.
The decision [PDF] is fairly brief but does a thorough job discussing the relevant issues. The PD argued this was nothing more than a Terry stop — a brief investigative encounter backed by reasonable suspicion. The court says neither of these cases fit the description. They were prolonged and unnecessarily intrusive.
In these cases, defendants only argued that fingerprinting was appropriate under Terry v Ohio, 392 US 1 (1968), and that Harrison consented to fingerprinting. Under Terry, a brief, on-the-scene detention of an individual is not a violation of the Fourth Amendment as long as the officer can articulate a reasonable suspicion for the detention. In these cases, fingerprinting pursuant to the P&P policy exceeded the permissible scope of a Terry stop because it was not reasonably related in scope to the circumstances that justified either stop; fingerprinting is not related to an officer’s immediate safety, and Terry caselaw does not justify stops merely for the general purpose of crime-solving. The fingerprinting in these cases also exceeded the permissible duration of a Terry stop.
In Docket No. 160959, VanderKooi called an officer in for backup to execute the P&P policy, but Harrison had already answered questions regarding his identity; therefore, calling another officer for backup after having already determined that no criminal activity was taking place was beyond the permissible duration of the Terry stop.
Similarly, in Docket No. 160958, as soon as the officers concluded that no crime had taken place in the parking lot where Johnson was detained, the reasons justifying the initial stop were dispelled, and execution of the P&P policy was an impermissible extension of the duration of the Terry stop. Because the P&P policy impermissibly exceeded both the scope and duration of a Terry stop, neither of the searches fell within the stop-and-frisk exception to the warrant requirement.
The cops also argued this was standard identification procedure, despite there being nothing standardized about it. The court shuts down this argument, too.
Defendants argue that fingerprinting nevertheless falls within the scope of a Terry stop because determining an individual’s identity is an important government interest.
[…]
The fingerprinting in these cases was not reasonably related in scope to the circumstances that justified either stop. Absent some sort of indication that the GRPD has access to a database that includes the fingerprints of all residents of and visitors to the City, fingerprinting individuals who fail to carry government-issued identification does not seem to be a useful or productive exercise in confirming any individual’s identity because there is no guarantee that a match exists that would provide more information. Instead, fingerprinting under the P&P policy appears to be aimed at solving past or future crimes. There is no indication in the record that the GRPD officers believed that fingerprinting would tie either plaintiff to the circumstances that justified each Terry stop.
The court sees this practice for what it is: an abuse of rights for the purpose of padding the PD’s fingerprint database and fishing for hits on past criminal activities.
To the extent that defendants argue that fingerprinting could help the officers determine whether either plaintiff could be linked to other crimes, such as the prior break-ins, those crimes were necessarily unconnected to the reasons justifying the actual stops. It goes unsaid that Terry caselaw does not justify stops merely for the general purpose of crime-solving, especially for those crimes that have yet to occur.
This is exactly why the Constitution exists: to prevent, deter, or provide redress for indiscriminate use of government power. That this deterrence arrived after the fact does not reflect badly on the Constitution. Instead, it exposes the self-serving actions of the Grand Rapids PD, which unilaterally decided its own interests (efficiency, crime solving) were more important than the public’s rights to be free of unreasonable searches and seizures.
Filed Under: 4th amendment, grand rapids, grand rapids pd, identification, michigan, probable cause
Former Employees Say ID.me Grew Too Fast, Got Too Careless With Users And Their Data
from the just-throwing-humans-under-the-growth-bus dept
ID.me hasn’t always been a government contractor powerhouse. For more than a decade, it wasn’t really on anybody’s radar. The company began as a sort of Craigslist for military personnel before morphing into an ID service designed to combat fraud and ensure military members could access the many government programs available to them.
Not exactly the humblest of beginnings (considering the number of active and retired military members the service could potentially access) but ID.me managed to stay off the public radar for most of a decade. Then, the pandemic hit.
Suddenly, there were tons of government programs accessible by those affected by the unexpected rollout of COVID v.19.0. ID.me jumped into the breach, selling dozens of states on its verification software, promising to limit benefit fraud. Unfortunately, its facial recognition tech was underwhelming. But because it was the only (government) game in town, people were stuck using a system that separated them from their unemployment payments while simultaneously cutting them off from any useful tech support.
Some unemployment applicants have said that ID.me’s facial recognition models fail to properly identify them (generally speaking, facial recognition technology is notoriously less accurate for women and people of color). And after their applications were put on hold because their identity couldn’t be verified, many should-be beneficiaries have had to wait days or weeks to reach an ID.me “trusted referee” who could confirm what the technology couldn’t.
As people continued to struggle to access their benefits, the company’s CEO, Blake Hall, continued to claim ID.me’s tech support staff was tip-top while blaming users for their own negative experiences with the company.
In his statement to Motherboard, Hall said that facial recognition failures are not a problem with the technology but with the people using it to verify their identity. “For example, if someone uploads a selfie that only shows half their face.”
By the time that article was published, 21 states were using ID.me as their preferred ID verification provider. ID.me expanded its business quickly after that, hooking up with the IRS while using inflated claims of COVID-related fraud to drive its business.
The IRS soon decided to move away from ID.me as a sole provider for taxpayer verification following this reporting and pressure applied by Senator Ron Wyden. Soon after that, reports began trickling in alleging the explosive demand for ID.me’s services had caught the company unprepared. The dearth of human backstops for its automated ID verification services led to more unemployment fraud — like one person wearing a bad wig availing himself of $900,000 in benefits by duping ID.me’s supposedly awesome facial recognition AI.
Employees unfortunate enough to become part of ID.me’s mid-pandemic, understaffed dystopia have been speaking with Business Insider’s Caroline Haskins. It’s definitely not pretty. Throw that many people into the mix during a period of exponential growth and bad stuff happens.
As it grew rapidly, so did errors, technical hurdles, and strain on its relatively new staff of customer service representatives, nine former ID.me employees told Insider. Helpline queues for the millions of Americans who relied on unemployment benefits in 2020 and 2021 would sometimes number in the thousands because of these strains, they said.
ID.me’s rush to hire and train nearly 1,500 new workers at this time also led to lax verification practices and poor user privacy protections, these people said. Information like passports and social security numbers were often posted in internal Slack channels, and some employees were hired and given access to confidential data without completed background checks. Some chattered about how easy it would be to steal users’ information.
Demands on government benefit recipients escalated during this surge as well. With access to the usual verification methods disappearing rapidly as ID.me expanded its reach, millions of Americans were stuck dealing with an understaffed government contractor that forced benefit seekers to interact with faulty facial recognition tech.
Meanwhile, back at ID.me, the crisis continued. The limited staff was unable to handle the influx of recipients experiencing problems, forcing those who needed help the most to try to get by without any of the financial assistance they were entitled to. Employees who had been on the job for only a few weeks were tasked with training new hires. Still understaffed (and definitely undertrained), employees were given mere seconds to make calls on submitted ID documents.
Document reviewers were expected to rule on a document every 20-30 seconds, video chat agents were expected to verify about 40 people a day, and email representatives were expected to send around 70 emails a day, according to the former employees. Team “leads” would show workers a top-to-bottom ranking of employees based on how many accounts or documents they verified that week, sometimes every day, they said.
You can’t verify actual humans in that amount of time. Placing time limits on activities that are supposed to reduce fraud only makes it more likely actual fraudsters will be able to manipulate the system. Enacting quotas tends to lead to people doing more busywork than actual work, encouraging customer service reps to solve the easy problems and ignore complaints that might be more time-consuming. And, lest this get forgotten in the rush, actual people relying on benefits to survive were at the other end of the arbitrary time limits/quotas set by the company.
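To put those quotas in perspective, here’s some back-of-the-envelope arithmetic (my numbers, assuming an uninterrupted 8-hour shift, which the report doesn’t specify):

```python
# Rough throughput implied by the quoted quotas. The 8-hour shift is an
# assumption; the per-item paces come from the Business Insider report.

SHIFT_SECONDS = 8 * 60 * 60              # one uninterrupted 8-hour shift

seconds_per_document = (20, 30)          # quoted document-review pace
docs_per_shift = [SHIFT_SECONDS // s for s in seconds_per_document]
print(f"document rulings per shift: {docs_per_shift[1]}-{docs_per_shift[0]}")
# -> 960-1440 identity decisions per reviewer per day, with no breaks

video_quota = 40                         # quoted video verifications per day
print(f"minutes per video verification: {8 * 60 / video_quota:.0f}")
# -> about 12 minutes to greet someone, inspect their documents, and rule
```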
Not only was ID.me failing to verify benefit recipients in a timely fashion, it was apparently incapable of verifying its own employees before handing them access to millions of Americans’ personal information.
One former employee told Insider that training for the job was remote, and everyone was allowed to take company-issued laptops home with them, but after a couple days of training, ID.me told this person they had to complete training in the office because the company hadn’t finished running background checks on the new hires. This shocked them.
“It was disturbing to me that my background check wasn’t completed and that I was allowed to take home a computer with people’s information on it,” the former employee said. “I could have, as I was going through my training, been taking pictures of people’s things. Nobody was watching me.”
You can only get away with this sort of thing when your user base is locked in. And it very much was in multiple states, with access to state and federal benefits locked up behind ID.me’s verification tech. While it’s understandable governments will seek the assistance of private contractors to handle things like verification, making companies like ID.me the only option separates taxpayers from their benefits and leaves them at the mercy of a company that was entirely unprepared to handle the amount of business it had managed to drum up during the pandemic.
What ID.me should have done is rein in its expansion plans if it was incapable of handling increased demand. And more government agencies should have ensured the company was capable of handling millions of concurrent users before spending taxpayers’ money on services that left taxpayers high and dry.
Filed Under: facial recognition, government contracts, identification, privacy
Companies: id.me
Court: Just Because An Anonymous Yelp Reviewer Is Mean, Doesn't Mean You Get To Unmask The Reviewer
from the anonymity-matters dept
I’ve never understood why so many doctors sue over bad reviews, but it just keeps happening. Dr. Muhammad Mirza has built up something of a reputation for suing people who leave bad reviews on Yelp — and has been successful in stifling speech:
Dr Mirza says he’s already won or reached settlements with three reviewers, forcing them to take down the false review and pay an undisclosed amount of money.
As that article notes, he was able to get courts to force Yelp to turn over the names of anonymous reviewers in the past, and it appears that has emboldened him to continue suing reviewers.
However, in one of his more recent cases, thankfully, a court has pushed back on the unmasking attempt. This was yet another case where Dr. Mirza had to go to court against Yelp to try to get the company to unmask an anonymous reviewer who wrote:
“Worst experience I’ve ever had! Woke up looking like a monster!!! Cheap product and he’s absolutely not experienced nor does he care!!!!!”
As Yelp pointed out to the court, this statement clearly is not defamatory as there are no statements of fact that can be proven true or false — it’s all opinion. And, thankfully, anonymous speech is protected under the 1st Amendment. In a recent ruling in NY the court agreed and rejected Dr. Mirza’s attempts to unmask that reviewer.
The ruling is pretty short, but worth reading. It notes the importance of protecting anonymous speech. That does not mean that anyone who is anonymous can get away with saying anything, but there is a reasonably high bar for unmasking such speech:
Anonymous Internet speech is protected by the First Amendment. See In re Anonymous Online Speakers, 661 F.3d 1168, 1173-77 (9th Cir. 2011); accord Rich v. Butowsky, No. 20 Misc. 80081, 2020 WL 5910069, at *3 (N.D. Cal. Oct. 6, 2020). Anonymous speech “is not unlimited, however, and the degree of scrutiny varies depending on the circumstances and the type of speech at issue.” Anonymous Online Speakers, 661 F.3d at 1173; accord Butowsky, 2020 WL 5910069, at *3. Courts in the Ninth Circuit have required pleadings to meet a variety of standards before requiring disclosure of an anonymous speaker’s identity. Anonymous Online Speakers, 661 F.3d at 1175-76 (collecting cases) (noting that some cases require plaintiff to make a prima facie showing of its claim, that others rely on a motion to dismiss or good faith standard, while others rely on a standard somewhere between the motion to dismiss and the prima facie standard). Plaintiffs argue the Court should apply the First Amendment standard set forth in Highfields Capital Mgmt., L.P. v. Doe, 385 F. Supp. 2d 969 (N.D. Cal. 2005), and Yelp does not object to application of the test. Because Highfields is persuasive on this issue, it will be applied. See Butowsky, 2020 WL 5910069, at *3 (applying the First Amendment standard put forward by the parties); see also Music Grp. Macao Commercial Offshore Ltd. v. Does, 82 F. Supp. 3d 979, 983 (N.D. Cal. 2015) (concluding Highfields provided the correct standard among “the developing tests in the area of anonymous online speech,” where challenged speech was (1) derogatory statements about a corporate official and (2) criticism of plaintiffs’ business).
Under the Highfields test, a party seeking enforcement of a subpoena must first make out “a real evidentiary basis for believing that the defendant has engaged in wrongful conduct that has caused real harm to the interests of the plaintiff.” Highfields, 385 F. Supp. 2d at 970. The Ninth Circuit has characterized this as a requirement for the plaintiff to establish a prima facie case for its claims. Anonymous Online Speakers, 661 F.3d at 1175. If a plaintiff successfully makes a prima facie case, the court must next “assess and compare the magnitude of the harms that would be caused” to (1) the defendant’s First Amendment interests and (2) the plaintiff’s commercial interests. Highfields, 385 F. Supp. 2d at 976, R&R adopted, 385 F. Supp. 2d at 971. If such an assessment reveals that disclosing the defendant’s identity “would cause relatively little harm to the defendant’s First Amendment and privacy rights,” but is “necessary to enable [the] plaintiff to protect against or remedy serious wrongs,” then the court should allow the disclosure.
In this case, the initial complaint fails to meet the standard of claiming defamation.
WHEREAS, the Complaint’s defamation claim arises under New York law. The elements of a cause of action for defamation are: “(a) a false statement that tends to expose a person to public contempt, hatred, ridicule, aversion, or disgrace, (b) published without privilege or authorization to a third party, (c) amounting to fault as judged by, at a minimum, a negligence standard, and (d) either causing special harm or constituting defamation per se.” Braunstein v. Day, 144 N.Y.S.3d 624, 625 (2d Dep’t 2021) (internal quotation marks omitted). Statements of opinion are not actionable, as “[a]n opinion cannot be proven false and therefore does not give rise to liability for defamation purposes.” Gottwald v. Sebert, 148 N.Y.S.3d 37, 47 (1st Dep’t 2021). Statements must be viewed in context, and where a communication has a “loose, figurative or hyperbolic tone” that “suggest[s] to a reasonable reader that the author was merely expressing his opinion based on a negative business interaction with [a] plaintiff[],” that statement is one of opinion. Torati v. Hodak, 47 N.Y.S.3d 288, 290 (1st Dep’t 2017) (internal quotation marks and alterations omitted). Courts must also be mindful that “readers give less credence to allegedly defamatory remarks published on the Internet than to similar remarks made in other contexts.” Id. (internal quotation marks and alterations omitted).
WHEREAS, Plaintiffs have not made a sufficient showing of a prima facie defamation claim under New York law, as the Review, read in context, would be perceived by a reasonable person to be nothing more than a matter of personal opinion as to the quality of Plaintiffs’ products and services. New York courts have consistently declined to find anonymous reviews analogous to the Review actionable for purposes of defamation. See id. (concluding that negative comments anonymously posted on consumer review websites, describing plaintiff as a “bad apple,” “incompetent and dishonest,” and a “disastrous businessman” were not actionable); Woodbridge Structured Funding, LLC v. Pissed Consumer, 6 N.Y.S.3d 2, 3 (1st Dep’t 2015) (finding online review claiming defendants “Lie To Their Clients” and “will forget about you and . . . all the promises they made to you” non-defamatory); Sandals Resorts Int’l Ltd. v. Google, Inc., 925 N.Y.S.2d 407, 410-11 (1st Dep’t 2011) (email criticizing plaintiff’s operations in Jamaica, despite containing specific factual allegations, was still a non-actionable opinion); see also Mirza v. Amar, 513 F. Supp. 3d 292, 299 (E.D.N.Y. 2021) (rejecting Plaintiff Mirza’s claims that a separate statement similar to the Review was defamatory).
And, importantly, the review did not have statements of fact, as it was clearly all opinion:
Plaintiffs next argue that two of the Review’s claims — that Mirza is “not experienced” and uses “[c]heap” products — are actionable statements of fact. This argument is unpersuasive because where “some of the statements are based on undisclosed, unfavorable facts known to the writer, the disgruntled tone, anonymous posting, and predominant use of statements that cannot be definitively proven true or false, supports the finding that the challenged statements are only susceptible of a nondefamatory meaning, grounded in opinion.” Woodbridge, 6 N.Y.S.3d at 3. Because Plaintiffs have not made a prima facie case of defamation, their request for the identity of the John Doe defendant is improper.
I understand that it sucks to get negative reviews online, and that not all online reviews are truthful. But that does not mean you get to automatically uncloak anonymous critics, nor does it mean you get to sue them for defamation.
In the meantime, kudos to Yelp for fighting for the rights of its reviewers, rather than just rolling over and handing out the info like lots of sites might do.
Filed Under: 1st amendment, anonymity, defamation, identification, muhammad mirza, new york, reviews, slapp
Companies: yelp
Moving the Web Beyond Third-Party Identifiers
from the privacy-and-cookies dept
(This piece overlaps a bit with Mike’s piece from yesterday, “How the Third-Party Cookie Crumbles”; Mike graciously agreed to run this one anyway, so that it can offer additional context for why Google’s news can be seen as a meaningful step forward for privacy.)
Privacy is a complex and critical issue shaping the future of our internet experience and the internet economy. This week there were two major developments: first, the State of Virginia passed a new data protection law, the Consumer Data Protection Act (CDPA), which has been compared to Europe’s General Data Protection Regulation; and second, Google announced that it would move away from all forms of third-party identifiers for Web advertising, rather than look to replace cookies with newer techniques like hashes of personally identifiable information (PII). The ink is still drying on the Virginia law and its effective date isn’t until 2023, meaning it may be preempted by federal law if this Congress moves a privacy bill forward. But Google’s action will change the market immediately. While the road ahead is long and there are many questions left to answer, moving the Web beyond cross-site tracking is a clear step forward.
We’re in the midst of a global conversation about what the future of the internet should look like, across many dimensions. In privacy, one huge part of that discussion, it’s not good enough in 2021 to say that user choice means “take it or leave it”; companies are expected to provide full-featured experiences with meaningful privacy options, including for advertising-based services. These heightened expectations—some set by law, some by the market—challenge existing assumptions around business models and revenue streams in a major way. As a result, the ecosystem must evolve away from its current state toward a future that offers a richer diversity of models and user experiences.
Google’s Privacy Sandbox, in particular, could be a big step forward along that evolutionary path. It’s plausible that a combination of subscription services, contextual advertising and more privacy-preserving techniques for learning can collectively match or even grow the pie for advertising revenue beyond what it is today, while providing users with compelling and meaningful choices that don’t involve cross-site tracking. But that can’t be determined until new services are built, offered and measured at scale.
And sometimes, to make change happen, band-aids need to be ripped off. By ending its support for third-party identifiers on the Web, that’s what Google is doing. Critics of the move will focus on the short-term impact for those smaller advertisers who currently rely on third-party identifiers and tracking to target specific audiences, and will need to adapt their methods and strategies significantly. That concern is understandable; level playing fields are important, and centralization in the advertising ecosystem is widely perceived to be a problem. However, the writing has been on the wall for a long time for third-party identifiers and cross-site tracking. Firefox blocked third-party cookies by default in September 2019; Apple’s Safari followed suit in April 2020—Firefox first made moves to block third-party cookies as far back as 2013, but it was, then, an idea ahead of its time. And the problem was never the cookies per se; it was the tracking they powered.
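For readers who want the mechanics, the distinction the browsers are drawing is about which site’s cookies ride along with a request. A minimal illustration, using hypothetical domains:

```python
# Why third-party cookies enable cross-site tracking: the same tracker
# cookie is attached to requests made from *different* first-party sites,
# letting the tracker join one user's browsing across the web. All domains
# here are hypothetical.

first_party = (
    "GET /cart HTTP/1.1\r\n"
    "Host: shop.example\r\n"
    "Cookie: session=abc123\r\n"   # set by shop.example itself; only ever
)                                  # sent back to shop.example

third_party = [
    # Both pages embed a tracking pixel served from tracker.example, so the
    # browser attaches tracker.example's cookie to both requests:
    "GET /px?ref=news.example HTTP/1.1\r\nHost: tracker.example\r\nCookie: uid=u-42",
    "GET /px?ref=shop.example HTTP/1.1\r\nHost: tracker.example\r\nCookie: uid=u-42",
]

# Blocking third-party cookies means 'uid' is never set or sent in those
# embedded contexts, breaking the cross-site join -- the cookie was never
# the problem; the tracking it powered was.
for request in third_party:
    print(request)
```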
As for leveling the playing field for the future, working through standards bodies is an established approach for Web companies to share information and innovate collectively. Google’s engagement with the W3C should, hopefully, help open doors for other advertisers, limiting any reinforcement effects for Google’s position in Web advertising.
Further, limits on third-party tracking do not apply to first-party behavior, where a company tracks the pages on its own site that a user visits, for example when a shopping website remembers products that a user viewed in order to recommend other items of potential interest. While first-party relationships are important and offer clear positive value, it’s also not hard to imagine privacy-invasive acts that use solely first-party information. But Google’s moves must be contextualized within the backdrop of rapidly evolving privacy law—including the Virginia data protection law that just passed. From that perspective, they’re not a delaying tactic nor a substitute for legislation, but rather a complementary piece, and in particular a way to catalyze much-needed new thinking and new business models for advertising.
I don’t think it’s possible for Google to put privacy advocates’ minds at ease concerning its first-party practices through voluntary action. To stop fully capitalizing on its visibility into activity within its network would leave so much money on the table that Google might be violating its fiduciary duty as a public company to serve its shareholders’ interest. If it cleared that hurdle and stopped anyway, what would prevent the company from going back and doing it later? The only sustainable answer for first-party privacy concerns is legislation. And that kind of legislation will struggle to be feasible until new techniques and new business models have been tested and built. That, more than anything, is the dilemma I think Google sees, and is working constructively to address.
Often, private sector privacy reforms are derided as merely scratching the surface of a deeper business model problem. While there’s much more to be done, moving beyond third-party identifiers goes deeper, and deserves broad attention and engagement to help preserve good balances going forward.
Filed Under: 3rd party cookies, cookies, identification, privacy
Companies: google
Secret Agents Implicated In The Poisoning Of Opposition Leader Alexey Navalny Identified Thanks To Russia's Black Market In Everybody's Personal Data
from the poor-data-protection-is-bad-for-Vlad dept
Back in August, the Russian opposition leader Alexey Navalny was poisoned on a flight to Moscow. Despite initial doubts — and the usual denials by the Russian government that Vladimir Putin was involved — everyone assumed it had been carried out by the country’s FSB, successor to the KGB. Remarkable work by the open source intelligence site Bellingcat, which Techdirt first wrote about in 2014, has now established beyond reasonable doubt that FSB agents were involved:
A joint investigation between Bellingcat and The Insider, in cooperation with Der Spiegel and CNN, has discovered voluminous telecom and travel data that implicates Russia’s Federal Security Service (FSB) in the poisoning of the prominent Russian opposition politician Alexey Navalny. Moreover, the August 2020 poisoning in the Siberian city of Tomsk appears to have happened after years of surveillance, which began in 2017 shortly after Navalny first announced his intention to run for president of Russia.
That’s hardly a surprise. Perhaps more interesting for Techdirt readers is the story of how Bellingcat pieced together the evidence implicating Russian agents. The starting point was finding passengers who booked similar flights to those that Navalny took as he moved around Russia, usually earlier ones to ensure they arrived in time but without making their shadowing too obvious. Once Bellingcat had found some names that kept cropping up too often to be a coincidence, the researchers were able to draw on a unique feature of the Russian online world:
Due to porous data protection measures in Russia, it only takes some creative Googling (or Yandexing) and a few hundred euros worth of cryptocurrency to be fed through an automated payment platform, not much different than Amazon or Lexis Nexis, to acquire telephone records with geolocation data, passenger manifests, and residential data. For the records contained within multi-gigabyte database files that are not already floating around the internet via torrent networks, there is a thriving black market to buy and sell data. The humans who manually fetch this data are often low-level employees at banks, telephone companies, and police departments. Often, these data merchants providing data to resellers or direct to customers are caught and face criminal charges. For other batches of records, there are automated services either within websites or through bots on the Telegram messaging service that entirely circumvent the necessity of a human conduit to provide sensitive personal data.
The process of using these leaked resources to establish the other agents involved in the surveillance and poisoning of Navalny, and their real identities, since they naturally used false names when booking planes and cars, is discussed in fascinating detail on the Bellingcat site. But the larger point here is that strong privacy protections are good not just for citizens, but for governments too. As the Bellingcat researchers put it:
While there are obvious and terrifying privacy implications from this data market, it is clear how this environment of petty corruption and loose government enforcement can be turned against Russia’s security service officers.
As well as providing Navalny with confirmation that the Russian government at the highest levels was probably behind his near-fatal poisoning, this latest Bellingcat analysis also achieves something else that is hugely important. It has given privacy advocates a really powerful argument for why governments — even the most retrogressive and oppressive — should be passing laws to protect the personal data of every citizen effectively. Because if they don’t, clever people like Bellingcat will be able to draw on the black market resources that inevitably spring up, to reveal lots of things those in power really don’t want exposed.
Follow me @glynmoody on Twitter, Diaspora, or Mastodon.
Filed Under: alexey navalny, black market, identification, poison, russia
Appeals Court Says An IP Address Is 'Tantamount To A Computer's Name' While Handing The FBI Another NIT Win
from the [extremely-superintendent-chalmers-voice]-good-lord dept
Fortunately, this profoundly wrong conclusion is buried inside a decision that’s merely off-base. If it were the crux of the case, we might have witnessed a rush of copyright trolls to the Eleventh Circuit to take advantage of the panel’s wrongness.
But this decision is not about IP addresses… not entirely. They do play a part. The Eleventh Circuit Court of Appeals is the latest federal appellate court to deny suppression motions filed over the FBI’s use of an invalid warrant to round up suspected child porn consumers. The “Playpen” investigation involved the FBI seizing a dark web child porn site and running it for a few weeks while it sent out malware to anyone who visited the site. The FBI’s “Network Investigative Technique” (NIT) sent identifying info back to the FBI, including IP addresses and an assortment of hardware data.
As the court notes in its decision [PDF], pretty much every other appeals court has already gotten in on this action. (Spoiler alert: every other appeals court has granted the FBI “good faith” even though the DOJ was actively pursuing a law change that would make the actions it took in this case legal. The violation of jurisdiction limitations by the FBI’s NIT was very much not legal when it occurred.)
By our count, we become today the eleventh (!) court of appeals to assess the constitutionality of the so-called “NIT warrant.” Although the ten others haven’t all employed the same analysis, they’ve all reached the same conclusion—namely, that evidence discovered under the NIT warrant need not be suppressed. We find no good reason to diverge from that consensus here…
That being said, there are some interesting issues discussed in the opinion, but here’s where it kind of falls apart. The Eleventh Circuit may be joining ten (!) other circuits in upholding the FBI’s illegal search, but it’s the first to make this preposterous claim while doing so. (h/t Orin Kerr)
In the normal world of web browsing, an internet service provider—Comcast or AT&T, for example—assigns an IP address to every computer that it provides with internet access. An IP address is a unique numerical identifier, tantamount to a computer’s name.
That’s… just completely wrong. An IP address doesn’t identify a device any more than it identifies a person or location. It is very definitely not “tantamount to a computer’s name.” The court uses this erroneous conclusion for pretty benign ends — to veto the DOJ’s belated attempt to rebrand its NIT malware as a “tracking device” in order to salvage its invalid search warrant. Even so, this slip-up is embarrassing, especially in a decision that contains a great deal of technical discussion.
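To see why the court’s analogy fails, consider what a machine actually knows about its own address. A quick sketch, using only the standard library (the addresses in the comments are illustrative):

```python
# An IP address is not "a computer's name." Behind a typical home NAT, a
# remote server sees the router's public address, which is shared by every
# device on the network and can change when the ISP reassigns the lease.

import socket

def local_address() -> str:
    """Return the address of the interface used for outbound traffic."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # UDP "connect" just selects a route; no packets are actually sent.
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    finally:
        s.close()

print(local_address())  # e.g. 192.168.1.23 -- a private RFC 1918 address
# A remote website would instead log the router's public IP, e.g.
# 203.0.113.7 -- the same value for every laptop, phone, and guest
# connected to that network.
```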
But I suppose all’s well that ends unsurprisingly. The Eleventh Circuit agrees with the other circuits: the warrant was invalid from the moment it was issued, as it allowed the FBI to perform searches outside of the jurisdiction in which it was granted. But there’s no remedy for the two alleged child porn consumers. As the court states here, the error was the magistrate judge’s, who should never have signed a warrant granting extra-jurisdictional searches. According to the Eleventh Circuit, the FBI agent had every reason to believe the granted warrant was valid and that the searches could be executed. No one’s evidence is getting suppressed and no one’s convictions are being overturned.
The problem with this assumption is that it glosses over the issue of the DOJ’s Rule 41 politicking, which was well underway when this FBI agent approached a judge with a warrant that asked permission to violate a rule that hadn’t been rewritten yet. To call this “good faith” presumes a lot about the FBI and its investigators. It concludes they were unaware of the DOJ’s petitioning of the US court system to rewrite Rule 41 when everything about this case points to the fact that these investigators knew about the proposed rule change and knew this NIT deployment wasn’t legal at the point they handed the affidavit to the magistrate.
In the end, it’s another unearned win for the FBI. And it’s one that comes paired with a tech gaffe that’s going to sound very appealing (!) to IP trolls.
Filed Under: 11th circuit, computer names, identification, ip address
Strike 3 Gets Another Judge To Remind It That IP Addresses Aren't Infringers
from the ip-freely dept
While copyright trolling has continued to be a scourge across many countries, America included, there have finally been signs of the courts beginning to push back against them. One of the more nefarious trolls, Strike 3 Holdings, masquerades as a pornography company while it actually does the far dirtier work of bilking internet service account holders based on non-evidence. Armed typically with nothing more than IP addresses, the whole trolling enterprise relies on using those IP addresses to have ISPs unmask their own customers, under the theory that those customers are the most likely infringers of Strike 3 content. The courts have finally begun catching on to how faulty the very premise is, with more than one judge pushing back on IP addresses even being actual evidence.
It’s a list that continues to grow, with one judge in Florida apparently taking issue with the use of IP addresses as evidence entirely.
In February, Judge Ungaro was assigned a case filed by the adult entertainment company “Strike 3 Holdings,” which has filed hundreds of lawsuits over the past several months.
The company accused IP-address “72.28.136.217” of sharing its content through BitTorrent without permission. The Judge, however, was reluctant to issue a subpoena. She asked the company how the use of geolocation and other technologies could reasonably pinpoint the identity and location of the alleged infringer.
Strike 3 went on to boast that its IP address matching was roughly 95% effective. After all, as Blackstone’s Ratio goes: Better to let ten guilty men go free than to let any more than five out of one-hundred suffer.
That, of course, is not how the saying goes. Instead, the idea is supposed to be that justice is based on good, quality evidence that points directly to the accused. And, in addition to the aforementioned flawed IP-location issue, Strike 3 flat-out admits that this IP-to-user identification doesn’t actually tell who infringed what.
Strike 3 further admitted that, at this point, it doesn’t know whether the account holder is the actual copyright infringer. However, the company believes that this is the most plausible target and says it will try to find out more once the identity of the person in question is revealed.
That’s actually remarkably honest as far as copyright trolls go: we’re not entirely sure our IP address is correctly identified, and that IP address doesn’t actually tell us who the infringer is, but we promise to try to get actual evidence if you help us with this non-evidence. Still, it doesn’t make much of a case for the court ordering anything at all, does it?
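Taking Strike 3’s own 95% figure at face value, the scale of its litigation makes the error math ugly fast. A rough illustration (the lawsuit count is an assumption; the article only says “hundreds”):

```python
# Expected mismatches if the claimed 95% IP-to-subscriber match rate is
# taken at face value. The lawsuit count is an assumption for illustration.

match_rate = 0.95
lawsuits = 1000                     # "hundreds of lawsuits" -- rounded up

wrong_household = lawsuits * (1 - match_rate)
print(f"expected subpoenas aimed at the wrong household: {wrong_household:.0f}")
# -> ~50 wrongly targeted account holders, before even asking which person
#    within the *right* household actually did the downloading.
```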
That’s why it really shouldn’t be a surprise when you get a judge stating things like the following.
“There is nothing that links the IP address location to the identity of the person actually downloading and viewing Plaintiff’s videos, and establishing whether that person lives in this district,” Judge Ungaro writes.
The order points out that an IP-address alone can’t identify someone. As such, it can’t accurately pinpoint the person who allegedly downloaded the copyright infringing content.
“For example, it is entirely possible that the IP address belongs to a coffee shop or open Wi-Fi network, which the alleged infringer briefly used on a visit to Miami,” Judge Ungaro notes. “Even if the IP address were located within a residence in this district, the geolocation software cannot identify who has access to that residence’s computer and who actually used it to infringe Plaintiff’s copyright,” she adds.
Exactly. And the rules of evidence aren’t there just for the sake of letting porn-watchers go free on technicalities. They matter. If we were to allow copyright trolls to substitute shoddy data points like IP addresses for actual evidence, and if courts were to accept that substitute, then what we’d really be allowing is a substitute for justice. The public doesn’t want that.
And, it would appear, more and more judges are finally realizing that they don’t want that either.
Filed Under: copyright, copyright troll, identification, ip address, subpoena
Companies: strike 3 holdings
Indian Supreme Court Rules Aadhaar Does Not Violate Privacy Rights, But Places Limits On Its Use
from the mixed-result dept
Techdirt wrote recently about what seems to be yet another problem with India’s massive Aadhaar biometric identity system. Alongside these specific security issues, there is the larger question of whether Aadhaar as a whole is a violation of Indian citizens’ fundamental privacy rights. That question was made all the more pertinent in the light of the country’s Supreme Court ruling last year that “Privacy is the constitutional core of human dignity.” It led many to hope that the same court would strike down Aadhaar completely following constitutional challenges to the project. However, in a mixed result for both privacy organizations and Aadhaar proponents, India’s Supreme Court has handed down a judgment that the identity system does not fundamentally violate privacy rights, but that its use must be strictly circumscribed. As The New York Times explains:
The five-judge panel limited the use of the program, called Aadhaar, to the distribution of certain benefits. It struck down the government’s use of the system for unrelated issues like identifying students taking school exams. The court also said that private companies like banks and cellphone providers could not require users to prove their identities with Aadhaar.
The majority opinion of the court said that an Indian’s Aadhaar identity was unique and “unparalleled” and empowered marginalized people, such as those who are illiterate.
The decision affects everything from government welfare programs, such as food aid and pensions, to private businesses, which have used the digital ID as a fast, efficient way to verify customers’ identities. Some states, such as Andhra Pradesh, had also planned to integrate the ID system into far-reaching surveillance programs, raising the specter of widespread government spying.
In essence, the Supreme Court seems to have felt that although Aadhaar’s problems were undeniable, its advantages, particularly for India’s poorest citizens, outweighed those concerns. However, its ruling also sought to limit function creep by stipulating that Aadhaar’s compulsory use had to be restricted to the original aim of distributing government benefits. Although that seems a reasonable compromise, it may not be quite as clear-cut as it seems. The Guardian writes that it still may be possible to use Aadhaar for commercial purposes:
Sharad Sharma, the co-founder of a Bangalore-based technology think tank which has worked closely with Aadhaar’s administrators, said Wednesday’s judgment did not totally eliminate that vision for the future of the scheme, but that private use of Aadhaar details would now need to be voluntary.
“Nothing has been said [by the court] about voluntary usage and nothing has been said about regulating bodies mandating it for services,” Sharma said. “So access to private parties for voluntary use is permitted.”
That looks to be a potentially large loophole in the Supreme Court’s attempt to keep the benefits of Aadhaar while stopping it turning into a compulsory identity system for accessing all government and business services. No doubt in the coming years we will see companies exploring just how far they can go in demanding a “voluntary” use of Aadhaar, as well as legal action by privacy advocates trying to stop them from doing so.
Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+
Filed Under: aadhaar, biometric, id, identification, india, privacy