
TechDirt 🕸

Court (For Now) Says NY Times Can Publish Project Veritas Documents

2 years 2 months ago

We've talked about the hypocrite grifters who run Project Veritas, who, even when they have legitimate concerns about attacks on their own free speech, ran to court to try to silence the NY Times. Bizarrely, a NY judge granted Project Veritas' demand for prior restraint against the NY Times, falsely claiming that attorney-client material could not be published.

The NY Times appealed that ruling and now a court has... not overturned the original ruling, but for now said that the NY Times can publish the documents, saying that it will not enforce the original ruling until an appeal can be heard. This is... better than nothing, but fully overturning the original ridiculous ruling would have been much better. Because it was clearly prior restraint. But, at least for now, the prior restraint will not be enforced.

Still, the response from Project Veritas deserves separate comment, because it's just naively stupid:

In a phone interview on Thursday, Mr. O’Keefe said: “Defamation is not a First Amendment-protected right; publishing the other litigants’ attorney-client privileged documents is not a protected First Amendment right.”

While it's accurate that defamation is not protected by the 1st Amendment, he's wrong on the second point: publishing attorney-client communications is -- in most cases -- very much protected. He's fuzzing the lines here, basically arguing that because Project Veritas is, separately, suing the NY Times, that bans the NY Times from publishing any attorney-client privileged material it obtains via standard reporting tactics.

But that fuzzing suggests something that just isn't true: that there's some exception to the 1st Amendment for publishing attorney-client materials. That's wrong. The attorney-client privilege concerns having to disclose certain documents to another party in litigation. If you can successfully show that the documents are privileged, they don't need to be disclosed to the other party. That's the extent of the privilege. It has no bearing whatsoever on whether someone else who obtains those materials through other means has a right to publish them. Of course they do, and the 1st Amendment protects that.

And I should just note that, considering Project Veritas' main method of operating is trying to obtain private documents or record secret conversations, it is bizarre beyond belief that Project Veritas is literally claiming that publishing private material falls outside 1st Amendment protection. That position seems incredibly likely to come back and bite Project Veritas at a later time. Of course, considering they're hypocritical grifters with no fundamental principles beyond "attack people with views we don't like," I guess it's not surprising that their viewpoint on free speech and the 1st Amendment shifts depending on who it's protecting.

Mike Masnick

Yet Another Israeli Malware Manufacturer Found Selling To Human Rights Abusers, Targeting iPhones

2 years 2 months ago

Exploit developer NSO Group may be swallowing up the negative limelight these days, but let's not forget the company has plenty of competitors. The US government's blacklisting of NSO arrived with a concurrent blacklisting of malware purveyor Candiru -- another Israeli firm with a long list of questionable customers, including Uzbekistan, Saudi Arabia, the United Arab Emirates, and Singapore.

Now there's another name to add to the list of NSO-alikes. And (perhaps not oddly enough) this company also calls Israel home. Reuters was the first to report on this NSO competitor's ability to stay competitive in the international malware race.

A flaw in Apple's software exploited by Israeli surveillance firm NSO Group to break into iPhones in 2021 was simultaneously abused by a competing company, according to five people familiar with the matter.

QuaDream, the sources said, is a smaller and lower profile Israeli firm that also develops smartphone hacking tools intended for government clients.

Like NSO, QuaDream sold a "zero-click" exploit that could completely compromise a target's phone. We're using the past tense not because QuaDream no longer exists, but because this particular exploit (the basis for NSO's FORCEDENTRY) has been patched into uselessness by Apple.

But, like other NSO competitors (looking at you, Candiru), QuaDream has no interest in providing statements, a friendly public face for inquiries from journalists, or even a public-facing website. Its Tel Aviv office seemingly has no occupants and email inquiries made by Reuters have gone ignored.

QuaDream doesn't have much of a web presence. But that's changing, due to this report, which builds on earlier reporting on the company by Haaretz and Middle East Eye. But even the earlier reporting doesn't go back all that far: June 2021. That report shows the company selling a hacking tool called "Reign" to the Saudi government. But that sale wasn't accomplished directly, apparently in a move designed to further distance QuaDream from both the product being sold and the government it sold it to.

According to Haaretz, Reign is being sold by InReach Technologies, Quadream's sister company based in Cyprus, while Quadream runs its research and development operations from an office in the Ramat Gan district in Tel Aviv.

[...]

InReach Technologies, its sales front in Cyprus, according to Haaretz, may be being used in order to fly under the radar of Israel’s defence export regulator.

Reign is apparently the equivalent of NSO's Pegasus, another powerful zero-click exploit that appears to still be able to hack most iPhone models. But it's not a true equivalent. According to this report, the tool can be rendered useless by a single system software update and, perhaps more importantly, cannot be remotely terminated by the entity deploying it, should the infection be discovered by the target. This means targeted users have the opportunity to learn a great deal about the exploit, its deployment, and possibly where it originated.

That being said, it's not cheap:

One QuaDream system, which would have given customers the ability to launch 50 smartphone break-ins per year, was being offered for $2.2 million exclusive of maintenance costs, according to the 2019 brochure. Two people familiar with the software's sales said the price for REIGN was typically higher.

With more firms in the mix -- and more scrutiny from entities like Citizen Lab -- it's only a matter of time before information linking NSO competitors to human rights abuses and indiscriminate targeting of political enemies threatens to make QuaDream and Candiru household names. And, once again, it's time to point out this all could have been avoided by refusing to sell powerful hacking tools to human rights abusers who were obviously going to use the spyware to target critics, dissidents, journalists, ex-wives, etc. That QuaDream chose to sell to countries like Saudi Arabia, Singapore, and Mexico pretty much guarantees reports of abusive deployment will surface in the future.

Tim Cushing

Surprise: U.S. Cost Of Ripping Out And Replacing Huawei Gear Jumps From $1.8 To $5.6 Billion

2 years 2 months ago

So we've noted that a lot of the U.S. politician accusations that Huawei uses its network hardware to spy on Americans on behalf of the Chinese government are lacking in the evidence department. The company's been on the receiving end of a sustained U.S. government ban based on accusations that have never actually been proven publicly, levied by a country (the United States) with a long, long history of doing exactly what it accuses Huawei of doing.

To be clear, Huawei is a terrible company. It has been happy to provide IT and telecom support to the Chinese government as it wages genocide against ethnic minorities. It has also been caught helping some African governments spy on the press and political opponents. And it may very well have helped the Chinese government spy on Americans. So it's hard to feel too bad about the company.

At the same time, if you're going to levy accusations (like "Huawei clearly spies on Americans") you need to provide public evidence. And we haven't. Eighteen months of investigations found nothing. That didn't really matter much to the FCC (under Trump and Biden) or Congress, which ordered that U.S. ISPs and network operators rip out all Huawei gear and replace it at an estimated cost of $1.8 billion. Yet just a few years later, the actual cost to replace this gear has already ballooned to $5.6 billion and is likely to get higher:

"The FCC has told Congress that applications to The Secure and Trusted Communications Networks Reimbursement Program have generated requests totaling about $5.6 billion – far more than the allocated funding. The program was established to reimburse providers with 10 million or fewer customers who must remove Huawei Technologies Company and ZTE equipment."

That's quite a windfall for companies not named Huawei, don't you think?

My problem with these efforts has always been a nuanced one. I have no interest in defending a shitty global telecom gear maker with an atrocious human rights record which may very well prove to be a surveillance lackey for the Chinese government. Yet at the same time, domestic companies like Cisco have, for much of the last decade, leaned on unsubstantiated allegations of spying to shift market share in their favor. DC is flooded with lobbyists who can easily exploit both xenophobia and intelligence worries to their tactical advantage, then bury the need for evidence under ambiguous claims of national security:

"What happens is you get competitors who are able to gin up lawmakers who are already wound up about China,” said one Hill staffer who was not authorized to speak publicly about the matter. “What they do is pull the string and see where the top spins.”

But some experts say these concerns are exaggerated. These experts note that much of Cisco’s own technology is manufactured in China."

So my problem here isn't necessarily that Huawei doesn't deserve what's happening to it. My problem here is generally a lack of transparency in a process that's heavily dictated by lobbyists, who can hide any need for evidence behind national security claims. This creates an environment where decisions are made on a "noble and patriotic basis" that wind up being beyond common sense, reproach, and oversight. That's a nice breeding ground for fraud.

My other problem is the hypocrisy of a country that doesn't believe in limitations on spying, complaining endlessly about spying, without modifying any of its own, very similar behaviors. AT&T has been proven to be directly tethered to the NSA to the point where it's literally impossible to determine where one ends and the other begins. Yet were another country to ban AT&T from doing business there, the heads of the very same folks breathlessly concerned about surveillance ethics would explode. What makes us beyond reproach here? Our ethical track record?

And my third problem is that the almost myopic focus on Huawei has been so massive that we've failed to take on numerous other privacy and security issues -- whether that's the lack of a meaningful federal privacy law, the rampant security and privacy issues inherent in the Internet of Things space (where Chinese-made hardware is everywhere), or election security -- with anywhere close to the same level of urgency. These are all equally important issues, all exploited by Chinese intelligence, that see a small fraction of the hand-wringing and action reserved for issues like Huawei.

Again, none of this is to defend Huawei or deny it's a shitty company with dubious ethics. But the lack of transparency or skepticism creates an environment ripe for fraud and myopia by policymakers who act as if the entirety of their efforts is driven by the noblest and most patriotic of intentions. And, were I a betting man, I'd wager this whole rip and replace effort makes headlines for all the wrong reasons several years down the road.

Karl Bode

Daily Deal: The Complete GameGuru Unlimited Bundle

2 years 2 months ago

GameGuru is a non-technical and fun game maker that offers an easy, enjoyable and comprehensive game creation process that is designed specifically for those who are not programmers or designers/artists. It allows you to build your own game world with easy to use tools. Populate your game by placing down characters, weapons, and other game items, then press one button to build your game, and it's ready to play and share. GameGuru is built using DirectX 11 and supports full PBR rendering, meaning your games can look great and take full advantage of the latest graphics technology. The bundle includes hundreds of royalty-free 3D assets. It's on sale for $50.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Daily Deal

Senator Blumenthal, After Years Of Denial, Admits He's Targeting Encryption With EARN IT

2 years 2 months ago

Senator Richard Blumenthal has now admitted that EARN IT is targeting encryption -- something he denied for two years before finally just coming out and saying it.

Since the very beginning many of us have pointed out that the EARN IT Act will undermine encryption (as well as other parts of the internet). Senator Richard Blumenthal, the lead sponsor on the bill, has insisted over and over again that the bill has nothing to do with encryption. Right after the original bill came out, when people called this out, Blumenthal flat out said "this bill says nothing about encryption" and later claimed that "Big Tech is using encryption as a subterfuge to oppose this bill."

That's been his line ever since -- insisting the bill has nothing to do with encryption. And to "show" that it wasn't about encryption, back in 2020, he agreed to a very weak amendment from Senator Leahy that had some language about encryption, even though as we pointed out at the time, that amendment still created a problem for encryption.

The newest version of EARN IT replaced Leahy's already weak amendment with one that is a more direct attack on encryption. But it has allowed slimy "anti-porn" groups like NCOSE to falsely claim that it has "dealt with the concerns about encryption." Except, as we detailed, the language of the bill now makes encryption a liability for any web service, as it explicitly says that use of encryption can be used as evidence that a website does not properly deal with child sexual abuse material.

But still, through it all, Blumenthal kept lying through his teeth, insisting that the bill wasn't targeting encryption. Until yesterday, when he finally admitted it straight up to Washington Post reporter Cat Zakrzewski. In her larger story about EARN IT, I'm not sure why Zakrzewski buried this point all the way down near the bottom, because this is the story. Blumenthal is asked about the encryption bit and he admits that the bill is targeting encryption:

Blumenthal said in an interview that lawmakers incorporated these concerns into revisions, which prevent the implementation of encryption from being the sole evidence of a company’s liability for child porn. But he said lawmakers wouldn’t offer a blanket exemption to using encryption as evidence arguing companies might use it as a “get-out-of-jail-free card.”

In other words, he knows that the bill targets encryption despite two whole years of blatant denials. To go from "this bill makes no mention of encryption" to "we don't want companies using encryption as a 'get-out-of-jail-free card'" is an admission that this bill is absolutely about encryption. And if that's the case, why have there been no hearings about the impact this would have on encryption and national security? That seems like a key point that should be discussed, especially with Blumenthal admitting this thing that he denied for two whole years.

During today's markup, Blumenthal also made some nonsense comments about encryption:

The treatment of encryption in this statute is the result of hours, days, of consultation involving the very wise and significant counsel from Sen. Leahy who offered the original encryption amendment and said at the time that his amendment would not protect tech companies for being held liable for doing anything that would give rise to liability today for using encryption to further illegal activity. That's the key distinction here. Doesn't prohibit the use of encryption, doesn't create liability for using encryption, but the misuse of encryption to further illegal activity is what gives rise to liability here.

This is, beyond being nonsense word salad, just utterly ridiculous. No one ever said the bill "prohibited" encryption, but that it would make it a massive liability. And he's absolutely wrong that it "doesn't create liability for using encryption" because it literally does exactly that in saying that encryption can be used as evidence of liability.

The claim that it's only the "misuse of encryption" shows that Senator Blumenthal (1) has no clue what he's talking about and (2) needs to hire staffers who actually do understand this stuff, because that's not how this works. Once you say it's the "misuse of encryption" you've sunk encryption. Because now every lawsuit will just claim that any use of encryption is misuse and the end result is that you need to go through a massive litigation process to determine if your use of encryption is okay or not.

That's the whole reason why things like Section 230 are important: they avoid having every company spend over a million dollars to prove that the technical decisions they made were okay and not a "misuse." But if companies have to spend a million dollars every time someone sues them over their use of encryption, then it becomes ridiculously costly -- and risky -- to use encryption.

So, Blumenthal is either too stupid to understand how all of this actually works, or as he seems to have admitted to the reporter despite two years of denial, he doesn't believe companies should be allowed to use encryption.

EARN IT is an attack on encryption, full stop. Senator Blumenthal has finally admitted that, and anyone who believes in basic privacy and security should take notice.

Oh, and as a side note, remember back in 2020 when Blumenthal flipped out at Zoom for not offering full end-to-end encryption? Under this bill, Zoom would be at risk either way. Blumenthal is threatening them if they use encryption and if they don't. It's almost as if Richard Blumenthal doesn't know what he's talking about regarding encryption.

Mike Masnick

Yes, It Really Was Nintendo That Slammed GilvaSunner YouTube Channel With Copyright Strikes

2 years 2 months ago

Well, for a story that was already over, this became somewhat fascinating. We have followed the Nintendo vs. GilvaSunner war for several years now. The GilvaSunner YouTube channel has long been dedicated to uploading and appreciating a variety of video game music, largely from Nintendo games. Roughly once a year for the past few years, Nintendo would lob copyright strikes at a swath of GilvaSunner "videos": 100 videos in 2019, a bit less than that in 2020, taking 2021 off, then suddenly slamming the channel with 1,300 strikes in 2022. With that last copyright MOAB, the GilvaSunner channel has been shuttered voluntarily, with the operator indicating that it's all too much hassle.

Well, on the internet, and in our comments on that last post, speculation began as to whether it was actually Nintendo behind all of these copyright strikes... or an imposter. Those sleuthing around found little tidbits, such as the name used on the strikes not matching the names displayed in the past when Nintendo has acted against YouTube videos.

It was... strange. Why? Well, because it looked like many people were going out and trying to find a reason to believe that Nintendo wasn't behaving exactly as anyone who had witnessed Nintendo's behavior would expect. If this was someone impersonating Nintendo, the impersonation was utterly indistinguishable from how Nintendo would normally behave. Guys, they do this shit all the time.

And this time too, as it turns out. You can hear it straight from YouTube's mouth.

Jumping in – we can confirm that the claims on @GilvaSunner's channel are from Nintendo. These are all valid and in full compliance with copyright rules. If the creator believes the claims were made in error, they can dispute with these steps: https://t.co/ivyjVNwLVu

— TeamYouTube (@TeamYouTube) February 5, 2022

This is where I will stipulate for the zillionth time that Nintendo is within its rights to take these actions. But we should also stipulate that the company doesn't have to go this route, and the fact that it prioritizes the strictest control of its IP over letting its fans enjoy some video game music should tell you everything you need to know.

In the meantime, to the internet sleuths: I appreciate your dedication to either Nintendo or to simply digging into these kinds of details for funsies or whatever. That being said, as the old saying goes, if you hear the sound of hooves, assume it's a horse and not a zebra.

Timothy Geigner

Even Officials In The Intelligence Community Are Recognizing The Dangers Of Over-Classification

2 years 2 months ago

The federal government has a problem with secrecy. Well, actually it doesn't have a problem with secrecy, per se. That's often considered a feature, not a bug. But federal law says the government shouldn't have so much secrecy, what with the FOIA being in operation. And yet, the government feels compelled to keep secrets from its biggest employer: the US taxpayers.

Over-classification remains a problem. It was a problem long before a government contractor went rogue with a massive stash of NSA documents, showing that many of the government's secrets should have been shared or, at the very least, more widely discussed as the government turned 9/11 into a constitutional bypass on the information superhighway.

Since then, efforts have been made to dial back the government's proclivity for classifying documents that pose no threat to government operations and/or government security. In fact, the argument has been made (rather convincingly) that over-classification is counterproductive. It's more likely to result in the exposure of so-called secrets rather than secure the blanket-exemption-formality that keeps secrets from the general public.

Efforts have been made to counteract this overwhelming desire to keep the public locked out of discussions about government activities. These efforts have mostly failed. And that has mainly been due to vague and frequent invocations of national security concerns, which allow legislators and federal judges to shut off their brains and hammer the [REDACT] button repeatedly.

But ignoring the problem hasn't made the problem go away, no matter how many billions the federal government refuses to throw at the problem. Over-classification still stands between the public and information it should have access to. And it stands between federal agencies and efficient use of tax dollars. The federal government generates petabytes of data every month. And far too often, the agencies generating the data decide it's no one's business but their own.

It's not just legislators noting the widening gap between the government's massive stockpiles of data and the public's ability to access them. It's also those generating the most massive stashes of bits and bytes, as the Washington Post points out, using the words of an Intelligence Community official.

The U.S. government is drowning in its own secrets. Avril Haines, the director of national intelligence, recently wrote to Sens. Ron Wyden (D-Ore.) and Jerry Moran (R-Kan.) that “deficiencies in the current classification system undermine our national security, as well as critical democratic objectives, by impeding our ability to share information in a timely manner.” The same conclusions have been drawn by the senators and many others for a long time.

As this letter hints at, over-classification doesn't just affect the great unwashed whose power is generally considered to be far too limited to change things. It also affects agencies and the entities that oversee the agencies -- the latter of which are asked to engage in oversight while being locked out of the information they need to perform this task.

If there's any good news here, it's that the Intelligence Community recognizes it's part of the problem. But this is just one person in the IC. It's unlikely every official feels this way.

The government is working towards a solution, but its work is being performed at the speed of government -- something further hampered by the back-and-forth of periodic regime changes and their alternating ideas about how much transparency the government owes to its patrons.

The IC letter writer almost sees a silver lining in the nearly opaque cloud enveloping agencies involved in national security efforts.

So far, Ms. Haines said, current priorities and resources for fixing the classification systems “are simply not sufficient.” The National Security Council is working on a revised presidential executive order governing classified information, and we hope the White House will come up with an ambitious blueprint for modernization.

The silver lining is the "so far," and the efforts being made elsewhere to change things. The rest of the non-lining is far less silver: the resources aren't sufficient, and the National Security Council is grinding bureaucratic gears by working with the administration to change things. If it doesn't happen soon, changes will be at the discretion of the next administration. And the next administration may no longer feel streamlining declassification is a priority, putting projects that have been in the on-again, off-again works since Snowden's exposés on the back burner yet again.

Our government will likely never feel Americans can be trusted with information about the programs their tax dollars pay for. But perhaps a little more momentum -- this time propelled by something within the Intelligence Community -- will prompt some incremental changes that may eventually snowball into actual transparency and accountability.

Tim Cushing

First Circuit Tears Into Boston PD's Bullshit Gang Database While Overturning A Deportation Decision

2 years 2 months ago

A federal court has delivered a rebuke of police gang databases in, of all things, a review of a deportation hearing.

As we've been made painfully aware, gang databases are just extensions of biased policing efforts. People are placed in gang databases for numerous, incredibly stupid reasons. People are designated gang members simply for living, working, and going to school in areas where gang activity is prevalent. Infants have been added to gang databases because cops can't be bothered to perform any due diligence. There's no way for people to know they've been designated as gang-affiliated and, worse, there's often no way to challenge this designation and get yourself removed from these lists, which tend to result in additional harassment by police officers or "gang enhancements" that lengthen sentences for anyone listed in these dubious databases.

In 2015, Homeland Security Investigations officers performed a sweep in Boston, Massachusetts, rounding up suspected MS-13 gang members for deportation. This sweep snared Cristian Diaz Ortiz, who was 16, had entered the country illegally, and was now living with his uncle.

Ortiz applied for asylum, citing the fear of being subjected to MS-13 gang violence if he was sent back to his home country, El Salvador. From the First Circuit Appeals Court decision [PDF]:

On October 1, 2018, Diaz Ortiz filed an application for asylum, withholding of removal, and CAT protection, basing his request on multiple grounds, including persecution because of his evangelical Christian religion. He also reported that an aunt had been murdered in 2011 by members of MS-13, and he feared that the gang would kill him as well if he returned to El Salvador. In a subsequently filed affidavit, Diaz Ortiz stated that, while he was living in El Salvador, MS-13 had threatened his life "on multiple occasions" because he was a practicing evangelical Christian. He said he repeatedly refused the gang's demands that he join MS-13, but gang members continued to follow him and issue threats. In 2015, the gang physically attacked him and warned "that they would kill [him] and [his] family if [he] did not stop saying [he] was a Christian and living and preaching against the gang way of life."

The Immigration Judge sided with the Department of Homeland Security. It largely made this decision due to the introduction of a "Gang Assessment Database" that said Ortiz was not a practicing Christian who might fear retaliation if removed from the country, but rather an MS-13 infiltrator. The "gang package" (as the court refers to it) was compiled by the Boston PD. It stated the following:

Cristian Josue DIAZ ORTIZ has been verified as an MS-13 gang member by the Boston Police Department (BPD)/Boston Regional Intelligence Center (BRIC).

Cristian Josue DIAZ ORTIZ has documented associations with MS-13 gang members by the Boston Police Department and Boston School Police Department (BSPD). (See the attached BPD & BSPD incident/field interview reports and gang intelligence bulletins.)

Cristian Josue DIAZ ORTIZ has been documented carrying common MS-13 gang related weapons by the Boston Police Department. (See the attached BPD incident/field interview reports.) [A footnote states that the only "weapon" ever documented by the BPD was a bike chain and a padlock carried in Ortiz's backpack.]

Cristian Josue DIAZ ORTIZ has been documented frequenting areas notorious for MS13 gang activity by the Boston Police Department. These areas are 104 Bennington St. and the East Boston Airport Park/Stadium in East Boston, Massachusetts which are both known for MS-13 gang activity including recent firearms arrests and a homicide.

According to the Boston PD, Ortiz racked up "points" by associating with gang members and being in areas MS-13 members frequented. If enough points are accrued, a person gets placed in the gang database. But the underlying events had nothing to do with gang activity, despite what the summary provided by the DHS said.

The BPD documented nine "interactions" with Ortiz in which it assigned "gang" points to him. Three of those instances involved Ortiz smoking marijuana (a civil infraction in Massachusetts) with students and others the BPD claimed were "known MS-13 members." Four others involved Ortiz "loitering" near "known gang members" or being approached and talked to by them. And one of the interactions was the time the BPD "discovered" Ortiz carrying a bike lock and chain in his backpack -- something not at all uncommon for bike owners (which Ortiz was).

This "gang package" was critiqued by a law enforcement expert who testified that Ortiz should never have been included in the gang database. The former Boston police officer pointed out Ortiz had never been suspected of criminal activity and was apparently being penalized solely for spending time with people of his same ethnicity. The gang package's claim that Ortiz had a "history" of carrying weapons was clearly undercut by the BPD's documentation of a single incident where an officer recovered something that could be used as a weapon (the bike chain), but was not inherently a tool of unlawful violence.

The immigration judge ignored all of this, finding only the DHS and BPD credible. So did the Board of Immigration Appeals (BIA). Fortunately for Ortiz, the First Circuit isn't as easily impressed by the Boston PD's police work. It has some very harsh words for the two lower levels that blew off their obligations to the asylum seeker.

If the IJ and BIA had performed even a cursory assessment of reliability, they would have discovered a lack of evidence to substantiate the gang package's classification of Diaz Ortiz as a member of MS-13. Most significantly, the record contains no explanation of the basis for the point system employed by the BPD. The record is silent on how the Department determined what point values should attach to what conduct, or what point threshold is reasonable to reliably establish gang membership.

As the appeals court points out, these databases are inherently unreliable because literally anything can be used to imply someone is a gang member. The lower courts were wrong to completely dismiss Ortiz's challenge of the BPD's assessment.

That silence is so consequential because, during the period relevant to this case, the list of "items or activities" that could lead to "verification for entry into the Gang Assessment Database" was shockingly wide-ranging. It included "Prior Validation by a Law Enforcement Agency" (nine points), "Documented Association (BPD Incident Report)" (four points), and the open-ended "Information Not Covered by Other Selection Criteria" (one point). The 2017 form for submitting FIO [Field Interview Operations] reports to the database states that a "Documented Association" includes virtually any interaction with someone identified as a gang member: "[w]alking, eating, recreating, communicating, or otherwise associating with confirmed gang members or associates."
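Mechanically, the scheme the court describes is just a running score checked against a threshold. Here's a minimal sketch using the Rule 335 point values quoted above; note that the 10-point cutoff and the function names are assumptions for illustration, since, as the court points out, the record is silent on what threshold the BPD actually used:

```python
# Hypothetical sketch of a points-based gang "verification" system.
# Point values below are the ones quoted from the BPD's Rule 335 list;
# the threshold is an assumption -- the record never discloses the real one.
POINT_VALUES = {
    "prior_validation_by_law_enforcement": 9,
    "documented_association": 4,   # "walking, eating, recreating, communicating..."
    "information_not_covered": 1,  # the open-ended catch-all criterion
}
VERIFICATION_THRESHOLD = 10  # assumed for illustration

def is_verified_gang_member(interactions):
    """Sum points across documented interactions; flag once over threshold."""
    total = sum(POINT_VALUES.get(kind, 0) for kind in interactions)
    return total >= VERIFICATION_THRESHOLD

# Two trivial "documented associations" plus three catch-all entries are
# enough to brand someone a "VERIFIED and ACTIVE" member under this scheme.
interactions = ["documented_association", "documented_association",
                "information_not_covered", "information_not_covered",
                "information_not_covered"]
print(is_verified_gang_member(interactions))  # True (11 points)
```

The sketch makes the court's point concrete: nothing in the scoring requires any criminal conduct, only proximity to people the same system has already flagged.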

The points are easy to acquire, but there's no consistency in how the Boston PD assigns them, lending more credibility to the assumption that gang databases mainly exist to confirm cops' biases.

Moreover, the point system was applied to Diaz Ortiz in a haphazard manner. He was assigned points for most, but not all, of his documented interactions with purported MS-13 members. When he was assigned points, he was not always assigned the same number per interaction. Although he was assigned two points for "contact" with alleged gang members or associates on most occasions, he was assigned five points for the "Intelligence Report" submitted by the Boston School Police that describes an encounter that appears no different from the other "contacts." Only two items in the Rule 335 list carry five points: "Information from Reliable, Confidential Informant" and "Information Developed During Investigation and/or Surveillance." We thus cannot accept the BIA's implicit conclusion that the gang package's points-driven identification of Diaz-Ortiz as a "VERIFIED and ACTIVE" member of MS-13 was reliable.

Case in point:

The entry for November 28, 2017 -- the report from a Boston school officer -- illustrates several of these issues. The gist of the entry is that two officers made "casual conversation" with a student in a "full face mask" whom they identified as a member of MS-13, and they then saw the student walk over to a group of teenage boys that included Diaz Ortiz. The report identifies no improper conduct by any of the students; it does not say that the mask bore gang colors or symbols; it does not indicate that the masked student spoke directly to Diaz Ortiz. Nor does the report explain the basis for identifying the student as an MS-13 member other than to say that the BRIC labeled the student as a "verified" member. Therefore, we at most can infer from this paltry set of facts that Diaz Ortiz was standing near an individual who was identified as an MS-13 member by the BRIC, with the only basis for that identification the possible use of the same problematic point system that identified Diaz Ortiz as a member. Yet, Diaz Ortiz received five points merely because that student decided to walk over and join a group that included him.

Yes, the BPD decided Ortiz was affiliated with a notorious El Salvadoran gang internationally known for violently [checks gang package] smoking the reefer and conversing in public.

The whole opinion is worth reading. It ruthlessly picks apart the BPD's gang database, reaching conclusions that apply to every gang database run by any law enforcement agency in America. The ruling vacates the lower courts' decisions, which means Ortiz can again plead his case before the BIA. And this time he'll get a new judge, because the First Circuit feels that sending the case back to the original immigration judge would just allow that judge to re-engage with their pre-existing biases.

Gang databases are garbage. Even the most cursory examination of the underlying factors common to almost every gang database makes that clear. But the immigration court couldn't be bothered to do this, which almost resulted in someone being sent back to El Salvador where interactions with actual gang members might have resulted in his death, rather than just being an unwilling participant in Boston's "Whose Gang Is It Anyway?," where everything's made up and, unfortunately, the points do matter.

Tim Cushing

Content Moderation Case Study: Russia Slows Down Access To Twitter As New Form Of Censorship (2021)

2 years 2 months ago

Summary:

On March 10, 2021, the Russian Government deliberately slowed down access to Twitter after it accused the platform of repeatedly failing to remove posts about illegal drug use, child pornography, and pushing minors towards suicide.

State communications watchdog Roskomnadzor (RKN) claimed that “throttling” the speed of uploading and downloading images and videos on Twitter was to protect its citizens by making its content less accessible. Using Deep Packet Inspection (DPI) technology, RKN essentially filtered internet traffic for Twitter-related domains. As part of Russia’s controversial 2019 Sovereign Internet Law, all Russian Internet Service Providers (ISPs) were required to install this technology, which allows internet traffic to be filtered, rerouted, and blocked with granular rules through a centralized system. In this example, it blocked or slowed down access to specific content (images and videos) rather than the entire service. DPI technology also gives Russian authorities unilateral and automatic access to ISPs’ information systems and access to keys to decrypt user communications. 

Twitter throttling in Russia meme. Translation: “Runet users; Twitter”

Researchers at the University of Michigan reported that connection speeds for Twitter users were reduced by 87 percent on average, and some Russian internet service providers reported a wider slowdown in access. Inadvertently, the throttling affected every website domain that included the substring t.co (Twitter's shortened domain name), including Microsoft.com, Reddit.com, the Russian state-operated news site rt.com, and several other Russian Government websites, including RKN's own.
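The collateral damage reportedly came from matching the string "t.co" anywhere inside a hostname rather than matching the domain itself. A minimal sketch of the difference (the function names are mine, for illustration):

```python
def naive_throttle(hostname: str) -> bool:
    # Substring match: flags any hostname containing "t.co" anywhere,
    # which is apparently how the overbroad filtering behaved.
    return "t.co" in hostname

def exact_throttle(hostname: str) -> bool:
    # Proper domain match: only t.co itself or its subdomains.
    return hostname == "t.co" or hostname.endswith(".t.co")

# The naive filter sweeps in unrelated domains that merely contain "t.co".
for host in ["t.co", "microsoft.com", "reddit.com", "rt.com", "example.org"]:
    print(f"{host}: naive={naive_throttle(host)}, exact={exact_throttle(host)}")
```

Running this shows microsoft.com, reddit.com, and rt.com all tripping the naive filter while the exact match correctly flags only t.co, which is precisely the pattern of collateral blocking the researchers observed.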

Although reports suggest that Twitter has a limited user base in Russia, perhaps as low as 3% of the population (from an overall population of 144 million), it is popular with politicians, journalists, and opposition figures. The ‘throttling’ of access was likely intended as a warning shot to other platforms and a test of Russia’s technical capabilities. Russian parliamentarian Aleksandr Khinshtein, an advocate of the 2019 Sovereign Internet Law, was quoted as saying:

Putting the brakes on Twitter traffic “will force all other social networks and large foreign internet companies to understand Russia won’t silently watch and swallow the flagrant ignoring of our laws.” The companies would have to obey Russian rules on content or “lose the possibility to make money in Russia.” — Aleksandr Khinshtein

The Russian Government has a history of trying to limit and control citizens’ access to and use of social media. In 2018, it tried and ultimately failed to shut down Telegram, a popular messaging app. Telegram, founded by the Russian émigré Pavel Durov, refused to hand over its encryption keys to RKN, despite a court order. Telegram was able to thwart the shutdown attempts by shifting the hosting of its website to Google Cloud and Amazon Web Services through ‘domain fronting’, which the Russian Government later banned. The Government eventually backed down in the face of technical difficulties and strong public opposition.

Many news outlets suggest these incidents demonstrate that Russia, where the internet has long been a last bastion of free speech even as the government shuttered independent news organizations and obstructed political opposition, is now tipping towards the more tightly controlled Chinese model and replicating aspects of its famed Great Firewall, including creating home-grown alternatives to Western platforms. They also warn that as Russian tactics become bolder and its censorship technology more sophisticated, those tactics will be easily co-opted and scaled up by other autocratic governments.

Company considerations:

  • To what extent should companies comply with these types of government demands?
  • Where do companies draw the line between acquiescing to government demands or local laws that are contrary to their values or could result in human rights violations vs. expanding into a market or ensuring that their users have access?
  • To what extent should companies align their response and/or mitigation strategies with those of other (competitor) US companies affected in a similar way by local regulation?
  • Should companies try to circumvent the ‘throttling’ or access restrictions through technical means such as reconfiguring content delivery networks?
  • Should companies alert their users that their government is restricting/throttling access?

Issue considerations:

  • When are government takedown requests too broad and overreaching? Who – companies, governments, civil society, a platform’s users – should decide when that is the case?
  • How transparent should companies be with their users about why certain content is taken down because of government requests and regulation? Would there be times when companies should not be too transparent?
  • What can users and advocacy groups do to challenge government restrictions on access to a platform?
  • Should – as the United Nations suggests – access to the internet be seen as part of a suite of digital human rights?

Resolution:

The ‘throttling’ of access to Twitter content initially lasted two months. According to RKN, Twitter removed 91 percent of the content flagged in its takedown requests after RKN threatened to block the platform entirely if it didn’t comply. Normal speeds for desktop users resumed in May after Twitter complied with RKN’s takedown requests, but reports indicate that throttling will continue for Twitter’s mobile app users until the platform complies fully.

Originally posted to the Trust and Safety Foundation website.

Copia Institute

Emails Show The LAPD Cut Ties With The Citizen App After It Started A Vigilante Manhunt Targeting An Innocent Person

2 years 2 months ago

It didn't take long for Citizen -- the app that once wanted to be a cop -- to wear out its law enforcement welcome. The crime reporting app has made several missteps since its inception, beginning with its original branding as "Vigilante."

Having been booted from app stores for encouraging (unsurprisingly) vigilantism, the company rebranded as "Citizen," hooking um… citizens up with live feeds of crime reports from city residents as well as transcriptions of police scanner output. It also paid citizens to show up uninvited at crime scenes to report on developing situations.

But it never forgot its vigilante origins. When wildfires swept across Southern California last year, Citizen's principals decided it was time to put the "crime" back in "crime reporting app." The problem went all the way to the top, with Citizen CEO Andrew Frame dropping into Slack conversations and live streams, imploring employees and app users to "FIND THIS FUCK."

The problem was Citizen had identified the wrong "FUCK." The person the app claimed was responsible for the wildfire wasn't actually the culprit. Law enforcement later tracked down a better suspect, one who had actually generated some evidence implicating them.

After calling an innocent person a "FUCK" and a "devil" in need of finding, Citizen was forced to walk back its vigilantism and rehabilitate its image. Unfortunately for Citizen, this act managed to burn bridges with local law enforcement just as competently as the wildfire it had used to start a vastly ill-conceived manhunt.

As Joseph Cox reports for Motherboard, this act burned the last bridge between Citizen and one of the nation's largest law enforcement agencies, the Los Angeles Police Department. Internal communications obtained by Vice show the LAPD decided to cut ties with the app after the company decided its internal Slack channel was capable of taking the law into its own hands.

On May 21, several days after the misguided manhunt, Sergeant II Hector Guzman, a member of the LAPD Public Communications Group, emailed colleagues with a link to some of the coverage around the incident.

“I know the meeting with West LA regarding Citizen was rescheduled (TBD), but here’s a recent article you might want to look at in advance of the meeting, which again highlights some of the serious concerns with Citizen, and the user actions they promote and condone,” Guzman wrote. Motherboard obtained the LAPD emails through a public records request.

Lieutenant Raul Jovel from the LAPD’s Media Relations Division replied “given what is going on with this App, we will not be working with them from our shop.”

Guzman then replied “Copy. I concur.”

Whatever lucrative possibilities Citizen might have envisioned after making early inroads towards law enforcement acceptance were apparently burnt to a crisp by a misapprehension that nearly led to calamity. Rather than entertain Citizen's masturbatory fantasies about being the thin app line between good and evil, the LAPD (wisely) chose to kick the upstart to the curb.

The stiff arm continues to this day. The LAPD cut ties and has continued to swipe left on Citizen's extremely online advances. The same Sgt. Guzman referenced in earlier emails has ensured the LAPD operates independently of Citizen. When Citizen asked the LAPD if it would be ok to eavesdrop on radio chatter to send out push notifications to users about possible criminal activity, Guzman made it clear this would probably be a bad idea.

“It’s come up before. Always turned down for several reasons,” Guzman wrote in another email.

And now Citizen goes it alone in Los Angeles. In response to Motherboard's reporting, Citizen offered up word salad about good intentions and adjusting to "real world operational experiences." I guess that's good, in a certain sense. From the statement, it appears Citizen is willing to learn from its mistakes. The problem is its mistakes have been horrific rather than simply inconvenient, and it appears to be somewhat slow on the uptake, which only aggravates problems that may be caused by over-excited execs thinking a few minutes of police scanner copy should result in citizen arrests.

Tim Cushing

Over 60 Human Rights/Public Interest Groups Urge Congress To Drop EARN IT Act

2 years 2 months ago

We've already talked about the many problems with the EARN IT Act, how the defenders of the bill are confused about many basic concepts, how the bill will make children less safe, and how the bill is significantly worse than FOSTA. I'm working on more posts about other problems with the bill, but it really appears that many in the Senate simply don't care.

Tomorrow they'll be doing a markup of the bill where it will almost certainly pass out of the Judiciary Committee, at which point it could be put up for a floor vote at any time. Why the Judiciary Committee is going straight to a markup, rather than holding hearings with actual experts, I cannot explain, but that's the process.

But, for now at least, over 60 human rights and public interest groups have signed onto a detailed letter from CDT outlining many of the problems in the bill and asking the Senate to take a step back before rushing through such a dangerous bill.

Looking to the past as prelude to the future, the only time that Congress has limited Section 230 protections was in the Allow States and Victims to Fight Online Sex Trafficking Act of 2017 (SESTA/FOSTA). That law purported to protect victims of sex trafficking by eliminating providers’ Section 230 liability shield for “facilitating” sex trafficking by users. According to a 2021 study by the US Government Accountability Office, however, the law has been rarely used to combat sex trafficking.

Instead, it has forced sex workers, whether voluntarily engaging in sex work or forced into sex trafficking against their will, offline and into harm’s way. It has also chilled their online expression generally, including the sharing of health and safety information, and speech wholly unrelated to sex work. Moreover, these burdens fell most heavily on smaller platforms that either served as allies and created spaces for the LGBTQ and sex worker communities or simply could not withstand the legal risks and compliance costs of SESTA/FOSTA. Congress risks repeating this mistake by rushing to pass this misguided legislation, which also limits Section 230 protections.

It also discusses the attacks on encryption hidden deep within the bill.

End-to-end encryption ensures the privacy and security of sensitive communications such that only the sender and receiver can view them. This security is relied upon by journalists, Congress, the military, domestic violence survivors, union organizers, and anyone who seeks to keep their communications secure from malicious hackers. Everyone who communicates with others on the internet should be able to do so privately. But by opening the door to sweeping liability under state laws, the EARN IT Act would strongly disincentivize providers from providing strong encryption. Section 5(7)(A) of EARN IT states that provision of encrypted services shall not “serve as an independent basis for liability of a provider” under the expanded set of state criminal and civil laws for which providers would face liability under EARN IT. Further, Section 5(7)(B) specifies that courts will remain able to consider information about whether and how a provider employs end-to-end encryption as evidence in cases brought under EARN IT. This language, originally proposed in last session’s House companion bill, takes the form of a protection for encryption, but in practice it will do the opposite: courts could consider the offering of end-to-end encrypted services as evidence to prove that a provider is complicit in child exploitation crimes. While prosecutors and plaintiffs could not claim that providing encryption, alone, was enough to constitute a violation of state CSAM laws, they would be able to point to the use of encryption as evidence in support of claims that providers were acting recklessly or negligently. Even the mere threat that use of encryption could be used as evidence against a provider in a criminal prosecution will serve as a strong disincentive to deploying encrypted services in the first place.

Additionally, EARN IT sets up a law enforcement-heavy and Attorney General-led Commission charged with producing a list of voluntary “best practices” that providers should adopt to address CSAM on their services. The Commission is free to, and likely will, recommend against the offering of end-to-end encryption, and recommend providers adopt techniques that ultimately weaken the cybersecurity of their products. While these “best practices” would be voluntary, they could result in reputational harm to providers if they choose not to comply. There is also a risk that refusal to comply could be considered as evidence in support of a provider’s liability, and inform how judges evaluate these cases. States may even amend their laws to mandate the adoption of these supposed best practices. For many companies, the lack of clarity and fear of liability, in addition to potential public shaming, will likely disincentivize them from offering strong encryption, at a time when we should be encouraging the opposite.

There's a lot more in the letter, and the Copia Institute is proud to be one of the dozens of signatories, along with the ACLU, EFF, Wikimedia, Mozilla, Human Rights Campaign, PEN America and many, many more organizations.

Mike Masnick

Terrible Vermont Harassment Law Being Challenged After Cops Use It To Punish A Black Lives Matter Supporter Over Her Facebook Posts

2 years 2 months ago

In June 2020, in Brattleboro, Vermont, something extremely ordinary happened. Two residents of the community interacted on Facebook. It was not a friendly interaction, which made it perhaps even more ordinary.

Here's the ordinariness in all of its mundane detail, as recounted in Brattleboro resident Isabel Vinson's lawsuit [PDF] seeking to have one of the state's laws found unconstitutional.

In June 2020, Christian Antoniello, a Brattleboro resident and the owner of a local business called the Harmony Underground, criticized the Black Lives Matter movement on his personal Facebook page, stating, “How about all lives matter. Not black lives, not white lives. Get over yourself no one’s life is more important than the next. Put your race card away and grow up.”

On June 6, Ms. Vinson posted on her own Facebook page and tagged the Harmony Underground’s business page. Ms. Vinson’s post stated: “Disgusting. The owner of the Harmony Underground here in Brattleboro thinks this is okay and no matter how many people try and tell him it’s wrong he doesn’t seem to care.” In the comments on her post, Ms. Vinson recommended that everyone “leave a review on his page so [Antoniello] can never forget to be honest,” and also tagged a Facebook group called “Exposing Every Racist.”

In response to Ms. Vinson’s Facebook post, a conversation thread ensued among several people, including Ms. Vinson, about her post, Mr. Antoniello, and other complaints about the business.

That's when things stopped being normal and started becoming increasingly bizarre.

Several weeks later, Antoniello and his wife reported to the Brattleboro Police Department that they were being harassed on Facebook and that Ms. Vinson’s Facebook activity caused them to fear for their safety.

This is kind of a normal reaction. Kind of. Not everyone subjected to online pitchforks will choose to make it a police matter, but this couple did.

If you're wondering where the criminal activity is, the Brattleboro police department has an answer for you.

On July 7, the Brattleboro Police Department cited Ms. Vinson under § 1027 based on her Facebook activity.

Here's what the state law (Section 1027) says:

A person who, with intent to terrify, intimidate, threaten, harass, or annoy makes contact by means of a telephonic or other electronic communication with another and makes any request, suggestion, or proposal that is obscene, lewd, lascivious, or indecent; threatens to inflict injury or physical harm to the person or property of any person; or disturbs, or attempts to disturb, by repeated telephone calls or other electronic communications, whether or not conversation ensues, the peace, quiet, or right of privacy of any person at the place where the communication or communications are received shall be fined not more than $250.00 or be imprisoned not more than three months, or both.

It's an amazingly broad law that criminalizes all sorts of speech since it can be stretched to fit nearly any speech a complainant doesn't care for. "Harass" is a pretty non-specific term. "Annoy" is even more vague.

That's the law being challenged by Vinson and the ACLU. It's a vague, unconstitutional law. And it's a law the PD obviously didn't sincerely believe applied to Vinson's Facebook post because it ditched everything about this highly questionable case the moment questions started being asked.

Two weeks later -- following an ACLU public records request for all documents related to Vinson's charge and prosecution -- the Brattleboro PD approached Vinson and offered to drop the charges in exchange for her entering a diversion program. Vinson refused and said she was seeking legal representation. Here's what happened next:

Two days later, the Brattleboro police informed Ms. Vinson that she would not be charged.

All's well that ends abruptly in the face of the slightest resistance. But the law is still on the books. Even if the Brattleboro cops decide not to take a second swing at Isabel Vinson with this law, law enforcement officers in the state remain free to misuse it to punish people for saying things other people don't like. And, needless to say, the vague law presents a perfect crime of opportunity for cops if a state resident says something cops don't like. That's why the state is being sued and the Vermont federal court is being asked to declare the law unconstitutional. As it stands, the law presents an existential threat to free speech in the state. And Isabel Vinson's experience in Brattleboro shows what can happen when the threat goes from theoretical to fully realized.

Tim Cushing

Daily Deal: Certified Refurbished Vivitar VTI Phoenix Foldable Drone

2 years 2 months ago

If capturing a bird's eye view of your favorite places is a fun way for you to unwind when you have some time, then the Vivitar VTI Phoenix Foldable Camera Drone (certified refurbished) is a great choice for upgrading your hobby's capabilities. All the pieces come secured in the included carrying case, which protects them from damage and keeps them neatly organized. The two included batteries allow for a combined flight time of over 32 minutes, so you can get the most out of the drone's 1152p video camera. With a range of 2,000 feet, Follow Me technology, GPS location locking, and Wi-Fi transmission capability, this drone has all the bells and whistles you need. It's on sale for $159.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Daily Deal

WarnerMedia Sued For Giving People What They Wanted (The Matrix, Streaming) During An Historic Health Crisis

2 years 2 months ago

AT&T got a lot wrong (and still really can't admit it) with the company's $86 billion acquisition of Time Warner. There were endless layoffs, a steady dismantling of beloved brands (DC's Vertigo imprint, Mad Magazine), all for the company to lose pay TV subscribers in the end.

But the one thing the company did get right, with a little help from COVID, was its attack on the dated, pointless, and often punitive Hollywood release window. Typically, this has involved a 90-day gap between the time a movie appears in theaters and its streaming or DVD release (in France this window is even more ridiculous at three years). Generally, this is done to protect the "sanctity of the movie going experience," as if for thirty years the "sanctity of the movie going experience" hasn't involved sticky floors, overpriced popcorn, big crowds, and mass shootings.

During COVID, big streamers like AT&T and Comcast shifted a lot of their tentpole films (like Dune) directly to streaming, which technically saved human lives, but resulted in no limit of raised eyebrows and scorn among the "Loews at the mall is a sacred space you can't criticize" segment of Hollywood. You might recall that AMC Theaters was positively apoplectic when Comcast showed that release windows were a dated relic, declaring it would never again show a Comcast NBC Universal picture anywhere in the world if Comcast kept threatening the sacred release window (the threat lasted about a week).

WarnerMedia (in the process of being spun off by AT&T) has faced similar whining from the industry. This week the company was hit with a lawsuit (pdf) by Village Roadshow Films, which claims the company "rushed" the release of The Matrix Resurrections from 2022 to 2021 as part of a (gasp) effort to boost streaming's popularity. All through 2021, AT&T/Time Warner released films simultaneously in theaters and on streaming to boost HBO Max subscriptions. And people liked it.

Unsurprisingly, Village Roadshow Films did not, claiming the effort (dubbed "Project Popcorn") was a "clandestine plan to materially reduce box office and correlated ancillary revenue generated from tent pole films that Village Roadshow and others would be entitled to receive in exchange for driving subscription revenue for the new HBO Max service." HBO Max and AT&T telegraphed this intention, so it seems hard to argue this was somehow clandestine. The suit also accuses WarnerMedia of ignoring the fact that piracy would have hurt the overall profits to be made from the film, though, again, metrics proving clear financial harm appear lacking.

But just as unsurprisingly, Warner Brothers thinks Village Roadshow Films is just annoyed by reality and shifting markets:

"In a statement shared with The Verge, Warner Bros. called the lawsuit “a frivolous attempt by Village Roadshow to avoid their contractual commitment to participate in the arbitration that we commenced against them last week. We have no doubt that this case will be resolved in our favor."

Again, while it's true that AT&T attacked the sacred old release window to goose streaming subscriptions, this was something that happened during an historic plague in which indoor transmission of a deadly virus could kill or disable you. It's also almost an afterthought that in the advanced home theater and mall shooting era, this is something consumers desperately wanted. For all its downsides, COVID had a strong tendency to painfully highlight shortcomings (see: broadband, the U.S. healthcare system) and dated antiquities (like release windows or a disdain for telecommuting) that no longer served us.

While there's a shrinking sect of Hollywood folks like Spielberg who still think in-person theaters and release windows are sacred and above reproach, COVID laid bare the fact that not that many people agree with them. And while that certainly disadvantaged folks financially dependent on older models (like theater owners and studios heavily vested in release windows), the reality is what it is, and a popular change was accelerated all the same.

Karl Bode

Whistleblower Alleges NSO Offered To 'Drop Off Bags Of Cash' In Exchange For Access To US Cellular Networks

2 years 2 months ago

The endless parade of bad news for Israeli malware merchant NSO Group continues. While it appears someone might be willing to bail out the beleaguered company, it still has to do business as the poster boy for the furtherance of human rights violations around the world. That the Israeli government may have played a significant part in NSO's sales to known human rights violators may ultimately be mitigating, but for now, NSO is stuck playing defense with each passing news cycle.

Late last month, the New York Times revealed some very interesting things about NSO Group. First, it revealed the company was able to undo its built-in ban on searching US phone numbers… provided it was asked to by a US government agency. The FBI took NSO's powerful Pegasus malware for a spin in 2019, but under an assumed name: Phantom. With the permission of NSO and the Israeli government, the malware was able to target US numbers, albeit ones linked to dummy phones purchased by the FBI.

The report noted the FBI liked what it saw, but found the zero-click exploit provided by NSO's bespoke "Phantom" (Pegasus, but able to target US numbers) might pose constitutional problems the agency couldn't surmount. So, it walked away from NSO. But not before running some attack attempts through US servers -- something that was inadvertently exposed by Facebook and WhatsApp in their lawsuit against NSO over the targeting of WhatsApp users. An exhibit declared NSO was using US servers to deliver malware, something that suggested NSO didn't care about its self-imposed restrictions on US targeting. In reality, it was the FBI and NSO running some tests on local applications of zero-click malware that happened to be caught by Facebook techies.

But there's more. Recent reports building on the NYT article claim NSO approached service providers with (well, let's just say it) bribes in exchange for a deeper level of network access to targets -- access that might sidestep some of the defensive efforts deployed by Facebook, Google, and Apple.

Here's what's been alleged in newer reports, like this one by Craig Timberg of the Washington Post:

The surveillance company NSO Group offered to give representatives of an American mobile-security firm “bags of cash” in exchange for access to global cellular networks, according to a whistleblower who has described the encounter in confidential disclosures to the Justice Department that have been reviewed by The Washington Post.

The mobile-phone security expert Gary Miller alleges that the offer came during a conference call in August 2017 between NSO Group officials and representatives of his employer at the time, Mobileum, a California-based company that provides security services to cellular companies worldwide. The NSO officials specifically were seeking access to what is called the SS7 network, which helps cellular companies route calls and services as their users roam the world, according to Miller.

Mobileum execs were (understandably) unsure how any of this was supposed to work in the unlikely event they were amenable to a foreign entity's requests for elevated access to US cellular networks. Fortunately, the NSO rep made it extremely clear how this was going to work, according to the whistleblower:

In Miller’s account to the Justice Department, when one of Mobileum’s representatives pointed out that security companies do not ordinarily offer services to surveillance companies and asked how such an arrangement would work, NSO co-founder Omri Lavie allegedly said, “We drop bags of cash at your office."

Simple enough. Except -- to quote C. Montgomery Burns -- at the end of the proposed transaction "the money and the very stupid man were still there." Mobileum execs say no such bribery took place -- not because NSO didn't offer it but because the company refused to accept the generous offer of extremely shady "bags of cash" from the Israeli malware maker.

NSO has its own explanation for these events, which is, basically: "It was a joke, probably."

In a statement through a spokesperson, Lavie said he did not believe he had made the remark. “No business was undertaken with Mobileum,” the statement said. “Mr Lavie has no recollection of using the phrase ‘bags of cash’, and believes he did not do so. However if those words were used they will have been entirely in jest.”

Hahahahahaaaa… here at the home of the zero-click exploit marketed to human rights violators we often joke about bribing tech companies to allow us more access to networks. Oh, our sides ache from the fun we have jesting about subverting networks to compromise targets of evil empires. Ell oh fucking ell.

Mobileum, on the other hand, says it has never done business with NSO and reported this proposed cash drop to the FBI in 2017 but never heard anything back from the agency. Two years later, the FBI was experimenting with NSO malware and trying to gauge the political and constitutional fallout of deploying unregulated malware against US citizens.

Even if NSO is to be believed, there's nothing good awaiting it on the US side of things. The Commerce Department has already blacklisted the company, destroying its ability to purchase US tech for the purpose of compromising it. And the Department of Justice has opened its own investigation into NSO, adding to its list of US-related woes.

NSO could have avoided all of this international attention by being more selective about who it sold to, and stripping customers of their licenses at the first hint of malfeasance. It didn't. And the fact that it may have been pressed into service as a malware-laden extension of the Israeli government's Middle East charm offensive isn't going to save it. NSO has to save itself but it lacks the tools to do so. Whatever it claims in defense of every reported allegation is presumed to be suspect, if not completely false. The reputation it has now is mostly earned. It made millions helping sketchy governments inflict further misery on citizens, dissidents, journalists, and political opponents. The company's honor is no longer presumed if, indeed, it ever was.

Tim Cushing

Apple Opposes Trademark For Indie Film 'Apple-Man' Claiming Potential Confusion

2 years 2 months ago

When it comes to silly trademark disputes, Apple has come up for discussion many, many times. The mega-corporation is a jealous defender of all of its IP, but most of our stories have focused on its disputes with companies that created logos that involve any sort of apple or other fruit. Sometimes it's not even companies that Apple is fighting with, but entire foreign political parties. The idea here is that when it comes to logos or trade dress, Apple appears to think that it owns all the apples.

But what about the word itself? Well, the company can get absurd at that level, too. For instance, Apple recently opposed the trademark application for a Ukrainian filmmaker's indie opus, entitled Apple-Man.

Apple in December filed an opposition with the U.S. Patent and Trademark Office seeking to block Ukrainian director Vasyl Moskalenko’s trademark application for his indie project. The world’s most valuable company argues that viewers will mistakenly believe Apple-Man is associated with Apple and that the movie will dilute its brand.

“The Apple Marks are so famous and instantly recognizable that the similarities in Applicant’s Mark will overshadow any minor differences and cause the ordinary consumer to believe that Applicant is related to, affiliated with, or endorsed by Apple,” states the filing, which is embedded below. “Consumers are likely to assume, erroneously, that Applicant’s Mark is a further extension of the famous Apple brand.”

Alright, so let's stipulate the following right up front: Apple's trademark on its name is no doubt famous. That affords the company far more protection on that mark than your normal everyday trademark. One of the main differences, however, is that Apple can enforce the mark not only for customer confusion, but for things like tarnishment, if someone used the term in a way that could be seen as disparaging to Apple.

In the quote above, Apple is going the traditional confusion route in its opposition. But that's unbelievably silly. This is an indie film that nobody is going to associate with Apple. It's also, because it's a film, entitled to First Amendment protections that are almost certain to override any trademark concerns, particularly those as flimsy as Apple's.

Elsewhere, Apple argues for dilution.

Apple also argues the trademark, if granted, will “cause dilution of the distinctiveness of the famous Apple Marks by eroding consumers’ exclusive identification of the Apple Marks with Apple.”

But consumers don't have an exclusive identification of the Apple Marks with Apple. That should be obvious on its face. Lots of companies, for instance, use the term "Apple" in branding for... you know... apples. More specifically, there have also been other films that make use of the word "apple" in their names. There is one called The Apple. And another called Apples. So what do Apple's lawyers see as the difference between those films' use and Apple-Man? ¯\_(ツ)_/¯

Jeremy Eche of JPG Legal, who represents Moskalenko, argues “apple” isn’t a proprietary word and viewers won’t be misled by the movie.

“This is ridiculous,” he tells The Hollywood Reporter. “They really want to own the word ‘Apple’ in every industry.”

Eche contends Apple is a “trademark bully” exploiting the system.

Of that there can be little doubt. So why is Apple even bothering with any of this? Well, outside counsel is involved, so the term "billable hours" immediately leaps to mind. But Apple's history of trademark bullying also doesn't exactly preclude haphazard and capricious enforcement of its trademarks. The lawyers saw this one, so they went after it.

And before anyone wants to jump in the comments and point out that Apple makes and provides film content via AppleTV and iTunes... don't. That does not suddenly mean the company can keep a filmmaker from making a film that uses the word in its title, nor from trademarking the name of that film.

Timothy Geigner

Appeals Court Asked To Rule That DMCA's Anti-Circumvention Rules Are Unconstitutional

2 years 2 months ago

As you hopefully know, there are two main parts to the DMCA law that was passed in 1998. There's DMCA 512, which is what you hear about most of the time. That's the part that includes the rules for notice-and-takedown regimes for user-uploaded content (among other things). It's got problems, but in its current form has also enabled many important services to exist. The other part, which is much more problematic, is DMCA 1201, the anti-circumvention rules -- or you could call it the "DRM" part of the law. This has no redeeming value whatsoever. Under 1201, basically any attempt to circumvent a "technological" protection measure can be deemed infringing even if the underlying content is never infringed upon. This part of the law is not only unnecessary, but it's drafted in a manner that has been regularly abused -- enabling everyone from printer manufacturers to garage door opener companies to argue that simple reverse engineering to create competition is "infringement."

In fact, everyone -- even the drafters of the DMCA -- knew that 1201 went too far and would lead to massive collateral damage. Rather than not passing such a bill, Congress came up with its "escape valve": the triennial review process, whereby every three years the Librarian of Congress can magically declare which things are exempt from 1201. This has exempted a few classes of important use cases, but just the fact that (1) these exemptions need to be renewed every three years, and (2) you have to ask for permission, which can only be granted every three years, for things that should be perfectly legal... is a problem.

Way back in 2016, EFF brought a case challenging the constitutionality of 1201 on behalf of computer security researcher/professor Matthew Green and hardware hacker Bunnie Huang, arguing that the DMCA 1201 liability suppressed their speech by stopping security research and beneficial hacking efforts. In 2019, a court dismissed much of the constitutional challenge, while allowing other parts of the case to move forward.

However, those constitutional questions are now on appeal and the EFF recently filed its opening brief. It's worth reading.

Appellants’ research and expression would be highly valuable to society. Their work would also be perfectly lawful but for one thing—it requires circumventing digital locks and teaching others how to do the same. In the name of protecting copyrights, a federal statute, Section 1201(a) of the Digital Millennium Copyright Act (DMCA), makes it a crime to engage in or even distribute information about such circumvention, even if the circumvention serves an otherwise lawful purpose. This statute subverts the traditional contours of copyright law to criminalize speech and bar people from using information they possess for education, journalism, and expression. That, in turn, puts Section 1201(a) on a collision course with the First Amendment—one it cannot and should not survive.

Some useful and worth reading amicus briefs have also been filed in the case. Copyright scholars Pam Samuelson and Rebecca Tushnet filed a fantastic brief:

In 1998, Congress made a momentous departure from traditional copyright law by enacting Section 1201 of the Digital Millennium Copyright Act (“DMCA”). Section 1201 created a new class of right—a right to control access to legitimately acquired copies of copyrighted works that had been transferred to lawful owners, as well as a new antitrafficking right specific to access controls. 17 U.S.C. § 1201(a). Both new rights—as well as the significant civil and criminal penalties for infringing those rights—apply well beyond the traditional contours of secondary liability for aiding infringement by others. Id. §§ 1203, 1204. Moreover, these new rights disregard and override traditional mechanisms within the Copyright Act that struck the balance between copyright protection and First Amendment interests.

The Tech Law & Policy clinic at Colorado Law highlighted how much damage 1201 and the triennial review process have done to accessibility, security, and the right to repair:

The right to engage in fair use is protected by the First Amendment. The Supreme Court has concluded that fair use is one of copyright law’s essential “built-in First Amendment accommodations” and serves as a “traditional First Amendment safeguard.” The Supreme Court has conceptualized fair use as a safety valve that prevents copyright law from suppressing the exercise of First Amendment rights.

Section 1201 eliminates fair use’s capacity to serve as a First Amendment safeguard when copyrighted works are encumbered with TPMs. It does so by effectively prohibiting fair uses that require the circumvention of TPMs.

And then there's an amicus brief from documentary film makers talking about how damaging 1201 has been to their own expression:

The Digital Millennium Copyright Act prevents filmmakers from exercising their First Amendment right to make fair use by making it illegal to access content on DVDs and other digital content protected by encryption. Congress intended to create a “fail-safe” mechanism to preserve the public’s right to make fair use. But the open-ended rulemaking process it devised is unduly burdensome and has led to exemptions that leave filmmakers uncertain as to how they can make fair use safely. Amici urge this Court to issue a limiting construction that preserves their First Amendment right to make fair use. In addition, if this Court is inclined to order equitable relief in this appeal, this Court should preserve existing exemptions until a more constitutionally appropriate procedure is in place and more workable exemptions have gone into effect.

Filmmakers depend on the doctrine of fair use to make commentary, criticism, instruction, and report on current events by utilizing portions of digitized movies and other digitized content. Fair use in filmmaking has been called a paradigmatic fair use, and without it a massive range of expressive conduct would be impossible. But fair use is of little consequence if filmmakers cannot access the high-quality digital material they seek to use in the first place. Suppose a filmmaker wants to analyze how special effects in the Star Wars film franchise have evolved from 1977 to the present day, examining various clips from the past 45 years. The law is quite clear that fair use permits the use of film clips without permission or payment to the Star Wars rightsholders. To do this, however, the filmmaker will need to obtain high-quality footage, which is likely to be locked behind encryption and other technological protection measures (“TPMs”). That is a problem for filmmakers because Congress made it a crime to circumvent technologies that control access to copyrighted content when it enacted the Digital Millennium Copyright Act (“DMCA”) in 1998, now codified at Section 1201 of the copyright statute. The result is that, barring an exemption from the Librarian of Congress, filmmakers cannot access the digital content they need for fair use without a credible fear of civil and criminal liability.

This isn't just an issue for big companies. This is about fundamental fair use rights of the public -- which Congress tossed away decades ago, and tried to pave over by insisting the Librarian of Congress could swoop in every three years and stop the most egregious attacks on free speech. But that's not how the 1st Amendment works.

Hopefully the court agrees.

Mike Masnick

Techdirt Podcast Episode 310: A Global History Of Free Speech

2 years 2 months ago

We talk a lot about free speech in different countries, and about the history of free speech in the US — but what about the global history of this fundamental concept? A new book released today, Free Speech: A History from Socrates to Social Media by Jacob Mchangama, tackles exactly this subject in great and insightful detail. This week, Jacob joins us on the podcast to discuss the sweeping story of free speech throughout the ages and around the world.

Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.

Leigh Beadon

Google Stadia's Failure Is Almost Complete

2 years 2 months ago

While Google's Stadia game streaming service arrived with a lot of promise, it generally landed with a disappointing thud. A limited catalog, deployment issues, and a quality that couldn't match current gen game consoles meant the service just never saw the kind of traction Google (or a lot of other people) originally envisioned. In the years since, developers have been consistently abandoning the platform, and Google has consistently sidelined the service, even shutting down its own development efforts as a parade of executives headed for the exits.

Now, Google is basically just selling the technology off to other companies eager to give video game streaming a go and succeed where Google couldn't.

In the last few months, Google executives have apparently been working on a plan to salvage some aspect of the project by selling Google Stadia tech to companies like Bungie and Peloton. In short, these companies will license the Google tech (now creatively named "Google Stream") for use in their own game streaming services, branded as something entirely different. Google's first-party Stadia service still exists for now, but it has been "deprioritized" within the company on the way to an inevitable, untimely death:

"The Stadia consumer platform, meanwhile, has been deprioritized within Google, insiders said, with a reduced interest in negotiating blockbuster third-party titles. The focus of leadership is now on securing business deals for Stream, people involved in those conversations said. The changes demonstrate a strategic shift in how Google, which has invested heavily in cloud services, sees its gaming ambitions."

Unfortunately (for Google) Sony just bought Bungie for $3.6 billion, and already has its own streaming technologies and platforms that Bungie will likely use (Sony also leans on Microsoft's cloud technology). And while Google also has been working on a game streaming deal with AT&T, such "me too" type efforts from the telecom sector never quite amount to much. That leaves Peloton, which is being rumored as an acquisition target by Amazon, and isn't doing gaming so much as it's doing the gamification of exercise.

Somebody will dominate the game streaming space, but it's not going to be Google. While the Google technology certainly works well, the business plan was an unmitigated failure by any measure. And much like Google Fiber (which Google eventually got bored with and froze without ever really admitting to anybody that's what happened), Stadia will die without being formally declared as dead, having never seen even a fraction of its originally envisioned potential.

Karl Bode

UK Government Refreshes Its Terrible 'Online Safety Bill,' Adds Even More Content For Platforms To Police

2 years 2 months ago

The UK's internet censorship bill rebranded from "Online Harms" to "Online Safety" last spring. The name change did nothing to limit the breadth of the bill, despite supposedly shifting the focus from "harm" to "safety." Whatever the name, it's still being touted by supporters as a fix for anything anyone doesn't like about the internet.

Speech will be policed. Lots of it. Everyone from megalithic Meta to the person running a niche message board will be subject to the new rules, which shift liability from the posters of unwanted or illegal content to the third parties hosting it.

In order to find and remove content on the ever-lengthening list of "bad" content (which, let's highlight again, includes legal content), platforms and services will have to perform more internal policing. This means that, in many cases, encryption for content and communications will no longer be a viable option. To comply with the law -- one that carries potential fines of up to 10% of a company's global revenues -- providers will have to remove end-to-end encryption so they can monitor communications between users.

The UK government isn't honest enough to call for the end of encryption. But it's willing to let attrition do its dirty work for it. The anti-encryption agitating continues, despite the UK government's Information Commissioner's Office telling the rest of the government that weakening or eliminating encryption will harm more children than it saves.

The bill marches forward, gathering even more speech-harming detritus. As CNBC reports, another round of UK government inquiries has resulted in the proposed law being made even worse.

The government said Friday that the bill will now include extra-priority provisions outlawing content that features revenge porn, drug and weapons dealing, suicide promotion and people smuggling, among other offences.

It will also target individuals who send online abuse and threats, with criminal sentences ranging up to five years.

Stuff that was already on the ban list has been given greater priority, aligning self-harm and drug dealing with the big baddies of "terroristic content" and child sexual abuse material. Online threats and "abuse" will get stiffer legal penalties.

But that's not all: there's more to add to the UK government's list of content it would like to treat as criminal acts.

The government said it is considering further recommendations, including specific offences such as sending unsolicited sexual images and trolling epilepsy sufferers, tackling paid-for scam advertising, and bringing forward criminal liability for senior company executives at the tech firms.

Every addition adds to the list of content that platforms and services must proactively monitor and remove. The addition of criminal liability for tech execs may seem like a crowd-pleasing Guillotine 2.0, but in reality, it just means jailing people because their companies failed to achieve the impossible tasks the UK government has asked of them.

A lot of what's being added won't be easily detected by AI or human moderators -- certainly not proactively. Context matters but proactive monitoring means context will be ignored. The difference between revenge porn and regular porn isn't immediately and obviously clear. Pictures of guns or drugs are not necessarily promotional. And there are going to be some people in desperate need of help getting caught in the friction between talking about suicide and "suicide promotion."

It all sounds good when it's still on paper and reads like a blueprint for a trouble-free online existence. But it falls apart the moment you start asking questions about how this can be implemented without massively altering the contours of free speech in the UK and generating an incredible amount of collateral damage that may, in many cases, negatively affect the same vulnerable people the government believes this bill will protect.

Tim Cushing