Techdirt

Daily Deal: The Complete Video Production Super Bundle

2 years 2 months ago
Aspiring filmmakers, YouTubers, bloggers, and business owners alike can find something to love about the Complete Video Production Super Bundle. Video content is fast changing from the future marketing tool to the present, and in these 10 courses you’ll learn how to make professional videos on any budget. From the absolute basics to the advanced […]
Daily Deal

Why It Makes No Sense To Call Websites 'Common Carriers'

2 years 2 months ago
There’s been an unfortunate movement in the US over the last few years to try to argue that social media should be considered “common carriers.” Mostly this is coming (somewhat ironically) from the Trumpian wing of grifting victims, who are trying to force websites to carry the speech of trolls and extremists claiming, (against all […]
Mike Masnick

New Right To Repair Bill Targets Obnoxious Auto Industry Behavior

2 years 2 months ago
It’s just no fun being a giant company aspiring to monopolize repair to boost revenues. On both the state and federal level, a flood of new bills are targeting companies’ efforts to monopolize repair by implementing obnoxious DRM, making repair tools and manuals hard to find, bullying independent repair shops (like Apple does), or forcing […]
Karl Bode

'Peaky Blinders' Production Company Working With Bushmills On A Themed Whiskey

2 years 2 months ago

Nearly a year ago, we talked about a trademark battle between Caryn Mandabach Productions, the company that produces Netflix's hit show Peaky Blinders, and Sadler's Brewhouse, a combined distillery that applied for a "Peaky Blinders" trademark for several spirits brands. Important to keep in mind is that "Peaky Blinders" isn't some made-up gang in a fictional story. That name was taken from very real history in England, as evidenced by the fact that the folks who own Sadler's are descendants of one of the gang's members. It's also important to remember that television shows and alcohol are not the same marketplace when it comes to trademark law. Despite that, there has been a years-long dispute raging between Mandabach and Sadler's.

And now we have some indication as to why, since Bushmills has announced a partnership with Mandabach Productions to release its own "Peaky Blinders" themed whiskey.

Irish whiskey producer Bushmills could be launching a Peaky Blinders-inspired whiskey after applying to approve a label for the product. Proximo Spirits, which owns Bushmills, made the application to the US Alcohol and Tobacco Tax and Trade Bureau in January 2022.

Caryn Mandabach Productions, which produces the hit Netflix series about the flatcap-wearing gang, is thought to be mentioned on the proposed Bushmills label, which also allegedly says the whiskey is licensed by series distributor Banijay Group.

And this is where things get really interesting. Why? Well, the argument I made in the original post on this topic was that Mandabach really didn't have a good argument for opposition or infringement since the production company wasn't actually using the historical name of a real gang to make alcohol. Given the disparate markets, there didn't seem to be any real reason for concern about public confusion.

But now that is happening in reverse. The company behind the Netflix show is now partnering with a distillery to enter the spirits market with a "Peaky Blinders" brand and theme. If anything, I would think that Sadler's Brewhouse now has an argument for opposition, given the pending trademark application. Especially since it seems the production company, late to the party, has "plans" to get into the liquor business.

Earlier this month, The Sun revealed that the production company has its own plans to open a line of Peaky Blinders-themed bars and restaurants.

In which case I believe this would come down mostly to a "first to file" race. And if the production company had already filed trademark applications for the liquor business, you really would have thought that fact would be on display in its opposition and suit against Sadler's. But there was no hint of that in any of the documents that informed our previous post.

So, on Mandabach's side of things, this all appears to be backwards. I can't see any argument for why it should win on any of this.

Timothy Geigner

ACLU & EFF Step Up To Tell Court You Don't Get To Expose An Anonymous Tweeter With A Sketchy Copyright Claim

2 years 2 months ago

In November, we wrote about a very bizarre case in which someone was using a highly questionable copyright claim to try to identify an anonymous Twitter user with the username @CallMeMoneyBags. The account had made fun of various rich people, including a hedge fund billionaire named Brian Sheth. In some of those tweets, Money Bags posted images that appeared to be standard social media type images of a woman, and the account claimed that she was Sheth's mistress. Some time later, an operation called Bayside Advisory LLC, that has very little other presence in the world, registered the copyright on those images, and sent a DMCA 512(h) subpoena to Twitter, seeking to identify the user.

The obvious suspicion was that Sheth was somehow involved and was seeking to identify his critic, though Bayside's lawyer has fairly strenuously denied Sheth having any involvement.

Either way, Twitter stood up for the user, noting that this seemed to be an abuse of copyright law to identify someone for non-copyright reasons, that the use of the images was almost certainly fair use, and that the 1st Amendment should protect Money Bags' identity from being shared. The judge -- somewhat oddly -- said that the fair use determination couldn't be made without Money Bags weighing in and ordered Twitter to alert the user. Twitter claims it did its best to do so, but the Money Bags account (which has not tweeted since last October...) did not file anything with the court, leading to a bizarre ruling in which Twitter was ordered to reveal the identity of Money Bags.

We were troubled by all of this, and it appears that so were the ACLU and the EFF, who have teamed up to tell the court it got this very, very wrong. The two organizations have filed a pretty compelling amicus brief saying that you can't use copyright as an end-run around the 1st Amendment's anonymity protections.

The First Amendment protects anonymous speakers from retaliation and other harms by allowing them to separate their identity from the content of their speech to avoid retaliation and other harms. Anonymity is a distinct constitutional right: “an author’s decision to remain anonymous, like other decisions concerning omissions or additions to the content of a publication, is an aspect of the freedom of speech protected by the First Amendment.” McIntyre v. Ohio Elections Comm’n, 514 U.S. 334, 342 (1995). It is well-settled that the First Amendment protects anonymity online, as it “facilitates the rich, diverse, and far-ranging exchange of ideas,” Doe v. 2TheMart.com, Inc., 140 F. Supp. 2d 1088, 1092 (W.D. Wash. 2001), and ensures that a speaker can use “one of the vehicles for expressing his views that is most likely to result in those views reaching the intended audience.” Highfields, 385 F. Supp. 2d at 981. It is also well-settled that litigants who do not like the content of Internet speech by anonymous speakers will often misuse “discovery procedures to ascertain the identities of unknown defendants in order to harass, intimidate or silence critics in the public forum opportunities presented by the Internet.” Dendrite Int’l v. Doe No. 3, 775 A.2d 756, 771 (N.J. App. Div. 2001).

Thus, although the right to anonymity is not absolute, courts subject discovery requests like the subpoena here to robust First Amendment scrutiny. And in the Ninth Circuit, as the Magistrate implicitly acknowledged, that scrutiny generally follows the Highfields standard when the individual targeted is engaging in free expression. Under Highfields, courts must first determine whether the party seeking the subpoena can demonstrate that its legal claims have merit. Highfields, 385 F. Supp. 2d at 975-76. If so, the court must look beyond the content of the speech at issue to ensure that identifying the speaker is necessary and, on balance, outweighs the harm unmasking may cause.

The filing notes that the magistrate judge who ordered the unmasking seems to have skipped a few steps:

The Magistrate further confused matters by suggesting that a fair use analysis could be a proxy for the robust two-step First Amendment analysis Highfields requires. Order at 7. This suggestion follows a decision, in In re DMCA Subpoena, 441 F. Supp. 3d at 882, to resolve a similar case purely on fair use grounds, on the theory that Highfields “is not well-suited for a copyright dispute” and “the First Amendment does not protect anonymous speech that infringed copyright.”...

That theory was legally incorrect. While fair use is a free-speech safety valve that helps reconcile the First Amendment and the Copyright Act with respect to restrictions on expression, anonymity is a distinct First Amendment right.1 Signature Mgmt., 876 F.3d at 839. Moreover, DMCA subpoenas like those at issue here and in In re DMCA Subpoena, concern attempts to unmask internet users who are engaged in commentary. In such cases, as with the blogger in Signature Mgmt., unmasking is likely to chill lawful as well as allegedly infringing speech. They thus raise precisely the same speech concerns identified in Highfields: the use of the discovery process “to impose a considerable price” on a speaker’s anonymity....

Indeed, where a use is likely or even colorably a lawful fair use, allowing a fair use analysis alone to substitute for a full Highfields review gets the question precisely backwards, given the doctrine’s “constitutional significance as a guarantor to access and use for First Amendment purposes.” Suntrust Bank v. Houghton Mifflin, 268 F.3d 1257, 1260 n.3 (11th Cir. 2001). Fair use prevents copyright holders from thwarting well-established speech protections by improperly punishing lawful expression, from critical reviews, to protest videos that happen to capture background music, to documentaries incorporating found footage, and so on. But the existence of one form of speech protection (the right to engage in fair use) should not be used as an excuse to give shorter shrift to another (the right to speak anonymously).

It also calls out the oddity of demanding that Money Bags weigh in, when it's Bayside and whoever is behind it that bears the burden of proving that this use was actually infringing:

Bayside incorrectly claims that Twitter (and by implication, its user) bears the burden of demonstrating that the use in question was a lawful fair use. Opposition to Motion to Quash (Dkt. No. 9) at 15. The party seeking discovery normally bears the burden of showing its legal claims have merit. Highfields, 385 F. Supp. 2d at 975-76. In this pre-litigation stage, that burden should not shift to the anonymous speaker, for at least three reasons.

First, constitutional rights, such as the right to anonymity, trump statutory rights such as copyright. Silvers v. Sony Pictures Entm’t, Inc., 402 F.3d 881, 883-84 (9th Cir. 2005). Moreover, fair use has an additional constitutional dimension because it serves as a First Amendment “safety valve” that helps reconcile the right to speak freely and the right to restrict speech. William F. Patry & Shira Perlmutter, Fair Use Misconstrued: Profit, Presumptions, and Parody, 11 Cardozo Arts & Ent. L.J. 667, 668 (1993). Shifting the Highfields burden to the speaker would create a cruel irony: an anonymous speaker would be less able to take advantage of one First Amendment safeguard—the right to anonymity—solely because their speech relies on another—the right to fair use. Notably, the Ninth Circuit has stressed that fair use is not an affirmative defense that merely excuses unlawful conduct; rather, it is an affirmative right that is raised as a defense simply as a matter of procedural posture. Lenz v. Universal, 815 F.3d 1145, 1152 (9th Cir. 2016). Second, Bayside itself was required to assess whether the use in question was fair before it sent its DMCA takedown notices to Twitter; it cannot now complain if the Court asks it to explain that assessment before ordering unmasking. In re DMCA Subpoena, 441 F. Supp. 3d at 886 (citing Lenz., 815 F.3d at 1153: “a copyright holder must consider the existence of fair use before sending a takedown notification under § 512(c)”)

Third, placing the burden on the party seeking to unmask a Doe makes practical sense at this early stage, when many relevant facts lie with the rightsholder. Here, for example, Bayside presumably knows—though it has declined to address—the original purpose of the works. And as the copyright holder, it is best positioned to explain how the use at issue might affect a licensing market. While the copyright holder cannot see into the mind of the user, the user’s purpose is easy to surmise here, and the same is likely to be true in any 512(h) case involving expressive uses. With respect to the nature of the work, any party can adequately address that factor. Indeed, both Bayside and Twitter have done so.

The filing also notes that this is an obvious fair use situation, and the judge can recognize that:

While courts often reserve fair use determinations for summary judgment or trial, in appropriate circumstances it is possible to make the determination based on the use itself. See In re DMCA Section 512(h) Subpoena to YouTube (Google, Inc.), No. 7:18-MC-00268 (NSR), 2022 WL 160270 (S.D.N.Y. Jan. 18, 2022) (rejecting the argument that fair use cannot be determined during a motion to quash proceeding). In Burnett v. Twentieth Century Fox, for example, a federal district court dismissed a copyright claim—without leave to amend—at the pleading stage based on a finding of fair use. 491 F. Supp. 2d 962, 967, 975 (C.D. Cal. 2007); see also Leadsinger v. BMG Music Pub., 512 F.3d 522, 532–33 (9th. Cir. 2008) (affirming motion to dismiss, without leave to amend, fair use allegations where three factors “unequivocally militated” against fair use). See also, e.g., Sedgwick Claims Mgmt. Servs., Inc. v. Delsman, 2009 WL 2157573 at *4 (N.D. Cal. July 17, 2009), aff’d, 422 F. App’x 651 (9th Cir. 2011); Savage v. Council on Am.-Islamic Rels., Inc., 2008 WL 2951281 at *4 (N.D. Cal. July 25, 2008); City of Inglewood v. Teixeira, 2015 WL 5025839 at *12 (C.D. Cal. Aug. 20, 2015); Marano v. Metro. Museum of Art, 472 F. Supp. 3d 76, 82–83, 88 (S.D.N.Y. 2020), aff’d, 844 F. App’x 436 (2d Cir. 2021); Lombardo v. Dr. Seuss Enters., L.P., 279 F. Supp. 3d 497, 504–05 (S.D.N.Y. 2017), aff’d, 729 F. App’x 131 (2d Cir. 2018); Hughes v. Benjamin, 437 F. Supp. 3d 382, 389, 394 (S.D.N.Y. 2020); Denison v. Larkin, 64 F. Supp. 3d 1127, 1135 (N.D. Ill. 2014).

These rulings are possible because many fair uses are obvious. A court does not need to consult a user to determine that the use of an excerpt in a book review, the use of a thumbnail photograph in an academic article commenting on the photographer’s work, or the inclusion of an image in a protest sign are lawful uses. There is no need to seek a declaration from a journalist when they quote a series of social media posts while reporting on real-time events.

And the uses by Money Bags were pretty obviously fair use:

First, the tweets appear to be noncommercial, transformative, critical commentary—classic fair uses. The tweets present photographs of a woman, identified as “the new Mrs. Brian Sheth” as part of commentary on Mr. Sheth, the clear implication being that Mr. Sheth has used his wealth to “invest” in a new, young, wife. As the holder of rights in the photographs, Bayside could have explained the original purpose of the photographs; it has chosen not to do so. In any event, it seems unlikely that Bayside’s original purpose was to illustrate criticism and commentary regarding a billionaire investor. Hence, the user “used the [works] to express ‘something new, with a further purpose or different character, altering the first with new expression, meaning, or message.’” In re DMCA Subpoena to Reddit, Inc., 441 F. Supp. 3d at 883 (quoting Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569, 579 (1994)). While undoubtedly crass, the user’s purpose is transformative and, Bayside’s speculation notwithstanding, there is nothing to suggest it was commercial.

The filing also calls out the magistrate judge's unwillingness to consider Twitter's own arguments:

Of course, there was a party in court able and willing to offer evidence and argument on fair use: Twitter. The Magistrate’s refusal to credit Twitter’s own evidence, Order at 7-8, sends a dangerous message to online speakers: either show up and fully litigate their anonymity—risking their right to remain anonymous in the process—or face summary loss of their anonymity when they do not appear. Order at 7. That outcome inevitably “impose[s] a considerable price” on internet users’ ability to exercise their rights to speak anonymously. Highfields, 385 F. Supp. 2d at 980-81. And “when word gets out that the price tag of effective sardonic speech is this high, that speech will likely disappear.”

Hopefully the court reconsiders the original ruling...

Mike Masnick

Former Employees Say Mossad Members Dropped By NSO Offices To Run Off-The-Books Phone Hacks

2 years 2 months ago

Oh, NSO Group, is there anything you won't do? (And then clumsily deny later?). If I were the type to sigh about such things, I surely would. But that would indicate something between exasperation and surprise, which are emotions I don't actually feel when bringing you this latest revelation about the NSO's shady dealings.

The Mossad used NSO’s Pegasus spyware to hack cellphones unofficially under the agency’s previous director, Yossi Cohen, several NSO Group employees said.

The employees, who asked to remain anonymous because of their confidentiality agreements with the company, said that Mossad officials asked NSO on several occasions to hack certain phones for them. The employees didn’t know why these hacks were requested.

There's plenty that will shock no one about these allegations. First off, NSO Group has an extremely close relationship with the Israeli government. Top-level officials have paved the way for sales to countries like Saudi Arabia and the UAE, leveraging powerful spyware to obtain diplomatic concessions.

Second, NSO -- like other Israeli malware merchants -- recruits heavily from the Israeli government, approaching military members and analysts from intelligence agencies Shin Bet and the Mossad. Given this incestuous relationship, it's unsurprising visiting Mossad members would feel comfortable asking for a few off-the-books malware deployments.

It appears these alleged hacking attempts were requested to obscure the source of the hackings, eliminating any paper trail linking the Mossad to the information obtained as a result of these malware deployments. As the Haaretz article points out, the Mossad doesn't really need NSO's tools or expertise. It had the capability to compromise cellphones well before NSO brought tools like Pegasus to market.

A generous reading of these informal requests would be that the Mossad was having problems compromising a target and wanted to see if NSO had any recent exploits that could help. A more realistic reading is that these requests were meant to evade the Mossad's oversight.

Experts in the field of phone exploitation are still trying to verify these claims and ascertain whether or not NSO could actually do what was requested. Evidence of these allegations has yet to be discovered. But it's apparent NSO's hard rules about who could or couldn't be targeted were actually portable goal posts.

NSO has sold plenty of spyware to governments with the understanding it can't be used to target US numbers. But then it showed up in the United States with a version of Pegasus called "Phantom" that could be used to target US numbers. It pitched this to the FBI (with live demonstrations using dummy phones purchased by the agency) but left empty-handed when DOJ counsel couldn't find a way to use this malware without violating the Constitution or, far more likely, couldn't find a way to keep the particulars of the hacking tool from being discussed in open court.

NSO also claims malware cannot be deployed against Israeli numbers. This, too, has been shown to be false. So, there's really no reason to believe NSO when it claims everything about its malware products is so compartmentalized Mossad officials would not be able to waltz into the building and ask for unregulated malware deployments.

Indeed, the answer given by an NSO spokesperson is so ridiculous it may prompt a sudden burst of laughter from all but the most credulous readers.

When asked what prevents an executive from spying on, say, a competitor by using an in-house server, the NSO representative stressed that even if such a system existed, the legal risks posed by such a scenario would serve as a serious deterrent.

They added that the question is tantamount to asking what prevents workers in a munitions factory from stealing guns and using them illegally, or what stops a police officer from abusing their power.

On one hand, I can see this as NSO saying you have to trust your employees and that no policy is capable of eliminating all wrongdoing. On the other hand, it offers no meaningful denial of the alleged wrongdoing. The answer is at least as meaningless as the question. It basically says NSO can't really prevent malfeasance, which is definitely not a direct denial of the allegations made in this report.

NSO Group is in an unenviable position: it can't disprove these allegations without opening up its operations and its clients to scrutiny, and it can't do that without risking existing contracts or future sales. But as much as I'd like to express sympathy, the company has spent years making itself unsympathetic by selling to human rights violators and blowing off legitimate criticism of its business model. It made itself millions by selling to authoritarians and getting super cozy with Israel's government. Now it has to pay the piper. And it seriously looks like it will be as bankrupt as its morals by the time this is all said and done.

Tim Cushing

No, Creating An NFT Of The Video Of A Horrific Shooting Will Not Get It Removed From The Internet

2 years 2 months ago

Andy Parker has experienced something that no one should ever have to go through: having a child murdered. Even worse, his daughter, Alison, was murdered on live TV, while she was doing a live news broadcast, as an ex-colleague shot her and the news station's cameraman dead. It got a lot of news coverage, and you probably remember the story. Maybe you even watched the video (I avoided it on purpose, as I have no desire to see such a gruesome sight). Almost none of us can even fathom what that experience must be like, and I can completely understand how that has turned Parker into something of an activist. We wrote about him a year ago, when he appeared in a very weird and misleading 60 Minutes story attacking Section 230.

While Parker considers himself an "anti-big tech, anti-Section 230" advocate, we noted that his story actually shows the benefits of Section 230, rather than the problems with it. Parker is (completely understandably!) upset that the video of his daughter's murder is available online. And he wants it gone. As we detailed in our response to the 60 Minutes story, Parker had succeeded in convincing various platforms to quickly remove that video whenever it's uploaded. Something they can do, in part, because of Section 230's protections that allow them to moderate freely, and to proactively moderate content without fear of crippling lawsuits and liability.

The 60 Minutes episode was truly bizarre: it explains Parker's tragic situation, notes that YouTube went above and beyond to stop the video from being shared on its platform, and then cuts to Parker saying he "expected them to do the right thing" and calling Google "the personification of evil"... for... doing exactly what he asked?

Parker is now running for Congress as well, and has been spouting a bunch of bizarre things about the internet and content moderation on Twitter. I'd link to some of them, but he blocked me (a feature, again, that is aided by Section 230's existence). But now the Washington Post has a strange article about how Parker... created an NFT of the video as part of his campaign to remove it from the internet.

Now, Andy Parker has transformed the clip of the killings into an NFT, or non-fungible token, in a complex and potentially futile bid to claim ownership over the videos — a tactic to use copyright to force Big Tech’s hand.

So... none of this makes any sense. First of all, Parker doesn't own the copyright, as the article notes (though many paragraphs later, even though it seems like kind of a key point!).

Parker does not own the copyright to the footage of his daughter’s murder that aired on CBS affiliate WDBJ in 2015.

But it says he's doing this to claim "ownership" of the video, because what appear to be very, very bad lawyers have advised him that by creating an NFT he can "claim ownership" of the video, and then use the DMCA's notice-and-takedown provisions instead. Everything about this is wrong.

First, while using copyright to take down things you don't want is quite common, it's not (at all) what copyright is meant for. And, as much as Parker does not want the video to be available, there is a pretty strong argument that many uses of that video are covered by fair use.

But, again, he doesn't hold the copyright. So, creating an NFT of the video does not magically give him a copyright, nor does it give him any power under the DMCA to demand takedowns. That requires the actual copyright. Which Parker does not have. Even more ridiculously, the TV station that does hold the copyright has apparently offered to help Parker use the copyright to issue DMCA takedowns:

In a statement, Latek said that the company has “repeatedly offered to provide Mr. Parker with the additional copyright license” to call on social media companies to remove the WDBJ footage “if it is being used inappropriately.”

This includes the right to act as their agent with the HONR network, a nonprofit created by Pozner that helps people targeted by online harassment and hate. “By doing so, we enabled the HONR Network to flag the video for removal from platforms like YouTube and Facebook,” Latek said.

So what does the NFT do? Absolutely nothing. Indeed, the NFT is nothing more than basically a signed note, saying "this is a video." And part of the ethos of the NFT space is that people are frequently encouraged to "right click and save" the content, and to share it as well -- because the content and the NFT are separate.
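
To make that separation concrete, here is a minimal, purely illustrative sketch in Python (not any real NFT standard or marketplace API; the field names, wallet address, and URI are hypothetical) of what an NFT record typically amounts to: a token ID, an owner, and a pointer to metadata about the content. The video itself lives elsewhere, unaffected by who holds the token.

```python
from dataclasses import dataclass

# A deliberately simplified, hypothetical model of an NFT record -- not
# any real standard (e.g. ERC-721) or marketplace API. The point it
# illustrates: the token is a pointer to content, not the content itself,
# and holding it conveys no copyright.

@dataclass
class SimpleNFT:
    token_id: int        # identifier of the token on some ledger
    owner: str           # whoever currently "holds" the token
    metadata_uri: str    # points at a JSON blob describing the work
    content_hash: str    # fingerprint of the video file, not the file itself

# "Minting" an NFT of a video just creates a record like this...
nft = SimpleNFT(
    token_id=1,
    owner="0xSomeWalletAddress",             # hypothetical address
    metadata_uri="ipfs://example-metadata",  # hypothetical URI
    content_hash="sha256:abc123",            # hypothetical hash of the clip
)

# ...while the video file itself is untouched. Anyone with a copy still
# has a copy, and platforms hosting it see nothing new to take down.
print(f"Token {nft.token_id} owned by {nft.owner}: a pointer, not a copyright")
```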

Hell, there's an argument (though I'd argue a weak one -- though others disagree) that by creating an NFT of a work he has no copyright over, Parker has actually opened himself up to a copyright infringement claim. Indeed, the TV station is quoted in the article noting that, while it has provided licenses to Parker to help him get the video removed, "those usage licenses do not and never have allowed them to turn our content into NFTs."

I understand that Parker wants the video taken down -- even though there may be non-nefarious, legitimate reasons for those videos to remain available in some format. But creating an NFT doesn't give him any copyright interest, or any way to use the DMCA to remove the videos, and whoever told Parker otherwise should be disbarred. They're taking advantage of him and his grief, and giving him very, very bad legal advice.

Meanwhile, all the way at the end of the article, it is noted -- once again -- that the big social media platforms are extremely proactive in trying to remove the video of his daughter's murder:

“We remain committed to removing violent footage filmed by Alison Parker’s murderer, and we rigorously enforce our policies using a combination of machine learning technology and human review,” YouTube spokesperson Jack Malon said in a statement.

[...]

Facebook bans any videos that depict the shooting from any angle, with no exceptions, according to Jen Ridings, a spokesperson for parent company Meta.

“We’ve removed thousands of videos depicting this tragedy since 2015, and continue to proactively remove more,” Ridings said in a statement, adding that they “encourage people to continue reporting this content.”

The reporter then notes that he was still able to find the video on Facebook (though all the ones he found were quickly removed).

Which actually highlights the nature of the problem. It is impossible to find and block the video with perfect accuracy. Facebook and YouTube employ some of the most sophisticated tools out there for finding this stuff, but the sheer volume of content, combined with the tricks and modifications that uploaders try, means that they're never going to be perfect. So even if Parker got the copyright, which he doesn't, it still wouldn't help. Because these sites are already trying to remove the videos.
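
To illustrate why exact matching alone can't solve this (a toy sketch, not how YouTube or Facebook actually fingerprint video; the bytes and hashes are hypothetical stand-ins), consider a simple hash-based blocklist: any re-encode or trivial edit changes the hash, so the modified copy slips through.

```python
import hashlib

# Toy illustration only -- this is NOT how YouTube or Facebook actually
# match content; the bytes below are stand-ins for real video files.
# An exact-hash blocklist only catches byte-identical copies, so any
# re-encode, crop, or trivial edit produces a different fingerprint.

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original_upload = b"...video bytes..."        # stand-in for the known video
slightly_edited = b"...video bytes...\x00"    # same clip with one byte changed

blocklist = {fingerprint(original_upload)}

print(fingerprint(original_upload) in blocklist)  # True: exact copy is caught
print(fingerprint(slightly_edited) in blocklist)  # False: trivial edit slips through

# Which is why platforms layer on fuzzier perceptual matching and human
# review -- and why, at the scale of millions of uploads, some copies
# still surface before they're caught.
```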

Everything about this story is unfortunate. The original tragedy, of course, is heartbreakingly horrific. But Parker's misguided crusade isn't helping, and the whole NFT idea is so backwards that it might lead to him potentially facing a copyright claim, rather than using one. I feel sorry for Parker, not only for the tragic situation with his daughter, but because it appears that some very cynical lawyers are taking advantage of Parker's grief to try to drive some sort of policy outcome out of it. He deserves better than to be preyed upon like that.

Mike Masnick

San Francisco Cops Are Running Rape Victims' DNA Through Criminal Databases Because What Even The Fuck

2 years 2 months ago

There are things people expect the government to do. And then there are the things the government actually does. The government assumes many people are comfortable with things it does that are technically legal, but that are certainly not how the average citizen expects the system to behave.

Some of this can be seen in the Third Party Doctrine, which says people who knowingly share information with third parties also willingly share it with the government. But very few citizens are actually cool with this extended sharing, no matter what the Supreme Court-created doctrine says. This tension between people's actual expectations and the government's portrayal of the people's expectations is finally being addressed by the nation's top court. Recent rulings have shifted the balance back towards actual reasonable expectations of privacy, but there's still a whole lot of work to be done.

So, when rape victims report sexual assaults to law enforcement, they certainly don't expect their DNA samples will be run through crime databases to see if these victims of crimes have committed any crimes. But that's exactly what the San Francisco PD has been doing, according to this report from Megan Cassidy of the San Francisco Chronicle.

The San Francisco police crime lab has been entering sexual assault victims’ DNA profiles in a database used to identify suspects in crimes, District Attorney Chesa Boudin said Monday, an allegation that raises legal and ethical questions regarding the privacy rights of victims.

Boudin said his office was made aware of the purported practice last week, after a woman’s DNA collected years ago as part of a rape exam was used to link her to a recent property crime.

Shocking to the conscience, as the courts say? You'd better believe it. No one reporting a crime expects to be investigated for a different crime. And there are already enough logistical and psychological barriers standing between rape victims and justice. Knowing their rape kit might be processed in hopes of finding the accuser guilty of other crimes isn't going to encourage more victims to step forward.

On top of that, it might be illegal. California has pretty robust protections for crime victims. The state has a "Victims' Bill of Rights" that guarantees several things to those reporting crimes. Nothing explicitly forbids police from running victim DNA through crime lab databases, but this clause directly addresses the outcome of successful searches, which would result in publicly available records as police move forward with arresting and prosecuting the crime victim for crimes they allegedly committed.

To prevent the disclosure of confidential information or records to the defendant, the defendant’s attorney, or any other person acting on behalf of the defendant, which could be used to locate or harass the victim or the victim’s family or which disclose confidential communications made in the course of medical or counseling treatment, or which are otherwise privileged or confidential by law.

Prosecuting a crime creates plenty of paperwork and arrest records are public records. A defendant could easily access records about their accuser -- records that wouldn't have existed without the assistance of this completely extraneous search.

Fortunately, this revelation has prompted an internal investigation by the SFPD. Unfortunately, an internal investigation is also the easiest way to bury incriminating documents, stiff-arm outsiders seeking information, stonewall requests from city officials for more information, and, most importantly, find some way to clear anyone involved of wrongdoing.

SFPD police chief Bill Scott at least has the presence of mind to comprehend the problem this practice poses.

Scott said, “We must never create disincentives for crime victims to cooperate with police, and if it’s true that DNA collected from a rape or sexual assault victim has been used by SFPD to identify and apprehend that person as a suspect in another crime, I’m committed to ending the practice.”

Good. And: whatever. Don't be "committed" to "ending the practice." Just fucking do it. You're the police chief. There's no reason you can't issue a mandate immediately forbidding running DNA searches on rape victims. I'm no expert on police protocol, but it seems like a memo beginning with "EFFECTIVE IMMEDIATELY" would end the practice, um, immediately and inform future violators of the potential consequences of their action. A wishy-washy "commitment" that's accompanied by no action tells the rank-and-file they're free to do whatever until the internal investigation is completed and its results handed over to city officials. Waiting until the facts are in (and thoroughly massaged) is a blank check for months or years of abuse.

And this sort of thing may not be an anomaly localized entirely within the SFPD. Other law enforcement agencies may be doing the same thing. The only difference is the SFPD was the first to successfully hit the middle of the Venn diagram containing rape victims and alleged criminals. Any other agency doing the same shady searching should probably knock it the fuck off. While it may seem like good police work to run searches on any DNA samples willingly handed to them, the optics -- if nothing else -- should be all the deterrent they need, especially when it comes to victims of sexual assault who are already treated with something approaching disdain by far too many law enforcement officers.

Tim Cushing

Daily Deal: The Complete 2022 Java Coder Bundle

2 years 2 months ago

The Complete 2022 Java Coder Bundle has 9 courses to help you kick-start your Java learning, providing you with the key concepts necessary to write code. You'll learn about Java, Oracle, Apache Maven, and more. From applying the core concepts of object-oriented programming to writing common algorithms, you'll foster real, employable skills as you make your way through this training. It's on sale for $40.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Daily Deal

As Expected, Trump's Social Network Is Rapidly Banning Users It Doesn't Like, Without Telling Them Why

2 years 2 months ago

Earlier this week we took a look at Donald Trump and Devin Nunes' Truth Social's terms of service, noting that they -- despite claiming that Section 230 should be "repealed" -- had explicitly copied Section 230 into their terms of service. In the comments, one of our more reliably silly commenters, who inevitably insists that no website should ever moderate, and that "conservatives" are regularly removed for their political views on the major social networks (while refusing to provide any evidence to support his claims, because he cannot), insisted that Truth Social wouldn't ban people for political speech, only for "obscenity."

So, about that. As Mashable has detailed, multiple people are describing how they've been banned from Truth Social within just the first few days -- and not for obscenity. The funniest one is that someone -- not the person who runs the @DevinCow account on Twitter -- tried to sign up for a @DevinCow account on Truth Social. As you probably know, Devin Nunes, as a congressman, sued the satirical cow account for being mean to him (the case is still, technically, ongoing). You may recall that the headline of my article about Devin Nunes quitting Congress to run Truth Social announced that he was leaving Congress to spend more time banning satirical cows from Truth Social.

And apparently that was accurate. Matt Ortega first tried to register the same @DevinCow on Truth Social, only to be told that the username was not even allowed (which suggests that Nunes or someone else there had already pre-banned the Cow). Ortega then tried other varieties of the name, getting through with @DevinNunesCow... briefly. Then it, too, was banned:

This is censorship. pic.twitter.com/Ih6odqlsJh

— Matt Ortega (@MattOrtega) February 22, 2022

Note that the ban email does not identify what rules were broken by the account (another point that Trumpists often point to in complaining about other websites' content moderation practices: that they don't provide a detailed accounting).

So, it certainly appears that it's not just "obscenity" that Nunes and Trump are banning. They seem to be banning accounts that might, possibly, make fun of them and their microscopically thin skins.

The Mashable article also notes that Truth Social has also banned a right wing anti-vaxxer, who you might expect to be more welcome on the site, but no such luck:

Radical anti-vax right-wing broadcaster Stew Peters complains that he's "being censored on Truth Social" simply for demanding that those responsible for the COVID-19 vaccine "be put on trial and executed." pic.twitter.com/Uf9WXA793A

— Right Wing Watch (@RightWingWatch) February 22, 2022

And here's the thing: this is normal and to be expected, and I'm glad that Truth Social is doing the standard forms of content moderation that every website needs to do to be able to operate a functional service. It would just be nice if Nunes/Trump and their whiny sycophants stopped pretending that this website is somehow more about "free speech" than other social media sites. It's not. Indeed, so far, they seem more willing to quickly ban people simply because they don't like them, than for any more principled reason or policy.

Mike Masnick

Comcast Continues To Bleed Olympics Viewers After Years Of Bumbling

2 years 2 months ago

NBC (now Comcast NBC Universal) has enjoyed the US rights to broadcast the Olympics since 1998. In 2011, the company paid $4.4 billion for exclusive US broadcast rights to air the Olympics through 2020. In 2014, Comcast NBC Universal shelled out another $7.75 billion for the rights to broadcast the summer and winter Olympics in the US... until the year 2032.

Despite years of experience, Comcast/NBC still struggles to provide users with what they actually want. For years the cable, broadband, and broadcast giant has been criticized for refusing to air events live, spoiling some events, implementing annoying cable paywall restrictions, implementing heavy-handed and generally terrible advertising, often sensationalizing coverage, avoiding controversial subjects during broadcasts, and for streaming efforts that range from clumsy to scattershot.

Not too surprisingly, years of this continue to drag down viewer numbers, which are worse than ever:

"Through Tuesday, an average of 12.2 million people watched the Olympics in prime-time on NBC, cable or the Peacock streaming service, down 42 percent from the 2018 Winter Olympics in South Korea. The average for NBC alone was 10 million, a 47 percent drop, the Nielsen company said."

And this was with Comcast/NBC's attempt to goose ratings by jumping right to Olympics coverage when the Super Bowl postgame celebrations had barely started. This year's ratings were also impacted by doping scandals, COVID, an Olympics location that barely had any snow, and disgust at the host country's human rights abuses:

"One woman on Twitter proclaimed the Olympics were “over for me. My lasting impression will be fake snow against a backdrop of 87 nuclear reactors in a country with a despicable human rights record during a pandemic. And kids who can look forward to years of therapy.”

While the Olympic veneer might not be what it used to be, you still have to think Comcast could boost viewership by exploiting the internet to broaden and improve coverage, provide more real-time live coverage of all events, and bundle it all in a more coherent overall presentation. After all, they've only had two decades to perfect the formula.

Karl Bode

Apple Finally Defeats Dumb Diverse Emoji Lawsuit One Year Later

2 years 2 months ago

Roughly a year ago, we discussed a wildly silly lawsuit brought against Apple by a company called Cub Club and an individual, Katrina Parrott. At issue were "diverse emojis", which by now are so ubiquitous as to be commonplace. Parrott had created some emojis featuring more diverse and expansive color/skin tones. And, hey, that's pretty cool. The problem is that, after she had a meeting with Apple about her business, Apple decided to simply incorporate diverse skin tones into its existing emojis. The traditional yellow thumbs up hand suddenly came with different coloration options. Cub Club and Parrott sued, claiming both copyright and trademark infringements.

We said at the time we covered Apple's motion to dismiss that there was very, very little chance of this lawsuit going anywhere. The copyright portion was completely silly, given that Apple wasn't accused of any direct copying, but merely of copying the idea of diverse emojis. Since ideas aren't afforded copyright protection, well, that didn't seem like much of a winner. The trade dress claims made even less sense, since they were levied over the same content: Apple's diverse emojis. The argument from Parrott was that Apple having diverse emojis would confuse the public into thinking it had contracted with Cub Club. But that isn't how the law works. The thing you're suing over can't be a functional part of the actual product. In this case, that's literally all it was.

And so it is not particularly surprising that I'm able to update you all that the court has dismissed the case a year later.

Apple Inc convinced a California federal judge on Wednesday to throw out a lawsuit accusing the tech giant of ripping off another company's multiracial emoji and violating its intellectual property rights.

Cub Club Investment LLC didn't show that Apple copied anything that was eligible for copyright protection, U.S. District Judge Vince Chhabria said.

Chhabria gave Cub Club a chance to amend its lawsuit but said he was "skeptical" it could succeed based on several differences between its emoji design and Apple's.

The analysis you'll see in the order embedded below basically follows our previous analysis. On the copyright claim, the judge points out that the idea of diverse emojis cannot be copyrighted and, since the accusation of similarity between the emojis themselves is made in an area where very few differences could exist, this doesn't amount to copyright infringement.

Chhabria said in a Wednesday order that even if the complaint was true, Apple at most copied Cub Club's unprotectable "idea" of diverse emoji.

"There aren't many ways that someone could implement this idea," Chhabria said. "After all, there are only so many ways to draw a thumbs up."

Exactly. As to the trade dress portion of this, well, there again the court found that the trade dress accusation concerned non-protectable elements.

To state a claim for trade dress infringement, a plaintiff must allege that “the trade dress is nonfunctional, the trade dress has acquired secondary meaning, and there is substantial likelihood of confusion between the plaintiff’s and defendant’s products.” Art Attacks Ink, LLC v. MGA Entertainment Inc., 581 F.3d 1138, 1145 (9th Cir. 2009). The trade dress alleged in the complaint is functional. The asserted trade dress consists of “the overall look and feel” of Cub Club’s “products,” including “the insertion of an emoji into messages . . . on mobile devices by selecting from a palette of diverse, five skin tone emoji.” This is functional in the utilitarian sense...

Again, right on point.

At the end of the day, while it's true that it's easy to point at any civil lawsuit and call it a money grab, it's hard to see how this one isn't. There's simply nothing in any of this that's particularly unique or novel, even though I grant that it's a good thing there are more representation options in emojis.

Timothy Geigner

Clearview Pitch Deck Says It's Aiming For A 100 Billion Image Database, Restarting Sales To The Private Sector

2 years 2 months ago

Clearview AI -- the facial recognition tech company so sketchy other facial recognition tech companies don't want to be associated with it -- is about to get a whole lot sketchier. Its database, which supposedly contains 10 billion images scraped from the internet, continues to expand. And, despite being sued multiple times in the US and declared actually illegal abroad, the company has expansion plans that go far beyond the government agencies it once promised to limit its sales to.

A Clearview pitch deck obtained by the Washington Post contains information about the company's future plans, all of which are extremely concerning. First, there's the suggestion nothing is slowing Clearview's automated collection of facial images from the web.

The facial recognition company Clearview AI is telling investors it is on track to have 100 billion facial photos in its database within a year, enough to ensure “almost everyone in the world will be identifiable,” according to a financial presentation from December obtained by The Washington Post.

As the Washington Post's Drew Harwell points out, 100 billion images is 14 images for every person on earth. That's far more than any competitor can promise. (And for good reason. Clearview's web scraping has been declared illegal in other countries. It may also be illegal in a handful of US states. On top of that, it's a terms of service violation pretty much everywhere, which means its access to images may eventually be limited by platforms who identify and block Clearview's bots.)

As if it wasn't enough to brag about a completely involuntary, intermittently illegal amassing of facial images, Clearview wants to expand aggressively into the private sector -- something it promised not to do after being hit with multiple lawsuits and government investigations.

The company wants to expand beyond scanning faces for the police, saying in the presentation that it could monitor “gig economy” workers and is researching a number of new technologies that could identify someone based on how they walk, detect their location from a photo or scan their fingerprints from afar.

Clearview is looking for $50 million in funding to supercharge its collection process and expand its offerings beyond facial recognition. That one of the things it suggests is more surveillance of freelancers, work-from-home employees, and already oft-abused "gig workers" is extremely troubling, since it would do little more than give abusive employers one more way to mistreat people they don't consider to be "real" employees.

Clearview also says its surveillance system compares favorably to ones run by the Chinese government… and not the right kind of "favorably."

[Clearview says] that its product is even more comprehensive than systems in use in China, because its “facial database” is connected to “public source metadata” and “social linkage” information.

Being more intrusive and evil than the Chinese government should not be a selling point. And yet, here we are, watching the company wooing investors with a "worse than China" sales pitch. Once again, Clearview has made it clear it has no conscience and no shame, further distancing it from competitors in the highly-controversial field who are unwilling to sink to its level of corporate depravity.

Clearview may be able to talk investors into parting with $50 million, but -- despite its grandiose, super-villainesque plans for the future -- it may not be able to show return on that investment. A sizable part of that may be spent just trying to keep Clearview from sinking under the weight of its voluminous legal bills.

Clearview is battling a wave of legal action in state and federal courts, including lawsuits in California, Illinois, New York, Vermont and Virginia. New Jersey’s attorney general has ordered police not to use it. In Sweden, authorities fined a local police agency for using it last year. The company is also facing a class-action suit in a Canadian federal court, government investigations in Canada, Sweden and the United Kingdom and complaints from privacy groups alleging data protection violations in France, Greece, Italy and the U.K.

As for its plan to violate its promise to not sell to commercial entities, CEO Hoan Ton-That offers two explanations for this reversal, one of which says it's not really a reversal.

Clearview, he told The Post, does not intend to “launch a consumer-grade version” of the facial-search engine now used by police, adding that company officials “have not decided” whether to sell the service to commercial buyers.

Considering the pitch being made, it's pretty clear company officials will decide to start selling to commercial buyers. That's exactly what's being pitched by Clearview -- something investors will expect to happen to ensure their investment pays off.

Here's the other… well, I don't know what to call this exactly. An admission Clearview will do whatever it can to make millions? That "principles" is definitely the wrong word to use?

In his statement to The Post, Ton-That said: “Our principles reflect the current uses of our technology. If those uses change, the principles will be updated, as needed.”

Good to know. Ton-That will adjust his company's morality parameters as needed. Anything Clearview has curtailed over the past two years has been the result of incessant negative press, pressure from legislators, and multiple adverse legal actions. Clearview has done none of this willingly. So, it's not surprising in the least it would renege on earlier promises as soon as it became fiscally possible to do so.

Tim Cushing

Peloton Outage Prevents Customers From Using $2,500 Exercise Bikes

2 years 2 months ago

Peloton hasn't been having a great run lately. While business boomed during the pandemic, things have taken a sour turn of late on a bizarre host of fronts. In just the last month or two the company has seen an historic drop in company valuation, fired 20 percent of its workforce, shaken up its executive management team, been forced to pause treadmill and bike production due to plummeting demand, been the subject of several TV shows featuring people having heart attacks, and now has been caught up in a new scandal for trying to cover up a rust problem to avoid a recall.

Some of the issues have been self-inflicted, while others are just the ebb and flow of the pandemic. Most users still generally love the product, and a lot of these issues are likely to fade away over time. But adding insult to injury, connectivity issues this week prevented Peloton bike and treadmill owners from being able to use their $2000-$5000 luxury exercise equipment for several hours Tuesday morning. The official Peloton Twitter account tried to downplay the scope of the issues:

We are currently investigating an issue with Peloton services. This may impact your ability to take classes or access pages on the web.

We apologize for any impact this may have on your workout and appreciate your patience. Please check https://t.co/Dxcht2tQB0 for updates.

— Peloton (@onepeloton) February 22, 2022

For much of Tuesday morning the pricey equipment simply wouldn't work. I have a Peloton Bike+, and while the pedals would physically spin, I couldn't change the resistance or log into my account; I was just stuck staring at a loading wheel in perpetuity. Some app users say they had better luck, but many Bike, Bike+, and Peloton Tread owners not only couldn't ride in live classes, they couldn't participate in recorded classes either, because there's no way to download a class to local storage (despite the devices being glorified Android tablets).

The outage (which occurred at the same time as a major Slack outage) was ultimately resolved after several hours, but not before owners got another notable reminder that dumb tech can often be the smarter option. Your kettlebells will never see a bungled firmware update or struggle to connect to the cloud.

Karl Bode

The GOP Knows That The Dems' Antitrust Efforts Have A Content Moderation Trojan Horse; Why Don't The Dems?

2 years 2 months ago

Last summer, I believe we were among the first to highlight that the various antitrust bills proposed by mainly Democratic elected officials in DC included an incredibly dangerous trojan horse that would aid Republicans in their "playing the victim" desire to force websites to host their disinformation and propaganda. The key issue is that many of the bills included a bar on large companies self-preferencing their own services over those of competitors. The supporters of these bills claimed it was to prevent, say, an Apple from blocking a competing mapping service while promoting Apple Maps, or Google from blocking a competing shopping service while pushing Google's local search results.

But the language was so broad, and so poorly thought out, that it would create a massive headache for content moderation more broadly -- because the language could just as easily be used to say that, for example, Amazon couldn't kick Parler off its service, or Google couldn't refuse to allow Gab's app in its app store. You would have thought that after raising this issue, the Democratic sponsors of these bills would fix the language. They have not. Bizarrely, they've continued to issue more bills in both the House and the Senate with similarly troubling language. Recently, TechFreedom called out this problematic language in two antitrust bills in the Senate that seem to have quite a lot of traction.

Whatever you think of the underlying rationale for these bills, it seems weird that these bills, introduced by Democrats, would satisfy the Republicans' desire to force online propaganda mills onto their platforms.

Every “deplatformed” plaintiff will, of course, frame its claims in broad terms, claiming that the unfair trade practice at issue isn’t the decision to ban them specifically, but rather a more general problem — a lack of clarity in how content is moderated, a systemic bias against conservatives, or some other allegation of inconsistent or arbitrary enforcement — and that these systemic flaws harm competition on the platform overall. This kind of argument would have broad application: it could be used against platforms that sell t-shirts and books, like Amazon, or against app platforms, like the Google, Apple and Amazon app stores, or against website hosts, like Amazon Web Services.

Indeed, as we've covered in the past, Gab did sue Google for being kicked out of the app store, and Parler did sue Amazon for being kicked off that company's cloud platform. These kinds of lawsuits would become standard practice -- and even if the big web services could eventually get such frivolous lawsuits dismissed, it would still be a tremendous waste of time and money, while letting grifters play the victim.

Incredibly, Republicans like Ted Cruz have made it clear this is why they support such bills. In fact, Cruz introduced an amendment to double down on this language and make sure that the bill would prohibit "discriminating on the basis of a political belief." Of course, Cruz knows full well this doesn't actually happen anywhere. The only platform that has ever discriminated based on a political belief is... Parler, whose then-CEO once bragged to a reporter about how he was banning "leftist trolls" from the platform.

Even more to the point, during the hearings about the bill and his amendment, Cruz flat out said that he was hoping to "unleash the trial lawyers" to sue Google, Facebook, Amazon, Apple and the like for moderating those who violate their policies. While it may sound odd that Cruz -- who as a politician has screamed about how evil trial lawyers are -- would suddenly be in favor of trial lawyers, the truth is that Cruz has no underlying principles on this or any other subject. He's long been called "the ultimate tort reform hypocrite" who supports trial lawyers when it suits him, and then rails against them when that's politically convenient.

So no one should be surprised by Cruz's hypocrisy.

What they should be surprised by is the unwillingness of Democrats to fix their bills. A group of organizations (including our Copia Institute) signed onto another letter by TechFreedom that laid out some simple, common-sense changes that could be made to one of the bills -- the Open App Markets Act -- to fix this potential concern. And, yet, supporters of the bill continue to either ignore this or dismiss it -- even as Ted Cruz and his friends are eagerly rubbing their hands with glee.

This has been an ongoing problem with tech policy for a while now -- where politicians so narrowly focus on one issue that they don't realize how their "solutions" mess up some other policy goal. We get "privacy laws" that kill off competition. And now we have "competition" laws that make fighting disinformation harder.

It's almost as if these politicians don't want to solve actual issues, and just want to claim they did.

Mike Masnick

Hertz Ordered To Tell Court How Many Thousands Of Renters It Falsely Accuses Of Theft Every Year

2 years 2 months ago

It all started with Hertz being less than helpful when a man was falsely accused of murder. Michigan resident Herbert Alford was arrested and convicted for a murder he didn't commit. He maintained his innocence, claiming he was at the airport in Lansing, Michigan during the time the murder occurred. And he could have proven it, too, if he had just been able to produce the receipt showing he had been renting a car at Hertz twenty minutes away from the crime scene.

It wasn't until Alford had spent five years in prison that Hertz got around to producing the receipt. Three of those years can be laid directly at Hertz's feet. The receipt was requested in 2015. Hertz handed it over in 2018. Alford sued.

That's not the only lawsuit Hertz is facing. It apparently also has a bad habit of accusing paying customers of theft, something that has resulted in drivers being accosted by armed officers and/or arrested and charged.

Nine months later, another lawsuit rolled in. A proposed class action suit -- covering more than 100 Hertz customers -- claimed the company acts carelessly and engages in supremely poor recordkeeping. The lawsuit, (then) representing 165 customers, contains details of several customers who have been pulled over, arrested, and/or jailed because Hertz's rental tracking system is buggier than its competitors'. Hertz takes pains to point out these incidents represent only a very small percentage of its renters. But that's essentially meaningless when this small error rate doesn't appear to occur at other car rental agencies.

This lawsuit is forcing Hertz to disclose exactly what this error rate is and how many renters it affects. It's a much larger number than the 165 customers the lawsuit started with last November.

In a ruling Wednesday, a federal judge in Delaware sided with the request from attorneys for 230 customers who say they were wrongly arrested.

The total still depends on whom you ask. Hertz said it reports to police 0.014% of its 25 million annual rental transactions - or 3,500 customers. Attorneys for the renters said they believe the number is closer to 8,000.

It may look like only a rounding error to Hertz, but each of these 3,500-8,000 incorrect reports represents a possible loss of liberty, if not a possible loss of life. Law enforcement officers treat auto thieves as dangerous criminals. Being falsely accused by a rental company's software doesn't alter the threat matrix until long after the guns have been drawn.
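For a rough sense of scale, both figures are easy to sanity-check. The quick sketch below simply runs the arithmetic on the numbers quoted above; it uses no data beyond what's in the article:

```python
# Back-of-the-envelope check of the figures quoted above.
annual_rentals = 25_000_000          # Hertz's reported annual rental transactions

hertz_rate = 0.014 / 100             # the 0.014% Hertz cites
print(annual_rentals * hertz_rate)   # -> 3500.0 theft reports per year

plaintiffs_estimate = 8_000          # the figure the renters' attorneys cite
print(100 * plaintiffs_estimate / annual_rentals)  # -> 0.032, i.e. ~0.032% of rentals
```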

Sometimes the problem has a human component. If a rental agent does not see a vehicle they thought was returned, they may file a report. And when humans aren't involved, it's Hertz's computer system doing the dirty work.

Other times, [the attorney representing Hertz customers, France Malofiy] said, the confusion is caused by a customer swapping cars during their rental period or extending the time frame. If the credit or debit card charge fails to process correctly, he said Hertz's system generates a theft report.

Malofiy said the company does not update its police reports if a payment ultimately processes - leaving customers to flounder in the criminal justice system. In 2020, a spokesperson for Hertz told the Philadelphia Inquirer that a stolen-vehicle report "was valid when it was made" and that it was "up to law enforcement to decide what to do with the case."
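To make that failure mode concrete, here's a purely hypothetical sketch -- invented function and field names, in no way Hertz's actual code -- of the kind of one-way flagging logic Malofiy describes: a failed charge triggers a report, and nothing ever walks the report back once the payment clears.

```python
from dataclasses import dataclass

@dataclass
class Rental:
    renter: str
    vehicle_id: str
    charge_cleared: bool
    reported_stolen: bool = False

def file_police_report(rental: Rental) -> None:
    # Hypothetical stand-in for whatever actually notifies law enforcement.
    print(f"Stolen-vehicle report filed for {rental.vehicle_id} ({rental.renter})")

def nightly_audit(rentals: list[Rental]) -> None:
    for r in rentals:
        # A charge that fails to process is treated as a theft signal...
        if not r.charge_cleared and not r.reported_stolen:
            file_police_report(r)
            r.reported_stolen = True
        # ...but there is no branch that withdraws the report if the payment
        # later processes -- the gap the plaintiffs' attorneys describe.
```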

And there's another data point to add to Hertz's perhaps inadvertent but very fucking real infliction of misery on thousands of renters every year. A man who has spent over $15,000 with Hertz since 2020 is currently sitting in jail thanks to yet another bogus Hertz theft alert.

All of this is at odds with Hertz's repeated claim it only issues stolen vehicle notices to law enforcement following "extensive investigations." If it did actually engage in thorough investigations of every generated theft report, it would not be currently facing a lawsuit from hundreds of drivers who've been arrested and jailed over bogus theft allegations. And the problem it claims isn't really a problem wouldn't still be getting people locked up for crimes they didn't commit.

Tim Cushing

Even As Trump Relies On Section 230 For Truth Social, He's Claiming In Lawsuits That It's Unconstitutional

2 years 2 months ago

With the launch of Donald Trump's ridiculous Truth Social offering, we've already noted that he's so heavily relying on Section 230's protections to moderate that he's written Section 230 directly into his terms of service. However, at the same time, Trump is still fighting his monstrously stupid lawsuits against Twitter, Facebook, and YouTube for banning him in the wake of January 6th.

Not surprisingly (after getting the cases transferred to California), the internet companies are pointing the courts to Section 230 as to why the cases should be dismissed. And, also not surprisingly (but somewhat hilariously), Trump is making galaxy brain stupid claims in response. Take the filing in the case against YouTube, which somehow has eight different lawyers signed onto a brief so bad that all eight of them should be laughed out of court.

The argument as to why Section 230 doesn't apply is broken down into three sections, each dumber than the last. First up, it claims that "Section 230 Does Not Immunize Unfair Discrimination," arguing (falsely) that YouTube is a "common carrier" (it is not, has never been, and does not resemble one in any manner). The argument isn't even particularly well developed. It's three ridiculous paragraphs, starting with Packingham (which is not relevant to a private company choosing to moderate), then claiming (without any support, since there is none) that YouTube is a common carrier, and then saying that YouTube's terms of service mean that it "must carry content, irrespective of any desire or external compulsion to discriminate against Plaintiff."

Literally all of that is wrong. It took EIGHT lawyers to be this wrong.

The second section claims -- incorrectly -- that Section 230 "does not apply to political speech." They do this by totally misrepresenting the "findings" part of Section 230 and then ignoring basically all the case law that says, of course, Section 230 applies to political speech. As for the findings, while they do say that Congress wants "interactive computer services" to create "a true diversity of political discourse," as the authors of the bill themselves have explained, this has always been about allowing every individual website to moderate as it sees fit. It was never designed so that every website must carry all speech; rather, by allowing websites to curate the community and content they want, there will be many different places for different kinds of speech.

Again. Eight lawyers to be totally and completely wrong.

Finally, they argue that "Section 230(c) Violates the First Amendment as Applied to This Matter." It does not. Indeed, should Trump win this lawsuit (he won't), that would itself violate the 1st Amendment by compelling speech on the private property of someone who does not wish to be associated with it. And this section goes off the rails completely:

The U.S. contends that Section 230(c) does not implicate the First Amendment because it “does not regulate Plaintiff’s speech,” but only “establishes a content- and viewpoint-neutral rule prohibiting liability” for certain companies that ban others’ speech. (U.S. Mot. at 2). Defendants’ egregious conduct in restraining Plaintiff’s political speech belies its claims of a neutral standard.

I mean, the mental gymnastics necessary to make this claim are pretty impressive, so I'll give them that. But this is mixing apples and orangutans in making an argument that, even if it did make sense, still doesn't make any sense. Section 230 does not regulate speech. That's why it's content neutral. The fact that the defendant, YouTube, does moderate its content -- egregiously or not -- is totally unrelated to the question of whether or not Section 230 is content neutral. Indeed, YouTube's ability to kick Trump off its platform is itself protected by the 1st Amendment.

The lawyers seem to be shifting back and forth between the government ("The U.S.") and the private entity (YouTube) here, making an argument that might make sense if it were only talking about one entity, but doesn't make any sense at all when you switch back and forth between the two.

Honestly, this filing should become a case study in law schools about how not to law.

Mike Masnick

Medical, Home Alarm Industries Warn Of Major Outages As AT&T Shuts Down 3G Network

2 years 2 months ago

It was only 2009 that AT&T was heralding its cutting-edge 3G network as it launched the iPhone (which subsequently crashed that cutting-edge 3G network). Fast forward a little more than a decade and AT&T is preparing to shut that 3G network down, largely so it can repurpose the spectrum it uses for fifth-generation (5G) wireless deployments. While the number of actual wireless phone users still on this network is minimal, the network is still heavily used as a connectivity option for older medical devices and home alarm systems.

As such, the home security industry is urging regulators to delay the shutdown to give it more time to migrate home security customers onto other networks:

"The Alarm Industry Communications Committee said in a filing posted Friday by the FCC that more time is needed to work out details. A delay of at least 60 to 70 days could help some customers who have relied on AT&T’s 3G network, although arrangements remain to be negotiated, the group said.

“It would be tragic and illogical for the tens of millions of citizens being protected by 3G alarm radios and other devices to be put at risk of death or serious injury, when the commission was able to broker a possible solution but inadequate time exists to implement that solution,” the group said.

If you recall, part of the T-Mobile Sprint merger conditions involved trying to make a viable fourth wireless carrier out of Dish Network (that's generally not going all that well). T-Mobile's ongoing feud with Dish has resulted in T-Mobile keeping its 3G network alive a bit longer than AT&T. So the alarm industry is asking both the FCC and AT&T for a little more time, as well as some help migrating existing home security gear temporarily on to T-Mobile's 3G network so things don't fall apart when AT&T shuts down its 3G network (currently scheduled for February 22).

Nothing more comforting than a hidden, systemic failure of the communications elements of multiple alarm systems that does not truly reveal itself until the alarms fail in a moment of cascading crisis https://t.co/2pxuvmdhLR

— Michael Weinberg (@mweinberg2D) February 18, 2022

AT&T gave companies whose technology still use 3G three full years to migrate to alternative solutions. And it's not entirely clear how many companies, services, and industries will be impacted by the shut down. But there's an awful lot of different companies and technologies that still use 3G for internet connectivity, including a lot of fairly important medical alert systems. Nobody seems to actually know how prepared we truly are, so experts suggest the problems could range anywhere from mildly annoying to significantly disruptive:

So how bad could #Alarmageddon be? Hard to say. Lots of personal medical alerts ("Help, I've fallen and can't get up!"), DUI locks on cars, ankle bracelets for home confinement, school bus GPS system. So potentially pretty severe. (see Docket No. 21-304) /20

— (((haroldfeld))) (@haroldfeld) February 18, 2022

Again, this is all something that could have been avoided if we placed a little less priority on freaking out about various superficial issues and a little more on the nuanced, boring policy issues that actually matter.

Karl Bode