Thu, Jul 6th 2023 09:33am
The Good, The Bad, And The Incredibly Ugly In The Court Ruling Regarding Government Contacts With Social Media
from the that's-not-how-most-of-this-works dept
One has to think that Donald Trump judicial appointee Judge Terry Doughty deliberately waited until July 4th (when the courts are closed) to release his ruling on the requested preliminary injunction preventing the federal government from communicating with social media companies. The result of the ruling is not a huge surprise, given Doughty’s now-recognized pattern of bending over backwards in support of Trumpist culture war nonsense in multiple cases during his short time on the bench. But, even so, there are some really odd things about the ruling.
As you’ll recall, Missouri and Louisiana sued the Biden administration, arguing that it had violated the 1st Amendment by having Twitter block the NY Post story about the Hunter Biden laptop. But that happened before Joe Biden took office, and it’s also completely false. While it remains a key Trumpist talking point that this happened, every bit of evidence from the Twitter Files has revealed that the government had zero communications with Twitter regarding the NY Post’s story.
Still, Doughty does what Doughty does, and in March rejected the administration’s motion to dismiss with a bonkers, conspiracy-theory laden ruling. Given that, it wasn’t surprising that he would then grant the motion for a preliminary injunction. But, even so, there are some surprising bits in there that deserve attention.
There are elements of the ruling that are good and could be useful, some that are bad, and some that are just depressingly ugly. Let’s break them down, bit by bit.
The Good
There are legitimate concerns about government intrusions into private companies and their 1st Amendment protected decisions. I still think that the best modern ruling on this is Backpage v. Dart, in which then-appeals court Judge Richard Posner smacked Cook County Sheriff Thomas Dart around for his threats to credit card companies that resulted in them refusing to accept transactions for Backpage.com. There are some elements of that kind of ruling here, but the main difference is that in Dart’s case, the coercive elements were clear, while here, many (but not all) are made-up fantasyland stuff.
There were some examples in the lawsuit that did seem likely to cross the line, including White House officials complaining about certain tweets and even saying “wondering if we can get moving on the process of having it removed ASAP.” That’s definitely inappropriate. Most of the worst emails seemed to come from one guy, Rob Flaherty, the former “Director of Digital Strategy,” who seemed to believe his White House job made it fine for him to be a total jackass to the companies, constantly berating them over moderation choices he disliked.
I mean, this is just totally inappropriate for a government official to say to a private company:
Things apparently became tense between the White House and Facebook after that, culminating in Flaherty’s July 15, 2021 email to Facebook, in which Flaherty stated: “Are you guys fucking serious? I want an answer on what happened here and I want it today.”
So having a ruling that highlights that the government should not be pressuring websites over speech is good to see.
Also, the ruling highlights that lawmakers threatening to revoke or modify Section 230 as part of the process of working the refs at these social media companies is a form of retaliation. This is a surprising finding, but a good one. We’ve highlighted in the past that politicians threatening to punish companies with regulatory changes in response to speech should be seen as a 1st Amendment violation, and had people yell at us (on both sides) about that. But here, Judge Doughty agrees, and highlights 230 reform as an example (though he’s far too credulous in treating Republican and Democratic 230 reform efforts as aligned).
With respect to 47 U.S.C. § 230, Defendants argue that there can be no coercion for threatening to revoke and/or amend Section 230 because the call to amend it has been bipartisan. However, Defendants combined their threats to amend Section 230 with the power to do so by holding a majority in both the House of Representatives and the Senate, and in holding the Presidency. They also combined their threats to amend Section 230 with emails, meetings, press conferences, and intense pressure by the White House, as well as the Surgeon General Defendants. Regardless, the fact that the threats to amend Section 230 were bipartisan makes it even more likely that Defendants had the power to amend Section 230. All that is required is that the government’s words or actions “could reasonably be interpreted as an implied threat.” Cuomo, 350 F. Supp. 3d at 114. With the Supreme Court recently making clear that Section 230 shields social-media platforms from legal responsibility for what their users post, Gonzalez v. Google, 143 S. Ct. 1191 (2023), Section 230 is even more valuable to these social-media platforms. These actions could reasonably be interpreted as an implied threat by the Defendants, amounting to coercion.
Cool. So, government folks, both in Congress and in the White House, should stop threatening to remove Section 230 as punishment for disagreeing with the moderation choices of private companies. That’s good and it’s nice to have that in writing, even if I’d be hard pressed to believe that most of the discussions on 230 are actual threats.
The Bad
Doughty seems incredibly willing to treat perfectly reasonable conversations about how to respond to actually problematic content as “censorship” and “coercion,” despite there being little evidence of either in many cases (though, again, in some cases it does appear that some folks in the administration crossed the line).
For example, it’s public information (as we’ve discussed) that various parts of the government would meet with social media companies not for “censorship” but to share information, such as about foreign trolls seeking to disrupt elections with false information, or about particular dangers. These meetings were not about censorship, but about making everyone aware of what was going on. But conspiracy-minded folks have turned those meetings into something they most definitely are not.
Yet Doughty assumes all these meetings are nefarious.
In doing so, Doughty often fails to distinguish perfectly reasonable speech by government actors that is not about suppressing speech, but rather debunking or countering false information — which is traditional counterspeech. Now, again, when government actors are doing it, their speech is actually less protected (Posner’s ruling in the Dart case details this point), but so long as their speech is not focused on silencing other speech, it’s perfectly reasonable. For example, the complaint detailed some efforts by social media companies to deboost the promotion of the Great Barrington Declaration. One of the points in the lawsuit was that Francis Collins had emailed Anthony Fauci about how much attention it was getting, saying “there needs to be a quick and devastating published take down of its premises.” And Fauci responded:
The same day, Dr. Fauci wrote back to Dr. Collins stating, “Francis: I am pasting in below a piece from Wired that debunks this theory. Best, Tony.”
Doughty ridiculously interprets Collins saying “there needs to be a… take down of its premises” to mean “we need to get this taken off of social media.”
However, various emails show Plaintiffs are likely to succeed on the merits through evidence that the motivation of the NIAID Defendants was a “take down” of protected free speech. Dr. Francis Collins, in an email to Dr. Fauci told Fauci there needed to be a “quick and devastating take down” of the GBD—the result was exactly that.
But that’s clearly not what Collins meant in context. By a “quick and devastating published take down” he clearly meant a response. That is: more speech, debunking the claims that Collins worried were misleading. That’s why he said a “published take down.” Note that Doughty excises “published” from his quote in order to falsely imply that Collins was telling Fauci they needed to censor information.
And then Fauci continued to talk publicly about his concerns about the GBD, not urging any kind of censorship. And Doughty repeats all of those points, and still pretends the plan was “censorship”:
Dr. Fauci and Dr. Collins followed up with a series of public media statements attacking the GBD. In a Washington Post story run on October 14, 2020, Dr. Collins described the GBD and its authors as “fringe” and “dangerous.” Dr. Fauci consulted with Dr. Collins before he talked to the Washington Post. Dr. Fauci also endorsed these comments in an email to Dr. Collins, stating “what you said was entirely correct.”
On October 15, 2020, Dr. Fauci called the GBD “nonsense” and “dangerous.” Dr. Fauci specifically stated, “Quite frankly that is nonsense, and anybody who knows anything about epidemiology will tell you that is nonsense and very dangerous.” Dr. Fauci testified “it’s possible that” he coordinated with Dr. Collins on his public statements attacking the GBD.
Social-media platforms began censoring the GBD shortly thereafter. In October 2020, Google de-boosted the search results for the GBD so that when Google users googled “Great Barrington Declaration,” they would be diverted to articles critical of the GBD, and not to the GBD itself. Reddit removed links to the GBD. YouTube updated its terms of service regarding medical “misinformation,” to prohibit content about vaccines that contradicted consensus from health authorities. Because the GBD went against a consensus from health authorities, its content was removed from YouTube. Facebook adopted the same policies on misinformation based upon public health authority recommendations. Dr. Fauci testified that he could not recall anything about his involvement in seeking to squelch the GBD.
Nothing in that shows coercion. It shows Fauci expressing an opinion on the accuracy of the statements in the GBD. That social media companies later chose to remove some of those links is wholly disconnected from that.
Indeed, under this theory, if a social media company wants to get government officials in trouble, all it has to do is remove any speech that a government official tries to respond to, enabling a lawsuit to claim that it was removed because of that response. That… makes no sense at all.
I mean, the conversation about the CDC is just bizarre. Whatever you think of the CDC, the details show that social media companies chose to rely on the CDC to try to understand what was accurate and what was not regarding Covid and Covid vaccines. That’s because a ton of information was flying back and forth and lots of it was inaccurate. As social media companies were hoping for a way to understand what was legit and what was not, it’s reasonable to ask an entity like the CDC what it thought.
Much like the other Defendants, described above, the CDC Defendants became “partners” with social-media platforms, flagging and reporting statements on social media Defendants deemed false. Although the CDC Defendants did not exercise coercion to the same extent as the White House and Surgeon General Defendants, their actions still likely resulted in “significant encouragement” by the government to suppress free speech about COVID-19 vaccines and other related issues.
Various social-media platforms changed their content-moderation policies to require suppression of content that was deemed false by CDC and led to vaccine hesitancy.
Yeah, the companies did this because they (correctly) figured that the CDC — whose entire role is about this very thing — is going to be better at determining what’s legit and what’s dangerous than their own content moderation team. That’s a perfectly rational decision, not “censorship”. But Doughty doesn’t care.
Similarly, regarding the Hunter Biden laptop story — which we’ve debunked multiple times here — it’s now well established that the government had no involvement in the decision by social media companies to lower the visibility of that story for a short period of time. Incredibly, Doughty argues that the real problem was that the FBI didn’t tell social media companies that their concerns were wrong. Really:
The FBI’s failure to alert social-media companies that the Hunter Biden laptop story was real, and not mere Russian disinformation, is particularly troubling. The FBI had the laptop in their possession since December 2019 and had warned social-media companies to look out for a “hack and dump” operation by the Russians prior to the 2020 election. Even after Facebook specifically asked whether the Hunter Biden laptop story was Russian disinformation, Dehmlow of the FBI refused to comment, resulting in the social-media companies’ suppression of the story. As a result, millions of U.S. citizens did not hear the story prior to the November 3, 2020 election. Additionally, the FBI was included in Industry meetings and bilateral meetings, received and forwarded alleged misinformation to social-media companies, and actually mislead social-media companies in regard to the Hunter Biden laptop story. The Court finds this evidence demonstrative of significant encouragement by the FBI Defendants.
So… despite so many parts of this lawsuit complaining about the government having contacts with social media, here the court says the real problem was that the FBI failed to tell the companies not to moderate this particular story? So, basically: “don’t communicate with social media companies, except when your communication boosts the storylines that will help Donald Trump.”
Also, the idea that what social media companies did resulted in “millions of U.S. citizens” not hearing the story prior to the election is bullshit. As we’ve covered in the past, actual analysis showed that the attempts by Facebook and Twitter to deboost that story (very briefly — only for one day in the case of Twitter) actually created a Streisand Effect that got the story more attention than it was likely to get otherwise.
Over and over again in the ruling, Doughty highlights how the social media companies often explained to White House officials that they would not remove or otherwise take action on various accounts because they did not violate policies. That is consistent with everything we’ve seen, showing that the companies did not feel coerced, and if anything, often mocked the government officials for over-reacting to things online.
Indeed, as we’ve detailed, the actual evidence shows that the companies very, very rarely did anything in response to these flags. The report from Stanford showed that they took action on only 35% of flagged content, and even that number was skewed upward by TikTok being much more aggressive. So Twitter/Facebook/YouTube took action on way less than 35%. And, by “take action,” they mostly just added more context (i.e., more speech, not suppression). The only things that were removed were obviously problematic content like phishing and impersonation.
But Doughty basically ignores all that and insists there’s evidence of coercion, because some companies took action. And now he’s saying that the government basically can’t flag any of this info.
This also means that in situations where useful information sharing to prevent real harm could occur, this preliminary injunction now blocks it. And we’re already seeing some of that with the State Department canceling meetings with Facebook in response to this ruling (I’ve heard that other meetings between the government and companies have also been canceled, including ones deliberately focused on harm reduction, not on “censorship”).
Again, so much of this seems to be based on a very, very broad misunderstanding of the nature of investigating the flow of mis- and disinformation online, and the role of government in dealing with that. As we’ve discussed repeatedly, much of the information sharing that was set up around these issues involved things where government involvement made total sense: helping to determine attempts to undermine elections through misinformation regarding the time and place of polling stations, phishing attempts, and other such nonsense.
But, this ruling seems to treat that kind of useful information sharing as a nefarious plan to “censor conservatives.”
The Ugly
Judge Doughty seems to believe every nonsense conspiracy theory around the culture war and the false claims that social media deliberately stifles “conservatives.” This is despite multiple studies showing that the companies actually bent over backwards to let conservatives regularly break the rules, in order to avoid claims of bias. I mean, this is just nonsense:
What is really telling is that virtually all of the free speech suppressed was “conservative” free speech. Using the 2016 election and the COVID-19 pandemic, the Government apparently engaged in a massive effort to suppress disfavored conservative speech. The targeting of conservative speech indicates that Defendants may have engaged in “viewpoint discrimination,” to which strict scrutiny applies.
First of all, this isn’t true. The court is only aware of such speech being moderated because that’s all the plaintiffs in this case highlighted (often through exaggeration). Second, many of the contested actions happened under the Trump administration, and it would make no sense that a Republican administration would be seeking to suppress “conservative” speech. Third, the whole issue is that the companies were choosing to hold back dangerous false information that they feared would lead to real world harms. If it was true that such speech came more frequently from so-called “conservatives,” that’s on them. Not the government.
And that results in the details of the injunction, which are just ridiculously broad and go way beyond reasonable limits on attempts by the government to impact social media content moderation efforts.
Again, here, Doughty twists reality by viewing it through a distorted, conspiracy-laden prism. Take, for example, the following:
According to DiResta, the EIP was designed to “get around unclear legal authorities, including very real First Amendment questions” that would arise if CISA or other government agencies were to monitor and flag information for censorship on social media.
So, this part is really problematic. DiResta DID NOT SAY that EIP was an attempt to “get around” unclear legal authorities. Her full quote says no such thing.
So, just as with pretending that Collins told Fauci they had to “take down” content when he meant providing more information in response to it, here Doughty has put words in DiResta’s mouth. Where she’s explaining the reasons why the government can’t be in the business of flagging content, as there are “very real First Amendment questions,” Doughty falsely claims she said this was an attempt to “get around” those questions. It’s not.
This actually shows that those involved were being careful not to violate the 1st Amendment, and were cognizant of the limits the Constitution places on government actors. Given the “very real First Amendment questions” that would be raised by having government officials highlighting misinformation to social media companies, groups like Stanford IO could do their analysis and pass it along to social media companies without the natural concerns raised by that information coming from government actors. In other words, Stanford’s involvement was not as a “government proxy,” but rather to provide useful information to the companies without the problematic context of government (and, again, Stanford’s eventual report on this stuff showed that the companies took action on only a tiny percentage of flagged content, and most of those were things like phishing attempts and impersonation — not anything to do with political speech).
It’s not “getting around” anything. It’s recognizing what the government is forbidden from doing.
If you look at the full context of DiResta’s quote, she’s actually making it clear that the reason Stanford decided to set up the EIP project was because the government shouldn’t be in that business, and that it made more sense for an academic institution to be tracking and highlighting disinformation for the sake of responding to it (i.e., not suppress it, but respond to it).
Yet, Doughty goes off on some nonsense tangent, winding himself up about how this is just the tip of the iceberg of some giant censorship regime, which is just laughable:
Plaintiffs have put forth ample evidence regarding extensive federal censorship that restricts the free flow of information on social-media platforms used by millions of Missourians and Louisianians, and very substantial segments of the populations of Missouri, Louisiana, and every other State. The Complaint provides detailed accounts of how this alleged censorship harms “enormous segments of [the States’] populations.” Additionally, the fact that such extensive examples of suppression have been uncovered through limited discovery suggests that the censorship explained above could merely be a representative sample of more extensive suppressions inflicted by Defendants on countless similarly situated speakers and audiences, including audiences in Missouri and Louisiana. The examples of censorship produced thus far cut against Defendants’ characterization of Plaintiffs’ fear of imminent future harm as “entirely speculative” and their description of the Plaintiff States’ injuries as “overly broad and generalized grievance[s].” The Plaintiffs have outlined a federal regime of mass censorship, presented specific examples of how such censorship has harmed the States’ quasi-sovereign interests in protecting their residents’ freedom of expression, and demonstrated numerous injuries to significant segments of the Plaintiff States’ populations.
Basically everything in that paragraph is bullshit.
Anyway, all that brings us to the nature of the actual injunction. And… it’s crazy. It basically prevents much of the US government from talking to any social media company or to various academics and researchers studying how information flows or how foreign election interference works. Which is quite a massive restriction.
But, really, the most incredible part is that the injunction pretends that it can distinguish the kinds of information the government can share with social media companies from the kinds it can’t. So, for example, the following is prohibited:
specifically flagging content or posts on social-media platforms and/or forwarding such to social-media companies urging, encouraging, pressuring, or inducing in any manner for removal, deletion, suppression, or reduction of content containing protected free speech;
urging, encouraging, pressuring, or inducing in any manner social-media companies to change their guidelines for removing, deleting, suppressing, or reducing content containing protected free speech;
emailing, calling, sending letters, texting, or engaging in any communication of any kind with social-media companies urging, encouraging, pressuring, or inducing in any manner for removal, deletion, suppression, or reduction of content containing protected free speech;
But then, it says the government can communicate with social media companies over the following:
informing social-media companies of postings involving criminal activity or criminal conspiracies;
contacting and/or notifying social-media companies of national security threats, extortion, or other threats posted on its platform;
contacting and/or notifying social-media companies about criminal efforts to suppress voting, to provide illegal campaign contributions, of cyber-attacks against election infrastructure, or foreign attempts to influence elections;
informing social-media companies of threats that threaten the public safety or security of the United States;
exercising permissible public government speech promoting government policies or views on matters of public concern;
informing social-media companies of postings intending to mislead voters about voting requirements and procedures;
informing or communicating with social-media companies in an effort to detect, prevent, or mitigate malicious cyber activity;
But here’s the thing: nearly all of the examples actually discussed fall into this exact bucket, but the plaintiffs (AND JUDGE DOUGHTY) pretend they fall into the first bucket (which is now prohibited). So, is sharing details of some jackass posting fake ways to vote “informing social-media companies of postings intending to mislead voters about voting requirements and procedures,” or is it “specifically flagging content or posts on social-media platforms and/or forwarding such to social-media companies urging, encouraging, pressuring, or inducing in any manner for removal, deletion, suppression, or reduction of content containing protected free speech”?
It seems abundantly clear that nearly all of the conversations were about legitimate information sharing, but nearly all of it is interpreted by the plaintiffs and the judge to be nefarious censorship. As such, the risk for anyone engaged in activities on the “not prohibited” list is that this judge will interpret them to be on the prohibited list.
And that’s why government officials are now calling off important meetings with these companies where they were sharing actually useful information that they can no longer share. I’ve even heard some government officials say they’re afraid to post to social media at all, for fear that doing so would violate this injunction.
Also, this is completely fucked up. Among the prohibited activities is having people in the government talk to a wide variety of researchers who aren’t even parties to this lawsuit.
collaborating, coordinating, partnering, switchboarding, and/or jointly working with the Election Integrity Partnership, the Virality Project, the Stanford Internet Observatory, or any like project or group for the purpose of urging, encouraging, pressuring, or inducing in any manner removal, deletion, suppression, or reduction of content posted with social-media companies containing protected free speech
That should be a real concern, as (again) a key thing that the EIP did was connect with election officials who were facing bogus election claims, giving them the ability to share that info and move to debunk false information and provide more accurate information. But, under this ruling, that can’t happen.
If you wanted to set up a system that is primed to enable foreign interference in elections, you couldn’t have picked a better setup. Nice work, everyone.
Anyway, it’s no surprise that the US government has already moved to appeal this ruling. But, if you think the appeals court is going to save things, remember that Louisiana federal rulings go up to the 5th Circuit, which is the court that decided that Texas’s compelled speech law was just dandy.
Of course, in many ways, this ruling conflicts with that one, in that Texas’s social media law is actually a much more active attempt by government to force social media companies to moderate in the manner it wants. But the one way they are consistent is that both rulings support Trumpist delusions, meaning there’s a decent chance the 5th Circuit blesses the nonsense parts of this one.
Again, the good parts of the ruling shouldn’t be ignored. And many government officials do need a clear reminder of the boundaries between coercion and persuasion. But, all in all, this ruling goes way too far, interprets things in a nonsense manner, and creates an impossible-to-comply-with injunction that causes real harm not just for the users of social media, but actual 1st Amendment interests as well.
Filed Under: 1st amendment, anthony fauci, cisa, coercion, content moderation, free speech, information sharing, jawboning, joe biden, louisiana, missouri, rob flaherty, terry doughty
Companies: facebook, google, twitter