How the CDC Became the Speech Police
This is a follow-up to the prior topic, "Government Keeps Meddling With Private Company Decisions."
Secret internal Facebook emails reveal the feds' campaign to pressure social media companies into banning COVID "misinformation."
(Illustration: Joanna Andreasson; Source image: tomozina/iStock)
Anthony Fauci, the federal government's most prominent authority on COVID-19, had his final White House press conference two days before Thanksgiving 2022. The event served as a send-off for the long-serving director of the National Institute of Allergy and Infectious Diseases, who was finally stepping down after nearly four decades on the job.
Ashish Jha, the Biden administration's coronavirus response coordinator, hailed Fauci as "the most important, consequential public servant in the United States in the last half century." White House Press Secretary Karine Jean-Pierre described him as a near-constant "source of information and facts" for all Americans throughout the pandemic.
Indeed, the U.S. public's understanding of COVID-19—how contagious it is, how to prevent its spread, and even where it came from—was largely controlled by Fauci and bureaucrats like him, to a greater degree than most people realize. The federal government shaped the rules of online discussion in unprecedented and unnerving ways.
This has become much more obvious over the past few months, following Elon Musk's acquisition of Twitter. Musk granted several independent journalists access to internal messages between the government and the platform's moderators, which demonstrate concerted efforts by various federal agencies—including the FBI, the Centers for Disease Control and Prevention (CDC), and even the White House—to convince Twitter to restrict speech. These disclosures, which have become known as the Twitter Files, are eye-opening.
But Twitter was hardly the only object of federal pressure. According to a trove of confidential documents obtained by Reason, health advisers at the CDC had significant input on pandemic-era social media policies at Facebook as well. They were consulted frequently, at times daily. They were actively involved in the affairs of content moderators, providing constant and ever-evolving guidance. They requested frequent updates about which topics were trending on the platform, and they recommended what kinds of content should be deemed false or misleading. "Here are two issues we are seeing a great deal of misinfo on that we wanted to flag for you all," reads one note from a CDC official. Another email with sample Facebook posts attached begins: "BOLO [be on the lookout] for a small but growing area of misinfo."
These Facebook Files show that the platform responded with incredible deference. Facebook routinely asked the government to vet specific claims, including whether the virus was "man-made" rather than zoonotic in origin. (The CDC responded that a man-made origin was "technically possible" but "extremely unlikely.") In other emails, Facebook asked: "For each of the following claims, which we've recently identified on the platform, can you please tell us if: the claim is false; and, if believed, could this claim contribute to vaccine refusals?"
The platforms may have thought they had little choice but to please the CDC, given the tremendous pressure to stamp out misinformation. This pressure came from no less an authority than President Joe Biden himself, who famously accused social media companies of "killing people" in a July 2021 speech.
Combating misinformation has remained a top goal for Fauci. The day after his final White House press conference, he sat for a seven-hour deposition conducted by Eric Schmitt and Jeff Landry, the Republican attorneys general of Missouri and Louisiana (Schmitt was elected to the U.S. Senate in November). While the proceedings were closed to the public, courtroom participants say Fauci insisted that misinformation and disinformation were grave threats to public health, and that he had done his best to counteract them. (He also demanded that the court reporter wear a mask in his presence. Her allergies had given her the sniffles, she claimed.)
The deposition was part of Missouri v. Biden, a lawsuit that accuses the federal government of improperly pushing private social media companies to restrict so-called misinformation relating to COVID-19. Jay Bhattacharya and Martin Kulldorff, professors of medicine at Stanford University and Harvard University, respectively, have claimed that social media platforms repeatedly muzzled their opposition to lockdowns, mask requirements, and vaccine mandates. The New Civil Liberties Alliance, a public interest law firm that has joined the lawsuit, thinks the federal government's campaign to squelch contrarian coronavirus content was so vast as to effectively violate the First Amendment.
"What's at stake is the future of free speech in the technological age," says Jenin Younes, the group's litigation counsel. "We've never had a situation where the federal government at very high levels is coordinating or coercing social media to do its bidding in terms of censoring people."
These concerns are well-founded, as the emails obtained by Reason make clear. Over the course of the pandemic, CDC officials exchanged dozens of messages with content moderators. Most of these came from Carol Crawford, the CDC's digital media chief. She did not respond to a request for comment.
"If you look at it in isolation, it looks like [the CDC and the tech companies] are working together," says Younes. "But you have to view it in light of the threats."
Facebook is a private entity, and thus is within its rights to moderate content in any fashion it sees fit. But the federal government's efforts to pressure social media companies cannot be waved away. A private company may choose to exclude certain perspectives, but if the company only takes such action after politicians and bureaucrats threaten it, reasonable people might conclude the choice was an illusion. Such an arrangement—whereby private entities, at the behest of the government, become ideological enforcers—is unacceptable. And it may be illegal.
Jawboned
There is a word for government officials using the threat of punishment to extort desired behaviors from private actors. It's jawboning.
The term arose from the biblical story of Samson, who is said to have slain a thousand enemies with the jawbone of a donkey. According to the economist John Kenneth Galbraith, the word's public-policy use began with the World War II–era Office of Price Administration and Civilian Supply, which primarily relied on "verbal condemnation" to punish violators. President John F. Kennedy jawboned steel manufacturers in the 1960s when he threatened to have the Department of Justice investigate them if they raised prices; President Jimmy Carter did the same to try to fight inflation in the 1970s. During the 2000 presidential campaign, Republican candidate George W. Bush explicitly stated that he would "jawbone" Saudi Arabia to secure lower energy prices.
While jawboning has generally referred to economic activity—to attempts to intimidate other entities into changing prices or policies—there is a history of speech-related jawboning too. One of the first legal theorists to apply the term this way was Derek Bambauer, a professor of law at the University of Arizona. In a 2015 article for the Minnesota Law Review, he argued that because the largely libertarian bent of internet regulation shields platforms from formal government control, officials are likely to resort instead to informal threats and demands.
"State regulators wielding seemingly ineffectual weapons—informal enforcement based on murky authority—appear outgunned," he wrote. "Yet like Samson, they achieve surprisingly-effective results once the contest begins."
This has been the case throughout the pandemic. With encouragement from government health advisers, congressional leaders, and White House officials—including Biden—multiple social media companies have suppressed content that clashes with the administration's preferred narratives.
Bambauer says that while Biden clearly has the right to complain about material on social media, the administration's actions are probably blurring the line between counterspeech and jawboning.
"I think all of this is of real concern," he says. "It's also a useful reminder that the government innovates in how it applies information pressures, so researchers need to stay up to date on new tactics."
One illustrative case concerns Alex Berenson, a former reporter for The New York Times who became a leading opponent of the coronavirus vaccines. Berenson contends that the mRNA technology undergirding some of the vaccines is "dangerous and ineffective," and he has called for them to be pulled from the market. His claims about vaccine safety are widely rejected; most experts say the vaccines significantly reduce severe disease and death among vulnerable populations, including the elderly and immunocompromised. His prediction that the vaccines would fail to meaningfully reduce coronavirus case counts has held up better. Most health officials now concede that the virus is quite capable of evading vaccine-acquired protection against infection.
Berenson obviously has a First Amendment right to express his views, even if they're wrong. Nevertheless, federal officials became concerned that anti-vax content on social media would dissuade Americans from getting the jab. They were particularly worried about Berenson. In April 2021, White House advisers met with Twitter content moderators. The moderators believed the meeting had gone well, but noted in a private Slack discussion that they had fielded "one really tough question about why Alex Berenson hasn't been kicked off from the platform."
Andy Slavitt, a White House senior adviser, was especially alarmed, and raised red flags about Berenson's content throughout summer 2021. (He left the administration around that same time, in June 2021, but remained in contact with other officials.)
"Andy Slavitt suggested they had seen data viz [visualization] that had showed he was the epicenter of disinfo [disinformation] that radiated outwards to the persuadable public," wrote a Twitter employee in another Slack conversation.
By that time, White House officials had begun slamming social media companies for failing to deplatform vaccine skepticism. U.S. Surgeon General Vivek Murthy released a report titled "Confronting Health Misinformation" that included advice for social media companies; Murthy wanted the platforms to prioritize the elimination of misinformation "super-spreaders." Then–White House Press Secretary Jen Psaki referenced research by the Center for Countering Digital Hate, a British nonprofit that called out 12 Facebook accounts for spreading disinformation on that platform.
Murthy's missives were phrased as requests. Psaki's, not so much.
"Facebook needs to move more quickly to remove harmful, violative posts," she said at a July 15 press conference. "Posts that would be within their policy for removal often remain up for days, and that's too long. The information spreads too quickly."
On July 20, White House Communications Director Kate Bedingfield appeared on MSNBC. Host Mika Brzezinski asked Bedingfield about Biden's efforts to counter vaccine misinformation; apparently dissatisfied with Bedingfield's response that Biden would continue to "call it out," Brzezinski raised the specter of amending Section 230—the federal statute that shields tech platforms from liability—in order to punish social media companies explicitly.
"When Facebook and Twitter and other social media outlets spread false information that cause Americans harm, shouldn't they be held accountable in a real way?" asked Brzezinski. "Shouldn't they be liable for publishing that information and then open to lawsuits?"
Bedingfield responded by stating that Biden, who had previously expressed support for scrapping Section 230, would be reviewing just that.
"Certainly, they should be held accountable," she said. "You've heard the president speak very aggressively about this. He understands this is an important piece of the ecosystem."
Indeed, Biden had accused social media companies of literally "killing people." And on July 16, as the president prepared to board Marine One, a reporter asked him what he would say to social media companies that take insufficient action against vaccine misinformation. His response indicated that he held Facebook and Twitter responsible.
"That was the public face of the pressure Twitter and other companies came under," says Berenson.
Throughout 2020 and 2021, Berenson had remained in contact with Twitter executives and received assurances from them that the platform respected public debate. These conversations gave Berenson no reason to think his account was at risk. But four hours after Biden accused social media companies of killing people, Twitter suspended Berenson's account.
It is important to keep in mind that while Biden and his team railed against social media companies in public, federal bureaucrats held constant, private conversations with the platforms, giving advice on which statements were false, which ones needed fact-checking, and which ones could theoretically promote vaccine hesitancy. Small wonder the platforms adopted an overly deferential posture.
Demonstrating that this phenomenon isn't confined to the executive branch, Sen. Elizabeth Warren (D–Mass.) praised Twitter's decision to jettison Berenson. In a letter to Amazon, she implied that the online retailer should do something similar to his books.
"Given the seriousness of this issue, I ask that you perform an immediate review of Amazon's algorithms and, within 14 days, provide both a public report on the extent to which Amazon's algorithms are directing consumers to books and other products containing COVID-19 misinformation and a plan to modify these algorithms so that they no longer do so," she wrote.
Right To Remain Silent?
Will Duffield, a policy analyst at the libertarian Cato Institute, thinks the federal government's jawboning on COVID-19 misinformation might violate the First Amendment.
"Multiple arms of the administration delivered the jawboning effort together," Duffield says. "Each one component wouldn't rise to something legally actionable, but when taken as a whole administration push, it might."
In a recent paper on social-media jawboning, Duffield pointed to two very different Supreme Court precedents that could provide insight: Bantam Books v. Sullivan and Blum v. Yaretsky. In the 1963 Bantam decision, the Court held 8–1 that a Rhode Island commission had unconstitutionally violated the rights of book distributors when it advised them against publishing obscene content. In the Court's view, the implicit threat of prosecution under obscenity law was an act of intimidation.
Richard Posner, a widely cited former judge of the U.S. Court of Appeals for the 7th Circuit, referenced the Bantam decision in a 2015 case, Backpage v. Dart. Tom Dart, an Illinois sheriff, had attempted to throttle the advertising of adult services on internet platforms by threatening credit card companies that do business with them. Ruling against Dart and in favor of the platforms, Posner wrote that "a public official who tries to shut down an avenue of expression of ideas and opinions through 'actual or threatened imposition of government power or sanction' is violating the First Amendment." If that standard were the law of the land, it would be difficult to view the Biden administration's jawboning as constitutional.
In the 1982 Blum case, unfortunately, the Supreme Court took a much more dismissive view of informal government pressure. That decision held that government jawboning is illegal only when the state "has exercised coercive power" or has provided "significant encouragement, either overt or covert."
There's also the problem of granting lasting relief. Even if a court rules that a government actor impermissibly jawboned a private entity, that doesn't mean the court can compel the private entity to reverse course. Duffield points to a 1987 decision, Carlin Communications Inc. v. Mountain States Telephone and Telegraph, in which the U.S. Court of Appeals for the 9th Circuit ruled that an Arizona deputy county attorney had wrongly jawboned a telephone company for running a phone sex hotline. Obviously, the company was still free to drop the hotline of its own accord; to rule otherwise would be to restrict the company's First Amendment rights.
"Courts can prohibit and even punish jawboning but they may not be able to dispel the lasting effects of official threats," wrote Duffield in his paper.
A better solution would be to explicitly prohibit government officials from engaging in jawboning. Rep. Cathy McMorris Rodgers (R–Wash.) has introduced a bill, the Protecting Speech from Government Interference Act, that would penalize federal employees who use their positions to push for speech restrictions. Enforcement would be akin to the Hatch Act, which prohibits federal employees from using their positions to engage in campaign activities. If this bill were to become law, federal officials would have to be much more careful about advising social media platforms to censor speech, or risk loss of pay or even termination. This is the superior approach: Legislators should regulate government employees' encouragement of censorship on social media platforms, rather than the platforms themselves.
Unfortunately, national lawmakers in both parties have expressed boundless enthusiasm for regulating the platforms. Reforming or eliminating Section 230, the federal statute that protects internet websites from speech-related liability, is an idea with tremendous bipartisan support: Biden, former President Donald Trump, Warren, Sen. Bernie Sanders (I–Vt.), Sen. Josh Hawley (R–Mo.), and Sen. Ted Cruz (R–Texas) have all signed on.
The Democrats' critique of Section 230 is in direct conflict with Republicans' grievances; Democrats want to punish social media companies for censoring too little speech, while the GOP wants to punish social media companies for censoring too much speech. Abolishing Section 230 would likely force the platforms to moderate content much more aggressively. And it would essentially punish them for being the victims of jawboning.
"It's a sort of victim-blaming approach," says Duffield. "'Oh, you didn't stand up hard enough against the federal government, so now we're going to harm you again?'"
Legislators have signaled persistent interest in exactly that approach. The very loudest jawboners are the nation's senators and congressional representatives, who frequently inveigh against the tech industry and its leaders. Democratic lawmakers routinely accuse Facebook of subverting American democracy by allowing too many Russian bots on the site, and then they threaten to use antitrust action to break up the company. Republicans have said virtually the same thing, except they think American democracy was subverted by Big Tech's mishandling of the New York Post's Hunter Biden laptop story.
Prohibiting lawmakers from demanding censorship is legally thornier than prohibiting federal employees' demands. The Speech or Debate Clause of the Constitution gives members of Congress fairly broad latitude to say whatever is on their minds. Ultimately, it is up to voters to punish congressional jawboning.
Beyond COVID
In October 2022, The Intercept published a report on the Department of Homeland Security's plans to monitor misinformation on social media. These plans make it clear the CDC is far from the only government agency to take an active interest in jawboning. According to the department's quadrennial review, Homeland Security aims to combat misinformation relating not only to "the origins of the COVID-19 pandemic and the efficacy of COVID-19 vaccines," but also "racial justice, the U.S. withdrawal from Afghanistan, and the nature of U.S. support to Ukraine."
While it's undoubtedly true that social media users have spread inaccurate information when discussing these issues, these are contested policy questions. People have a right to scrutinize U.S. funding of the Ukraine war effort, and to reach conclusions about it that clash with the views of the Biden administration. National security officials frequently make mistakes. Dozens of so-called experts wrongly branded Hunter Biden's laptop as a Russian plot. Nina Jankowicz, the civil servant who was briefly tapped to head a Homeland Security project dedicated to identifying misinformation, incorrectly described the laptop as such.
This speaks to a larger problem with the discourse: Government officials and journalists who claim to specialize in the spread of online misinformation are often just as gullible as everyone else. Even federal health experts get stuff wrong. Fauci initially downplayed the importance of masks for general use due to his private fears that hospitals would run out of them; he also deliberately lied about the herd immunity threshold because he didn't think the public could handle the truth. (In that case, the mutating nature of COVID-19 meant that Fauci was wrong about herd immunity at any level.)
On November 30, 2022, Twitter announced that it would no longer enforce any policies against COVID-19 misinformation. This change was implemented under new management. Musk, who purchased the company in fall 2022, has given every indication that he thinks the platform was too deferential to government censorship demands. Whether he will stand firm—and whether other platforms will copy his lead—remains to be seen, though his decision to release the Twitter Files is an encouraging sign that he intends to stop capitulating.
Musk's new policies have already attracted jawboning. Sen. Ed Markey (D–Mass.) accused the billionaire of failing to satisfactorily address concerns about misinformation, declaring that "Congress must end the era of failed Big Tech self-regulation." Free speech would be better served by an era of Big Government self-regulation, and liberation for tech platforms and their users.