Wednesday, January 31, 2024

The Case for Disqualification. By Sean Wilentz


The Supreme Court must decide whether it will honor the original meaning of the Fourteenth Amendment and bar Donald Trump from holding public office, or trash the constitutional defense of democracy against insurrections.
February 22, 2024 issue

Illustration by Anthony Russo
Even as Donald Trump roars and intimidates with ever more violent threats, even as his lawyers warn that kicking him off the ballot in November would “unleash chaos and bedlam,” even as it becomes evident that we are not in the midst of a normal national election but an ongoing coup d’état by a charismatic despot, it is taking a long time for the public to understand the enormity of the events of January 6, 2021, and all that precipitated them.
In the moment, American historians were better equipped to grasp their profound political implications. Less than a week after the attack on the Capitol, Eric Foner, the preeminent authority on Reconstruction, pointed to Section 3 of the Fourteenth Amendment, ratified three years after the end of the Civil War, which bars anyone who has sworn to uphold the Constitution and who has engaged in insurrection from ever holding office again. Plainly, Foner said, then-president Donald Trump, along with other public officials, had sworn “an oath to defend the Constitution and, on Jan. 6, they violated it.” To bar them from public office, as the Constitution mandates, “would be the mildest of punishments” for “an uprising that left five people dead, threatened the lives of members of Congress, caused havoc in the Capitol, and sought to overturn the results of the presidential election.” Upholding the law of the land, Foner remarked, “would be an affirmation of the vitality of our wounded democracy.”
Three years later the Supreme Court will now decide whether to sustain the recent decisions of the Colorado Supreme Court and the Maine secretary of state to follow the Constitution’s mandate, much as Foner suggested. Unsurprisingly, Trump’s lawyers and defenders, when not unsubtly raising the specter of mass violence, have groped for any escape route they can find. George W. Bush’s attorney general Michael B. Mukasey has floated the strange reading that the relevant section of the Fourteenth Amendment covers only persons appointed to office.
In its filing contesting the Maine disqualification, Trump’s legal team tries to peddle the claim that the amendment bars persons “from holding specified offices, not from running for them or from being elected to them.”
No less risible, if somewhat more surprising, has been the alarm at Trump’s disqualification expressed by some law school academics and political pundits. By their reasoning, Trump’s misdeeds aside, enforcement of the Fourteenth Amendment poses a greater threat to our wounded democracy than Trump’s candidacy. In the name of defending democracy, they would speciously enable the man who did the wounding and now promises to do much more.
Samuel Moyn of Yale Law School, for example, writing in both The New York Times and, less guardedly, the Trumpist magazine Compact, has described invocation of the Fourteenth Amendment as essentially a conspiracy, part of a plot by liberals to stifle dissent against their shoddy elitist hegemony, the latest “quick fix or short cut that would save liberals the trouble of winning” at the polls. The New Yorker’s Amy Davidson Sorkin cites a Colorado Supreme Court dissenter’s citation of a solitary, eccentric 1869 court ruling—a decision debunked and even ridiculed by historians and constitutional scholars across the ideological spectrum—to suggest that disqualifying Trump would deny him due process under the law. In any event, Sorkin writes, it would be much “sounder” to defeat Trump at the polls than to risk the chaos she presumes his disqualification would cause. To avoid political consequences, she would toss the law and leave the matter to politics.
Kurt Lash of the University of Richmond claims, in a Times op-ed, that the constitutional basis for disqualification is at best iffy. Lash imputes ambiguity to the Fourteenth Amendment by placing great weight on the fact that during the congressional debate on its adoption in 1866, Senator Reverdy Johnson of Maryland, a former attorney general, wondered whether its disqualification section applied to the president. He downplays how, once corrected, Johnson told the Senate, “Perhaps I am wrong as to the exclusion from the Presidency; no doubt I am.” Meanwhile, in New York, Jonathan Chait, while raising issues of due process and the optics of disqualification so close to the election, also frets over whether Trump engaged in insurrection at all because he “was not trying to seize and hold the Capitol nor declare a breakaway republic,” as if that were the proper definition of the term. In fact, the federal indictment against Trump for January 6 covers his attempt to stage a coup over a three-month period.
Finally, Ruth Marcus of The Washington Post, who also fears a Trumpist frenzy, has offered three rationales for the Supreme Court to overrule the Colorado court: that it is unclear whether Trump engaged in insurrection; that Section 3 of the Fourteenth Amendment doesn’t cover US presidents; and that Congress must pass enabling legislation before Section 3 can be enforced. But the historical and constitutional basis for these claims is at best flimsy and for the most part nonexistent.
Whether motivated by a clutching fear of Trump’s base, a perverted sense of democratic evenhandedness, a reflexive hostility toward liberals, or something else, these confident disavowals betray a basic ignorance of the relevant history and thus a misconception of what the Fourteenth Amendment actually meant and means. That history, meanwhile, has placed the conservative members of the Supreme Court in a very tight spot.
Over the past forty years the doctrine of originalism (along with its sibling, textualism) has been the cornerstone of the jurisprudence of the conservative majority that now dominates the Court. Concocted in the 1980s to roll back the constitutional precedents of the New Deal and Great Society eras, supposedly in the name of judicial restraint, originalism purports to divine the original intentions of the framers by presenting tendentious renderings of the past as a kind of scripture. This bad-faith invocation of the framers has become a ploy to justify overturning Roe v. Wade, gutting the Voting Rights Act of 1965, eliminating commonsense gun regulation, and more. But now this originalist petard is exploding in the majority’s face. No degree of cherry-picking or obfuscation can deny the historical record of the Fourteenth Amendment, which is unequivocal: if Donald Trump engaged, in any way, in the insurrection of January 6, he is automatically barred from holding any public office, federal or state.
Contrary to some of the pundits, the Fourteenth Amendment, including its third section, was not adopted to conciliate the North and South or, in Moyn’s words, to “stabilize the country after a civil war.” Along with the Thirteenth and Fifteenth Amendments, it was an attempt to formalize and consolidate a social revolution—the abolition of slavery—and, as far as possible, to crush, in national law, the implacable efforts of the defeated Confederates to undo that revolution.
The amendment’s third section grew from debates initiated by the congressional Joint Committee on Reconstruction on how best to ensure that the chief Confederates would not recreate an oligarchic regime based on black subjugation and disenfranchisement.
As the legal historian Mark Graber has demonstrated, however, the third section’s framers and supporters also pointedly stated that they were not aiming its disqualification provision simply at those who had participated in what they called “the late rebellion.” Graber cites, among others, Senator John Henderson of Missouri, who observed that “the language of this section is so framed as to disenfranchise from office the leaders of the past rebellion as well as the leaders of any rebellion hereafter to come.” Anyone who had violated a solemn vow to uphold the Constitution could never be trusted not to do so again; hence disqualification was the only reasonable course. The reasoning pertained not to any one era but to all time, providing the nation, Senator Waitman T. Willey of West Virginia declared, with a “measure of self-defense.”
The need for that self-defense in the current moment belies the argument of Sorkin and others quaking before Trump and his followers that, in Chait’s words, to disqualify him “would be seen forever by tens of millions of Americans as a negation of democracy.” The fact is that Trump has already attempted to negate American democracy and come perilously close to doing so; and he has stated publicly that he intends to do it again, up to and including, if returned to power, suspending parts of the Constitution that he is supposed to preserve, protect, and defend. It would certainly be a “sounder” solution (Sorkin’s word) if Trump were defeated in 2024, after which he graciously returned to Mar-a-Lago. But he and his supporters—most recently Representative Elise Stefanik of New York, who is eagerly auditioning to be his running mate—have openly declared that this simply will not happen, no matter what the voters decide. They are no less implacable than the die-hard Confederate insurrectionists were after Appomattox. They represent precisely the kind of clear and present danger that the framers of the Fourteenth Amendment knew they were facing, a danger against which the normal mechanisms of electoral democracy are powerless. Only a constitutional remedy, those framers knew, would suffice. Section 3 of the Fourteenth Amendment is that remedy.
Trump’s ever-inventive lawyers, to be sure, have tried to confuse the issue by claiming that disqualifying their client amounts simultaneously to lèse-majesté and deep-state persecution. At Trump’s behest, they are throwing any argument into the mix to cause delay and to use the courts as a forum for his campaign. Although it persuaded a Denver judge, the claim that presidents are not covered by the disqualification clause does not pass the smell test, especially on originalist grounds, as the framers and the supporters of the amendment recognized that its phrase encompassing “any office, civil or military, under the United States” included the president and vice-president. Trump’s attorneys have the audacity to claim that their client was not an officer of the United States. Have they read the constitutional oath every president takes to “faithfully execute the Office” of the presidency? Do they really expect anyone to believe that the presidency is the one federal office whose occupant is not an officer of the United States?
The United States Court of Appeals for the D.C. Circuit has yet to rule on Trump’s claims to presidential immunity from prosecution for any alleged crimes for which he was not impeached, a twisted reading of the Constitution that special counsel Jack Smith has reasonably said “threatens the democratic and constitutional foundation of our Republic.” Yet even if the appeals court were to rule in Trump’s favor, it would have no effect on his disqualification, as the constitutional bar is not based on any conviction for any criminal offense. Although insurrection is indeed a federal crime under 18 US Code § 2383, it became one only in 1948, and it is in any case irrelevant to disqualification under the Constitution. None of the ex-Confederates excluded from office after the amendment went into effect were prosecuted for insurrection. One public official has thus far been removed from office under the Fourteenth Amendment for participating in the January 6 events: Couy Griffin, a New Mexico county commissioner who was earlier convicted of trespassing, sentenced to fourteen days in jail, and fined $3,000. But one need not have been proved beyond a reasonable doubt to have engaged in insurrection in order to be disqualified from public office. Disqualification is a constitutionally imposed disability, not a punishment for a criminal offense.
The Trumpists would have it otherwise, claiming that unless Congress passes a law implementing the disqualification clause, it is inoperative. If successful, this line of argument would have to rest on a single strange ruling, the one cited by Sorkin, made by Chief Justice Salmon P. Chase in the case in re Griffin in 1869.
A year earlier, presiding over the treason trial of Jefferson Davis, Chase counseled Davis’s attorney that as Section 3 of the recently ratified Fourteenth Amendment was self-executing—meaning that it required no additional legislation to come into effect—its exclusion of insurrectionists effectively vacated Davis’s treason indictment. Just as no additional law was needed to abolish slavery after the Thirteenth Amendment, none was needed to disqualify insurrectionists from public office. What evidence survives suggests that the framers and supporters of the Fourteenth Amendment in 1866–1868 assumed that it was self-executing in its entirety; there is no evidence suggesting they did not. Affirmation by a court of engagement in insurrection was sufficient for a Section 3 disqualification, as happened immediately after the Civil War and has happened as recently as 2022 in the New Mexico case. The involvement of Congress was not mandatory; to rule that it is now would raise serious and possibly devastating questions about this Supreme Court’s legitimacy.
In the Griffin case, however, involving a Virginia convict attempting to disqualify the judge who had presided over his trial because he had served in the Virginia legislature during the Civil War, Chase, in his capacity as circuit justice, suddenly changed his mind, asserting that Section 3 was moot, absent enabling legislation. No other judge or justice ever ruled this way again, whereas state courts pursued disqualifications under Section 3, assuming that the entire Fourteenth Amendment, and not just its other four sections, was self-executing. Congress never countermanded these disqualifications.
The attention recently given the disqualification clause has focused new attention on in re Griffin, and most experts have judged Chase’s revised ruling a botch, filled with contradictions and quite likely politically motivated and self-serving. Among other critics, the two conservative legal scholars who have argued most forcefully for Trump’s disqualification on originalist grounds, William Baude of the University of Chicago and Michael Stokes Paulsen of the University of St. Thomas, regard Chase’s decision as a joke that “should be hooted down the pages of history.” The Colorado Supreme Court ruling, in coming to the same conclusion, put the issue bluntly: if any of the nearly identically structured Reconstruction Amendments, including the Fourteenth, required additional legislation to go into force, “then Congress could nullify them by simply not passing enacting legislation. The result of such inaction would mean that slavery remains legal.”
For the current Supreme Court to nullify Trump’s disqualification on so feeble and exceptional a precedent as in re Griffin would make another high court ruling based on the Fourteenth Amendment, Bush v. Gore, look like a paragon of dispassionate jurisprudence.
With the law and the facts against them, the Trumpists and the apologetic pundits alike have started pounding the table, trying to raise doubts about whether Trump engaged in any kind of insurrection at all. This has led to some diverting speculation and oddball debates over what, exactly, constitutes an insurrection. Chait claims that nothing less than a full-fledged revolution or Confederate-style secession fits the bill. Adam Serwer of The Atlantic demurs, reminding us of the Whiskey Rebellion, Fries’s Rebellion, Gabriel’s and Nat Turner’s uprisings, not to mention John Brown’s Raid, all described as insurrections in their time. Ross Douthat, in his Times newsletter, denies that January 6 matches any of those lesser examples, though he does hold up John Ganz’s identification of a fascist riot in Paris on February 6, 1934, as a possible instance of one. So much for American exceptionalism.
To satisfy the Supreme Court majority, an originalist inquiry would be in order; but once more, originalism ends up working to Trump’s disadvantage. Graber has again done the essential work. Upon close examination of Anglo-American legal texts on treason and insurrection dating back to Edward III in the fourteenth century, but with special attention to American law from the founding through the start of Reconstruction, he has identified four elements that define an insurrection: 1) an assemblage of people; 2) engaged in resisting a federal law; 3) using force or the threat of force with intimidating numbers; 4) with a public purpose or, in the words of Justice Samuel Chase in 1800, an “object of a great public nature, or of public and general (or national) concern.” Engaging in insurrection need not mean actually being present to commit the violence or intimidation. On all four counts, the well-established facts of Trump’s activities and spoken words on January 6 and over the preceding weeks squarely fit an originalist definition.
In fact, there is a clear consensus on the basic facts of January 6, not least in the findings of the congressional January 6 Committee, though Jack Smith may well present more shocking details in his federal case against Trump. Moyn sees no such consensus, a major reason why he thinks the Supreme Court should reject disqualification out of hand. On December 22 he wrote:
What actually happened on Jan. 6—and especially Mr. Trump’s exact role beyond months of election denial and entreaties to government officials to side with him—is still too broadly contested.
The claim is bizarre. “Broadly contested” when bipartisan majorities in both houses of Congress voted to impeach and remove Trump from office? When two Colorado courts concurred that Trump had engaged in an insurrection? When even Trump’s lawyers in those proceedings did not contest the facts about the insurrection? When the remarks, soon after the insurrection, of Senator Mitch McConnell still resound? “It was a violent insurrection for the purpose of trying to prevent the peaceful transfer of power after a legitimately certified election from one administration to the next.”
With oral arguments before the Supreme Court set for February 8, Trump and his advocates have outdone themselves, serving up the sophistry and chicanery contained in the amicus brief prepared on behalf of Senator Ted Cruz and 178 other MAGA members of Congress and filed on January 18. Seemingly a road map for the conservative justices to stop disqualification, the brief reads more like a game of three-card monte. After swiftly noting that Chase’s discredited ruling in Griffin is “not directly binding,” it then shuffles into treating the ruling’s “longstanding precedent” as if it were absolutely binding and claims that “Congress must pass authorizing legislation to enforce Section 3.” The brief twists Congress’s express authority to enforce the amendment to mean that the amendment itself is not self-enforcing.
The brief bids the Supreme Court to rule that because the presidential oath of office does not contain the words “support the Constitution” (the president swears to “preserve, protect and defend the Constitution”), Trump is exempt from disqualification under Section 3. In claiming that the Colorado decision denies Congress’s authority to undo a Section 3 disqualification, it distorts the wording of the Twentieth Amendment, on presidential succession, to reach a conclusion for the ages: “A candidate may be elected President even if he is not qualified to hold the office.” In dealing that card, the brief’s authors appear not to notice that it gives away their entire game.
These lawyers—indeed, all the academics and pundits quailing at enforcement of the Constitution—would profit from the words of Abraham Lincoln at the outset of the Civil War. The American people, Lincoln said, had established that they could successfully create and administer a democratic government. They had yet to establish, however, whether they could maintain that government “against a formidable internal attempt to overthrow it.” Now they were left “to demonstrate to the world that those who can fairly carry an election can also suppress a rebellion.”
The conservative majority of the Supreme Court—and the historical legacy of the Roberts Court—have reached a point of no return. The law, no matter the diversions and claptrap of Trump’s lawyers and the pundits, is crystal clear, on incontestable historical as well as originalist grounds. So are the facts of the case, which in any event the Supreme Court is powerless to review. The conservatives face a choice between disqualifying Trump or shredding the foundation of their judicial methodology.
But the choice is far more profound than the Court’s consistency. In 2000 it disgraced itself by manipulating the Fourteenth Amendment to produce Bush v. Gore, a ruling that changed the course of history and was later described by Justice Antonin Scalia, who concurred in it, this way: “As we say in Brooklyn, a piece of shit.”
Now the Court must decide whether it will honor the original meaning of the Fourteenth Amendment and disqualify Donald Trump. If it does so, it may redeem in part the terrible judicial malpractice of 2000. If it does not, it will trash the constitutional defense of democracy designed following slavery’s abolition; it will guarantee, at a minimum, political chaos no matter what the voters decide in November; and it will quite possibly pave the way for a man who has vowed that he will, if necessary, rescind the Constitution in order to impose a dictatorship of revenge.
—January 25, 2024

Injustice Can Make You Crazy As A Bedbug. By Ken White


Ken White at Popehat Report

Years ago I worked on the indigent defense panel serving federal courts here in Los Angeles. For very little money, I represented federal defendants who could not afford a lawyer. They were accused of immigration crimes, drug crimes, violent crimes — the whole family of blue-collar federal criminality.

Their attitude about their circumstances was very different from the attitude of my paying clients. My paying clients are usually accused of white-collar crimes, usually college-educated and raised in upper-middle-class or better environments, and usually have no prior contact with the justice system. They tend to experience that system in a conspiratorial light. The criminal justice system is so perverse, so Kafkaesque, so indifferently brutal, that it seems inexplicable to them that what is happening to them happens to everybody. Instead, they usually believe that someone — an investigator, a prosecutor, a judge — had a grudge and is singling them out for especially brutal treatment, usually at the secret instigation of their enemies. They often believe that the case might be made better by complaining about the prosecutor targeting them for unfair or unusual treatment.

My indigent clients didn’t express that feeling at all. They had no expectations of fairness or courtesy or reason. Most of them had been through the system before, or had family who had been through the system. They expected Kafka, and got him. They might say that witnesses were lying, that the case was bullshit, or that the sentence was unfair, but they never thought they were being singled out. They knew this was how it worked.

This gulf between people with a fantastical view of the justice system drawn from myth and people who have been on one end of it or the other has been particularly gaping for the last five or so years. The Robert Mueller investigations and prosecutions, the January 6 prosecutions, and the cases against Donald Trump and his entourage have all produced outrage about selective prosecution and biased treatment. In the vast majority of cases, the outrage has been directed at the system doing what it does all of the time, but doing it against powerful, rich, or famous people, who usually escape such treatment.

So, defending Trump is the last thing I want to do, but here goes: I think the Carroll suit is outrageous, and the award she’s been granted even more so. I hope it’s overturned on appeal.

Leave Trump out of it for a moment. We now have a situation in which a socially unpopular figure can be accused — in the jurisdiction where he is hated the most — of a heinous crime without having any realistic means of defending himself, because the alleged crime dates back for decades and any kind of forensic evidence is long gone. This is a civil case, and the “preponderance of evidence” standard of proof is much lower than the “beyond a reasonable doubt” standard in a criminal trial, which makes it even harder to defend. If the person continues to defend himself — or denounce his accuser — he can be subject to ruinous financial penalties.

This is a classic example of an elite person — here, a New York Times columnist born to good fortune and educated at the University of Chicago and the London School of Economics — suddenly becoming outraged at the injustices of the system when it is used against another elite person — here, a famous and powerful billionaire who was the President of the United States and might be again.

There are plenty of revolting aspects of our justice system, criminal and civil. There are many things that could, or should, outrage Bret Stephens about American justice. This is an odd place to start.

First, Stephens complains that a “socially unpopular figure” can be accused of a “heinous crime” in the “jurisdiction where he is hated the most.” Generally, people who are accused of awful things are hated in the places where they are tried. Take the Central Park Five. They were accused of horrific crimes and tried in a city that reviled them. In fact Donald Trump — the man who arouses Bret Stephens’ civic sympathy — bought a full page ad demanding their execution. Let me tell you: people accused of crime, particularly violent crime, are generally reflexively despised in America. All it takes to be “socially unpopular” is to be accused.

Second, Bret Stephens complains that Trump has been tried “without any realistic chance of defending himself” because the case involves an accusation of decades-old sexual assault. This is nonsense, based on the common-from-dudebros mantra that it is illegitimate and unreliable to try someone for something based on one person’s word against another, as opposed to based on a panoply of CSI-style evidence.

But as both criminal lawyers and civil litigators will tell you, cases routinely turn on key issues that come down to one person’s word against another. That might be a cop’s word about what a suspect said, or it might be a CEO’s word about whether a fact was disclosed in a pitch meeting. But it is absolutely mundane for juries to be asked to weigh one person’s version of events against another’s. Usually one of those people isn’t a billionaire former President with unending access to top lawyers and political connections. Poor people are generally not sued for millions of dollars, because poor people don’t have even hundreds of dollars. But poor people routinely suffer in our system based on a single person’s word. They are denied bail, convicted of crimes, their probation revoked. They are evicted and their benefits are cut and they lose custody of kids. The word of a single person — a cop, a landlord, a bureaucrat — is commonly treated as inherently more believable than theirs. It happens every second.

Unstated but implied in Bret Stephens’ gripe is the premise that women accusing men of sexual assault are particularly unworthy of belief when uncorroborated by physical evidence, or that some sort of hysteria about sexual assault has addled our ability to weigh credibility. This is a worldview, to be sure. It’s just odd to see it raised on behalf of Trump, and not someone far less able to defend themselves. Donald John Trump is no Scottsboro boy. Donald Trump is someone with the maximum possible capacity to defend himself. And he did: represented by very capable counsel at the first Carroll trial, if not the second, and he used every argument and technique possible to convince the jury not to believe Ms. Carroll’s account, and that her delay in reporting rape and lifestyle choices made her unbelievable. Well, every technique except two — he didn’t present any defense witnesses and he didn’t testify. Perhaps Bret Stephens means that, in a he-said she-said case, it’s impossible for “he” to win if “he” is so transparently narcissistic, unlikable, and incredible that it would be suicidal to testify. Well, no system is perfect.

Bret Stephens is also upset that in civil cases, the jury decides who won using the standard of preponderance of evidence, rather than the beyond a reasonable doubt standard applicable to criminal cases. Does Stephens think that the higher standard should apply when billionaires are accused? When women accuse men? It’s not clear. But his quarrel should be with the legal minds of the 18th Century, when the standard was developed.

As a criminal defense attorney, and a commentator on criminal justice, I spend a lot of time thinking about what arguments will move an audience. I am a realist. I know that some audiences begin with their arms folded against me. In Bret Stephens, I have the audience of a man who is inspired to outrage by a large judgment against Donald Trump, a billionaire who deliberately treated his trial like a circus. Stephens’ analysis resolves every doubt, every question, every issue in his favor. But that same audience’s sympathy goes in the other direction if the protagonist in question is, for example, a 13-year-old Latino:

Maybe there’s a lesson in this, simple and old-fashioned as it may seem. When bad guys walk free and brave cops have to fear for their jobs for doing their jobs, crime tends to go up. And when the national conversation about the Adam Toledo tragedy revolves around the officer’s split-second, life-or-death decision instead of the question “What is a 13-year-old child doing with a 21-year-old criminal firing a gun at 2:30 a.m.?” then we are deeply confused about the nature of our problems, to say nothing of the way to a solution.

Well, Bret Stephens thinks, if we cannot save the 13-year-old poor boys, at least we should try to save the millionaires.

“But the precedent will eventually come to haunt someone who doesn’t deserve this kind of treatment,” Bret finishes.

Oh Bret. They might not teach you this at the London School of Economics. But deserve’s got nothing to do with it. And the precedent was set long ago, on the backs of far more humble people than Donald Trump.

Tuesday, January 30, 2024

Banning natural gas exports. By Matthew Yglesias

Does this even reduce emissions? Nobody seems to know or care.


The Biden administration announced last week that it would pause approval of 17 planned liquefied natural gas (LNG) export facilities, pending a review of the greenhouse gas emission implications of their construction.


The plan, stated this way, is pretty unobjectionable. Increasing LNG exports will generate modest economic benefits for the United States by increasing natural gas production, which will mean some jobs in the gas extraction sector. The exports will also improve America's terms of trade and make some imported goods cheaper. LNG exports will also presumably lower the global price of natural gas, which will increase the amount of gas that is burned globally. To the extent that cheaper gas displaces coal and oil, that will lower global greenhouse gas emissions. But to the extent that it leads to more aggregate energy use, it could make climate change worse.



So there are a lot of important questions to ask about LNG exports and climate change:


How much does blocking these terminals actually reduce global natural gas consumption, rather than redirecting production from the US to Qatar or Russia?


How much does increased gas consumption displace coal and oil? Or is it simply burned in addition to coal and oil?


Based on the answers to the questions above, what is the net impact of these LNG terminals on greenhouse gas emissions?


What is the economic value to the United States of increased natural gas exports?


What is the economic value to the world of cheaper natural gas?


If you had answers to those questions, you could then do two different cost-benefit analyses. One is what is the social cost to the United States of America of the higher emissions (if any) versus the economic benefits? The other is what is the social cost to the world of the higher emissions (if any) versus the economic benefits?
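Those two ledgers can be sketched as a toy calculation. Every number below is a made-up placeholder (the real review would have to estimate them), including the Social Cost of Carbon figure:

```python
# Toy sketch of the two cost-benefit ledgers described above.
# Every number here is a hypothetical placeholder, not an estimate.

def net_benefit(economic_benefit_usd, net_emissions_tonnes, scc_usd_per_tonne):
    """Economic benefit minus the social cost of the net emissions change."""
    return economic_benefit_usd - net_emissions_tonnes * scc_usd_per_tonne

us_economic_benefit = 2e9     # hypothetical: $2B in jobs and terms-of-trade gains
world_economic_benefit = 5e9  # hypothetical: $5B in cheaper global gas
net_emissions = 10e6          # hypothetical: 10M tonnes CO2/year net increase
scc = 185.0                   # hypothetical Social Cost of Carbon, $/tonne

us_ledger = net_benefit(us_economic_benefit, net_emissions, scc)
world_ledger = net_benefit(world_economic_benefit, net_emissions, scc)
# Same emissions charge, different economic baseline: the US ledger and
# the world ledger can point in different directions.
```

Under these placeholder inputs both ledgers happen to come out positive; the point is only that the verdict depends on whose benefits you count and what Social Cost of Carbon you apply, which is why the two analyses have to be run separately.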


One certainly can’t object, in principle, to the idea that the government should run these numbers and try to reach a decision.


That said, I think it is clear that the climate advocacy community is not asking for a sober-minded calculation. I’ll just quote Jillian Goodman’s writeup for Heatmap, which I think is extremely telling in two respects:


“Um, I think we all just won,” wrote Bill McKibben — perhaps the project’s staunchest foe — in a newsletter sent out just a few hours later. “Yes,” he wrote, “there are always devils in the details. And it doesn’t guarantee long-term victory — it sets up a process where victory is possible (to this point, the industry has gotten every permit they’ve asked for). But I have a beer in my hand.”


That possible breaking of historical precedent partially explains why McKibben is so exhilarated. Another reason has a lot to do with an analysis of the climate effects of U.S. LNG exports, released in November by energy analyst Jeremy Symons. Among his most incendiary findings was that, if all 17 export terminals were approved, the emissions related to the fuel that would flow through them would exceed the annual greenhouse gas emissions of the entire European Union.


Two points on this:


McKibben defines victory as blocking LNG exports. He knows in advance which way he wants the analysis to turn out.


The movement has reached its decision on which way it should turn out based on Symons’ analysis, even though Symons doesn’t even purport to estimate the net impact on emissions.


I think this is bad. And generally speaking, it's a bad idea to spend a lot of time listening to the ideas and priorities of activists who are this cavalier about their policy analysis.


I also have to say that any time I write a piece like this one from earlier in the month about how I wish Democrats would get real about the politics of climate change, I get a ton of immediate pushback. The climate advocates tell me that everyone knows the voters don’t want to make big sacrifices for climate change, that’s why the whole climate agenda was reoriented around jobs, industrial policy, energy security, innovation, and so forth. And I think that’s great. The Inflation Reduction Act really was centered around those things (thanks to Joe Manchin).


But this kind of thing is the tell. The actual revealed preference of the climate movement is to take a hammer to any fossil fuel project it sees, in such an arbitrary way that it wants to strike before even knowing whether the project raises emissions. And when I say that this kind of thinking is a problem politically, that's what I mean: a group of people who know that they should be trying to create politically sustainable decarbonization via jobs, growth, innovation, and energy security just can't restrain themselves from seizing on regulatory loopholes and prejudging the outcomes of complicated policy analysis.


There’s a strong public interest in gas exports

I do think it’s important to be rational here.


The climate movement may have decided that blocking these facilities is a good idea without checking to see whether the sign on the global emissions impact is positive or negative, but that doesn’t mean that blocking them would fail a rigorous cost-benefit test. But I will note that the review process they have asked for, and won, doesn’t require a cost-benefit test.


It recapitulates McKibben’s successful campaign against the Keystone XL pipeline, which benefitted from the fact that rules on cross-border pipelines don’t require a quantified cost-benefit analysis, just a fuzzy “public interest” determination.


But I think we ought to be clear about this.


If you take the public interest concept seriously, that ought to be a higher bar for blocking a project than a mere economic cost-benefit analysis. Consider the Keystone pipeline. Whatever the impact of blocking this pipeline on global emissions or the American economy, it also pissed off the government of Canada. Canada is a neighbor to the United States of America, it’s a major trade partner, and it’s a fellow member of the NATO alliance. From time to time, we ask Canada to help us out with things, like during the September 11, 2001 crisis when tons of airliners were diverted to land in Canada. That’s not to say “do what Canada wants” should be the guiding principle of American policy. But all else being equal, it’s nice to help your friends out. So the right way to make a “public interest” calculus about Keystone would be to take a conventional economic cost-benefit analysis, and then put a further foreign policy thumb on the scale in favor of approving it.


In practice, though, that’s not what happened. The reason Keystone in particular became such a target of activism is that it was within the executive branch’s power to block the pipeline without saying exactly what the environmental benefits or economic costs were. It was a perfect piece of activist chum — bad policy pushed by people who simply don’t do policy analysis.


And something similar is happening with natural gas exports. On the one hand, there’s the economic cost-benefit analysis. But over and above that, there are the foreign policy benefits of LNG exports. We make it harder for America’s critical diplomatic partners in Europe to follow America’s policy on the Russia-Ukraine war if we make it harder to supply them with natural gas. Conversely, across the global south, it is better for America’s foreign policy and diplomatic posture if countries buy American gas than Russian or Qatari gas. I’m not really sure how you’d put the foreign policy benefits of LNG exports into a cost-benefit analysis equation. But they clearly should count as a consideration in favor of approval, not against it.


Energy is important to the global economy

Climate activists want the world to burn less fossil fuel.


One way to do this is to promote new low-carbon technologies, which is what the climate provisions of the Inflation Reduction Act do. It’s a politically sound approach to mitigating climate change, and it also makes sense on the merits.


Another approach is to raise the price of fossil fuels, which is extremely politically dicey, but could make sense on the merits. A carbon tax, for example, would reduce emissions while also raising revenue. The problem is that it’s unpopular, so politicians don’t want to do it and climate advocates, wisely, don’t ask them to. Where the advocates keep running aground, though, is that their preferred strategy of stymying fossil fuel production and distribution has the same downside (higher prices) but without the revenue upside.


Imagine the polling on two ideas:


A tax on carbon dioxide emissions, with the revenue used to reduce the deficit and lower interest rates so mortgages are more affordable.


A tax on carbon dioxide emissions, with the revenue put into sacks and tossed into the ocean for no reason.


Whatever the political strength of (1), the strength of (2) is going to be less. So if you think (1) is not a sustainable strategy, then (2) is also not a sustainable strategy. The cute thing about trying to block exports is you can say “well, sure, I’m reducing emissions by raising energy prices, but they’re not American energy prices so it’s fine.”


But is it fine? As reported in this Euronews story from back when the Russia-Ukraine war spiked natural gas prices, “record-breaking gas prices have driven the cost of fertilizers up by 151% on an annual basis, putting producers and farmers under enormous financial strain.” Of course that’s bad for Europe. But the market for things like fertilizer and agricultural commodities is global, so higher global natural gas prices do come around to bite American consumers in the ass, even if our domestic natural gas price stays low. In June there was a Bloomberg story titled “Gas Crisis Rages On for the Poorest Nations” about how Europe solved its gas problems by sourcing from elsewhere, which simply created new shortages for the global poor.


Again, this is not just an economic problem, it raises the question of whether there are any climate benefits at all to blocking natural gas exports. On September 6 of last year, for example, Reuters reported that India was looking for more natural gas to help avoid blackouts. Then on November 29, Reuters reported that “India aims to add 17 gigawatts of coal-based power generation capacity in the next 16 months, its fastest pace in recent years, to avert outages due to a record rise in power demand, according to government officials and documents.”


I’m not here to tell you that blocking LNG exports will raise emissions on net. But it’s obvious that more than zero percent of the missing American gas will be replaced by coal, and exactly what that percent is matters. But looking around at the coverage of the issue, this question is shockingly absent. The frame is entirely about climate activists demanding this versus industry interests or potential foreign policy concerns. It’s taken for granted that this is a major step toward achieving global climate goals. But nobody is presenting any modeling that actually shows that.
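That substitution question can be made concrete with a toy model. The emissions intensities below are rough approximations (real lifecycle numbers, including methane leakage, are contested), and the substitution shares are precisely the unknowns a serious review would need to estimate:

```python
# Toy model: net emissions change per kWh when blocked US gas-fired
# generation is replaced by some mix of coal, non-US gas, and nothing.
# Intensities are approximations (kg CO2 per kWh of electricity); the
# substitution shares are the unknowns the review would have to estimate.

GAS_KG_PER_KWH = 0.49   # approximate gas-fired generation intensity
COAL_KG_PER_KWH = 0.95  # approximate coal-fired generation intensity

def net_emissions_change(coal_share, other_gas_share):
    """kg CO2 change per kWh of blocked US gas.

    coal_share: fraction of displaced demand met by coal instead
    other_gas_share: fraction met by non-US gas (emissions unchanged)
    The remainder is demand that simply disappears (full savings).
    """
    replacement = coal_share * COAL_KG_PER_KWH + other_gas_share * GAS_KG_PER_KWH
    return replacement - GAS_KG_PER_KWH  # minus the US gas no longer burned

# Hypothetical scenario: 40% shifts to coal, 50% to Qatari/Russian gas,
# and 10% of demand vanishes. A positive result means blocking exports
# *raises* global emissions under these assumptions.
delta = net_emissions_change(0.4, 0.5)
```

With these made-up shares, the result comes out positive: the coal substitution outweighs the demand destruction. Nudge the shares and the sign flips, which is exactly why the coal-gas margin is the number that matters and why modeling it ought to precede the victory laps.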


Demand more from the climate movement

I was going to say something about the politics here, but honestly, I’m not sure what to say. I don’t think it’s politically wise for Biden to spend more time kowtowing to climate activists, but I also acknowledge that LNG exports are not a salient issue for anyone other than climate activists.


But I do hope other stakeholders in the coalition will note a few things:


The climate movement has not actually done the thing it says it has done: adopted a framework that emphasizes jobs and investment. They can be forced to do that when centrist Democrats insist, but their default posture is exactly what they're doing here.


The climate movement also doesn’t bother to do real policy analysis before formulating their demand. Everything is reasoning backwards from fake climate targets. For the world to reach net zero by 2050, we need it to be the case that India is not making massive new investments in fossil fuels, therefore we don’t need to worry about selling them gas. But, in fact, India is making massive investments in fossil fuels, so the coal-gas margin matters!


The climate movement is out here making new policy demands in an election year, even though they already got to be the Democratic Party’s top legislative priority.


We have, essentially, a movement that is addicted to activist chum. There are three things that would actually make a big difference in addressing climate change. One is targeted deregulatory efforts aimed at making it easier to deploy zero carbon electricity — this would require getting environmentalists to compromise on other goals for the sake of climate. A second is persuading people to care more about climate change, because if they cared, you could do pricing that would accelerate all kinds of useful consumer change. A third is developing new technology that would solve hard problems like fertilizer production, high-temperature manufacturing, aviation, and maritime shipping.


There are people working on all of these things, which is great. But there’s this mass of people (and money) working in the amorphous chum space where they ask for a “climate emergency” or get really worked up about the exact wording of COP statements or want to block LNG exports. And they can’t really explain why any of this makes a difference or is helpful, it just kind of shambles forward as a donor perpetual motion machine.


So I hope the Biden administration does its review in a rigorous way and really tries to determine the net emissions impact of building these facilities, as well as the economic benefits, and under what implicit Social Cost of Carbon it would make sense to block them. The advocate reaction makes me suspect that the fix is in. But I hope that whatever happens in 2024, the powers that be — in politics, in media, in philanthropy — will start asking harder questions about who they’re listening to on energy issues and why.


Monday, January 29, 2024

Marvel World - Dissent Magazine. By Sam Adler-Bell

Marvel Studios has managed to recruit fans into rooting not just for its superheroes, but for the company’s business plan.



Marvel action figures (Hannaford/Flickr)

As a kid, my favorite superhero was Batman. For me, superheroes are all about their origin story—not necessarily how they got their superpowers (Batman, after all, doesn’t have any), but why they’re driven to super-heroism. And Batman’s origin story is great: billionaire Bruce Wayne uses the financial bequest of his murdered parents to become strong enough to save them; only he can’t do that, because they’re already dead. So he spends his time compulsively regenerating the traumatic situation of their passing, inviting villainy and menace into his life, where it endangers whomever he loves. Batman is stuck in a loop, ever recreating the conditions for his primal failure, so that he may fail again. Freudians call this a “repetition compulsion.” We might just call it misery; the only love he knows is failure. 


But that’s grownup bullshit. The real reason I loved Batman, as a kid, was the toys. And let me tell you, these were awesome toys. I accumulated dozens of Batman action figures, along with various Batmobiles, Batplanes, and a much-cherished, delightfully intricate Batcave, sized appropriately for my army of plastic, crime-fighting orphans.


My abiding preference for figurines depicting Batman himself, rather than any of his friends or foes, created story problems, however. Why, in this universe, were there so many Batmen? Were they all the same guy in different outfits? That would mean playing with only one at a time! And who would he face off against? Luke Skywalker? That didn’t make any sense! (I was a neurotic kid.) My solution was elegant: I imagined scenarios involving evil imposter Batmen, terrorizing Gotham in his name—and team-ups between doppelganger Batmen from alternative universes. Kid logic is a flexible thing, but it demands satisfaction. Armies of android Batmen controlled by a demonic super-computer? It played. 


I was reminded of this boyhood conundrum—and my solution to it—while reading MCU: The Reign of Marvel Studios, a highly competent history of Marvel’s rise to Hollywood supremacy by entertainment reporters Joanna Robinson, Dave Gonzales, and Gavin Edwards. (NB, nerds: I am aware that Batman is not part of the Marvel Cinematic Universe.) Since 2008, Marvel Studios has made thirty-three films, earned nearly $30 billion, and reshaped the movie business in its image, inspiring a feeding frenzy for superhero content and dormant intellectual property (IP) out of which complex, interconnected “cinematic universes” can be built. Disney, an IP powerhouse infamous for jealously guarding its roster of beloved characters, acquired Marvel in 2009 for $4 billion; three years later, it purchased George Lucas’s Star Wars universe as well. 


But in the 1990s, when Marvel emerged from bankruptcy under the stewardship of toy magnate Ike Perlmutter, its goals were considerably humbler. As Becca Rothfeld writes in her Washington Post review of MCU, “Before they became products in their own right, Marvel movies were unusually expensive and elaborate advertisements for action figures.”  


In 1993, Israeli-born toy-maker Avi Arad was appointed chief executive of Marvel’s fledgling visual entertainment division, which sold the rights to Marvel heroes to individual film and TV productions. “Putting a toy designer in charge of Marvel Films,” the authors write, “made clear what Marvel wanted out of Hollywood: shows and movies that would help them sell more toys. In industry argot, they wanted to make entertainment that was ‘toyetic.’” When Marvel founded an in-house studio in 2004, “toyeticism” was its raison d’être. Since the 1990s, Marvel IP had yielded several successful films, but these, including the Blade movies starring Wesley Snipes and Bryan Singer’s X-Men franchise, were seen as needlessly dark and adult by Marvel’s toy-focused c-suite. If instead Marvel made its own films, they reasoned, “it could keep the on-screen tone toy-friendly and ensure that each movie starred whatever lineup of heroes would move the most action figures.” 


Marvel chose arms dealer Tony Stark to star in the first MCU film (2008’s Iron Man, with Robert Downey Jr.) because a focus group of kids reported that he was the hero they’d “most want to play with as a toy.” (And to be fair to those kids, he flies and shoots lasers from his hands.) For years, Perlmutter refused to approve stand-alone films starring female heroes because, he believed, the toys wouldn’t sell. Black heroes were also thought insufficiently toyetic. Marvel’s corporate brain trust was relieved when a change to the storyboarding for Captain America: The First Avenger, set during the Second World War, placed more emphasis on HYDRA, a syndicate of long-time Marvel baddies, because, as the authors note, “the resulting toys would be more interesting and—technically—not Nazi action figures.”


It’s not uncommon for genre stories to be constructed in this way: baubles first. The cartoon and comic book superhero He-Man, for example, rides an armored green tiger because Mattel, the toy company that invented him, had several warehouses of unsold tiger toys to get rid of. There is obviously something a bit sordid about so crassly subordinating the creative instinct to the necessities of commerce (in this case, commerce in surplus plastic cats). But isn’t that Hollywood in a nutshell?


What seems to trouble Marvel’s detractors—the critics and auteurs who regularly inveigh against its reign—is not that Marvel prioritizes profit over creativity, diversion over art, repetition over novelty, or juvenile wish-fulfillment over adult travail; but that it does so shamelessly, without the obligatory pretense of past eras of Hollywood. 


Ultimately, it wasn’t action figures that made Marvel king; it was ticket sales. Four of the ten highest-grossing movies in history are Marvel Studios productions. Still, I can’t stop thinking about “toyeticism.” In a perverse way, it has made me more, not less, sympathetic to Marvel to imagine its movies being conceived in a process not unlike my boyhood Bat-reveries. I envision a group of kids in their dads’ business suits, sitting on the floor of a conference room, staring down at a pile of their favorite action figures—three Spider-Men, a Thor with no hair (somebody’s sister had cut it off), maybe an Iron Man or two—and asking themselves, “Well, why would all these guys be in the same movie? Why would they be fighting each other? Why three Spider-Men?” If someone comes up with a good enough answer—and it only has to satisfy kid logic—they get to pick up the toys and smash them into each other, over and over again. 


Greta Gerwig’s Barbie (like He-Man, a Mattel product) took seriously the problem of plastic. Gerwig incorporates into her film the idea that Barbies are toys, that the stories we tell about them, and the world they inhabit, reflect the imaginative preoccupations of children—girls, in particular—whose incipient Weltanschauung is conditioned and constrained, but not entirely dictated, by gender, patriarchy, and Mattel’s bottom line. Barbie World is a cruel but fabulous place where every disposition is sunny, every outfit is perfect, every woman is a success, and every foot is arched and pointed primly downward. 

If we think of the MCU in similar terms, as a bedazzling prison populated not by superheroes but superhero dolls, what, then, would we say are the attributes of Marvel World? 


Well, it’s certainly a place that needs a lot of saving, where the resolution of one crisis tends to generate the seeds of the next. Most MCU villains are themselves victims, often of collateral damage from the last time the Avengers (the MCU’s premier supergroup) saved the universe, usually leveling whole city blocks to do so. (The narrative momentum of Marvel World, you might say, relies on blowback.) There are nations in Marvel World, but scant geopolitics, except in the form of globalist schemes to fetter the Avengers. Our heroes are celebrities, but celebrities of the besieged type; they may wish to live their own lives, away from the limelight, but they are constantly being dragged back into service by a simultaneously needful and irksome public. They carry this burden with a discordant mix of grim resolve and self-effacing humor (the DNA of the comics); at times, the Avengers talk about world-saving like it’s a nine-to-five job. (“He’s a friend from work,” quips Thor, before engaging the Hulk in gladiatorial combat in 2017’s Thor: Ragnarok.)


This lunch-pail badinage is at odds with another recurring conceit: that each of the Avengers, like Batman, is bound to hero-work by an originary trauma of some sort, the details of which they closely guard and quietly bear, except in whispered bouts of self-disclosure between pairs of heroes—stagy little superhuman trust-building exercises—which feel more artificial than all the computer-generated aliens. And oh, there are aliens. Lots of aliens. 


Marvel World is not a good place to learn anything new about heroism, about love, grief, or responsibility—although these themes are explicit in every film. The films are also full of melodrama, big swells of emotion of the most compulsory type. (Adorno said popular music “hears for the listener”; Marvel feels for him.) But Marvel World is not without its charms. It is a good place, for example, to see what it looks like when a massive, metallic space-whale crashes through Grand Central Terminal. Likewise, Downey Jr.’s screwball patter with Gwyneth Paltrow in the early Iron Man movies is undeniably charming; James Gunn’s Guardians of the Galaxy films are far enough removed (several light-years) from the instrumental plotting and obedient house style of most Marvel films to earn their jukebox fight scenes and sentimentality; Zendaya and Tom Holland (a real-life couple) are winning and believable as teenage sweethearts in the John Hughes–inspired Spider-Man films.


And in at least one respect, Marvel movies are highly sophisticated texts. As the films accumulate, a creeping self-awareness—of the sort that brings chaos and, eventually, liberation to Gerwig’s Barbies—starts to bedevil the denizens of Marvel World as well. Watch enough of these movies (and God knows I have), and what they seem to be about is Marvel Studios itself. 


Other critics have noted this self-reflexivity. “MCU movies are often metaphors for themselves,” writes the New Yorker’s Michael Schulman. “In ‘The Avengers,’ the tense collaboration among superheroes with complementary powers and sizable egos resembles nothing so much as Hollywood filmmaking, with writers, directors, and producers wrangling for control.” Similarly, frequent handwringing within the movies about which heroes should make up, or lead, this or that version of the Avengers stands in for the casting process. As Schulman notes, in Captain America: Civil War, the imposition of government oversight on the Avengers is “a handy analogy for creativity under corporate supervision.”


But repressed anxieties are at least as pervasive as self-conscious allegory. Marvel Studios built its empire on characters and storylines generated, over decades, by an army of comic book writers and artists. In exchange for using their designs in billion-dollar movies, Marvel artists have received checks as low as $5,000 and invitations to a premiere. It’s notable, then, how frequently villains and heroes in the MCU are motivated by a desire to defend, hoard, or steal intellectual property. In Iron Man, Jeff Bridges’s Obadiah Stane rebukes Downey Jr.’s Tony Stark for keeping the Iron Man suit secret from his business partners: “You really think that just because you have an idea, it belongs to you?” 


In Iron Man 2 (2010), Mickey Rourke’s Ivan Vanko resolves to kill Stark because he believes Stark’s father stole his own father’s design for the “arc reactor,” which powers Stark’s suit. “You come from a family of thieves and butchers,” Vanko lectures Stark, “and now like all guilty men, you try to rewrite your own history. And you forget all the lives the Stark family has destroyed.” Hoarded IP derived from alienated labor represents the MCU’s primitive accumulation; Vanko’s speech—delivered in an over-the-top Russian accent by Rourke—is compelling despite itself.


Anxiety about corporate control and uniformity also animates the MCU’s most critically lauded film. Ryan Coogler’s Black Panther (2018) is set in the techno-utopian kingdom of Wakanda, an African nation unspoiled by European colonialism. Wakanda owes its distinctive visual grammar—its Afrofuturist costuming, artful lighting, and inventive set design—to Coogler’s decision to bypass Marvel’s in-house art leads “in favor of his own crew,” some of whom won Academy Awards for their work (the only Oscar wins for the franchise). 


Notably, no other MCU character appears in Black Panther before the credits run; in most respects, the film resists not only Marvel’s visual tyranny, but the instrumentalization of its plot for the purpose of advancing the larger, interconnected MCU saga. Black Panther is a movie about an isolated, self-sufficient Black civilization resisting interference from outsiders, including the Avengers, who would use its resources for their own aims. In a sense, Black Panther is the Wakanda of the MCU—a site of resistance against Marvel’s hostility to sovereign artistic ambition—which makes what happens in the next MCU film particularly galling. As critic Aaron Bady notes, in Avengers: Infinity War, Wakanda is bled dry, narratively, by the arrival of the Avengers, whose plot takes immediate precedence, and robbed of its visual distinctiveness by directors Joe and Anthony Russo. Situated within an Avengers tentpole, Wakanda serves as yet another wasted landscape for an interminable, computer-generated battle between superheroes and aliens, the same one we’ve seen dozens of times already. It is exhausting.


But Marvel’s self-awareness extends even to this exhaustion; the films seem to know they are testing our patience. In Spider-Man: Far From Home (2019), Jake Gyllenhaal plays an aggrieved former Stark employee who uses combat drones and holograms to trick humanity into believing he is an interdimensional superhero called Mysterio, fighting to save Earth from elemental monsters. The revelation of Mysterio’s deceit, halfway through the movie, lends a weightlessness to the entire MCU canon: the fight scenes we’ve seen thus far—including Spider-Man’s showdown with a massive water golem in the canals of Venice—were fake; but for the audience, they were no more or less fake than any other fight scene in a Marvel movie. In the end, there is something contemptuous about the relish with which the film brings attention to its own artifice. As Mysterio tells Spider-Man, “It’s easy to fool people when they’re already fooling themselves.” 


The hero of MCU is Kevin Feige, the plucky comic-book savant who rose through Hollywood’s ranks to become Marvel’s top producer and the creative architect of the Marvel Cinematic Universe. The authors invite us to root for Feige the way we root for characters like Steve Rogers, the skinny kid from Brooklyn who is transformed into a super-soldier to fight the Nazis. “So many big men fighting this war,” says Captain America’s inventor, a German-Jewish refugee played by Stanley Tucci. “Maybe what we need now is a little guy.” This is the essence of the Marvel power fantasy: regular people—skinny kids, Jews, outcasts, nerds—becoming strong enough to defeat their tormenters and, by dint of their own history of suffering, wielding their superpowers for good. (Captain America was created in 1941 by Jack Kirby and Joe Simon; he is seen punching out Hitler on the cover of Volume 1.) 

This perverse identification with power explains the “sore-winner” quality of Marvel fandom—the online armies of superhero fans who react with rancor every time a trendy actor or director criticizes the MCU. Marvel may be on top of the world, but some of its fans still feel like they’re trapped inside a high-school locker. 


Feige isn’t the son of immigrant garment workers like Kirby. (His origin story involves being rejected by the University of Southern California’s film school five times.) But the authors of MCU take pains to establish the unlikelihood of Feige’s astronomic success. “Feige’s vision for Marvel wasn’t linear, limited, or safe,” they report. Marvel Studios grew “by combining the improvisational bootstrap culture of a Silicon Valley start-up with a modern version of the studio system, signing up actors for long-term contracts, cultivating a coterie of staff writers, and bringing on a small army of visual artists who sometimes determined the look of a movie before a director was even hired.” 


In truth, of course, safety—in the sense of a guaranteed return on investment for shareholders—has been Marvel’s principal accomplishment, reviving a flailing blockbuster system by eliminating the risk associated with novelty. To do so, Feige merely supercharged what had already been working for Star Wars, Lord of the Rings, and Harry Potter: constituting a “paracosm” out of existing IP, an endlessly iterative fantasy world, with a locked-in, nostalgic audience. South Park satirized this enterprise, and the essentially conservative impulse underlying it, in its twentieth season, in which the adult townsfolk become addicted to “Member Berries”: little grape-like, sentient fruit who squeak IP-centric slogans like, “’member Chewbacca?” and “’member Ghostbusters?” before tossing off increasingly reactionary ones: “’member feeling safe?” “’member Reagan?” “’member when marriage was just between a man and a woman?”


Robinson, Gonzales, and Edwards are clearly Marvel fans, but they’re too well-sourced to paint an exclusively flattering portrait.  We learn, in MCU, about contractual disputes with actors and directors; about the likely pervasiveness of HGH prescriptions on Marvel sets; and about the mistreatment of Marvel’s visual effects workers, who recently voted to unionize (a conflict also presaged by the Mysterio plot line, in which a disgruntled viz dev underling organizes a revolt against the Avengers). But even these darker moments are conveyed in a relentlessly sunny and over-awed tone. The cumulative effect of this dissonant boosterism is a sense of creeping dread, like perusing a glossy, trifold pamphlet only to gradually realize it advertises a concentration camp. “The MCU is inevitable,” the authors write, “as Thanos says of himself,” perhaps forgetting for a moment that Thanos, the arch-villain of Marvel World, planned to destroy half the universe in order to save it. 


What does seem unique about Feige’s accomplishment is that he managed to recruit fans of the MCU into rooting not merely for its superheroes, but for his own business plan. Like sports fans who scrutinize the machinations of their teams’ front offices as closely as the action on the field, Marvel fans debate and dissect the twists and turns of Feige’s content development pipeline, which is divided into numbered “phases,” as they would be on a corporate slide deck. Jason E. Squire, a professor emeritus at the film school that rejected Feige five times, recently told Variety, “Kevin Feige is the Babe Ruth of movie executives.” He calls his shots, and they (usually) leave the park. (Maybe rooting for Marvel is like rooting for the Yankees.) But just as our sympathy for skinny Steve Rogers wanes the longer we know him as Captain America, the thrill of rooting for super-charged winners in the C-suite may diminish too. (Somebody has to root for the Mets.)


Still, Feige is difficult to hate. He strikes the reader as so thoroughly a man for his moment, so naturally and plentifully endowed with the meager qualities needed for this endeavor, that it’s difficult to summon or maintain the appropriate resentment at what he and Marvel have wrought. As Rick and Morty creator Dan Harmon told the authors of MCU, “[Y]ou can’t fight Kevin Feige in the street. He’ll just say, ‘Oh, I love that you’re fighting me. This is so wonderful,’ and everyone will start booing you for being a bully.” He’s a slap-happy warrior, a fan himself. Arguing with Feige about artistic integrity, I imagine, would be like arguing with a beaver about why he builds his den with sticks instead of stucco.


Indeed, these debates—about Marvel, mass culture, and art—feel as stale and redundant as the movies themselves. Now as in the past, it’s difficult to discern whether the lover of high art is principally disturbed by the market or by the masses (consider, for example, Adorno’s disdain for big band jazz); likewise, it’s hard to tell whether apologists for mass culture are at war with elite snobbery and self-satisfaction or with taste, quality, and the very notion of artistic merit. Suffice it to say, even Marvel’s harshest critics usually admit the films are entertaining. And I suspect most MCU devotees know that entertainment is not all our souls require. 

“The March Hare explained to Alice that ‘I like what I get’ is not the same thing as ‘I get what I like,’” Dwight Macdonald wrote in one of his prickly takedowns of mass culture, “but March Hares have never been welcome on Madison Avenue.” I suspect March Hares aren’t welcome at Marvel Studios either. (Then again, Alice’s Adventures in Wonderland is public domain, so be careful what you wish for.) Despite frequent industry warnings of “superhero fatigue,” audiences continue, at least, to like what they get; Guardians of the Galaxy Vol. 3 pulled $845.6 million worldwide, just a smidge under the returns of Vol. 2 ($863 million). The other highest-grossing movies of 2023 included Barbie, The Super Mario Bros. Movie, the tenth entry in the Fast & Furious franchise, The Little Mermaid, an animated Spider-Man, and Mission: Impossible – Dead Reckoning Part One. No doubt, Hollywood’s love affair with iterative IP is far from over. 


That isn’t to say Marvel doesn’t have problems. The studio’s latest offering, The Marvels, was the worst-performing MCU entry ever, grossing just under $200 million in its first month (a flop). The visual effects for Ant-Man and the Wasp: Quantumania (2023) were slapdash and widely mocked by critics and fans. Ratings for MCU television content seem to be flagging. And in December, Marvel cut ties with the actor who was supposed to shepherd the MCU into Phase Six, Jonathan Majors, after he was convicted of misdemeanor assault and harassment. Most of all, perhaps, standards are slipping amid a glut of content on the streaming service Disney+. “The quality is suffering,” one of the authors of MCU, Joanna Robinson, recently told Variety. “In 2019, at the peak, if you put ‘Marvel Studios’ in front of something, people were like, ‘Oh, that brand means quality.’ That association is no longer the case because there have been so many projects that felt half-baked and undercooked.”


The recent films have a narrative problem as well. In the early going, the principal story challenge for Marvel productions was keeping audiences invested in the stakes: how many times can the Avengers save the world before “saving the world” ceases to feel like such a big deal? The Thanos storyline was the apotheosis of this emotional arms race: in Infinity War, Thanos succeeds in disappearing half the universe’s population—along with dozens of beloved MCU heroes—with a snap of his fingers. But then, in Endgame, the Avengers succeed in reviving most of their dead friends by traveling to an alternative universe where Iron Man uses the “infinity gauntlet” to reverse the Thanos snap, while sacrificing himself: an emotionally satisfying triumph.


But it can’t be replicated. From then on, the existence of the scarcely understood “multiverse” was supposed to provide narrative momentum for the films—and a handy justification for including Andrew Garfield and Tobey Maguire’s versions of Spider-Man in the Marvel/Sony co-production Spider-Man: No Way Home—but all the multiverse could really do was provide narrative indeterminacy, evacuating the stakes from any consequential event or loss. Tony Stark’s death in Endgame was tragic, but why should fans accept its permanence, when in the very same movie, dozens of other characters were revived? In 2021, as if to taunt Feige and co. for confining themselves to this metaphysical cul-de-sac, a group of Marvel fans paid for a billboard urging the studio to “#BringBackTonyStarkToLife.” And why not? (One reason why not: Downey Jr.’s salary had ballooned to $75 million for Endgame.) 


In the latest Guardians of the Galaxy movie, the last to be directed by James Gunn, Chukwudi Iwuji plays the High Evolutionary, a megalomaniacal alien geneticist who aims to build a utopian society inhabited by supreme beings of his own creation. A space-age Doctor Moreau, he evolves new species from the DNA of lower life-forms (raccoons, badgers, walruses), branding each as the IP of his company, OrgoCorp. In pursuit of perfection, he builds creature after creature, world after world, looking for signs of the “capacity for invention” that is the hallmark of civilization. But each iteration disappoints him. His experiments only ever replicate what is already known; they can’t make anything new for themselves; they are perfect, but perfectly predictable. (A race of man-animal hybrids, sequestered on an alternate Earth, rebuild 1950s suburbia, down to the linoleum floors and manual transmission cars; the High Evolutionary destroys them to start over again.)

It seems likely that Gunn intended some of this thematic resonance. After all, what is Disney if not a massive corporate zoo of super-beings and talking animals, made of recycled and remixed franchise DNA, which are frantically combined into flawed but functional worlds? (Gunn’s bitterness is a matter of record: Marvel/Disney fired him in 2018, over blue tweets from the aughts; he was rehired to finish the movie in 2019, after his cast revolted.) Notably, Gunn invites the audience to sympathize with all of the High Evolutionary’s creations—not just the ones we know and love. To be test subjects for OrgoCorp experiments is to be instrumentalized, enslaved. And so the Guardians free them all, facilitating an exodus of giant animals, toothy space squids, and gleeful star children onto a giant spaceship headed for the cosmos. It’s a moving moment. 


Where is this ark of liberated misfits headed? Well, if they could go where Gunn is going, then to Warner Bros. Discovery, where he’s been hired to revitalize the DCU, the shared universe inhabited by Superman, Wonder Woman, and Batman. Gunn’s message in Guardians 3 seems to be that the capricious world-builders of the MCU don’t deserve their progeny; that Feige and Disney have crushed the creative potential of their own creations, by exerting too much control and imposing their own definition of perfection. Like Prendick, Gunn has come to sympathize with Moreau’s abominations. (“I say I became habituated to the Beast Folk. . . . I suppose everything in existence takes its color from the average hue of our surroundings.”) Oddly, I find myself sympathizing with Moreau and the High Evolutionary; their lengthy, torturous experiments have failed to reproduce the human spark. Some abominations are not worth saving. 


Sam Adler-Bell is a freelance writer in New York. He co-hosts the Dissent podcast Know Your Enemy. 

Who’s Canceling Whom? | David Cole | The New York Review of Books. By David Cole





Conservatives often charge their opponents with “cancel culture,” but the right poses as significant a threat to free speech as the left.


Illustration by Michael Schmelling

The Canceling of the American Mind
by Greg Lukianoff and Rikki Schlott

Simon and Schuster, 443 pp., $29.99

The instantly notorious exchange in a congressional hearing on December 5 between Representative Elise Stefanik and the presidents of Harvard, the University of Pennsylvania, and the Massachusetts Institute of Technology laid bare once again the fragility of our collective commitment to free speech. Stefanik repeatedly asked the presidents whether a student calling for the genocide of Jews (which she equated with calling for “intifada”) would violate their institutions’ codes of conduct or constitute bullying or harassment. Each one replied, in effect, “It depends.”

Despite the outrage that followed, that’s actually the right answer if universities respect free speech principles. As a general matter, advocating for genocide or saying any number of other hateful things is protected by the First Amendment. If a woman stood on a street corner across from Congress holding a sign calling for the genocide of Jews, government officials could take no action against her. Even hateful speech calling for unconscionable acts of violence is protected by the First Amendment unless it falls within very narrow exceptions, such as genuine threats of violence or “incitement” that is both intended and likely to produce imminent violence. The sign would fit none of those categories.

That doesn’t mean there is nothing universities can do about hateful speech. On campus as in the workplace, denigrating speech can sometimes constitute discriminatory harassment, which is not protected by the First Amendment. Yelling such an epithet at a particular Jewish student or pinning such a sign to his dorm room door could be considered religious harassment, not free speech. Even when not directed at a particular individual, if such a statement were repeated so often that it pervaded the campus, it could create a “hostile” learning environment that would also amount to prohibited discrimination, not protected speech. And a professor in a classroom could forbid such a statement as interference with civil and robust discussion.

But a student at a campus protest against the Israel–Gaza conflict who chants “From the river to the sea” or even “Genocide to the Jews” without directing it at anyone in particular may not be punished by a school that respects free speech principles. Public universities are required to safeguard free speech, since they are directly governed by the First Amendment. Private universities are not, but many, including Harvard, Penn, and MIT, have committed to respect free speech on campus essentially as if they were bound by the First Amendment. So “It depends” was the right answer.

But it was not the right answer for the moment, evidently. Penn president Elizabeth Magill resigned under extraordinary pressure four days after her testimony. Harvard president Claudine Gay apologized and briefly held on to her position, despite donors’ and legislators’ calls for her ouster. But she, too, was forced to resign in January, after evidence emerged of plagiarism in many of her academic papers. Only MIT’s Sally Kornbluth remains in office. Meanwhile, the GOP-led House Committee on Education and the Workforce, which held the hearing, has launched an official investigation into the “learning environment” at all three schools. That’s the price of upholding free speech principles in today’s impassioned divide over Israel and Gaza.

Some commentators, such as The New York Times’s Bret Stephens, argued that the presidents’ unwillingness to unequivocally prohibit advocacy of genocide of Jews was hypocritical in view of previous decisions that were less than fully protective of free speech—such as MIT’s retraction of an invitation to the eminent geophysicist Dorian Abbot to deliver a public lecture after attention was drawn to his critique of some diversity initiatives.

The principal complaint, however, was not that the university presidents had been too censorious previously, but that they were not being censorious enough now. The critics insist that they should have explicitly stated that any call for genocide would violate their school policies. And while Stefanik, a staunch Republican, led the charge, she was joined by many prominent liberals, including Pennsylvania governor Josh Shapiro, Second Gentleman Doug Emhoff, and Harvard constitutional law scholar Laurence Tribe.

It is true that college campuses have not been paragons of tolerance and intellectual diversity in recent years, as amply illustrated by a timely book, The Canceling of the American Mind. Greg Lukianoff and Rikki Schlott, the president of and a research fellow at the nonprofit Foundation for Individual Rights and Expression (FIRE), offer persuasive evidence that students, professors, and administrators at many colleges and universities across the country have been too quick to punish or “cancel” those whose views contravene progressive orthodoxy on race, gender, sexuality, and other matters.

Many of the stories are familiar. A professor at Hamline University in Minnesota was sanctioned for showing a painting depicting the Prophet Muhammad in an art history class. The head of a Yale residential college and his wife were hounded by students for questioning a request from the administration that students not wear Halloween costumes that reinforce stereotypes. A conservative judge invited to speak at Stanford Law School was shouted down by students, as was a conservative lawyer at Yale Law School.

But FIRE, which specializes in defending free speech on campus, has learned of many more incidents, much less well known, and reading about them all in one place makes clear that these are not isolated instances. They include a professor at the University of Southern California who was pressured to stop teaching a class after he explained that Chinese speakers say nèi ge (meaning “that”) as filler, much as English speakers use “like” or “you know,” and students objected that it sounded like a racial slur; a professor at UCLA who was suspended after citing Martin Luther King Jr. in a sarcastic email rejecting a request that he grade Black students’ exams more leniently following the police killing of George Floyd; and, in an instance of conservative canceling, three professors at Collin College in Texas who were terminated for complaining that the school’s Covid policies were insufficiently strict.

Lukianoff and Schlott, in short, have documented a serious problem. But like many advocates, at times they indulge in rhetorical excess. They assert, for example, that the past decade has seen repression of speech akin to or worse than that of the McCarthy era—a period when millions of Americans were required to swear loyalty oaths and endured official inquiries into their political views, and the full force of government was behind much of the repression. As disturbing as cancel culture is, it is just that: a culture of largely private intolerance, not a system of official repression. There is a huge difference. Among other things, cancel culture can’t land you in jail. And while the First Amendment prohibits the kind of government intolerance so prevalent in the McCarthy era, it affirmatively protects the right of private individuals and institutions to be intolerant. (That’s why Nazis had a right to march in Skokie, Illinois, in 1977, for example.) So while cancel culture is undeniably troubling, we are not reliving the McCarthy era.

Lukianoff and Schlott’s contention that cancel culture began in 2013 and is worse today than ever before also seems questionable. The sad reality is that intolerant efforts to silence those with whom we disagree have long been a staple of our culture. That’s why the First Amendment is so necessary. At various times and in various places during our nation’s history, Jeffersonian Republicans, Hamiltonian Federalists, Catholics, Jews, Jehovah’s Witnesses, atheists, labor organizers, anarchists, pacifists, socialists, communists, civil rights activists, white supremacists, women’s liberation advocates, LGBT rights proponents, and fundamentalist Christians have all been victims of the intolerance of substantial parts of American society, and often of government censorship as well. And while social media has undoubtedly enabled new modes of cancellation, its ready availability to all has simultaneously provided a megaphone to unpopular speakers, making it more difficult to cancel them effectively. So while the closed-minded behavior Lukianoff and Schlott catalog is profoundly disturbing, it’s not clear that things are worse today than in any previous period.

Indeed, as a legal matter, speech is freer today than at any point in our history. Shortly after the First Amendment was adopted, Congress in the Alien and Sedition Acts made it a crime to criticize the government; the Supreme Court never ruled that legislation unconstitutional. During World War I more than two thousand people were arrested and prosecuted for speaking out against the war. Many were sentenced to as much as twenty years in prison. For nearly half a century Communists were excluded or fired from government posts, deported, criminally prosecuted, and blacklisted for nothing more than their associations. In the civil rights era, state governments and private individuals, businesses, and groups targeted people advocating for equal rights, arresting them, refusing to serve them, and unleashing public and private violence against them.

It is largely because many of those targeted fought tenaciously for the right to speak and associate that First Amendment law evolved to provide robust protection of speech. Today, outside of a few very narrow categories of unprotected speech such as obscenity and incitement, the government cannot punish speech because of its content or viewpoint unless doing so is necessary to promote a compelling state interest, a standard that is nearly impossible to meet. Moreover, while no one would accuse the current Supreme Court of being especially rights-friendly, First Amendment freedoms find support across its often stark ideological divide.

As bad as the situation may be, it is also not obvious that intolerance on college campuses is worse today than before. Until the 1960s elite universities, a principal focus of Lukianoff and Schlott’s critique, were relatively homogeneous. There is little reason to believe that those communities were more tolerant than today’s more diverse student bodies. There may have been fewer conflicts when universities admitted only a small subset of the population, largely white, male, and privileged. The conformity of consensus is not the same thing as tolerance. You might even call this “structural cancellation.”

Still, Lukianoff and Schlott are right that on too many campuses today, there is a reigning progressive orthodoxy, and those who do not subscribe are likely to feel excluded or dismissed. The faculties and students at elite universities are overwhelmingly liberal to progressive in their views, and conservative voices are often scarce. According to one study Lukianoff and Schlott cite, only one in ten professors nationwide identifies as conservative. In my experience, the ratio is probably more extreme at the most elite schools.

For this reason, “cancel culture” is a charge that the right tends to invoke. But to their credit, Lukianoff and Schlott are equal opportunity critics of cancellation. As they demonstrate, the right can be just as intolerant. And in recent years, its unwillingness to hear opposing views has taken the form not just of private turning away but of official state censorship. Florida’s Stop WOKE Act, for example, has ushered in legislative micromanagement of what can and cannot be said in the classroom. Among other proscribed ideas, it prohibits state university faculty from endorsing any argument that “a person, by virtue of his or her race, color, national origin, or sex, should be discriminated against or receive adverse treatment to achieve diversity, equity, or inclusion.” The law appears to preclude any classroom statement supportive of affirmative action. In cases brought by the ACLU, the NAACP Legal Defense Fund, and FIRE, a federal court has declared the Florida law unconstitutional as an abridgment of academic freedom. This is not just cancel culture; it is government censorship. And many other states have passed similar laws.

Florida’s “Don’t Say Gay” law, which restricts grade school teachers’ ability even to discuss sexual orientation, similarly constitutes direct state suppression of speech, as do the many efforts across the country to ban books expressing liberal views on sexuality, race, and parenting from school and town libraries. And most recently, Florida denied recognition at its state college campuses to Students for Justice in Palestine because of state officials’ disapproval of comments made after the Hamas terrorist attacks on October 7 by the group’s national chapter—thereby simultaneously punishing protected speech and imposing guilt by association. (The ACLU is challenging this action.)

So the right’s passion for free speech seems less than universal. Where liberal or progressive views are concerned, the right has not only shown little tolerance, but has invoked state power to suppress them.

With so much cancellation from all sides, free speech is undoubtedly imperiled on college campuses. The academic enterprise demands a commitment to open debate and free inquiry. In the words of a 1974 Yale faculty committee report, written by the historian C. Vann Woodward in response to students shouting down speakers fifty years ago, “The history of intellectual growth and discovery clearly demonstrates the need for unfettered freedom, the right to think the unthinkable, discuss the unmentionable, and challenge the unchallengeable.”

This is why many if not most private universities have adopted free expression policies that largely mirror the rules that would apply to a public university directly bound by the First Amendment.* These policies are not the problem. Rather, the challenge has been to realize them in practice. The unwillingness of liberals and conservatives alike to be exposed to views with which they disagree has had a chilling effect on campuses. In annual surveys taken by FIRE, large percentages of students report self-censorship and a hesitation to voice their views for fear of being pilloried by their classmates. Many professors are understandably reluctant to address controversial topics, fearing that they or a student might say something that offends—and ends up in a viral social media post or official inquiry.

And the problem is hardly unique to universities; a broader culture in which people often get their news and opinion from outlets that express only one point of view means many have lost the habit of engaging seriously with ideas they find disturbing, wrong, or offensive. We ask a lot of students when we tell them to rise above all that. But as the 1974 Yale committee noted, that is precisely what intellectual growth requires.

The prevalent notion that hearing something one finds offensive inflicts harm that should be avoided, and concomitant demands for “trigger warnings” and “safe spaces,” make open conversation challenging. In response, it’s not sufficient to invoke the playground rhyme “Sticks and stones may break my bones, but words will never hurt me.” We have all at some point been hurt by someone’s words. Speech is undeniably powerful, and it can be used for good or bad ends. But just as playing soccer inescapably poses the risk of injury, so the free exchange of ideas will inevitably leave some feeling bruised. The costs must be acknowledged but cannot justify suppression if speech—and academic inquiry—are to be free.

Lukianoff and Schlott’s subtitle promises solutions, and they propose many. Yet as is often the case when addressing deep-rooted problems, this is the weakest part of the book. They suggest, for example, that parents “revive the golden rule” and “emphasize the importance of friendships,” that K-12 schools “emphasize curiosity and critical thinking,” and that universities adopt free speech policies, teach students about free speech “in orientation,” and “survey students and faculty about the state of free speech on campus.”

Lukianoff and Schlott do not acknowledge it, but in recent years universities and colleges have in fact undertaken substantial efforts to promote free speech on campus (no doubt in part because of FIRE’s and others’ persistent advocacy). The tide may well be turning. In 2023 alone faculty and administrators undertook an impressive range of initiatives to foster environments in which students and faculty feel free to express disagreement and to engage ideas with which they disagree.

Yale Law School has launched a Crossing Divides program designed to bring people from opposite sides of major issues to the law school community, and to feature speakers whose minds were changed by confronting ideas they initially opposed. Harvard faculty have formed a Council on Academic Freedom to promote intellectual diversity, free exchange, and civil discourse. Stanford has announced similar undertakings focused on applicants and first-year students, to reinforce these values at the outset of the college experience. A group of thirteen colleges, including Cornell, Claremont McKenna, Duke, Dartmouth, Wesleyan, and the University of Pittsburgh, issued a “Campus Call for Free Expression.” My own university, Georgetown, has formed a task force on free speech and campus culture (which I chair) charged with identifying concrete measures to promote tolerance, intellectual diversity, and civil engagement. Columbia announced a Dialogue Across Difference program to encourage just that. Even the University of Chicago, long lauded for its robust defense of free speech, saw fit to create a Forum for Free Inquiry and Expression to ensure that its free speech policies are matched by a free speech culture. It won’t be easy to change long-standing attitudes of intolerance or to counter the chilling effects they foster. But many universities are newly determined to try.

Representative Stefanik’s grilling of the three university presidents may give them pause. That exchange and its aftermath will undoubtedly tempt universities to police speech viewed as hateful, violent, or antisemitic. When a university president has to resign because she stands up for free speech principles, those principles are likely to bend. Universities have been down this road before. In the 1980s many enacted “hate speech” codes to prohibit speech that offended particular groups, even if the speech was not harassing, threatening, or an incitement to violence. The problems in doing so were legion; such codes afford too much unfettered discretion to university administrators and deter potentially controversial speech, so they have generally failed. In the academy, the fact that speech offends someone, or some group, cannot be a sufficient reason to prohibit it.

Committing to free speech means respecting everyone’s right to speak, even and especially those we deem most offensive. Advocating the genocide of Jews is unconscionable. None of the college presidents who responded to Stefanik’s hypothetical gotcha question thought otherwise. If and when anyone actually says that in the real world, they should be forcefully condemned. When used to target individuals, to create a hostile learning environment, or to undermine the mutual respect necessary for a robust classroom discussion, such calls can and should be prohibited. But outside those settings, advocating genocide is not, on its own, a justification for punishment. So yes, Representative Stefanik, it depends. And our commitment to free speech depends on that recognition.


David Cole

David Cole is the National Legal Director of the ACLU and the Honorable George J. Mitchell Professor in Law and Public Policy at the Georgetown University Law Center. (February 2024)


Don’t Let Trump and Biden Abandon the Debates. By Matthew Yglesias


There is value to having the presidential candidates face off, even if it’s painful to watch.
January 28, 2024 at 1:00 PM UTC

By Matthew Yglesias
Matthew Yglesias is a columnist for Bloomberg Opinion. A co-founder of and former columnist for Vox, he writes the Slow Boring blog and newsletter. He is author of “One Billion Americans.”

Will this happen again this year? Photographer: Morry Gash/Getty Images


One of the few bipartisan traditions left in American politics is hating on the presidential debates. They’re never substantive enough, the moderators always intervene too much or too little, and they have little effect on voters. Who needs ‘em?
So reports that President Joe Biden and Donald Trump are contemplating skipping this year’s edition, put on by the Commission on Presidential Debates every four years since 1988, are hardly surprising. Trump didn’t participate in any Republican primary debates either, and the Republican National Committee withdrew from the debate commission two years ago. Biden has declined to commit to its 2024 schedule.

It is left to me to … well, if I can’t quite defend the debates, I can at least say this: We’ll miss them when they’re gone. The only thing worse than presidential debates may be a campaign without them.
Of course, American democracy long predates the tradition of televised presidential debates.
And the tradition itself had a rough start. Richard Nixon and John F. Kennedy famously faced off in 1960, but Lyndon Johnson saw no need to risk a debate in 1964. Nixon, with a commanding lead and embittered by his prior debate experience, likewise declined to debate in 1968 and 1972. It wasn’t until 1976, with a matchup between President Gerald Ford and challenger Jimmy Carter, that the modern debate era began.
The tradition was truly entrenched eight years later by incumbent Ronald Reagan, who agreed to debate Walter Mondale in 1984. The debates had no real upside for Reagan, who was on his way to a landslide win, and in fact he was widely seen to have stumbled during the first debate. Once he established the norm, however, it was off to the races. The Commission on Presidential Debates was formed in 1987, and has sponsored debates in the last nine presidential elections. Now the burden is on presidents to explain why they can’t debate, instead of on the commission to say why they should.
None of this is to say that these debates have been grand exchanges of ideas in the tradition of Lincoln and Douglas in 1858.
Indeed, in a pedantic sense they are hardly “debates” at all. The candidates exchange talking points, deliver a handful of rehearsed quips, and the “winner” is often proclaimed on a somewhat arbitrary basis by the media.
And yet for all their flaws, the debates do offer something magical: They are a shared national political experience. Devoted partisans on both sides will watch, along with the tiny handful of high-information swing voters who actually pay close attention to political campaigns.

One fact often obscured by America’s highly polarized two-party politics is that the US is a very large and diverse country. Both party coalitions include lots of people who have significant disagreements with each other. The easiest way to manage those disagreements is to keep your partisans focused on the negative aspects of the other side, often by serving up highly caricatured portrayals of your opponents. At this point, it almost seems as if the majority of Democrats and Republicans are convinced that the other party’s nominee is senile.
There’s a way to counter that impression — and inform voters of the rivals’ actual positions on the issues: Put the two candidates side by side on a debate stage for an extended period of time. Biden partisans could watch Trump talk in uninterrupted stretches, and vice versa. That’s very unlikely to dramatically change anyone’s opinion. But it would be a small step toward a healthier society with something more closely resembling a consensus reality.
The problem is that the very media fragmentation that makes debates valuable also makes them increasingly vulnerable. There were significant downsides to the old three-network oligopoly, but it gave national politics some grounding and focus. In today’s landscape, almost nothing short of a debate can provide that common focus.
At the same time, politicians have less to lose from ducking debates because they no longer need the cooperation of the mainstream media to get their message out. The difficulty in getting the tradition off the ground was always the fear that the front-runner would regard debating as too risky. The flipside was that ducking a debate also carried risk: Nobody wants to look chicken.
For Trump, in particular, to make hay out of his opponent’s alleged mental decline and then hide from the cameras is a bad look. But it’s only a bad look in a world where people care what the mainstream media has to say. A contemporary candidate can speak to his audience through his preferred social media channels and party-aligned media with or without the help of more mainstream outlets.
Beyond that, there are simply many more media outlets today that are hungry for content.
In any economic environment, returns accrue to the scarce factors of production, which in this case are the candidates themselves. If they want to appear on television, it will be at a time and a setting of their choosing. Presidents, of course, have become increasingly hesitant to do even this. Biden has held fewer press conferences than not only Trump but also Barack Obama. And it’s hard to blame them. Modern presidents can tweet, vlog, TikTok or whatever else if they want people to hear what they have to say, rather than field hardballs from reporters trying to trip them up.
Debates, for all their flaws, are a rare opportunity to get out of those silos and make everyone who pays attention to the news watch and argue about more or less the same thing. That in and of itself obviously doesn’t end partisanship or polarization or anything else. But it’s something. And if it fades away, we’ll miss it.