Friday, September 15, 2023

Are Nate Silver, Nate Cohn, And Steve Kornacki “Helping To Destroy Left-Liberal Politics”? By Jesse Singal


jessesingal.substack.com



I predict, with 83.482382348% certainty, that you will find this article in The Nation weird

On Tuesday, The Nation ran an article by Leif Weatherby headlined “ ‘Stats Bros’ Are Sucking the Life Out of Politics.” Subhed: “In their attempt to serve as objective purveyors of fact and reason, Steve Kornacki, Nate Silver, and other data nerds are misleading the left-liberal electorate.”

My friends: this is a strange article. I’ve read it twice and I still don’t quite know exactly what its argument is, or how it could possibly be arguing what it seems to be arguing. I sent it to someone whose opinions I trust, without any priming, to ask him what he thought of it, and his text-messaged response was: “Why’d you send me that nation article I’m very confused by it.”

Weatherby, whose bioline after the article lists him as “associate professor of German, director of Digital Humanities, and founding director of the Digital Theory Lab at NYU,” begins by reminding us that for days after Election Day in 2020, Steve Kornacki was all over MSNBC, pointing at this Electoral College map and this bar graph, trying to help us navigate a tumultuous period in which it was unclear whether Joe Biden or Donald Trump had won. For those who blacked out due to the stress, the outcome in several states — and therefore the outcome of the whole election — was still in question during this period. While it was nowhere as suspenseful as Bush v. Gore in 2000, the big media players didn’t officially call it for Biden until November 7, four very long days after Election Day. 

Weatherby complains that while he was just as invested in this circus as everyone else, that’s what it was — a circus:

    That week plays on a reel like a sentimental movie in the memory of the American liberal: fear, loathing, and relief. But the irony is that while Kornacki was wildly gesticulating at pie charts, it was already over—Biden had won on Election Day. Kornacki in the end, was not predicting anything, but rather telling us a weird story about how vote-counting works in this country.

    Rather than a system wherein we learn the result of an election all at once, we have a numbers theater based on which counties in Iowa or Nevada are slowest to report. Don’t get me wrong: I was glued to cable news right through to the call on November 8, but the nervous and hopeful energy was all manufactured. In the United States, these types of election data performances are more histrionics than science. And they go beyond just generating Election Day anxiety; they’re helping to destroy left-liberal politics.

I found this confusing right off the bat. I agree completely that in an ideal world, it wouldn’t take days to know who won (though I don’t know enough about the system to offer solutions or critique specific aspects of it, and let’s also remember that some states are counting absentee votes days later, which can be decisive in close races). But I don’t really know what Weatherby means when he says that Kornacki “was not predicting anything” because the outcome had already been determined. We could get annoyingly philosophical here — can’t you attempt to “predict” an outcome that has already been set in stone but only partially revealed? — but in fact, information about the election trickled in for days, and Kornacki was trying to make sense of this (dynamic, fragmentary) data and translate its meaning for a lay audience. Which certainly seems like a valuable service! I’m not even sure Kornacki himself would describe what he was doing as a “prediction” in quite so pat a manner.

Anyway, how is Weatherby going to connect the dots from Kornacki, whom even Weatherby found compelling and worth watching for his ability to play-by-play the circus, to the “[destruction of] left-liberal politics”? Weatherby proceeds to introduce the “stats bro” — a label he affixes to Kornacki, Nate Silver (FiveThirtyEight), and Nate Cohn (The New York Times). “These men—and they’re always men; usually white, middle-aged, establishment liberals—cut a charming, yet sober, figure in a polarized political landscape.”

He continues, laying out his thesis:

    Stats bros claim to be the foil: “epistemically modest” purveyors of fact and reason. In reality, they conflate data and politics, taking their tools for precise descriptions of the actual world, all the while tending to the neuroses of the liberal electorate. In 2016, Silver’s statistical model gave Trump better chances than other predictors were willing to, grasping that working-class votes across the Midwest would cause a domino effect across the country. This fact is crucial to understanding our political landscape today, but it’s one part, not a whole: Silver’s model told us nothing about why Democrats fumbled the bag on the working class.

    Theodor Adorno accused philosophers called “positivists” of the same mistake: They posited a reality and could not distinguish their methodology from the real world. Stats bros need to understand that politics isn’t data; it’s passion, stories, and rhetoric.

The accusation seems to be that Nate Silver claimed to have a better method for understanding electoral politics than other, less “sober” figures in our “polarized political landscape,” and if we’re being fair it turned out that he sorta did (in the sense that his model at least recognized Trump was no long shot), but — and it’s a load-bearing but — “Silver’s model told us nothing about why Democrats fumbled the bag on the working class.”

I don’t know what this means. I’m not trying to be dismissive here. But I really don’t know what it means to accuse a statistical model, which aggregates a bunch of polls to try to predict the winner of a presidential election, of failing at a task it never claimed it could pull off. I’m sure Silver could point you at a hint here or there, but that’s not his job, or the job of his model. I don’t recall him ever claiming otherwise. 

There are other phrases here I don’t understand. What does it mean to “conflate data and politics”? Seriously, what? Maybe the answer is in the next paragraph: “Stats bros need to understand that politics isn’t data; it’s passion, stories, and rhetoric.”

Weatherby keeps acting as though “politics” is some neatly bounded term, when in fact it is an exceptionally messy human concept encompassing everything from the candidates themselves to their aides and lower-level workers to voters to TV pundits to analysts and forecasters and a ton of other people, too. The idea that politics isn’t this one thing but is this other bundle of things... this isn’t adult thinking.

But that’s one way to sound profound without saying much of anything: make flat, declarative sentences without fully defining your terms. “Politics isn’t data — it’s passion, stories, and rhetoric.” Very impressive stuff.

Would any of these stats bros deny that politics includes a component of “passion, stories, and rhetoric,” anyway? Five seconds of googling found me a situation from last December in which Nate Cohn pointed out that while Republican turnout swelled in the 2022 election, it was still a bad showing for the GOP because so many R-leaning voters defected. To which Nate Silver responded, “ ‘It’s all about turnout’ is perhaps the biggest myth in electoral analysis. Persuasion generally matters more than turnout.”

Without getting into any of the merits of a very complicated political science debate, it seems pretty obvious that no one involved here is claiming that politics is just data, and that passion and storytelling and rhetoric don’t matter. Who would claim that? A robot? What would it even mean to say that politics is “just” data? 

Part of the problem, as is so often the case with these sorts of hit pieces, is that Weatherby seems uninterested in the actual views of the real-life (white, middle-aged, establishment liberal) men he’s critiquing. In a piece that expounds at length and in detail (albeit incoherently) on the supposed limpness of their supposed belief systems, Weatherby doesn’t quote any of them, with the exception of a couple brief instances of Silver simply explaining statistical ideas in his book The Signal and the Noise: Why So Many Predictions Fail—but Some Don’t.

Weatherby then goes off on a weird interlude about Bayesian reasoning, which is basically a form of probabilistic thinking in which one’s estimate of the likelihood of an event is based on one’s prior understanding of the world, and is updated as new information comes in. Brief example: a test says you have cancer. Do you? One way is simply to look at the test’s false positive rate. The more sophisticated Bayesian method is to also factor in the prior probability, even if it’s just a rough estimate, that someone of your age and level of overall health would have cancer in the absence of any diagnostic test pointing one way or another. That might yield a very different estimate of how likely it is you have cancer. And then, if a month later you got some new information that the base rate of this cancer was significantly higher than was previously reported, you could plug in that new figure to update your probability accordingly.
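The arithmetic behind that cancer-test example is just Bayes’ theorem. Here’s a minimal sketch with made-up illustrative numbers (a 5% false positive rate, 90% sensitivity, and a 1% base rate — none of these figures come from the article):

```python
# Bayes' theorem applied to the cancer-test example, with
# illustrative (invented) numbers: the point is how much the
# prior (base rate) moves the answer.

def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test), via Bayes' theorem."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Naive reading: "only a 5% false positive rate, so I'm ~95% likely sick."
# Bayesian reading with a 1% base rate as the prior:
p1 = posterior(prior=0.01, sensitivity=0.90, false_positive_rate=0.05)
print(round(p1, 3))  # 0.154 -- nowhere near 95%

# A month later, new information: the base rate is actually 5%.
# Update the prior and recompute:
p2 = posterior(prior=0.05, sensitivity=0.90, false_positive_rate=0.05)
print(round(p2, 3))  # 0.486
```

Same test, same result, very different conclusion once the prior changes — which is all “updating your priors” means.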

Bayesians “update their priors” as they learn more and more about the world, meaning their predicted probabilities change. Weatherby stitches all these facts about Bayesian reasoning into a very strange and moth-eaten quilt and then attempts to smother Silver with it. It gets pretty wild. 

Take this passage:

    Bayesians tend to think that this refreshing [updating] action is what thinking in general is. Silver, who came to politics from baseball statistics, goes a step further, reducing thinking to gambling. In his 2012 book, The Signal and the Noise: Why So Many Predictions Fail—but Some Don’t, he claims that “the most practical definition of a Bayesian prior might simply be the odds at which you are willing to place a bet.” I think of this as “casino cognitivism.” Silver exemplifies this metaphor with recommendations for betting on politics, as when he tweeted that “shorting RFK nomination is the freest money on earth.” The stakes of prediction are clear here: They are not meant to foster a healthy political culture, but instead are thought to be accurate enough to put money down on outcomes.

This is so weird! Weatherby accuses Bayesians of a very reductive understanding of what thinking is (could Bayesians possibly have such a crimped understanding of thinking? Citations needed). Then Weatherby just quotes Silver’s layperson-friendly explanation of what a Bayesian prior is, which 1) is accurate and 2) of course can’t tell us anything, on its own, about Silver’s politics in any sense. He then describes this as “casino cognitivism” without defining what that means, and then, as evidence of how Silver “exemplifies this metaphor,” points to this one time when Silver noted that since RFK (Jr.) is obviously going to get the Democratic nomination for president, betting against him on one of the political prediction markets is basically free money. This, too, says nothing about Silver’s politics, and has nothing to do with anything, really; Silver is a professional gambler who writes about gambling and made a (rather obvious) observation about one particular gamble. This is like trying to find out about Nate Silver’s politics by reading the entrails of a chicken that once looked at him.
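For what it’s worth, the Silver line Weatherby quotes — a prior as “the odds at which you are willing to place a bet” — is just the standard, uncontroversial conversion between odds and probability. A quick sketch (the 4-to-1 figure is my own illustrative number, not from either article):

```python
# The standard odds <-> probability conversion that Silver's
# "prior as betting odds" framing refers to.

def odds_to_probability(odds_for, odds_against):
    """E.g. accepting 4-to-1 against an event implies P = 1 / (1 + 4) = 0.2."""
    return odds_for / (odds_for + odds_against)

def probability_to_odds_against(p):
    """A 20% prior corresponds to 4-to-1 odds against."""
    return (1 - p) / p

print(odds_to_probability(1, 4))                      # 0.2
print(round(probability_to_odds_against(0.2), 6))     # 4.0
```

In other words, “the odds you’d bet at” is a way of eliciting a probability, not a claim that politics is a casino.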

Having built a small pile out of bits of nothing, Weatherby concludes: “The stakes of prediction are clear here: They are not meant to foster a healthy political culture, but instead are thought to be accurate enough to put money down on outcomes.” Huh? I don’t even know if there’s a name for this logical fallacy, but it’s incredibly stupid. It’s like if you saw me buy ice cream one time and you said “Oh, so you think money should only be spent on fleeting pleasures, rather than on making the world a better place.” It’s nonsense.

More:

    In sports fandom as in election watching, being inside the stats machine is like driving with Google Maps. Digital maps update as you go, using real-time data collected mostly from other drivers. While it’s [sic] not actually using Bayesian techniques, it shares something in common: updating predictions at every point in your drive. This isn’t as much a “prediction” as a digital child shouting “Are we there yet?” every 30 seconds. A constantly updated prediction isn’t about the future, it’s about where we are in the present. Everyone knows what it’s like to see that predicted time of arrival slide later and later into the future while sitting in traffic caused by a new road accident. This type of “prediction” isn’t really what we mean by that word, or what we want from a forecast. I needed to know when I left how long it would take to get there.

This is almost beside the point given how little sense the rest of the paragraph makes, but why is he so sure Google Maps doesn’t use Bayesian techniques in some sense? You’re telling me Google doesn’t factor in previous differences between predicted and actual travel times over a given route to generate future estimates? Doubtful. 

But more broadly, it seems like Leif Weatherby doesn’t know what predictions are, doesn’t know what time is, or both: “A constantly updated prediction isn’t about the future, it’s about where we are in the present” is a very strange sentence. As for “I needed to know when I left how long it would take to get there”: I mean... yes? That would be great if Google Maps were an infallible predictor of travel times. But infallible prediction is impossible. So instead any prediction model takes a crack at it, and then, yes, updates its predictions as time passes and more information comes in. It’s very, very difficult even to understand what Weatherby is complaining about here.

It’s an inscrutable column, even by the sometimes shaky standards of The Nation. My guess is that it’s motivated, at root, by the fact that Weatherby is mad at the existence of forecasters who don’t see and talk about the world in exactly the way he would like. At one point he says: “Data abstractions can be valuable only if we put them in the context of the end of the welfare state and the degradation of workplace protections in favor of capital.” Again, I don’t really know what that means — it’s more of that declarative, rigid style of claiming. Then Weatherby says: “In their performance of objectivity, stats bros tend to disdain left populism and restrain the kind of ideas that we need to survive as a republic.” 

Did you catch that? In a column criticizing others for the “performance of objectivity,” Weatherby claims that one particular set of ideas (left-populist ones) are necessary to save the republic. None of this is hedged and there isn’t a hint of humility. The republic is doomed unless we listen to Leif and his friends, who know exactly what needs doing, rather than those stats bros. 

No thanks.

Questions? Comments? Even stranger columns than this one? I’m at singalminded@gmail.com. The image was generated by DALL-E 2 in response to my prompt, “a statistics nerd is plotting to destroy the earth, saturday morning cartoon style.”
