Tuesday, April 28, 2026

An American Sickness


A decade ago, there was a perception that political violence came mostly from right-wing extremists. Now that U.S. politics have degraded further, and social media–drawn battle lines have hardened, more Americans are putting the blame on the far left.

There is an American sickness these days that just wasn’t in our lungs before Trump came around. Photo: Andrew Leyden/Getty Images

Peter Hamby
April 27, 2026



Given his usual instincts, Donald Trump showed an impressive degree of restraint on Saturday evening shortly after a California man named Cole Tomas Allen attempted to storm the White House Correspondents’ Dinner with two guns and a pair of knives. “This was an event dedicated to freedom of speech that was supposed to bring together members of both parties, with members of the press, and in a certain way it did,” Trump said, avoiding, for the moment, his typical partisan point-scoring. “I saw a room that was just totally unified. It was, in one way, very beautiful.”

But it was only a matter of time before Trump and his allies would use the latest attempt on his life to blame the American left—and the Democratic Party—for inspiring a climate of toxic politics. On Monday, White House Press Secretary Karoline Leavitt walked out to the podium in the Brady Briefing Room loaded for bear, with a list of Democrats who have called Trump a fascist or a dictator over the years. Ed Markey. Elizabeth Warren. Adam Schiff. “Those who constantly, falsely label and slander the president as a fascist and threat to democracy and compare him to Hitler to score political points are fueling this kind of violence,” Leavitt said.

Many of Leavitt’s examples were out of context, or just cynically intentional misreadings of how people who work in politics (or sports, for that matter) commonly speak about their profession. Campaigns are battles. Opponents are targeted or “in the crosshairs.” Primary rivals slinging negative attacks against each other sometimes commit “murder-suicides” that kill off both candidacies (see: Howard Dean and Dick Gephardt in Iowa in 2004). Leavitt herself boasted on Saturday, previewing Trump’s speech, “There will be some shots fired tonight.”

These terms have been commonplace in politics for as long as I can recall. I’m old enough to remember when liberals—and the New York Times editorial page—lost their minds at a Sarah Palin staffer placing gun sights on an infographic aimed at various Democrats she was targeting with her super PAC in 2010. Months later, the pundit class rushed to blame her for somehow inspiring the googly-eyed weirdo who shot and nearly killed Gabrielle Giffords (and did kill six others) in Arizona. They were wrong. And even though they were wrong, they were also making those accusations in the United States of America, where the First Amendment exists.

Leavitt claimed that House Minority Leader Hakeem Jeffries called for “maximum warfare” against Trump. Well, no. Jeffries recently was talking about the congressional redistricting “wars” going on across the country. “We are in an era of maximum warfare, everywhere, all the time,” he said a few weeks ago. “And we are going to keep the pressure on Republicans at every single state in the union, to ensure at the end of the day, that there is a fair, national map.”

By claiming that words are violence, Leavitt sounds very much like the safe-space snowflakes that she and her MAGA cohorts love to mock. “The left criticizing Trump is not the same as an incitement to violence,” said Brian Tyler Cohen, the popular progressive YouTube host and creator. “I get that Trump would love to label all criticism of him as incitement in an effort to chill all speech against him, but we have a First Amendment right in this country that allows us to criticize our government, and we should use it.”

But despite the half-truths, Leavitt said something else on Monday that liberals need to reckon with. “The deranged lies and smears against the president, his family, his supporters have led crazy people to believe crazy things, and they are inspired to commit violence because of those words,” she said.

Yes, Donald Trump ushered the crazies into the political mainstream a decade ago. No need to argue with that. He broke the thin membrane that kept the online kooks away from the real world. Mail bombers, Nazi marchers, the scoundrels who broke into the U.S. Capitol on January 6—that’s on him. Add in a collective addiction to social media brain rot—along with a global pandemic, mass protests about race and policing, wars in the Holy Land—and things get even uglier. There is an American sickness these days that just wasn’t in our lungs before Trump came around.

Still, as savvy people in politics like to say, two things can be true at once. And it should not be controversial to state what’s becoming obvious: There is a rising miasma of conspiratorial thinking, dangerous fact-denying, and dehumanizing language that has taken hold on the American left. It’s a political coalition that has long held itself to a higher epistemological standard than the right. Yes, the schizophrenic ’70s had their violent leftist radicals—the Weather Underground, the Black Panthers, etcetera—but since at least the 1990s, liberals could plausibly claim the nutjobs and wackos were mostly on the other side. But now it feels like the libs have a screw loose once again—and a few of them are getting guns, too.

Shift Left
In November of 2018—after the synagogue mass shooting in Pittsburgh and the arrest of a Trump-supporting man who sent explosives through the mail to CNN and other critics of the president—NPR and Marist ran a poll asking voters about civility in politics and who was to blame for its decline. Clear pluralities of voters—and independents—blamed Trump and Republicans for the increasingly nasty tone of politics. Almost no one blamed Democrats. That echoed other NPR polling from 2017, which found that a massive 70 percent of Americans said the tone of politics had worsened in Trump’s first term.

Plenty of research from that era bore out a consensus: While political violence wasn’t as widespread as media coverage suggested, it was far more likely to come from right-wing extremists than from left-wing ones, and Americans generally believed that the Republicans under Trump were more responsible for toxic politics than Democrats or people on the left. That consensus is now over. The number of Americans who blame the left as much as the right for political violence has skyrocketed over the last six years.

In October, Pew found that 53 percent of Americans see left-wing extremism as a major problem, basically tied with those who view right-wing violence the same way (52 percent). In September, Morning Consult asked voters, “Who commits more violence, left- or right-wing extremists?” More Americans named left-wing extremists (29 percent) than right-wing extremists (27 percent), while a quarter said “both sides” are responsible for violence. Again, a sea change from the early years of Trumpian politics, defined by violent campaign rallies and racists marching in Charlottesville.

What changed? Two Trump assassination attempts—now three—at least two of them by lefty social media addicts. The murder of right-wing activist Charlie Kirk by a supporter of trans rights who marked his bullet casings with anti-fascist memes. A handsome college grad named Luigi Mangione who murdered the C.E.O. of UnitedHealthcare in a fit of anti-corporate rage, and was celebrated for it in certain corners of the internet. And long before all that, civil rights protests in 2020 that turned violent, with plenty of voices on the left encouraging and celebrating vandalism and looting as an act of political resistance.

Right-wing violence still exists. The family of former Minnesota House Speaker Melissa Hortman knows this all too well. But it’s also a fact that left-wing political violence is on the rise. Democrats can pretend it isn’t, but scoreboards don’t lie. According to the Center for Strategic and International Studies, 2025 marked the first year in over three decades that the number of violent left-wing incidents and plots surpassed the number of right-wing ones.

Trump and the MAGA movement have amplified all of the above with their powerful network effects on the internet, blaming Antifa or Democrats or whoever for whatever violent incident shows up in our push alerts. It’s J.D. Vance’s favorite pastime. And while some Democratic politicians have said inflammatory things in recent years, the real problem isn’t elected officials, as Leavitt tried to claim. Democratic electeds and other party leaders vehemently condemn political violence all the time—and most did over the weekend once again.

The problem is a new generation of podcasters, blue clout-chasers, and TikTok commenters who have overtaken the mainstream media not just as purveyors of facts, but as self-appointed brokers of common decency. Like anyone who spends too much time online, slowly drained of human empathy, many on the left have become too comfortable celebrating violence or bad luck that befalls their Trumpian enemies. They are too loose with their language, too cozy with conspiracies that can lead to a dark place.

Trump allies have been complaining for a while now that the mainstream press isn’t covering the conspiracy creep on the left with the same passion they gave to the right in Trump’s first term. “There was a cottage industry of reporters writing six pieces a day on QAnon years ago, but now when mainstream liberals are absolutely nuts, it’s silence,” said Alex Pfeiffer, a Republican strategist and veteran of the Trump White House. “You don’t need to attend a D.S.A. meeting to hear the filth in the shooter’s manifesto, you can just check out Ted Lieu’s X feed.”

Even podcasters on the left have started to urge their listeners and followers to tone it down. On Pod Save America last year, after Kirk’s murder, host Jon Favreau pleaded with viewers to try to maintain some kind of intellectual and moral high ground after seeing far too many on the left praise Kirk’s death and find excuses to justify violence against the right. “This is horseshit,” Favreau said in an emotional viral video. “Just because politics has failed in the past to prevent violence—just because it seems to be failing now—doesn’t mean that we should give up on it. That we should give up on speaking and acting and fighting in a way that represents our best attempt to change people’s minds; to bring the rest of the country a little bit closer to our point of view.”

The Paranoid Style
Allen, the alleged attempted Trump assassin, seemed like the kind of guy who listens to Pod Save America. By all accounts, he was a Millennial normie Democrat, not the kind of person who dabbles in 9/11 truther content. What was striking about his alleged manifesto, written shortly before he attempted to rush the Washington Hilton basement, was how much it sounded like any of the pundits and anti-Trump content creators he reportedly followed on Bluesky. In addition to calling the president a “pedophile, rapist, and traitor,” he wrote that most of the people in the room—members of the media, mostly—were “complicit” in his crimes because they were attending the dinner. Not all left-wing podcast hosts and YouTube creators use this kind of language. But many of them absolutely do—and the manifesto sounded indistinguishable from paranoid online commentary that is regularly shared by Resistance pundits and grifters.

After I scooted out of the Hilton on Saturday, on my way to get a stiff drink with a pal at an Adams Morgan bar, I stopped on the street to post a little breaking news update on Snapchat, giving my followers there some color from inside the ballroom. The comments on my posts flooded in—and they were jarring. “Staged.” “False flag.” “Staged.” It was a fake operation, many said, a pretext for Trump to finish building his White House ballroom. Fortunately, I’m able to moderate comments on Snapchat, and I blocked them from public view. But the smooth brains were pretty easy to find everywhere else.

The actress January Jones—I know, not exactly a public intellectual—posted to her 1 million Instagram followers that the shooting was faked, “a small-scale low risk assassination attempt.” Silly celeb or not, the post was a symptom of a much larger problem of a disintermediated world where people get information from influencers with a lot of followers rather than credentialed reporters and news organizations. Here was a celebrity, a loud disciple of the Resistance, pumping 4chan-level slop, without discretion, into the world. She wasn’t alone. The New York Times reported on Sunday that the term “staged” had appeared in more than 300,000 posts on X.

It’s impossible to know whether all of these posts are from Democrats or liberals—or even real humans. Nor does posting a conspiracy theory on social media mean you’re going to suddenly grab a gun, hop a train across the country, and go hunting for Republicans at the Washington Hilton. Political violence remains rare relative to other forms. But dabbling in tinfoil hats can be a slippery slope. The great liberal thinkers of the past, I imagine, would look upon this moment with shame. I dug up a quote Sunday from Isaac Asimov, a proud Democrat during his lifetime. He wrote in 1980 about “a cult of ignorance in the United States”—his take on the anti-intellectual right—and said conservatives thrive on the belief that “my ignorance is just as good as your knowledge.” He would not be pleased to learn, today, that many of his fellow travelers have settled on fighting ignorance… with ignorance.


Monday, September 1, 2025

More sweltering days forecast for September after hottest summer on record

 

By the end of August, 8,341 people had been transported by ambulance for heatstroke in Tokyo alone. | AFP-JIJI
By Jessica Speed
STAFF WRITER
Sep 1, 2025

This summer was Japan’s hottest on record, with average temperatures nationwide 2.36 degrees Celsius higher than usual, according to a report released Monday by the Meteorological Agency.

The agency said this summer (June through August) was the hottest since records began, surpassing highs set in 2023 and 2024, when the deviation from the norm was 1.76 C in both years.

The agency attributed the heat to global warming and an unusually strong Pacific high pressure system, bolstered by convective activity in the Indian Ocean and around the Philippines.

Aug. 5 saw the hottest temperature in Japan on record, with the city of Isesaki, Gunma Prefecture, reaching 41.8 C. Of the 153 meteorological stations nationwide, 132 recorded their highest summer temperatures ever. The cumulative number of extremely hot days observed at weather stations nationwide this summer reached 9,385, surpassing the previous record of 8,821 in 2024.

Temperatures reached 38 C or above Monday in the town of Hatoyama in Saitama Prefecture and the cities of Nagoya, Tajimi, Kuki, Kiryu and Toyama in Aichi, Gifu, Saitama, Gunma and Toyama prefectures, respectively.

The extreme heat is expected to continue into September, with a one-month forecast released by the agency Thursday putting the probability of above-average temperatures for the month at 80% nationwide.

According to the report, temperatures are projected to remain high across the country through Friday, with an 80% probability of exceeding seasonal norms. The Tohoku and Hokkaido regions have a 70% chance of higher-than-average temperatures from Sept. 6 to 12, while other regions face an 80% probability. Elevated temperatures are expected to persist into late September and October, although with slightly lower probabilities.

The agency also expects there to be less precipitation and more sunlight across the country, though the western part of the Hokkaido-Tohoku region is expected to be rainier in September.

The most recent three-month outlook issued by the agency predicts above-normal temperatures across the country through November, driven by continued high sea surface temperatures near the Philippines and westerly winds flowing farther north than usual.

The hotter-than-average temperatures are also taking a toll on emergency services. By the end of August, 8,341 people had been transported by ambulance for heatstroke in Tokyo alone, surpassing last year’s record of 7,996, according to the Tokyo Fire Department.

The department is urging residents to take necessary precautions against heat exhaustion and heatstroke, such as using air conditioning, wearing hats or parasols outside to avoid direct sunlight, and drinking small sips of water before beginning to feel thirsty.

Yes, Cash Transfers Work -- by Annie Lowrey


The Atlantic by Annie Lowrey / Aug 30


In 2023, the United States produced $28 trillion worth of goods and services. The average family had a net worth of $192,900. Shares in American companies accounted for more than half of global-market capitalization. Yet one in eight Americans lived in poverty, as did one in seven children.


The best way we have to help those people is to give them money. Year in and year out, Social Security lifts more than 20 million Americans above the poverty line; tax credits lift 6 million; and food stamps, housing subsidies, unemployment insurance, and Supplemental Security Income payments lift another 2 million to 4 million each. Expanding these programs would move the poverty rate lower, experts have long argued. Providing families with much-needed cash also tends to have a range of positive knock-on effects, such as keeping kids in school and improving health measures.     


But a new set of cash-transfer programs has had lackluster results. Writing in the new publication The Argument, Kelsey Piper notes that “multiple large, high-quality randomized studies are finding that guaranteed income transfers do not appear to produce sustained improvements in mental health, stress levels, physical health, child development outcomes or employment.” Given the sobering results, politicians and policy makers should hesitate before pumping funds into these safety-net initiatives, she argues. If not, “money will be wasted on things that don’t work.”


Having a technocratic debate over how to spend the next marginal safety-net dollar feels a touch absurd at the moment. Republicans are gutting the Supplemental Nutrition Assistance Program and Medicaid to finance tax cuts for billionaires; Trump-administration officials are sending masked thugs to disappear people off the streets when they are not busy texting war plans to my boss; American democracy is fading; nobody is talking about instituting a universal basic income anytime soon. Still, policy design is important, and the analysis of these new studies seems to have convinced a number of Beltway wonks and denizens of econ Twitter that cash transfers might not be as good of an idea as we once thought.


Yet the argument has tended to overinterpret a limited and novel body of evidence while ignoring decades of sterling research showing that cash—particularly when targeted to infants and children—is near unmatched as a salve for poverty and its horrible consequences.


The new studies focused on programs that were launched over the past eight years. Each worked in a similar way. Researchers found people interested in receiving unconditional cash payments, divided participants into a control group and a treatment group, disbursed the money, and studied the differences between the two groups. The programs varied in the types of people they enrolled (Baby’s First Years targeted infants and mothers; the Denver Basic Income Project, the homeless; the Compton Pledge, low-income households) and the size and duration of transfers (the OpenResearch Unconditional Income Study offered $1,000 a month, Baby’s First Years, one-third that sum).


The results were disappointing in some respects. “Homeless people, new mothers and low-income Americans all over the country received thousands of dollars. And it’s practically invisible in the data,” Piper writes in her summary. Denver’s program did not lead to a material reduction in homelessness. Compton’s did not improve its participants’ psychological well-being or alleviate certain measures of financial distress. The OpenResearch initiative did not bolster health outcomes. Baby’s First Years did not advance child development or spur families to move to better neighborhoods. “On so many important metrics, these people are statistically indistinguishable from those who did not receive this aid.”


But people receiving aid were statistically distinguishable from those not receiving aid: They had more money to use on the things they needed, or wanted. In the OpenResearch pilot, participants spent more on housing, transportation, and food. Mothers who got cash through the Baby’s First Years initiative were less likely to be in poverty than those who did not. In other words, a famed anti-poverty measure reduced poverty.


This intuitive finding is underplayed, perhaps because it is so intuitive. Cash transfers aren’t new. No safety-net policy has ever been as thoroughly examined over the course of decades. Last year alone, initiatives to send cash and cashlike substitutes to American families cut the overall poverty rate in half. Just a few years ago, a massive temporary federal cash transfer to parents slashed the child-poverty rate to a historic low of 5.2 percent; the rate rebounded after the program ended. You give people money; they have money.


That said, I am not surprised that the pilots’ effects were limited, given when they were happening and how they were structured. The initiatives took place during and after the coronavirus pandemic, when Congress flooded families with stimulus checks, $600-a-week bonuses to unemployment-insurance payments, and a $3,600-per-kid child allowance. If the no-strings-attached payments from OpenResearch or Baby’s First Years were the only cash transfers that low-income families were receiving, I imagine that they would have had a stronger impact. (Cash transfers have more bang for the buck in developing countries than the super-wealthy United States for a related reason: The more money people have, the more expensive it is to improve their situation; the more intense the material deprivation, the greater effect a single dollar has in alleviating it.)   


More important, the pilots took place during an acute cost-of-living crisis: a giant surge in inflation combined with a long-simmering run-up in the price for child care, health care, and housing. A few hundred dollars a month was never going to secure a single mom an apartment in Denver or cover the cost of 9-to-5 day care in Queens. Thus it might have had a smaller impact on financial well-being than anticipated, and might explain why transfers did more for people living in low-cost Illinois and Texas than in the witheringly expensive Los Angeles metro area.


There is a real lesson for policy makers here. Cash is no good if you cannot buy the things you need with it, and the brutal cost of day care, elder care, higher education, doctor visits, prescription medication, and rent—especially rent—continues to hammer the working and middle classes. We cannot transfer our way out of this crisis. If you give parents child-care vouchers, prices will go up unless supply expands. If you provide rental assistance, landlords will soak up the cash. Right now, surging energy costs are eating up Social Security payments, jobless benefits, and earned-income tax-credit transfers.   


But the relationship between household income and supply constraints is not the focus of the current debate. Rather, folks are dinging cash-transfer initiatives for failing to bolster breastfeeding rates, cut maternal stress levels, change people’s physical activity, or increase people’s educational attainment. Given these results, a “big ‘give everyone cash’ program” will not “make them measurably healthier or happier, or get them better jobs, or improve their children’s intellectual development,” Piper writes, not “at any detectable scale.”


Hundreds of studies of cash-transfer programs conducted over the past half century, however, have come to the opposite conclusion. Giving people money does have strong ancillary benefits. Cash makes people healthier, eliminates hunger, increases educational attainment, cuts the disability rate, reduces inequality, raises lifetime earnings, and prevents incarceration. The strongest benefits redound to infants and children. But cash is not magic, and these second- and third-order effects take time to show up in the data. Mothers’ pensions, the precedent for today’s welfare program, had muted effects on the women receiving them from the 1910s to the 1930s, but significant effects on the lifetime earnings and educational attainment of their sons, decades later.


Perhaps other interventions would have worked better. Perhaps researchers should have taken the money from the pilots and spent it on, say, workforce training, job coaching, therapy, health counseling, or some other intervention. But such policies do not have a promising track record, and these studies shed no light on their comparative efficacy versus cash. Complicated programs with complicated participation criteria also tend to be expensive for the government to run and difficult for citizens to navigate, meaning fewer people use them. That’s a big reason to just give people money. Folks would rather receive cash than a refundable tax credit to reduce energy costs, or an income-scaled voucher redeemable at a certain location after you fill out a bunch of paperwork.


The point of giving people money right now is to get them out of poverty. The point of giving people money is to give their kids a better chance at a healthy, abundant life. Reading the studies, I kept on thinking about that temporary child allowance. When parents received the cash, they didn’t feel happier. They moved above the poverty line, and bought more groceries. They could afford more formula for their babies and berries for their toddlers. Maybe that’s a disappointment. But as a parent myself, I kept thinking: What a win.

Yglesias - Perfectly Legal and Undeniably Scandalous


Unlike his legally dubious attempt to fire a Fed governor, a lot of the president’s most irresponsible decisions are well within his authority.

August 31, 2025 at 12:00 PM UTC


By Matthew Yglesias

Matthew Yglesias is a columnist for Bloomberg Opinion. A co-founder of and former columnist for Vox, he writes the Slow Boring blog and newsletter. He is author of “One Billion Americans.”



One of the defining features of Donald Trump’s second presidency is an endless parade of legally dubious assaults on the foundations of American institutions. His administration’s attempt to destroy the independence of the Federal Reserve, with the director of the Federal Housing Finance Agency rummaging through private mortgage filings to gin up bad-faith charges of misconduct to create a pretext for firing a member of the Fed’s board, is only the latest example.

But there’s a popular aphorism in Washington: The scandal isn’t what’s illegal, the scandal is what’s legal. So it’s important not to let certain pernicious yet permissible Trump moves get lost in the shuffle.

Chief among these is the firing earlier this month of Air Force Lieutenant General Jeffrey Kruse as head of the Defense Intelligence Agency. Kruse was cashiered on a Friday afternoon without so much as an explanation — similar to how the administration handled dismissals of senior military officers earlier this year.

Firing high-ranking military officers is unquestionably a legitimate exercise of presidential power, and there is certainly no legal obligation for the president or his team to explain their reasons. Still, it is highly unusual to fire commanders in this way. Unlike cabinet secretaries and other conventional political appointees who resign as a matter of course when a new president is elected, the long-established custom in the United States is for flag officers to remain in place across administrations.

Kruse appears to have been fired because the White House did not like the DIA’s assessment of the efficacy of US air strikes against Iranian nuclear facilities. Again, the president is legally allowed to punish the head of an intelligence agency for reaching a conclusion that he disagrees with. But absent clear evidence of misconduct, it’s extremely unadvisable.

Intelligence work is difficult. Agencies often disagree about things in good faith. If political decision-makers start making it clear that only certain conclusions are acceptable, the quality of the work product is going to be compromised, and ultimately they will find themselves receiving bad information. And intelligence failures can blow up in spectacular ways.

Trump, of all people, should know this: The story of his rise to power cannot be told without explaining how the US war in Iraq discredited George W. Bush and the Republican Party establishment even while leaving much of the basic appeal of cultural conservatism in place. Bush never did anything quite as clumsy as outright firing an agency head for saying the wrong thing, but his subtler modes of influence changed things for the worse. Trump’s cruder approach risks even larger disasters.

And he’s applied the same blunt approach to the transparent and staid realm of economic data. The US commissioner of labor statistics is a Senate-confirmed political appointee, so Trump clearly had the authority to fire Erika McEntarfer from the job several weeks ago. In her place, he wants to install a hyper-partisan economist from the right-wing Heritage Foundation.

The propaganda upside to installing a hack at the BLS is clear enough. And it’s unquestionably legal. But this kind of move, to quote another famous saying, is worse than a crime; it’s a mistake.

It is far more important, both substantively and politically, to try to improve economic conditions rather than to try to improve economic numbers. Short of an outright recession, pretty much any situation can be seen as a glass half full or half empty. The White House usually tries to make the case for half full, while the opposition party argues for half empty. Juking the stats could give the White House a leg up — but would also make it easier for the opposition to dismiss any good news as fabrication.

A more serious issue is that reliable economic data is essential for effective economic policy.

At the beginning of former President Barack Obama’s tenure, for example, the Commerce Department’s Bureau of Economic Analysis underestimated the severity of the recession. The data were eventually revised, and it’s possible to argue that the less grim numbers made Obama look better in the moment. But long term, it was a disaster: Neither Congress nor the administration had an accurate read on the state of the US economy, leading to a weaker stimulus, with dire effects for both their own political projects and American workers.


Trump’s tendency to treat disagreement as disrespect — and to conflate agreement with respect, an equally dangerous trait that was flagrantly on display at last week’s three-hour cabinet meeting — blinds both the country and himself to the possibility that things aren’t going as well as he’d like. His firing last week of the director of the Centers for Disease Control, which bodes ill for US public health, calls to mind his hostility early in the Covid-19 pandemic to the idea of widespread testing for the virus. It’s easy to forget, but long before the controversies over mask rules and school closures and vaccines, there was a prolonged period when the administration could have taken preemptive action against a virus that was then limited to China. Instead, it chose to downplay the risks.

A president is certainly within his rights to fire the head of the CDC, the DIA, or the BLS. These are simply “normal” bad decisions, not ones that raise constitutional questions. But they are often consequential, and Trump’s impulses are consistently irresponsible.

Saturday, August 30, 2025

AI Has Broken High School and College

The Atlantic by Damon Beres / Aug 30


This is Atlantic Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age.


Another school year is beginning—which means another year of AI-written essays, AI-completed problem sets, and, for teachers, AI-generated curricula. For the first time, seniors in high school have had their entire high-school careers defined to some extent by chatbots. The same applies to seniors in college: ChatGPT was released in November 2022, meaning that, unlike last year’s graduating class, this year’s crop has had generative AI at its fingertips the whole time.


My colleagues Ian Bogost and Lila Shroff both recently wrote articles about these students and the state of AI in education. (Ian, a university professor himself, wrote about college, while Lila wrote about high school.) Their articles were striking: It is clear that AI has been widely adopted, by students and faculty alike, yet the technology has also turned school into a kind of free-for-all.


I asked Lila and Ian to have a brief conversation about their work—and about where AI in education goes from here.


This interview has been edited and condensed.


Lila Shroff: We’re a few years into AI in schools. Is the conversation maturing or changing in some way at universities?


Ian Bogost: Professors are less surprised that it exists, but there is maybe a bit of a blind spot to the state of adoption among students. I saw a panic in 2022, 2023—like, Oh my God, this can do anything. Or at least there were questions. Can this do everything? How much is my class at risk? Now I think there’s more of a sense of, Well, this thing still exists, but we have time. We don’t have to worry about it right away. And that might actually be a worse reaction than the original.


Lila: The blind-spot language rings true to the high-school environment too. I spoke to some high schoolers—granted this was quite a small sample—but basically it sounds like everybody is using this all the time for everything.


Ian: Not just for school, right? Anything they want to do, they’re asking ChatGPT now.


Lila: I was a sophomore in college when ChatGPT came out, so I witnessed some of this firsthand. There was much more anxiety—it felt like the rules were unclear. And I think both of our stories touched on the fact that this incoming class of high-school and college seniors has barely had any of those four years without ChatGPT. Whatever stigma or confusion might have been there in earlier years is fading, and it’s becoming very much the default, normalized.


Ian: Normalization is the thing that struck me the most. That is not a concept that I think the teachers have wrapped their heads around. Teachers and faculty also have been adopting AI carefully or casually—or maybe even in a more professional way, to write articles or letters of recommendation, which I’ve written about. There’s still this sense that it’s not really a part of their habit.


Lila: I looked into teachers at the K–12 level for the article I wrote. Three in 10 teachers are using AI weekly in some way.


Ian: Some kind of redesign of educational practice might be required, which is easy for me to say in an article. Instead of an answer, I have an approach to thinking about the answer that has been bouncing around in my brain. Are you familiar with the concept in software development called technical debt? In the software world, you make the decision about how to design or implement a system that feels good and right at the time. And maybe you know it’s going to be a bad idea in the long run, but for now, it makes sense and it is convenient. But you never get around to really making it better later on, and so you have all these nonoptimal aspects of your infrastructure.


That’s the state I feel like we’re in, at least in the university. It’s a little different in high school, especially in public high school, with these different regulatory regimes at work. But we accrued all this pedagogical debt, and not just since AI—there are aspects of teaching that we ought to be paying more attention to or doing better, like, this class needs to be smaller, or these kinds of assignments don’t work unless you have a lot of hands-on iterative feedback. We’ve been able to survive under the weight of pedagogical debt, and now something snapped. AI entered the scene and all of those bad or questionable—but understandable—decisions about how to design learning experiences are coming home to roost.


Lila: I agree that AI is a breaking point in education. One answer that seems to be emerging at the high-school level is a more practical, skills-based education. The College Board, for instance, has announced two new AP courses—AP Business and AP Cybersecurity. But there’s another group of people who are really concerned about how overreliance on these tools erodes critical-thinking skills, and maybe that means everyone should go read the classics and write their essays in cursive handwriting.


Ian: My young daughter has been going to this set of classes outside of school where she learned how to wire an outlet. We used to have shop class and metal class, and you could learn a trade, or at least begin to, in high school. A lot of that stuff has been disinvested. We used to touch more things. Now we move symbols around, and that’s kind of it.


I wonder if this all-or-nothing nature of AI use has something to do with that. If you had a place in your day as a high-school or college student where you just got to paint, or got to do on-the-ground work in the community, or apply the work you did in statistics class to solve a real-world problem—maybe that urge to just finish everything as rapidly as possible so you can get onto the next thing in your life would be less acute. The AI problem is a symptom of a bigger cultural illness, in a way.


Lila: Students are using AI exactly as it has been designed, right? They’re just making themselves more productive. If they were doing the same thing in an office, they might be getting a bonus.


Ian: Some of the students I talked to said, Your boss isn’t going to care how you get things done, just that they get done as effectively as possible. And they’re not wrong about that.


Lila: One student I talked to said she felt there was really too much to be done, and it was hard to stay on top of it all. Her message was, maybe slow down the pace of the work and give students more time to do things more richly.


Ian: The college students I talk to, if you slow it all down, they’re more likely to start a new club or practice lacrosse one more day a week. But I do love the idea of a slow-school movement to sort of counteract AI. That doesn’t necessarily mean excluding AI—it just means not filling every moment of every day with quite so much demand.


But you know, this doesn’t feel like the time for a victory of deliberateness and meaning in America. Instead, it just feels like you’re always going to be fighting against the drive to perform even more.

Online Shopping May Never Be the Same

The Atlantic by Ian Bogost / Aug 30



A few years ago, I found the perfect rug for my daughter’s room. It had pink unicorns and flowers. But I scoffed at the price tag on Anthropologie’s website: more than $1,000, plus an additional fee for “white glove delivery.” Then I fired up Etsy. I found a similar product made by a workshop in India that shipped directly from there. It took weeks to arrive, but it was half the price.


Online shopping is a miracle: You can find items of any kind, fit for any purpose, for affordable prices—and shipped from all over the world to your door. But as of today, buying from international sellers has become more expensive for Americans. That’s because President Donald Trump ended the de minimis exemption on imported goods, a loophole that allowed millions of daily packages to enter the country without paying duties. The exemption has been around for a long time—nearly a century—but it took on new import (get it?) in 2016, when the maximum value for untaxed goods rose from $200 to $800. In that moment, the social-media-driven rise of direct-to-consumer e-commerce, drop-shipping, and online-marketplace sales was also accelerating. Ever since, American ports, mailboxes, and homes have been flooded with cheap clothing, electronics, accessories, skin-care products, toys, and a host of other consumer goods.


The de minimis loophole is a big reason e-commerce sites including Shein and Temu could sell you things so cheaply: They shipped straight from China, skirting any tariffs. The White House ended the exemption for goods from China earlier this year, and now de minimis is ending for all countries. That means that many things you might import could become more expensive (on account of the additional taxes), harder to buy (because sellers won’t bother shipping to the U.S.), or slower to arrive (because of customs backlogs), or any combination of those. The rug I bought a few years ago would now be subject to a 50 percent import duty, when you factor in tariffs on India. Presuming that cost is passed down to consumers, it’s enough to give a buyer like me pause.
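The arithmetic behind that pause is simple to make concrete. A minimal sketch, with invented figures (the article doesn’t state the rug’s exact price), assuming the seller passes the full duty through to the buyer:

```python
# Hypothetical illustration of how a flat import duty raises the landed
# cost of a direct-from-abroad purchase. The prices here are invented,
# not taken from the article; only the 50 percent rate comes from it.

def landed_cost(item_price: float, duty_rate: float) -> float:
    """Price the buyer pays if the seller passes the full duty through."""
    return round(item_price * (1 + duty_rate), 2)

# A $500 rug under a 50 percent duty costs the buyer $750:
print(landed_cost(500.00, 0.50))

# Even a small $30 part picks up $15 in duty:
print(landed_cost(30.00, 0.50))
```

For low-value goods, it is often not the duty itself but the fixed cost of customs processing that leads sellers to stop shipping to the U.S. at all.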


You might not realize how much of the stuff you buy online comes directly from overseas. I didn’t, until I looked closely at my buying habits over the past few years. After all, sites such as Etsy and eBay offer seamless global commerce: A handmade craft object could come from Maine or Myanmar, straight to you. Even Amazon has benefited from de minimis. Various strategies have allowed the retailer’s marketplace suppliers to take advantage of de minimis when they import goods; at other times, when you buy from the big platforms’ sites, those vendors might ship what you ordered directly from abroad, tax free.


[Read: Amazon decides speed isn’t everything]


Buying cheap imported goods has become the best part of online shopping: Not only can you find the best deals from international sellers, but you can also source items to satiate specific hobbies and interests—say, drafting pens from Japan or instrument reeds from Belgium. I found that I had bought a host of stuff, on Etsy and beyond, that took advantage of de minimis, including rubber-tree hippo figurines from Denmark (naturally) and a surprise mandolin from Ireland for my daughter. Those goods would now be subject to an additional tariff. I’ve bought incredibly cheap Chinese- and Japanese-manufactured camera lenses that have fueled a resurgence of photography hobbyism for me and my son; I also bought a detailed and shockingly high-quality Paul Revere costume to help a neighbor’s kid beat her classmates in a school costume contest—a small thing, but one we’ll all remember.


Ah, and then the British faucet doodad. This was a big deal. When I tried to repurpose an old, turn-of-the-century washbasin with separate hot and cold water spigots, I couldn’t find a faucet that fit the sink. Sure enough, some vendor in the United Kingdom had a $30 plastic tube that did the trick. International sellers sometimes are the only ones that have what you need, and you don’t need to be a particularly adept shopper to find them. A simple Google search will suffice.


Of course, being able to seamlessly import cheap stuff has also encouraged mindless consumerism. Some imported goods are crap that nobody ever needed, produced at unconscionable labor and environmental costs. My family has a bit of a LEGO habit, and my son took to buying the cheaper Chinese knockoff sets to maximize our, well, brick-building value, I suppose. It felt a little suspect to do this—the sets are direct copies of LEGO designs—and many of them remain in bags in a closet, unbuilt. Surely we didn’t need to import those. Nor the piles of cables, chargers, head lamps, and other low-cost electronic goods that broke after a few uses.



Whether it’s junk or not, Americans have become acclimated to buying a prodigious variety of wares from all over the world. When de minimis fused with global online commerce a decade ago, ordinary buyers like you and me started to see behind the curtain of domestic retailers. Anthropologie’s website touted that the unicorn rug was “exclusive” to its store. But that was never entirely true: Sellers offering the same style with similar materials found a way to reach buyers like me directly, thanks to online commerce and its associated marketplaces. That’s not going to change anytime soon. Instead, buying things will just become more painful. Someone will bear the burden of the new duties, and that someone is likely to be you.

The Trump Administration Will Automate Health Inequities

The Atlantic by Craig Spencer / Aug 30



The White House’s AI Action Plan, released in July, mentions “health care” only three times. But it is one of the most consequential health policies of the second Trump administration. Its sweeping ambitions for AI—rolling back safeguards, fast-tracking “private-sector-led innovation,” and banning “ideological dogmas such as DEI”—will have long-term consequences for how medicine is practiced, how public health is governed, and who gets left behind.


Already, the Trump administration has purged data from government websites, slashed funding for research on marginalized communities, and pressured government researchers to restrict or retract work that contradicts political ideology. These actions aren’t just symbolic—they shape what gets measured, who gets studied, and which findings get published. Now, those same constraints are moving into the development of AI itself. Under the administration’s policies, developers have a clear incentive to make design choices or pick data sets that won’t provoke political scrutiny.


These signals are shaping the AI systems that will guide medical decision making for decades to come. The accumulation of technical choices that follows—encoded in algorithms, embedded in protocols, and scaled across millions of patients—will cement the particular biases of this moment in time into medicine’s future. And history has shown that once bias is encoded into clinical tools, even obvious harms can take decades to undo—if they’re undone at all.


AI tools were permeating every corner of medicine before the action plan was released: assisting radiologists, processing insurance claims, even communicating on behalf of overworked providers. They’re also being used to fast-track the discovery of new cancer therapies and antibiotics, while advancing precision medicine that helps providers tailor treatments to individual patients. Two-thirds of physicians used AI in 2024—a 78 percent jump from the year prior. Soon, not using AI to help determine diagnoses or treatments could be seen as malpractice.


At the same time, AI’s promise for medicine is limited by the technology’s shortcomings. One health-care AI model confidently hallucinated a nonexistent body part. Another may make doctors’ procedural skills worse. Providers are demanding stronger regulatory oversight of AI tools, and some patients are hesitant to have AI analyze their data.


The stated goal of the Trump administration’s AI Action Plan is to preserve American supremacy in the global AI arms race. But the plan also prompts developers of leading-edge AI models to make products free from “ideological bias” and “designed to pursue objective truth rather than social engineering agendas.” This guidance is murky enough that developers must interpret vague ideological cues, then quietly calibrate what their models can say, show, or even learn to avoid crossing a line that’s never clearly drawn.


Some medical tools incorporate large language models such as ChatGPT. But many AI tools are bespoke and proprietary and rely on narrower sets of medical data. Given how this administration has aimed to restrict data collection at the Department of Health and Human Services and ensure that those data conform to its ideas about gender and race, any health tools developed under Donald Trump’s AI action plan may face pressure to rely on training data that reflects similar principles. (In response to a request for comment, a White House official said in an email that the AI plan and the president’s executive order on scientific integrity together ensure that “scientists in the government use only objective, verifiable data and criteria in scientific decision making and when building and contracting for AI,” and that future clinical tools are “not limited by the political or ideological bias of the day.”)


Models don’t invent the world they govern; they depend on and reflect the data we feed them. That’s what every research scientist learns early on: garbage in, garbage out. And if governments narrow what counts as legitimate health data and research as AI models are built into medical practice, the blind spots won’t just persist; they’ll compound and calcify into the standards of care.


In the United States, gaps in data have already limited the perspective of AI tools. During the first years of COVID, data on race and ethnicity were frequently missing from death and vaccination reports. A review of data sets fed to AI models used during the pandemic found similarly poor representation. Cleaning up these gaps is difficult and expensive—but it’s the best way to ensure the algorithms don’t indelibly incorporate existing inequities into clinical code. After years of advocacy and investment, the U.S. had finally begun to close long-standing gaps in how we track health and who gets counted.


But over the past several months, that type of fragile progress has been deliberately rolled back. At times, CDC web pages have been rewritten to reflect ideology, not epidemiology. The National Institutes of Health halted funding for projects it labeled as “DEI”—despite never defining what that actually includes. Robert F. Kennedy Jr. has made noise about letting NIH scientists publish only in government-run journals, and demanded the retraction of a rigorous study, published in the Annals of Internal Medicine, that found no link between aluminum and autism. (Kennedy has promoted the opposite idea: that such vaccine ingredients are a cause of autism.) And a recent executive order gives political appointees control over research grants, including the power to cancel those that don’t “advance the President’s policy priorities.” Selective erasure of data is becoming the foundation for future health decisions.


American medicine has seen the consequences of building on such a shaky foundation before. Day-to-day practice has long relied on clinical tools that confuse race with biology. Lung-function testing used race corrections derived from slavery-era plantation medicine, leading to widespread underdiagnosis of serious lung disease in Black patients. In 2023, the American Thoracic Society urged the use of a race-neutral approach, yet adoption is uneven, with many labs and devices still defaulting to race-based settings. A kidney-function test used race coefficients that delayed specialty referrals and transplant eligibility. An obstetric calculator factored in race and ethnicity in ways that increased unnecessary Cesarean sections among Black and Hispanic women.


Once race-based adjustments are baked into software defaults, clinical guidelines, and training, they persist—quietly and predictably—for years. Even now, dozens of flawed decision-making tools that rely on outdated assumptions remain in daily use. Medical devices tell a similar story. Pulse oximeters can miss dangerously low oxygen levels in darker-skinned patients. During the COVID pandemic, those readings fed into hospital-triage algorithms—leading to disparities in treatment and trust. Once flawed metrics get embedded into “objective” tools, bias becomes practice, then policy.


When people in power define which data matter and the outputs are unchallenged, the outcomes can be disastrous. In the early 20th century, the founders of modern statistics—Francis Galton, Ronald Fisher, and Karl Pearson—were also architects of the eugenics movement. Galton, who coined the term eugenics, pioneered correlation and regression and used these tools to argue that traits like intelligence and morality were heritable and should be managed through selective breeding. Fisher, often hailed as the “father of modern statistics,” was an active leader in the U.K.’s Eugenics Society and backed its policy of “voluntary” sterilization of those deemed “feeble-minded.” Pearson, creator of the p-value and chi-squared tests, founded the Annals of Eugenics journal and deployed statistical analysis to argue that Jewish immigrants would become a “parasitic race.”


For each of these men—and the broader medical and public-health community that supported the eugenics movement—the veneer of data objectivity helped transform prejudice into policy. In the 1927 case Buck v. Bell, the Supreme Court codified their ideas when it upheld compulsory sterilization in the name of public health. That decision has never been formally overturned.


Many AI proponents argue that concerns about bias are overblown. They’ll note that bias has been fretted over for years, and to some extent, they’re right: Bias was always present in AI models, but its effects were more limited—in part because the systems themselves were narrowly deployed. Until recently, the number of AI tools used in medicine was small, and most operated at the margins of health care, not at its core. What’s different now is the speed and the scale of AI’s expansion into this field, at the same time the Trump administration is dismantling guardrails for regulating AI and shaping these models’ future.


Human providers are biased, too, of course. Researchers have found that women’s medical concerns are dismissed more often than men’s, and some white medical students falsely believe Black patients have thicker skin or feel less pain. Human bias and AI bias alike can be addressed through training, transparency, and accountability, but addressing the latter requires accounting for both human fallibility and the fallibility of the technology itself. Technical fixes exist—reweighting data, retraining models, and bias audits—but they’re often narrow and opaque. Many advanced AI models—especially large language models—are functionally black boxes: Using them means feeding information in and waiting for outputs. When biases are produced in the computational process, the people who depend on that process are left unaware of when or how they were introduced. That opacity fuels a bias feedback loop: AI amplifies what we put in, then shapes what we take away, leaving humans more biased for having trusted it.
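One of those technical fixes, reweighting training data, can be sketched in a few lines. This is a hedged illustration of the general idea, not any vendor’s actual method; the group labels and counts are invented:

```python
import numpy as np

# Sketch of inverse-frequency reweighting: give examples from an
# underrepresented group larger sample weights, so a model trained on
# this data doesn't simply mirror the imbalance. Groups are hypothetical.

groups = np.array(["A", "A", "A", "A", "B"])  # group B is underrepresented

# Map each example to its group's count, then weight by 1/count.
_, inverse, counts = np.unique(groups, return_inverse=True, return_counts=True)
weights = (1.0 / counts)[inverse]

# Normalize so the weights sum to the number of examples.
weights *= len(groups) / weights.sum()

# Each group-A example weighs 0.625; the lone group-B example weighs 2.5,
# so each group contributes equally to a weighted training loss.
print(weights)
```

The catch the article points to is that a fix like this only works if the underrepresented group was measured at all: reweighting cannot recover populations that were never recorded in the data.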


A “move fast and break things” rollout of AI in health care, especially when based on already biased data sets, will encode similar assumptions into models that are enigmatic and self-reinforcing. By the time anyone recognizes the flaws, they won’t just be baked into a formula; they’ll be indelibly built into the infrastructure of care.