Thursday, April 27, 2023

Yet another study confirms YIMBYs are right about everything. By Matthew Yglesias


www.slowboring.com

Lifting heavy things is a good way to build muscle, but working out once won’t make you much stronger. Giving poor families cash is an effective means of reducing poverty, but giving each household $1 isn’t going to change the poverty rate very much. Having more police officers on the street reduces crime, but adding one more Chicago police officer isn’t going to have a large impact. Putting carbon dioxide into the atmosphere contributes to rising global temperatures, but burning a single gallon of gasoline has a minimal impact on climate change.

These are just different versions of something we all know: small changes generally have small impacts.

Many people claim that new construction raises housing prices, either because they are confused or because they believe the demand induced by new amenities swamps the impact of increased supply. This is false, empirically, and in fact, the opposite is true — new construction reduces nearby prices relative to baseline. But, as in the examples above, a small change in the housing supply has only a small effect. Similarly, while I would expect regulatory changes to influence construction volumes, I also expect the scale of the change in construction to be related to the scale of the change in policy — small changes have small impacts.

This all seems quite obvious, but an Urban Institute study was recently written up in Governing magazine with the unfortunate headline “Zoning Changes Have Small Impact on Housing Supply.” I suspect this is related to the fact that one of the study’s authors, Yonah Freemark, characterized it that way on Twitter:

If you read the paper, “Land Use Reforms and Housing Costs: Does Allowing for Increased Density Lead to Greater Affordability?”, I don’t think it conveys any information whatsoever as to whether zoning changes have a large or small impact on housing supply or whether upzonings are “enough” (enough for what?). What it does provide is the best evidence yet in the direction-of-change debate, delivering a knockout blow to the idea that regulatory relief has some perverse negative impact on affordability. Beyond that, all it tells us is that most changes to land use regulation are relatively small. This is not that surprising — small policy changes are more common than large policy changes.

Freemark is one of the named authors of the paper, along with Christina Plerhoples Stacy, Christopher Davis, Lydia Lo, Graham MacDonald, Vivian Zheng, and Rolf Pendall. And their paper does something a little different from earlier empirical work on this subject that I’ve cited.

Studies done by Brian Asquith, Evan Mast, Xiaodi Li, and Kate Pennington have looked at the price impact of new construction, not necessarily construction induced by policy change. The Pennington paper, in particular, is my favorite of these precisely because she avoids policy change and looks at new construction induced by buildings burning down, a kind of natural experiment. The existing policy in San Francisco generally prohibits buying an old building, knocking it down, and replacing it with a larger one. But if one burns and becomes uninhabitable, then you can replace it with a new building. That gives us insight into the possible impact of a hypothetical policy change, and what she finds is a win-win: local demand does rise when a new building is built, but supply rises even more, so the neighborhood becomes a better place to live and the rate of displacement falls. Good news!

The new paper takes a national look at policy change, which is hard because there are approximately seven zillion zoning entities in the United States. Their solution is to “use machine-learning algorithms to search US newspaper articles between 2000 and 2019” to try to get a comprehensive picture of changes in land use policy. They then mash that up with USPS and Census Bureau data to try to understand the impact.
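To make that pipeline a little more concrete, here is a minimal sketch, in Python, of the kind of text-classification step the authors describe: labeling newspaper snippets as reporting a loosening or a tightening of land use rules. The snippets, labels, and model choice are illustrative assumptions on my part, not the paper's actual data or code.

```python
# Toy sketch: classify newspaper snippets as describing a "loosening" or a
# "tightening" land use reform. Training examples are made up for illustration;
# this is not the Urban Institute paper's actual data or pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    "council votes to allow duplexes on lots previously zoned single-family",
    "city eliminates minimum parking requirements near transit stations",
    "planning board approves taller apartment buildings along the corridor",
    "new ordinance upzones the downtown district for mixed-use towers",
    "county imposes larger minimum lot sizes in new subdivision rules",
    "moratorium placed on multifamily construction pending further review",
    "height limits tightened after neighborhood opposition to the project",
    "downzoning restricts future development to detached single-family homes",
]
labels = ["loosening"] * 4 + ["tightening"] * 4

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(snippets, labels)

# Classify an unseen snippet; the output is a direction, not a magnitude.
print(model.predict(["ordinance legalizes accessory dwelling units citywide"]))
```

Note that a classifier like this only recovers the direction of a reform, not its size, which is exactly the limitation discussed below.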

I think these are the key findings of the paper:

    “Reforms that loosen restrictions are associated with a statistically significant, 0.8% increase in housing supply within 3 to 9 years of reform passage.”

    “[W]e find no statistically significant evidence that additional lower-cost units became available or moderated in cost in the years following reforms. However, impacts are positive across the affordability spectrum and we cannot rule out that impacts are equivalent across different income segments.”

    “Conversely, reforms that increase land-use restrictions and lower allowed densities are associated with increased median rents and a reduction in units affordable to middle-income renters.”

People of a certain mindset are going to find it borderline absurd that the headline conclusion of an empirical research project involving seven co-authors and state-of-the-art artificial intelligence is that supply and demand curves work like intro textbooks say they do. But this is a contentious subject that people do argue about, and Stacy et al. have the clearest confirmation yet that this is, in fact, the way the world works.

What the paper doesn’t tell us is what policy changes were actually made. The virtue of their method is that it let them chunk through the huge quantity of zoning adjustments made across tons of different jurisdictions in the United States. But all it knows is the direction of the adjustment — was it supposed to facilitate more construction or less? — not the magnitude. But oftentimes policy changes are small. Arlington County in the D.C. suburbs just had a huge political fight over a “missing middle” housing bill that doesn’t actually let anyone build larger buildings. The bill just allows for the subdivision of the kind of structures that are already allowed. The experience of Minneapolis has been that this does not generate much additional housing. That’s because this is, deliberately, a small change. It takes the symbolically important step of ending exclusionary single-family zoning while trying to avoid visible alteration of the built environment.

So I think the paper contains a really important headline conclusion, namely that supply skeptics are still wrong, and also reflects the reality that most land use changes are small. There’s nothing in there that should complicate YIMBY narratives or dissuade people who’ve understood the supply and demand dynamics all along.

Meanwhile, even as academics continue to refine their event studies, I think it’s important for people interested in policy to not lose sight of the big-picture, cross-sectional facts.

    1. A house in the New York, Boston, Los Angeles, or San Francisco metro area is much more expensive than one in the Atlanta, Houston, Phoenix, or Nashville metro area.

    2. The population is growing much more rapidly in the Atlanta, Houston, Phoenix, and Nashville metro areas than in the New York, Boston, Los Angeles, and San Francisco metro areas.

How do we explain these two facts? Well, one popular account of (2) is that mismanagement in the blue states is leading people to flee en masse from crime and high taxes. But that’s inconsistent with (1), which seems to indicate high demand for these big coastal metros, albeit somewhat less so in the Zoom Era. The only coherent explanation is that metro areas vary in the extent to which high demand manifests as an increase in quantities versus an increase in prices.
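A toy supply-and-demand calculation makes the quantities-versus-prices point concrete. The numbers below are illustrative assumptions, not estimates for any real metro area: the same demand shock hits two markets that start from the same price and quantity, one with elastic housing supply and one with inelastic supply.

```python
# Toy linear supply/demand model. All numbers are made up for illustration;
# nothing here is calibrated to actual metro areas.

def equilibrium(a, b, base_q, base_p, d):
    """Demand Qd = a - b*P; supply Qs = base_q + d*(P - base_p). Returns (P, Q)."""
    # Setting Qd = Qs gives P = (a - base_q + d*base_p) / (b + d).
    p = (a - base_q + d * base_p) / (b + d)
    return p, a - b * p

BASE_P, BASE_Q, B = 10.0, 80.0, 2.0   # both metros start at the same point
DEMAND, SHOCK = 100.0, 30.0           # identical increase in demand for both

for name, d in [("elastic supply (Sunbelt-style)", 8.0),
                ("inelastic supply (coastal-style)", 0.5)]:
    p0, q0 = equilibrium(DEMAND, B, BASE_Q, BASE_P, d)
    p1, q1 = equilibrium(DEMAND + SHOCK, B, BASE_Q, BASE_P, d)
    print(f"{name}: price change {p1 - p0:+.1f}, quantity change {q1 - q0:+.1f}")
```

With the elastic supply curve, the demand shock shows up mostly as new housing; with the inelastic one, it shows up mostly as higher prices, which is the pattern in the two cross-sectional facts above.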

This is not entirely about zoning, of course. It’s not a coincidence that the most supply-constrained cities are bounded by the ocean. It’s also worth noting that the expensive cities are actually larger and denser on average than the fast-growing cities. It’s not like if Boston adopted Nashville’s zoning it would get cheaper; that would actually be a downzoning. But the cross-sectional facts show us something important about changes. And they should help us to avoid things like this Bloomberg article, which wonders why Austin is getting more expensive if it’s allowing all this new construction. The article itself contains a chart that very helpfully shows that the vast majority of the Austin-specific increase in rents just reflects a national increase in prices.

What else happened during this period? Well, Austin’s population grew 22% between 2010 and 2020, Travis County grew 26%, and the Austin metro area grew 33% (!) while the overall national population grew about 7%.

In other words, there was a huge national increase in the relative demand for the Austin area compared to the average location in the United States. This nonetheless led to a modest increase in the price of living in the Austin area compared to the average location in the United States. And that’s because the Austin area added a lot of housing during this period — primarily single-family sprawl in the suburbs, but also some shiny new apartment and condo towers downtown. Meanwhile, note that within Austin (to say nothing of the suburbs), the majority of the land is colored bright yellow on the city’s zoning map for various forms of low-density single-family zoning.
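As a quick back-of-the-envelope check on "relative demand," the growth figures quoted above can be restated relative to the national trend. These are just the percentages from the paragraph above, used for illustration.

```python
# Restate 2010-2020 population growth relative to the ~7% national increase,
# using the figures quoted above.
national = 0.07
for name, growth in [("Austin (city)", 0.22),
                     ("Travis County", 0.26),
                     ("Austin metro area", 0.33)]:
    relative = (1 + growth) / (1 + national) - 1
    print(f"{name}: grew about {relative:.0%} relative to the national trend")
```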

What would happen if Austin rezoned that land to allow for more density? We know the empirical direction of the change would be toward more construction and more affordability. But how much more construction and how much more affordability?

I’m annoyed that this new paper, which should be settling the induced demand argument, is instead being used to cast doubt on YIMBYism because I think there genuinely are important outstanding research questions in this space.

The most important one, by far, is the question of which policy changes actually generate a lot of new construction and how we can try to estimate this in advance.

There’s nothing wrong with incrementalism and passing small, politically feasible changes. But it’s not at all clear that the level of political opposition to a given land use change is proportional to the change’s influence on housing supply. “Ending single-family zoning” throughout an entire county, but in a way that generates very little actual construction, might be a worse strategy than rezoning one particular neighborhood for much taller buildings. There are also lots of issues beyond zoning that relate to lot occupancy and parking, and there can be complicated interactions between them. There’s room for more analytic work here to help policymakers make better decisions.

The other really big one is about parking.

One of the biggest projects in the pipeline in Washington, D.C. is this development called Parkside near the Minnesota Avenue Metro Station. I was talking to someone who’s been to community meetings in the area, and she says the questions people ask about this aren’t expressing esoteric supply/demand doubts. Residents believe (correctly!) that even a transit-oriented development project will raise the demand for local parking. This is not a high-brow concern that grantmakers and nonprofit leaders care about. But it’s a very important issue for the people who live nearby, and I think it’s the number one real-world driver of hostility to market-rate infill development. I think this is probably a problem you can solve by giving incumbent residents a property right to street parking rather than a right to veto new construction, but the world would benefit from more analytic work.

Finally, to the extent that “upzonings aren’t enough,” I think the basic insight is that some people are poor and therefore in need of help beyond what a good market system can provide. But is subsidized housing a good use of money, dollar-for-dollar, compared to a more generous Child Tax Credit? My guess is no, but that’s really just a guess. I wish I had a better sense of the actual impact of housing subsidies vs. spending on affordable housing construction vs. general financial assistance to poor people. This seems like a big open research question to me. But “does regulatory reform improve affordability?” is a question we’ve answered.

Wednesday, April 26, 2023

Every policy objective, all the time, all at once


www.slowboring.com
Matthew Yglesias

Back on March 13, I wrote a column critiquing the Biden administration’s approach to industrial policy as lacking adequate focus. It didn’t make many waves in the wider world, but did seem to annoy some people on the White House staff. Then on April 2, Ezra Klein wrote a piece with similar themes in which he coined the phrase “Everything-Bagel Liberalism” to describe the phenomenon of unfocused policy. That catchphrase really did set the world on fire, generating a lot of enthusiasm from people who agree with me and Ezra, but also considerable pushback.

Here, the BlueGreen Alliance stands up for the everything bagel of industrial policy — climate action, union jobs, U.S.-made goods, and advancing racial and economic equity.

Heather Boushey from the White House Council of Economic Advisers was also at pains to note recently that she likes everything bagels.

Boushey’s reply in particular made me think that people are getting a little distracted by the metaphor here.

Washington, D.C. has enjoyed a bagel boom over the past 10 years. And one of the more recent additions, Call Your Mother, has gone from a farmer’s market popup to a regional chain that not only is very tasty but also counts White House Chief of Staff Jeff Zients as one of its lead investors. And look at Boushey’s bagel. It is, I assume, from CYM and has four toppings on it. BlueGreen’s metaphorical policy bagel also has four toppings.

Now go back and read Ezra’s column. He says “everything bagels are, of course, the best bagels. But that is because they add just enough to the bagel and no more.” This is not an argument against everything bagels! The point is that despite the name, an “everything bagel” is a harmonious blend of a relatively small number of toppings. One popular CYM choice is the za’atar bagel, which is a sort of mashup of an Ashkenazi Jewish culinary tradition (the bagel) with the Sephardic influence of za’atar, a Middle Eastern spice blend. Notably, the CYM everything bagel does not include za’atar because that would be too much stuff. The everything bagel topping is, like za’atar itself, a spice blend and not something you blend with yet more spices. In his column, Ezra is not talking about actual bagels, he is making a reference to the Academy Award-winning film “Everything Everywhere All At Once” whose plot features an “everything bagel” that includes literally all of multi-dimensional metaphysical possibility. Contemplating this totality induces paralysis and insanity and drives the plot of the movie forward.

It seems silly to be quibbling about bagels, but it’s actually important to define the terms of the debate here.

Nobody is saying that everything in real-world politics needs to be maximally fastidious technocratic policymaking designed to pursue a single goal monomaniacally. That’s not how the world works. I know that, Ezra knows that, everyone knows that. The disagreement here is not over the legitimacy of combining a few different ingredients into a harmonious blend. It’s over whether that is in fact what’s happening — are we dealing with a Call Your Mother everything bagel or the Everything Everywhere All at Once bagel of insanity?

To climb down a bit from Mount Metaphor, here’s a very non-catchy way of making the point: progressives would benefit from giving more weight to dull Economics 101 considerations when making choices.

We’re living through a somewhat annoying irony of history.

Barack Obama ran what was, by the standards of real-world politics and policymaking, an extremely fussy and technocratic administration full of people like Peter Orszag and Cass Sunstein who cared a lot about efficiency and where very conventional economics types like Larry Summers and Jason Furman exerted a lot of influence. His term in office also corresponded with a period of very low interest rates, low inflation, and high unemployment. Under those conditions, the economy gets weird. You’re in the realm of what Paul Krugman calls “Depression Economics,” and the hallmark of Depression Economics is that efficiency doesn’t matter very much.

There’s a famous Keynes bit about how you can stimulate a depressed economy by paying unemployed workers to dig ditches and then paying them to fill the ditches back in again.

Keynes’ point was not that it would be a good idea to pay people to do pointless ditch-digging projects. His point was, rather, twofold:

    If, amidst a depressed economy, you have moral or ideological objections to the dole or you’re worried that free cash creates bad incentives, then you should come up with some kind of “anyone who wants money can get money by doing X” scheme.

    While ideally X should be as useful as possible, even if X is totally pointless, there is still some benefit to doing it, so you shouldn’t worry too much about the details. 

These are powerful and important insights, and they help explain why poorly targeted cash handouts and somewhat wasteful public works programs are totally fine during a steep depression. If the alternative to a poorly-targeted handout is one that leaves some vulnerable people falling through the cracks, it’s better to do the poorly-targeted handout. If the alternative to waste is doing very little, then it’s better to do the wasteful program. That’s because the most wasteful thing of all is to have people languishing in unemployment.

The upshot of this is that a lot of the technocratically-minded aspects of the Obama administration had very limited short-term payoffs, some of them were harmful, and the nature of being fussy and technocratic is that you alienate and annoy some people.

Trump took over and governed in a much more loosey-goosey way:

    He did a big tax cut for the rich, which he made more appealing by stapling a middle-class tax cut onto it.

    He spent more on the military, while also increasing non-military spending.

    He made no effort to reform retirement programs or the health care system.

    He enacted tariffs to help American manufacturers, and when that blew back on American farmers, he gave them cash handouts to compensate. 

If Trump had taken office at a time of full employment, these policies would have generated calamitous inflation and discredited him right away. But because he took office at a time when the labor market was still weak nine years after falling into recession, things worked out pretty well. Then came a huge pandemic, a couple of big stimulus bills, Joe Biden’s election, another giant stimulus bill that represented Democrats’ determination to err in the opposite direction from the party’s approach in 2009-2010, and — it worked! Unemployment is very low, prime-age labor force participation is high, nominal wage growth is extremely strong, and the right thing to say is “we did it! We avoided the long slump we were worried about!”

But avoiding a depression means that we don’t have Depression Economics anymore — we just have regular economics.

There’s nothing wrong with pursuing multiple policy objectives simultaneously.

Sometimes as a practical legislative matter, you need to assemble a logroll with political logic rather than policy logic, and it’s counterproductive for policy literalists to spend too much time getting angry about that. It’s also quite normal to try to meet multiple related objectives, even when there is some tension between them. National parks are in part about preserving wildlife and habitats, but also in part about creating recreational opportunities for tourists. These are adjacent goals — they both involve nature-y stuff, and the thing the tourists are trying to see is parkland rather than a theme park. But from a strict conservation perspective, the presence of visitors is bad. Attracting visitors involves things like parking lots and visitor centers and roads and dining facilities that are contrary to the conservation objectives.

Running through the hallways of the Interior Department yelling “PICK A LANE, ASSHOLES!” would be a crazy response to the existence of those tradeoffs. But it would be very bad for the National Park Service to convince itself that there are no tradeoffs. Instead, they are trying to balance considerations and create a good outcome.

Sometimes in politics, an official says something you hope and believe is just BS for political reasons (it’s politics, it happens) and not genuine confusion. For example, here’s Gina Raimondo explaining to Ezra why CHIPS Act funding has strings related to child care attached:

    “Every one of the requirements — or they’re not really requirements — nudges are for criteria or factors we think relate directly to the effectiveness of the project,” she told me. “You want to build a new fab that will require between 7,000 and 9,000 workers. The unemployment rate in the building trades is basically zero. If you don’t find a way to attract women to become builders and pipe fitters and welders, you will not be successful. So you have to be thinking about child care.”

She is correct that there is a serious question about labor supply in the building trades and that child care provisions that bring more women into the field are one way to address it. But the idea that a prescriptive federal mandate makes this easier rather than harder is ridiculous. I think the best possible actual answer here is that this child care stuff is meaningless verbiage (she’s at pains to say it’s a nudge rather than a requirement) designed to let child care advocacy groups claim a win with no intended real-world effect. The next best is that they’ve just decided to try to divert some money that Congress appropriated for semiconductors into child care because they think child care is important and the tradeoff is worthwhile. What would be bad, though, is if they’ve convinced themselves that there are no tradeoffs here. Disagreement about values and priorities is inevitable in politics, but self-deception is dangerous.

This is not as catchy as saying that the Biden administration needs to avoid the temptations of everything bagel liberalism, but I think it’s probably more accurate to say that rigorous policy analysis has become underrated and that Biden in particular should listen somewhat more to orthodox economists and economics.

The basic fact that multiple objectives will dilute the efficacy of a subsidy program is one thing that might pop out of that. But the same basic issue recurs in other areas. The White House continues to fight it out in court over a student loan forgiveness program whose now-obsolete premise was that the country needed economic stimulus. There was a time when I agreed, but the situation changed and I changed my opinion. House Republicans’ plan to enact austerity by cutting poor people off their health insurance is a cruel and absurd idea, but enacting austerity by collecting student loan payments from a population that is higher income than the average American is perfectly reasonable.

You don’t need to go full libertarian by any means — I think the White House’s “junk fees” initiative has been grounded in pretty solid research — but you do need to take a hard look at rules that don’t make sense.

And you need to recognize that under present-day macroeconomic circumstances, something like wasteful spending on over-engineered mass transit projects is pure waste. If this were 2010, we might say it’s not so bad if cities are building excessively large stations for no real reason — they look nice, and it’s better than having construction workers unemployed. But as Raimondo says, right now we have essentially no unemployed construction workers. That’s bad because construction is very useful. So we shouldn’t be paying people to build stuff that’s useless. And we should be relaxing regulatory burdens on manufactured housing so we can find a less labor-intensive way of doing things when possible.

I’d ultimately like to frame this in a more positive way — I spent over 10 years banging my head against the wall because policymakers were underestimating the extent of labor market weakness and the importance of demand-side policy. Biden and the 117th Congress genuinely solved this problem and it was a huge deal. But they need to take the win and recognize that their success means we’re now in a world of much tougher constraints and tradeoffs.

Tuesday, April 25, 2023

Why I'm not worried about AI causing mass unemployment. By Timothy B Lee


www.understandingai.org

In 2011, the venture capitalist Marc Andreessen published an essay that became a kind of manifesto for Silicon Valley during the 2010s.

“Software is eating the world,” Andreessen declared.

Computers and the Internet had already revolutionized a bunch of information-oriented businesses: books, movies, music, photography, telecommunications, and so forth. Software also played a major supporting role in more tangible industries. New cars had dozens of computer chips in them, for example, and the oil and gas industry made heavy use of software to discover new drilling sites.

But Andreessen, co-founder of the venture capital firm Andreessen Horowitz, argued that the software revolution was only getting started. “In many industries, new software ideas will result in the rise of new Silicon Valley-style start-ups that invade existing industries with impunity,” Andreessen wrote. “Companies in every industry need to assume that a software revolution is coming.”

At the time, Silicon Valley was abuzz with talk of the sharing economy. In 2011, Andreessen Horowitz invested in Airbnb, which was working to disrupt hotels by having people rent out their spare rooms. In 2013, the firm invested in Lyft, which was trying to revolutionize the taxi industry by letting people give rides in their personal vehicles.

When Bitcoin started to gain mainstream attention in 2013, Andreessen Horowitz jumped on that bandwagon too. Cryptocurrency was supposed to “eat” the financial world, rendering banks and other financial institutions irrelevant in much the same way Netflix made Blockbuster irrelevant and digital cameras bankrupted Kodak.

The venture capitalists who poured billions of dollars into startups like this during the 2010s expected some of them to eventually be as big as Google or Facebook. After all, they thought, industries like hospitality, transportation, and finance are huge. The rewards for disrupting them should be correspondingly large.

But it hasn’t worked out that way. Computers and smartphones have become ubiquitous across the economy. But this has led to only modest changes for established industries like health care, education, housing, and transportation.

Andreessen’s essay reflected a persistent blind spot in Silicon Valley thinking: a tendency to overestimate the power of information technology and underestimate the complexity of the physical world. In 2011, this led to excessive optimism about the economic impact of software startups. Today I suspect this same bias is distorting many people’s thinking about the likely impact of artificial intelligence.

Many technologists worry that AI will get so powerful that it will be capable of performing most of the jobs currently performed by humans, leading to mass unemployment. I don’t buy it. Certainly AI will have a significant impact on the economy—perhaps even bigger than the Internet. But there will also be significant sectors of the economy that see only modest changes as a result of AI. And there will continue to be plenty of work for human beings to do.

One way to evaluate Andreessen’s 2011 prediction is to look at the most successful software startups of the 2010s. With help from Connor Leech, the CEO of job search website Employbl, I made a list of successful Internet startups founded since 2009. They can be broken down into a few major groups:

There are social networks and messaging apps like Discord, Instagram, Slack, Snap, TikTok, WhatsApp, and Zoom. The success of these companies doesn’t really bolster the “software eating the world” thesis because they were entering fields already dominated by other tech companies.

Companies like Square, Stripe, Robinhood, and Venmo have thrived by offering modern interfaces for traditional financial services. They represent incremental progress in the finance sector, not a revolution.

In contrast, cryptocurrency companies like Coinbase and Circle are trying to build new payment rails that will eventually make conventional financial institutions irrelevant. But these firms have struggled, especially in the last year, and crypto-based financial products remain far from mainstream adoption.

The startups that best fit the “software eating the world” thesis are probably “sharing economy” companies like Bird, DoorDash, Instacart, Lime, Lyft, Uber, and WeWork. Each of these companies uses software to offer services in the “real world”—taxi rides, scooter rentals, food delivery, lodging, office space, and so forth. They enjoyed a lot of hype in the mid-2010s, and most of them have struggled in the last few years.

Some of them have been total fiascos. WeWork failed to disrupt commercial real estate. Shares in the scooter startup Bird have lost 97 percent of their value since the company went public less than two years ago. Last year I drove for Lyft for a week and wrote about its difficulty in turning a profit. 

The two most successful “sharing economy” startups are probably Airbnb (founded in 2008) and Uber (founded in 2009). These companies are each worth tens of billions of dollars, and they seem likely to be enduring, profitable businesses.

Still, Airbnb has only a modest share of the overall lodging industry. And in recent years, the quality of Uber’s service has deteriorated, with higher fees and longer wait times. Smartphone-based ride hailing is a marginal improvement over conventional taxis, but hasn’t been a revolution.

In his 2011 essay, Andreessen specifically mentions health care and education as industries ripe for disruption by software. But as far as I can see that hasn’t happened. Hospitals increasingly use computers for record-keeping and billing and software has been used to make new drugs and medical devices. Many people learn foreign languages using Duolingo or watch educational videos on YouTube. But people largely go to the same schools and hospitals they did 10 or 20 years ago.

The reason I’m relitigating this 12-year-old argument is that I hear echoes of it in contemporary discussions of AI. In the early 2010s, Silicon Valley thought leaders looked at the early success of companies like Airbnb and Uber, extrapolated wildly, and concluded that software was going to transform the entire economy. Today, AI thought leaders are looking at the early success of ChatGPT and Stable Diffusion, extrapolating wildly, and concluding that AI software is going to transform the economy and put tons of people out of work.

To be clear, I do think AI is going to be a big deal. I wouldn’t have started an AI newsletter otherwise. But as with the Internet, I expect the impact to be concentrated in information-focused industries and occupations. And most of the American economy is not information-focused: It’s focused on delivering physical goods and services like homes, cars, restaurant meals, and haircuts. It will be hard for AI to have a big impact on these industries for the same reasons that it’s been hard for Internet startups to do so.

A common line of thinking about AI and the labor market goes something like this: During the 20th century, we automated a lot of physical tasks. As a result, a worker’s value has increasingly been based on brain power rather than muscle power. But now it looks like AI will enable computers to surpass humans at many cognitive tasks as well. That could lead to a future where the average human worker isn’t better than machines at anything, and as a result they can’t find a job.

One thing this argument misses is that there are still lots and lots of tasks that human beings can perform better than any machine.

Take plumbers, for example. They need to get in and out of their cars, climb stairs and ladders, and carry heavy objects. The job also requires fine motor skills.

Today’s robots can’t perform these tasks at anywhere close to a human level. So even if you had human-level AI that perfectly understood how to be a plumber, it’s not obvious we could build a robot body versatile enough to do the job.

Back in 2015, the Defense Advanced Research Projects Agency held a contest where teams competed to build humanoid robots that could perform simple tasks like driving a vehicle, opening a door, turning a valve, and cutting a hole in a wall using a drill. A lot of the robots fell over during the competition. The successful ones performed the tasks far slower than a human being.

And these were research prototypes, not commercial products. Each robot in the DARPA competition was supported by a team of researchers that performed extensive maintenance between trials. So it would take far more human labor to prepare one of these robots to do a plumbing job than to just have a human plumber do the job.

Of course, that was eight years ago. Have robots improved since then? Earlier this year, Boston Dynamics, a leading robotics company, released a video showing that its humanoid Atlas robot (also still a research prototype) now has claw-like hands and can pick up heavy objects. While this represents impressive progress (previous versions had no hands at all), the company still has a long way to go.

“The simple claw grippers mean Atlas crushes everything it picks up,” Ars Technica’s Ron Amadeo wrote in his writeup of the video. You wouldn’t want this robot fixing your leaky faucet no matter how intelligent its software was.

Some companies, including Tesla and Figure, claim they’re developing humanoid robots that will be capable of safely performing a range of domestic tasks. As far as I know, none of these companies have shipped these products yet, and I’m skeptical they’ll be able to live up to their own hype. Even if they do, it’ll take many years to manufacture enough humanoid robots to have a major impact on the labor market. And in the short-to-medium term, we’d get a lot of new jobs in the robotics sector.

But let’s suppose robotics technology progresses rapidly and in a couple of decades there are millions of humanoid robots with the physical capacity to perform most jobs people can do. Human workers will still retain another big advantage in the marketplace: the fact that other people like to interact with them.

This is most obvious in the caring professions. Even if someone invented a robot that was proven to be as safe as a human babysitter, I wouldn’t want it taking care of my 2-year-old. I bet you wouldn’t either. By the same token, elder care workers, physical therapists, psychologists, nurses, and workers in similar jobs should be insulated from AI-related disruption because most people are going to prefer human caregivers over robots.

But this point applies far beyond the caring professions. Think about coffee shops, for example. Many Starbucks locations already have high-end coffee machines that grind the beans and produce coffee that’s indistinguishable from what a human barista would make. It wouldn’t be hard to fully automate Starbucks locations so that a robot arm hands you your coffee when it’s ready.

But Starbucks isn’t going to deploy a robot like that because Starbucks customers aren’t just buying cups of coffee—they’re buying the relative luxury experience of having another human being prepare coffee for them.

In 2012, the Wall Street Journal reported that Starbucks had ordered its baristas to “stop making multiple drinks at a time” because customers were complaining that harried baristas had “reduced the fine art of coffee making to a mechanized process with all the romance of an assembly line.” You know what’s even less romantic than your barista making multiple cups of coffee at the same time? Having a robot make your coffee.

Lots of other industries work this way:

    Despite the low cost of workout apps (and before that workout videos) lots of people pay a premium to attend fitness classes with human instructors.

    Recorded music is cheap and ubiquitous, but fans pay a premium to see their favorite artists perform live.

    Downscale restaurants take orders at the counter (or on a touchscreen) and make customers bus their own trays, while fancier restaurants have a small army of waiters ready to refill your water glass and sweep up crumbs.

This dynamic will become even more important in the AI era. For example, in the future there will probably be low-cost virtual schools where students interact with chatbot tutors rather than human teachers. Undoubtedly some students will find them useful.

But many others will hate it for the same reasons they hated Zoom school during the pandemic. Lots of people like listening to in-person lectures. It’s easier to stay on track with your studies if you know a human professor (or teaching assistant) will be disappointed in you for turning in an assignment late or getting a bad score on a test. And going to a brick-and-mortar school lets you make friends with your classmates and participate in extracurricular activities. That’s going to continue drawing students to conventional schools no matter how good AI-powered virtual schools get.

Similarly, you might be able to get a medical diagnosis from a chatbot, but many people will be willing to pay extra for advice from a human doctor or nurse—especially if they need a physical exam. People may feel more comfortable sharing intimate details with a human doctor, not only because they feel an emotional connection but also because they may have more confidence that the information they share won’t be misused.

There’s a risk we could wind up with a two-tier health care system where affluent patients can talk to human doctors on demand while lower-income patients are required to interact with a chatbot first. But it’s hard to imagine AI eliminating demand for doctors altogether.

AI very likely will lead to lower employment in some occupations. A few categories of workers may be completely replaced by AI software. But there will be many more occupations where AI automates some parts of the job, allowing human workers to focus on other tasks and become more productive overall.

Even when machines automate a substantial portion of an occupation, that won’t necessarily lead to big job losses. Take automatic teller machines. Banks installed hundreds of thousands of ATMs in the 1980s and 1990s. This made it cheaper to operate a bank branch, so banks opened more branches. And each branch still had some tellers to handle tasks too complex for the ATM. As a result, the total number of bank tellers in the U.S. changed little between 1980 and 2010.

I expect AI to have a similar impact on many occupations. For example, GitHub Copilot is AI software that is already helping programmers increase their productivity by 20 to 50 percent. So will companies take this as an opportunity to cut their engineering workforces? Some might, but I bet most will instead take the opportunity to produce more software.

And in cases where AI does reduce employment in an occupation, workers will move to other occupations where demand is strong. And as we’ve seen, there are plenty of jobs, from plumbers to baristas to doctors, that aren’t going to be replaced by AI any time soon.

A recent paper by computer scientist Ed Felten, business professor Manav Raj, and economist Robert Seamans looked at a database of occupations and tried to define how “exposed” each job was to large language models like ChatGPT. They did this by breaking each job down into 52 human abilities (things like “oral comprehension, oral expression, inductive reasoning, arm-hand steadiness”) and then estimating how well AI software could perform each of these tasks.
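The scoring approach described here amounts to a weighted average: each occupation is scored by how much it relies on each ability and how exposed that ability is to language models. The sketch below illustrates the idea with invented weights and exposure values; it is not the Felten, Raj, and Seamans data or their exact formula.

```python
# Illustrative ability-weighted AI-exposure score. All weights and exposure
# values are invented for this example, not taken from the actual paper.

ability_exposure = {          # how well current language models handle each ability
    "oral comprehension": 0.9,
    "written expression": 0.9,
    "inductive reasoning": 0.7,
    "arm-hand steadiness": 0.05,
    "manual dexterity": 0.05,
}

occupations = {               # how much each job relies on each ability (weights sum to 1)
    "telemarketer": {"oral comprehension": 0.5, "written expression": 0.3,
                     "inductive reasoning": 0.2},
    "plumber": {"arm-hand steadiness": 0.4, "manual dexterity": 0.4,
                "inductive reasoning": 0.2},
}

def exposure_score(ability_weights):
    """Weighted average of ability-level exposure for one occupation."""
    return sum(w * ability_exposure[a] for a, w in ability_weights.items())

for job, weights in occupations.items():
    print(f"{job}: exposure = {exposure_score(weights):.2f}")
```

With numbers like these, the heavily verbal job scores far higher than the manual one, which is the basic pattern the paper reports. The hard part, as the next paragraphs argue, is that a high score does not tell you whether customers will actually accept the substitution.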

Telemarketers topped the list of most exposed occupations, followed by several categories of college professors. Judges, arbitrators, and clinical psychologists also made the list. This seems like a reasonable first step for estimating the impact of AI on the labor market, but I also think the list illustrates the limitations of the methodology.

For example, it’s very hard to imagine we are ever going to replace human judges with AI. No matter how technically competent AI might get at interpreting and applying the law, people are going to want to keep that function in human hands. People will be more likely to trust rulings made by a human being whose reasoning they can understand and who seems to share their values.

I’m also skeptical that we’ll see falling demand for clinical psychologists. While some people will undoubtedly enjoy talking to chatbots about their mental health problems, many others will prefer to talk face-to-face with a human being. And as I explained above, it’s not obvious that AI will put many college professors out of work.

Interestingly, the researchers found a strong positive correlation between language model exposure and income: higher-wage occupations are more likely to be impacted by the latest AI technology. So it’s very possible that AI technology will narrow the large wage premium that opened up between college graduates and less educated workers in the late 20th century.

Finally, it’s important to remember that the level of employment across the economy is ultimately driven by macroeconomic factors: If consumers spend more money, then businesses will respond by hiring more workers. The last three years have illustrated how powerful this can be: In the wake of the pandemic, Congress and the Fed worked a little too hard to boost the economy, producing a super-tight labor market and rising inflation.

If AI starts replacing workers in the coming years, that will put downward pressure on wages and prices while growing the economic pie. That will give the Fed more leeway to cut interest rates and give Congress more room to raise spending or cut taxes. As long as Congress and the Fed are doing their jobs, there’s no reason for the total number of jobs, economy-wide, to decrease.

The Internet’s biggest impact on the world may turn out to be cultural rather than economic. Music, television, and journalism have been transformed by platforms like Spotify, YouTube, Netflix, and Twitter. Social media has transformed politics, increasing polarization and powering the rise of populist movements around the world. And many white-collar workers have had to master new tools to remain at the forefront of their careers.

But in material terms, our lives aren’t much different than they were 30 years ago; our homes, grocery stores, neighborhoods, schools, and hospitals haven’t changed very much. Economic growth in the United States actually slowed down during the Internet era: inflation-adjusted GDP per capita rose more between 1962 and 1992 than it did between 1992 and 2022.

I expect a similar story with AI. Stable Diffusion has already revolutionized how we make digital images, and audio and video content won’t be far behind. The ability to generate customized (and possibly fake) text, images, and video on a large scale could have dramatic and unpredictable effects on our politics. Many white-collar workers will need to master new AI tools to remain at the forefront of their careers. Some might be forced to change careers altogether.

And if (like me) you’re a white-collar worker who sits in front of a computer all day, that will probably feel like a revolution, just as the Internet felt like a revolution to us 20 years ago. But most of life—and most jobs—are not online. The impact of AI on most people’s day-to-day lives is likely to be correspondingly modest.

I’ve got a favor to ask. To help me keep up with the rapid pace of change in AI, I’m looking to talk to readers who work with AI—whether that’s building it, using it, or studying it.

I’ve opened up a number of 30-minute slots on my calendar tomorrow and Wednesday. If you’d be willing to talk to me, please click here. All conversations will be strictly off the record. Thank you!

A Nuclear Revival Needs More Rules, Not Less. By David Fickling

David Fickling | Bloomberg


April 19, 2023 at 8:55 p.m. EDT

After decades of winter for nuclear power in North America and Western Europe, there have recently been some long-overdue signs of spring.


In the state of Georgia, the first unit of the 2.2 gigawatt Vogtle expansion project — $34 billion and 17 years in the making — was connected to the grid April 1. It may have come in at more than double the cost and seven years later than originally forecast, but it’s just the second new civilian reactor completed in the US since 1996. A second unit has begun testing and may start up as soon as this year.


Similar green shoots are showing in northern Europe. Finland’s 1.6 gigawatt Olkiluoto 3 reactor, plagued by similar time and cost overruns, finally delivered electricity April 16, becoming the continent’s first new reactor in more than 15 years.



It’s popular to blame environmentalists for the way atomic power stopped growing in rich countries three decades ago. Too many protests and consequent safety regulations have made it impossible to build nuclear power plants (the argument goes), slowing what should be an easy path to decarbonizing our economies. Clear away the red tape, and the market will do the rest.


Look at the long and painful histories of Vogtle and Olkiluoto, however, and it becomes clear that the truth is close to the opposite. It’s not simply a surfeit of regulations, but more importantly the deregulation of energy markets that has stifled atomic power in recent decades.


Most of the world will need modest, but crucial, shares of nuclear energy in tandem with wind and solar to build affordable zero-carbon grids over the coming decades. Correctly identifying the source of the problem, rather than turning renewables versus nuclear into a culture-war battle, will be crucial to making sure those plants get built.



Atomic power facilities are, by their nature, megaprojects. Only hydroelectric dams can compete with them in terms of sheer energy output. There’s an entire United Nations agency dedicated in part to ensuring that the fissile fuel supply isn’t diverted toward military uses, and a complex global convention to help governments share liability in the event of an accident. Even in a best-case scenario, nuclear plants take the best part of a decade and billions of dollars to build, followed by further decades before they’ve paid for themselves.


Almost every other megaproject we construct depends on coordinated government support, if not outright ownership, because only states or protected monopolies have the capacity to take on the financial and operational risks. That was the case for power, too, during the nuclear boom from 1970 to 1990, when monopolistic utilities in Europe, North America and the former Soviet Union constructed plants with little regard to cost or even demand. 


Those risks have only been heightened by the way that western countries have deregulated energy markets since the 1990s, exposing multi-decade nuclear projects to the vagaries of volatile pricing rather than the fixed returns that they once expected. It’s no coincidence that the nuclear construction boom has lasted longest in Asian and former Soviet countries where state-backed monopolies and managed power pricing still hold sway.



While ostensibly private-sector projects, Vogtle and Olkiluoto both underline this point. The former was only finished thanks to some $12 billion of loan guarantees from the US government. VC Summer, an almost identical plant across the state border in South Carolina that ran into simultaneous difficulties when their common contractor Westinghouse filed for bankruptcy in 2017, failed to attract funding from Washington and was abandoned. The key ultimate shareholder in Olkiluoto, meanwhile, is not a profit-maximizing investor, but a cooperative of local power users.


Some form of government support will be necessary if atomic power isn’t to wither as its aging plants reach the end of their lives. Contracts-for-difference are a type of financial derivative used in the UK power market that allow low-carbon generators to swap volatile electricity prices for fixed ones, with a government agency acting as counterparty. Sticking to boring but familiar water-cooled reactor designs, rather than trying to reinvent the wheel with fast neutron or small modular reactors, is the best way of ensuring that projects come in closer to time and budget plans. Governments shouldn’t be long-term owners of reactors, but their vast financial capacity and ability to knock heads together mean they’re best-placed to manage the construction stage before selling off operating projects to private investors.
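Since contracts-for-difference are the one concrete mechanism named here, a short sketch of the settlement arithmetic may help. The strike price, spot prices, and output below are hypothetical numbers, not terms from any actual UK contract.

```python
# Sketch of contract-for-difference settlement for a low-carbon generator.
# Strike price, spot prices, and output are hypothetical.

STRIKE = 90.0  # agreed price, in currency units per MWh

def cfd_settlement(spot_price, output_mwh):
    """Top-up the generator receives (positive) or pays back (negative)."""
    return (STRIKE - spot_price) * output_mwh

for spot in (40.0, 90.0, 150.0):
    output = 1000.0  # MWh sold in the settlement period
    payment = cfd_settlement(spot, output)
    effective = spot * output + payment  # market revenue plus CfD settlement
    print(f"spot {spot:>5.1f}/MWh: settlement {payment:+10.0f}, "
          f"effective revenue {effective:10.0f} (always the strike price)")
```

Whatever the spot market does, the generator's effective revenue per megawatt-hour lands at the strike price, which is what makes a multi-decade project financeable.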


It might be an enjoyable sport to turn the problems of our power grids into a cage fight between hippies and engineers. If we want affordable zero-carbon power in 2050, however, we will need to fix the real-world issues that have held back nuclear energy for decades, rather than blaming everything on omnipotent environmental campaigners. There are solutions to atomic power’s malaise. But the first step toward recovery is admitting you have a problem.




David Fickling is a Bloomberg Opinion columnist covering energy and commodities. Previously, he worked for Bloomberg News, the Wall Street Journal and the Financial Times.



Even Fox News Admits Climate Change Is Real Now

Mark Gongloff | Bloomberg

April 24, 2023 at 6:20 a.m. EDT

The good news is that after nearly half a century, 1.1C of global heating and countless natural disasters, people for the most part finally accept the basic science of human-caused climate change. The bad news is that they still don’t seem willing to do very much about it. 


A new study of TV coverage of the 2021 report of the Intergovernmental Panel on Climate Change found few expressions of doubt about the basic science of anthropogenic global warming — almost all of which were confined to right-wing media such as Fox News. And even Fox has mostly morphed, relative to coverage of previous IPCC reports, from questioning the reality and causes of climate change to doubts about its severity and the need to take action.   


Old-school climate skeptics still show up with depressing regularity in social-media feeds, of course, but they are probably over-represented in such fever swamps. The rest of the world has apparently moved on, which is a moment to celebrate.

Unfortunately, the new flavor of climate skepticism is almost as pervasive and unhelpful as the old kind was. Mainstream media coverage of the 2021 IPCC report was filled with sources expressing doubt about the policy responses to climate change, according to the study conducted by researchers from Oxford, Cornell and other universities, focusing on 20 news channels in Australia, Brazil, Sweden, the UK and the US. Such voices were even louder on mainstream channels than on right-wing ones:


It is healthy and constructive to debate the right approaches to transitioning away from fossil fuels and preparing for an increasingly hostile climate. But much of today’s climate skepticism is designed to frustrate any sort of policy response at all.


Conservatives, the study notes, now generally acknowledge climate change is real and caused by humans. But they also suggest aggressive climate action will hurt economic growth, fuel inflation, punish low- and middle-income families and foster energy insecurity. They argue other polluting countries, particularly China, should act first. As has long been the case for climate skeptics — going back to the days when Exxon contradicted the science of its own researchers — the goal seems to be delaying the transition to renewable energy for as long as possible.



“Any definitive shift towards response skepticism across the media, such as vocal opposition to net zero policies, represents an important new challenge to climate action,” Oxford researcher James Painter wrote in a column about his study.


Such arguments may already be having a real impact. Foot-dragging on climate is everywhere you look. At the recent G-7 meeting, the world’s most developed countries lost their nerve when it came to setting a timeline to phase out coal power. None yet have plans to reduce fossil-fuel use enough to achieve net zero by 2050 and limit global warming to 1.5C, as all have pledged to do. As Bloomberg Green noted, Japan is the worst offender in this regard, but the US and Germany aren’t doing much better: 


Meanwhile, Republican politicians in the US are attacking banks, money managers and companies for trying to hasten the transition to renewable energy. Fortunately, companies are starting to push back, because all basically now agree that climate change isn’t just some liberal obsession but a real and growing threat to business. Those politicians are having to weigh their need to stir up the base with anti-woke performance art against their need for campaign cash. 



So maybe there is some reason to hope it won’t take another half-century to change the climate consensus again, this time about the need for more-aggressive action. Given how much the planet has already warmed and how much carbon is already in the atmosphere, we no longer have five years to spare, much less 50.  

Mark Gongloff is a Bloomberg Opinion editor and columnist covering climate change. A former managing editor of Fortune.com, he ran the HuffPost’s business and technology coverage and was a reporter and editor for the Wall Street Journal.


Sunday, April 23, 2023

Illiberal Regimes Hate Modernity But Can’t Live Without It


www.theunpopulist.net
Samantha Hancox-Li
Mussolini’s Italian Fascist Party headquarters, 1934. RareHistoricalPhotos.com

In January of this year, rightwing media personality Matt Walsh tweeted, "Singapore is able to have nice things in part because they execute drug dealers by hanging and arrest even petty vandals and thieves and beat them with a cane until they bleed. We don't have nice things because we aren't willing to do what is required to maintain them." What is striking about this claim is the degree to which even illiberal activists like Walsh, who have made it their mission to oppose the modern world and return us to pre-modern structures of hierarchy and domination, are irretrievably infected by the logic of modernity. On an emotional level, he thinks, "Drug dealers are sinners and should be scourged." But even to himself, he wraps this up in a materialist logic: "Drug dealers should be scourged because that will bring us prosperity."

Everybody wants what modernity offers. And all the illiberals are tying themselves in knots trying to fit their own philosophy to it—hence the bizarre association of technological progress with caning people for smoking weed. Meanwhile, in the America that actually exists, weed is big business: the increasing legalization of marijuana has transformed it into a hundred-billion-dollar a year industry.

I want to explore this tension. My claim is that all contemporary illiberal movements face a fundamental problem: The illiberal's dilemma. On the one hand, everybody wants what modern prosperity offers—power, comfort, security, wealth. On the other hand, illiberals reject what makes modern prosperity possible—freedom, diversity, the continual churn of change. We can taxonomize different varieties of illiberalism according to how they attempt to square this circle—from the herrenvolk democracies to the petro-dictators to the authoritarian capitalists. And perhaps we liberals can take our own lessons from their failures.

The Magic Beast

But before turning to the illiberal's dilemma, a bit of essential stage-setting: a brief history of economic growth, 50,000 BCE to present. The facts I am about to rehearse here are commonplace among economic historians, though they are not well-known among the general public.

Prior to 1800, long-run per capita economic growth was flat. In other words: for tens of thousands of years, including the roughly ten millennia after the invention of agriculture, the overall productivity of people didn’t change very much. Specific societies and places might see small periods of gradual enrichment—but these were, invariably, followed by contraction. And there were always more mouths to feed. Most people, for most of history, lived around the level of subsistence—above it, if they were lucky; below it, if they were not.

Around the year 1800, give or take a few decades, this changed; it changed in a specific place—England—and spread outwards from there. Long-run per capita economic growth started to average between 1 and 3% a year. Three percent growth might not sound like much, but compounded over a generation it means an economy that doubles in size every 25 years or so. Even over the lifetime of a single individual, this represents an astounding rate of progress. Children could expect to be twice as rich as their parents, and their children twice as rich as that. The impact of this on world history is hard to overstate: for the first time, there was enough to go around to guarantee every citizen not merely a decent life, but a luxurious one. The average individual in the modern world works less, earns more, and lives a life of security and comfort that would have been unimaginable to even the richest kings in the world in 1700.
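To make that compounding concrete, here is a minimal Python sketch of the arithmetic (illustrative only; the 75-year lifetime below is an assumption, not a figure from the essay). Doubling time at a constant annual growth rate r is ln(2)/ln(1 + r), so 3% growth doubles an economy roughly every 23 to 24 years.

    import math

    def doubling_time(rate: float) -> float:
        """Years for an economy to double at a constant annual growth rate."""
        return math.log(2) / math.log(1 + rate)

    for rate in (0.01, 0.02, 0.03):
        years = doubling_time(rate)
        lifetime_factor = (1 + rate) ** 75  # cumulative growth over a 75-year lifetime
        print(f"{rate:.0%} growth: doubles in ~{years:.0f} years, "
              f"~{lifetime_factor:.1f}x larger over 75 years")

Running it shows the gulf between slow and fast compounding: roughly 70 years to double at 1% growth versus roughly 23 at 3%.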

This change was driven by a shift in the nature of technological progress. Modern wealth does not consist of the same stuff that existed in 1700 but "only more of it." We are not sitting on a giant pile of shovels, potatoes, wooden shoes and rough-spun wool. Instead we made better stuff and we made it in new ways and we made it in incomprehensible quantities. This does not mean that growth depended on rare world-shaking inventions. Rather, it depended on tinkerers’ tweaks, ideas a mere two or three percent better than the ones they replaced—but three percent, compounded over decades, turns out to be quite a lot. Modern economic growth is quite simply about the steady progress of technology and innovation.

This sustained technological innovation is a product of a deeper change in culture, politics, and social form. In particular, it is a product of the transition from closed to open societies: societies in which increasingly large sections of the population have access to economic, political, and social participation. Liberal democracy seems to be the social form best suited to generating, disseminating, and sustaining continual innovation. This is a less widely accepted theory among economic historians, but one for which there is strong evidence.

Here's how that works in practice: An individual sees an opportunity to make things a little better or do things a little better. Maybe that means economically, and it gets them more money. Maybe that's politically, and it gets them more votes. Maybe that's socially, and it gets them more esteem. And at the same time, everyone else is constantly doing the same thing. And some of these experiments work and some of them don't, and what works one year might not work the next. But on the whole, the continual ferment of experimentation produces some good ideas—not great, necessarily, but on the whole two or three percent better than what came before.

And of course there are winners and losers. Sometimes the churn produces mRNA vaccines, cheap solar power, rechargeable batteries that last for days and days, a universal translator in your pocket. Other days it produces leaded gasoline, acid rain, and black lung disease. That's why liberal democracies also include mechanisms for addressing the common good in a democratic fashion: environmental protection, workers' rights, welfare and social security. They distribute the stresses and strains of a society in continual motion without fracture. And so everyone is mostly okay with letting everyone else pursue their own perception of the good (or at least the slightly better) while they themselves pursue theirs.

This might all sound like a fairy tale except for the part where it works pretty well in practice. All that is solid melts into air, as the man said. What he didn't add is that at the very same time new and better things are coalescing out of the air all around you.

The Illiberal's Dilemma

The picture of modern prosperity outlined above poses a problem for illiberals. The fundamental nature of illiberalism—regardless of which palette-swap reactionary movement we're talking about—is a pervading and indeed pre-intellectual libidinal love for hierarchy—for status, power, order, and violence. As the famous Wilhoit Proposition puts it: "Conservatism consists of exactly one proposition, to wit: there must be in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect." This of course gets dressed up in various guises, ethnic or religious and always, always patriarchal, but the fundamental idea is the same: there are some people who belong on top, and good things should accrue to them, and other people who belong on the bottom, and bad things should accrue to them, most especially domination and inferiority. Of course, being on top of the hierarchy in the modern world requires having all the stuff that modernity produces: the weapons, the medicines, the media, the food, the travel, the abundance, the everything. Everyone wants the stuff. No conservative who longs for an earlier age suggests cutting themselves off from the modern world in an intentional community as the Amish have—because that would mean accepting an inferior place in the hierarchy of power.

The illiberal's dilemma is therefore simple: modern economic growth and especially the continual churn of new productive technologies is inimical to the maintenance of stable status hierarchies. Consider one historically central example. The pre-modern world saw wealth and power based very consistently on a single thing: ownership of land and the rents accruing therefrom. The industrial revolution increased the productivity of land—and, paradoxically, decreased the power of landowners, as they became an increasingly small fraction of the overall economy. Land became worth less and less as compared to industrial development—and, suddenly, our political and social worlds stopped orbiting around the big farmer up on the hill. A stable status hierarchy is a closed loop of political, economic, and social domination. The continual emergence of new and more efficient modes of production inherently threatens such loops. 

This dynamic recurs over and over. Coal baron from West Virginia? Sad news, new solar plants are cheaper than existing coal plants, and all those billions in assets you had buried beneath the ground are rapidly ticking down to zero. Don't like Jim Crow? Move north and get a factory job. Your father wants you to be a boy? Move to the city and work as a coder. 

Modern illiberalism is driven by an attempt to square this circle: to maintain old hierarchies while participating in modern abundance. This allows us to analyze our varied cast of authoritarians in simple terms.

Varieties of Illiberalism

I want to focus on three contemporary illiberal formations: petro-dictators, herrenvolk democracies, and authoritarian capitalists. Each of these seeks to separate the churn of modernity and the stability of hierarchy in a different way. And each of them struggles to maintain that balance—in a different way.

Petro-dictators respond to the illiberal's dilemma by displacing modernity in space. "Modernity abroad, repression at home," with the gap bridged by the extraordinary wealth that oil brings. The fruits of modernity are purchased and imported, sustaining the wealth and power of the elite in their home countries. The Kingdom of Saudi Arabia is the model here. The Kingdom is blessed with immense resource wealth in the form of gigantic deposits of light, sweet crude oil. This oil is one of the fundamental resources of the modern world. For decades it has been in extraordinarily high demand by the industrial economies of the world. This has allowed the House of Saud to sell oil, buy modernity, and keep their pleasing hierarchies intact.

This dynamic is sustained by the fact that oil requires an unusually small industrial footprint relative to its productivity. Pumpjacks and pipelines don't run themselves, but the labor required is a fraction of that of a coalmine and a railroad. Refining can take place abroad; advanced drilling machines can simply be purchased on the world market with oil revenues. The educated, skilled workforce that might otherwise demand social reforms can be kept at arm's length from the petro-dictator.

Is this sustainable? Well, it worked pretty well for a long time. But the churn never stops. And these days the petro-dictators themselves are signaling pretty hard that they can read the writing on the wall: The green transition is coming, and "peak oil" no longer refers to the moment we run out, but to the moment its price starts dropping and never stops because new technology has enabled us to produce better products without oil. Hence Mohammed bin Salman’s increasingly outlandish pitches for ultramodern hubs of tech and finance built from whole cloth in the Saudi desert. Of course, if the foregoing arguments are correct, none of these are going to work without reform of the Kingdom’s political and social systems, because who wants to bank with a man who will have you cut apart with bone saws if you don’t like the interest rate?

Herrenvolk democracies respond to the illiberal's dilemma by displacing modernity socially instead of physically. "Modernity for me, hierarchy for thee." The model here is Orbán's Hungary. Such states propose to offer the benefits of modern liberal democracies (democracy, rights, economic growth) to certain citizens, while simultaneously stripping them from other members of that very same society—women, ethnic and religious minorities, queers, and immigrants are generally at the top of the list here. This nominally provides the benefits of both modernity and hierarchy to those fortunate enough to be members of the true Volk.

As it happens, this does not work particularly well in practice. While Hungary might be beloved of America's own Eurofascist Claremont Institute, the Claremont bros typically neglect to mention that Hungary has the GDP of a mid-sized American city, and only achieves that much because it receives approximately 10% of its own GDP in EU subsidies.

Indeed, herrenvolk states have a quite predictable tendency to produce economic stagnation. They tend towards cronyism—a political economy not of growth and innovation, but of favors and kickbacks. Letting the authoritarians into politics lets them pick the winners and the losers—and mostly they like to pick their friends, it turns out. This feedback loop between government cronyism and support for herrenvolkism is central to its political economy—call it "rule by car dealership owner." To the extent that herrenvolkism appears economically sustainable, it's usually because it's leeching off modernity in some obscure fashion.

Authoritarian capitalists respond to the illiberal's dilemma by trying to establish a separation between the political and the economic spheres. "Modernity for the economy, hierarchy for politics." The model here is, famously, China. From 1958 to 1976, Mao Zedong led China from the catastrophe of the Great Leap Forward to the catastrophe of the Cultural Revolution, killing tens of millions of Chinese citizens to no appreciable benefit, economic or otherwise. But beginning in 1978, Deng Xiaoping began a series of economic reforms intended to introduce a measured degree of capitalism into Chinese society while maintaining a stable but closed political system of rule by the Communist Party. The results, as we all know, were extraordinary, transforming China from starving backwater to industrial powerhouse.

Is this system sustainable? Maybe. But recent events suggest that things are breaking down in predictable ways. Xi Jinping has begun killing off profitable industries (like tech) and lifting up unprofitable ones (like farming). Combined banking and real estate crises signal an economy that is struggling to rebalance itself in the face of economic and demographic transitions. Decades of export-led industrialization produced a specific kind of power base. But now the churn continues, and its closed political system is struggling to manage the resulting shifts in power and prestige without fracturing. And Xi's increasingly incompetent meddling in the economy shows no signs of slowing. Given the power wielded by personalist dictators, it is less than clear that a neat separation between an open economy and a closed political system is sustainable in the long run.

The Liberal's Dilemma

So much for the illiberals and their dilemmas. There is a lesson here for us liberals as well. We too are confronted with the relentless churn of modern capitalism, this magic beast that throws up prosperity. And the chaos of it all offends our sensibilities. Surely there is so much waste. Fast food, fast fashion, all that cheap plastic crap filling up our landfills—surely we could direct it all in a better way.

This aspiration is given its latest expression in the contemporary "degrowth" movement, which aspires to reorient the world economy around not "profit" but "real human need." It is therefore worth considering other historical attempts to harness the magic beast of modernity to total state control. And here we come to one project I have conspicuously failed to mention, because it was neither illiberal in conception nor liberal in execution: the Soviet Union. The great attempt at world communism began with the best of intentions: to overthrow oppression and liberate the people of the world.

Its planned economy promised to out-grow and out-produce wasteful capitalism by focusing not on profit but social need and genuine productivity. It was run according to top-down directives about how much of what kinds of material goods to produce. While the Soviets did not aim at degrowth but world preeminence, their failures are instructive. Unable to deliver steady per capita productivity growth, they compensated with natural resource extraction and Western debt. This practice of selling oil and importing advanced machinery has a certain parallel with the methods of petro-dictators: modernity abroad, hierarchy at home. 

But the facade could only be sustained so long, and in 1991 the Soviet Union would collapse entirely after Gorbachev's failed attempts at economic and political liberalization. Like other non-liberal regimes before it, the Soviet Union failed to deliver the continual technological innovation and economic dynamism that sustains modern growth. That growth requires the freedom to experiment according to your own perception of the good—a freedom that is incompatible with the goals of degrowth, as my colleague Paul Crider has argued.

From this world-historical debacle, certain liberals have drawn the lesson that the free market must be free, and the great task is to keep "politics" out of markets to the greatest extent possible. In a grimly ironic turn of events, the (former) Soviet Union would once again be the experimental ground for ideological imports from the West. In 1991, the "best and the brightest" would bring unbridled free market capitalism to Russia—with results that are now painfully evident. Robber barons carved up the government into their own little fiefdoms, the economy (outside of oil windfalls) continued to stagnate, and now the country is being led to ruin by crooks and warlords. Capitalism without liberalism is just feudalism with a different name.

We liberals can therefore take our own caution from this history. The magic beast of modernity produces immense economic growth—so long as it is neither left completely free to run wild, nor broken completely to politicians' saddles. That is the liberal's dilemma.

This essay is reprinted from Liberal Currents, where it was originally published.


Thursday, April 20, 2023

Medicaid work requirements are cruel and pointless. By Matthew Yglesias


www.slowboring.com
Medicaid work requirements are cruel and pointless
Matthew Yglesias
11 - 14 minutes

I was hoping to write a comprehensive analysis of House Republicans’ debt ceiling demands this week, but the caucus is such a clown show that I found myself running up against deadline with Kevin McCarthy still not having actually articulated those demands.

Note that the formal expiration of the statutory debt ceiling happened months ago, and we’re deep into the phase of “extraordinary measures.” Republicans decided to provoke this crisis before deciding what they are even fighting about. Then on Monday, McCarthy went to the New York Stock Exchange to give a speech in which he essentially begged Wall Street to stage a stock market crash to try to pressure the White House to come to the bargaining table. The White House line has continually been that there should be a clean debt ceiling increase, just as Trump got three times. But even if you don’t buy that, the fact is that McCarthy’s been demanding negotiations without a proposal because he’s struggling to come up with anything that can get 218 votes.

It’s exhausting to be continually writing about the gross irresponsibility of this approach, but I do think it’s important to try to retain the capacity to be shocked and outraged about it.

It’s not as dramatic as Trump sending a mob to attack the Capitol, but it reflects a similar lack of seriousness about the responsibilities of governance. There is a discretionary appropriations process through which House Republicans can press their ideas about appropriate discretionary spending levels. If Republicans want to make policy changes to regulation, they should either strike a bargain with Democrats to secure bipartisan support or else pass those changes when they hold a majority. If they’re frustrated by the fact that the filibuster makes it impossible to change regulatory policy without bipartisan support, they should join with Democratic filibuster reformers to create a better system. “We need to threaten a global financial meltdown to get the REINS Act done because the Senate votes by supermajority” is insane. Everyone involved in this should do some serious reflection as to what it is they are trying to do with their lives.

That said, one specific proposal that is unquestionably in the mix is adding work requirements to Medicaid. This is a policy worth discussing because it polls well, so Republicans will keep pushing, and I think it’s pretty bad.

In particular, I think work requirements involve a kind of sleight of hand. The pitch is that increasing labor force participation would have desirable economic impacts. We’re supposed to believe that there’s some large cadre of able-bodied potential workers loafing around and that we can mobilize them via work requirements, creating a win-win situation that actually raises their income while generating non-inflationary growth. The reality is that this makes very little logical sense, seems empirically false based on the available evidence, and unless you implement it in an unreasonably cruel way, the relevant pool of people is tiny. What you end up with instead is a proposition that’s more like “if you tell 100 people they need to jump through some hoops to keep their health insurance, only 99 of them will actually do the paperwork right, so you can save some money on the one guy.”

That is almost certainly true, but it seems like an awfully shitty way to treat the other 99 people.

It’s just very hard for me to picture a person who, if he wanted to, could get a job that paid him money but chooses not to because he enjoys leisure time and gets Medicaid anyway.

After all, while Medicaid has genuine value to its recipients, it’s not super fun or anything. There are probably some borderline cases around disability where if you kicked someone off benefits they would, in fact, be able to come up with some work. And there are certainly Medicare/Medicaid dual eligibles who might be unable to afford retirement if you cut them off from their Medicaid benefits. But all work requirement proposals that I’m aware of exempt the elderly and disabled because their proponents (to their credit, I suppose) are trying not to be cartoonishly evil.

But that’s the problem with trying to use health benefits as a supply-side intervention in the labor market: the situations in which it could plausibly work are precisely the situations in which it’s most heartless and cruel.

This is not to deny that there are aspects of our social support system that have bad supply-side effects. For example, you’d probably see more teenagers working part-time jobs if not for the fact that money saved up as a high school student reduces the financial aid that student can get if they enroll in college. That strikes me as an undesirable property of the system, but to avoid it, you need to spend more money. In general, benefit phase-out cliffs seem like a serious issue to me — they deter upward mobility and probably marriage, too. The whole thing is worth taking a much harder look at. But the solution there is to make benefits more universal and have higher, flatter taxes to pay for them. The politics of that are ugly, but the supply-side benefits would be real!

We got a good empirical experiment on imposing a work requirement on non-elderly, non-disabled people in Arkansas back in 2018 and saw that “work requirements did not increase employment over eighteen months of follow-up.”

It just doesn’t make sense — nobody is sitting around living high on the hog, enjoying their free health insurance. People work because it pays money, and money is useful.

But a bunch of people did lose their health insurance.

A dream-scenario work requirement wouldn’t really save any money because everyone would just go get a job, which is good for the economy and ultimately probably good for most of the newly employed people.

The actual experience in Arkansas was the opposite, though — nobody got a job as a result of the work requirement, but thousands of people did lose coverage due to non-compliance. And to cite the same study as before, they ended up in pretty serious trouble. Fifty percent of Arkansans who lost coverage reported problems with medical debt, 56% said they delayed care because of cost, and 64% delayed taking their medications. That all sounds pretty bad to me. But unfortunately, you can’t get away with saying your political opponents are crazy and irrational all the time — if all you really care about is cutting spending, this shows that work requirements do, in fact, cut spending.

And it’s a pretty neat political trick.

Because as we all know, “cut spending” polls very well, but individual proposed spending cuts tend to poll very poorly. Work requirements, though, do poll well — I think because most people don’t like the idea of an able-bodied non-elderly person living on the public dole. Which is fair enough, I suppose, though again: if you’re imagining some horde of people getting off their asses in response to a work requirement and getting a job, you’re dreaming. What’s going to happen is some people with serious barriers to employment that would maybe qualify them for an exemption aren’t going to understand how to get the exemption and will lose their Medicaid. So Medicaid spending will be cut without anyone needing to stand up and say “what I’m saying is I want poor people to lose their health insurance so if they get sick they go broke and then get treated at public expense anyway in the emergency room.”

In my opinion, trying to cut federal spending by cutting Medicaid is really dumb.

Medicaid, for starters, is a pretty stingy program. It compensates providers at a lower rate than Medicare and at a much lower rate than private insurance. For that reason, the quality of the user experience on Medicaid is a lot worse than on other forms of insurance. But Medicaid recipients generally say they are happy with their coverage. And it fulfills the core functions of health insurance coverage: safeguarding people against financial calamity, guaranteeing access to preventative care and useful medication, and regularizing hospital compensation for meeting indigent people’s urgent needs.

Giving unemployed people access to Medicaid helps them out and does not slow the rate at which they secure new jobs. Medicaid reduces recidivism among ex-prisoners. By financing drug treatment programs, Medicaid leads to fewer assaults, thefts, and robberies. When people lose access to Medicaid, they get cut off from mental health services and crime goes up.

When kids get Medicaid, they earn higher wages and pay more taxes over the course of their lifetime.

Does it really make sense to give up on those benefits because some of those kids’ parents may not have an adequately documented work history or may not have put together the paperwork documenting a valid exemption? Do we expect the people who need mental health care and drug treatment to also be experts on keeping up with the shifting tides of Medicaid eligibility?

I think this is a weird road to go down in pursuit of fiscal savings.

It’s especially nutty because we know Republicans are fighting to increase the deficit by defunding the tax police, and they’re working to lay the groundwork for a very expensive and regressive extension of the Trump tax cuts starting in 2025. Meaning these budget cuts aren’t ultimately about reducing the deficit at all, they’re about increasing the fiscal headroom available for tax cuts for rich people.

So much of this debt ceiling standoff feels like a farcical replay of the one we saw during Barack Obama’s administration.

Chris Hayes looked at how this is playing out and said “Republicans understand better than anyone that running fiscal deficits boosts growth and austerity hurts growth. They have a completely consistent record on this: they favor large deficits under Republican presidents and huge cuts and austerity under Democratic presidents.”

But I do think it’s worth saying that the situation has changed in important ways, even though the GOP position has not. Back when Obama was president, interest rates were pinned to the zero lower bound, inflation was quiescent, and unemployment was high. Under those circumstances, austerity really was hurting the labor market and economic growth. Republicans were trying to force Obama to cut spending and Obama was trying to force Republicans to raise taxes. Obama’s hopes of achieving this via a grand bargain came to naught, but taxes did rise and spending did fall, so in some sense everyone got what they wanted. The problem is that spending cuts and tax hikes were inappropriate for the economic situation and meant we faced a years-long crawl back to full employment.

Things are different now! The unemployment rate is very low. Inflation seems to have been contained, but it’s still above the target. Interest rates are going up. A little austerity would be a pretty good idea.

Does that mean you should try to balance the budget on the backs of the poor? Or poke holes in the most threadbare parts of the safety net? Pick on the weak? Cut the cheapest health program around? No, that’s nuts. These are the times the Obama-style grand bargain was built for. If you want to cut spending in a way that minimizes harm to people, you need to look at Medicare reimbursement rates and you need to look at Social Security benefits for folks in the top half of the income spectrum. And if you’re going to touch the third rail like that, you need a balanced approach that looks at revenue options, too — ideally closing loopholes and curbing tax expenditures so that you’re draining demand out of the economy while preserving incentives to work and invest.

It drove me crazy back in 2012 that everyone who’s anyone in the American elite was talking about these ideas when they were macroeconomically inappropriate, and it’s driving me doubly crazy that so few people are talking about them now that the time is actually right.

Wednesday, April 19, 2023

The strange death of education reform Part IV


www.slowboring.com
The strange death of education reform Part IV
Matthew Yglesias
12 - 15 minutes

I guest lectured at a 9 a.m. University of Chicago class recently, and before it started I joked with the professor that the students must hate him. He said he actually liked to schedule the class early because it let him leverage selection effects to ensure he was teaching motivated students.

He was sort of joking, but I think this gets at a fundamental truth of education at all levels, which is that selection effects and peer effects are very important.

Students like to have good peers, and they like to have good teachers. But teachers also like to have good students. When people say they’re moving somewhere “for the good schools,” they don’t usually mean that they have assessed the local school system with a state-of-the-art model that compares student learning to expected learning based on the demographics of the community. They mean something like “this school is full of kids whose parents had the means and inclination to pay a premium to live in the good school district.” But critically, they don’t just mean that — since most teachers prefer to teach in a school full of motivated kids and supportive families, it’s genuinely true that it will be easier for that kind of school to fill vacancies and retain staff. The selection effects build on top of each other. And of course as the school gets better, the real estate gets pricier, which further screens out families with fewer resources and less motivation.

The significance of selection effects in education is fundamentally why I describe the current conservative push for unregulated school choice as the abandonment of education reform rather than an escalation.

Words are just words, and everyone is entitled to call whatever policy change they are pushing for “reform.” But there was an education reform movement that was motivated by a particular set of ideas and aspirations exemplified by the slogan No Child Left Behind and a focus on the achievement gap. Education reform defined the problem to be solved as the low end of the American educational system — the concern that students were being “left behind” by a system that overemphasized localism. What conservatives call “universal” school choice or ESAs, and what I’m calling unregulated choice, is not an alternative theory of how to achieve that goal; it’s a theory that it was a bad goal and we should just stop worrying about kids being left behind.

And while I think this is a bad idea, I also think it’s an idea that the left is going to have a hard time combating unless it re-engages with those reform goals. A funny thing about Freddie deBoer’s strain of hardcore education skepticism is that he perceives himself (and is often perceived by others) as an ally of America’s unionized public school teachers against the depredations of neoliberal reformers. But if his claim that schools don’t really matter is true, then the conservative solution of dismantling the whole public education system makes a lot more sense than maintaining it as a make-work jobs program.

As I explained in my post on charter schools, charters are not publicly managed, but they are subject to a lot of public accountability.

They have to be granted a charter in order to open, and their charter can be revoked by the authorizing agency if the school is deemed wanting. Actual practice varies a lot from state to state, but every state has some regulations in place. And critically, the charters are never allowed to explicitly engage in the kind of student selection that is a driving force in both public and private schools. The fact that charters aren’t allowed to pick their students is so centrally important to the nature of the charter school that a big issue in the charter debate is the allegation that these schools implicitly engage in selective admission, either by expelling or counseling out low-performers or through other means. It seems likely to me that some of that probably is happening. In D.C., for example, the public middle schools start in sixth grade, but the charter middle schools often start in fifth grade. If you want your child to attend a charter middle school, you need to pull them out of public elementary school one year early. That essentially guarantees that the families who enter the middle school charter lottery are better-informed, more organized, and more committed to education than the average DCPS family.

This is an annoying quirk of the D.C. system, and I think the Council or the charter board should make them stop. That said, all things considered, the selection effects in the charter system seem less egregious than those in the public schools (access to which is auctioned via the real estate market) and are obviously less so than in the private schools, which explicitly decide which students to let in.

By contrast, here’s how universal school choice works in Arizona:

    What is an ESA? An Empowerment Scholarship Account (ESA) is an account administered by the Arizona Department of Education (ADE) and funded by state tax dollars to provide education options for qualified Arizona students.

    An ESA consists of 90% of the state funding that would have otherwise been allocated to the school district or charter school for the qualified student (does not include federal or local funding). By accepting an ESA, the student's parent or guardian is signing a contract agreeing to provide an education that includes at least the following subjects: reading, grammar, mathematics, social studies and science. ESA funding can be used to pay private school tuition, for curriculum, home education, tutoring and more.

It’s basically money with almost no strings attached — you just need to spend it on something broadly education-related, including private school tuition.

And while this is, technically, a reform of education policy, I see it as the abandonment of the goals of education reform. We already have a system of unconstrained choice in place for higher education and know exactly how it works. Colleges compete with each other to attract the best applicants. When people say Princeton is a good college, they don’t mean Princeton students demonstrate large learning gains based on a value-added model. And they don’t mean that randomized controlled trials show that kids who attend Princeton do better in life than identical students who don’t attend Princeton. Princeton being a good college means that Princeton’s incoming first-year students have high SAT scores and perform well on other academic measures.

It’s a system that not only leaves some children behind but also leaves them behind deliberately, as a matter of policy design.

If you fully privatized the education system in a city like D.C. that already has a lot of de facto selection due to the real estate market, you would end up with an even more aggressive sorting system. The kids who are most advantaged by the overlapping factors of parental income, culturally-transmitted commitment to education, and genetic aptitude would all cluster in schools that refuse to even try to educate any of the harder cases. Meanwhile, the kids with the most disorganized parents and highest needs would languish in schools that find it practically impossible to recruit teachers with any options at all. It seems like a horrifying vision to me, though it’s perhaps compelling to some. Either way, though, it’s the opposite of the Bush/Obama goal of reconfiguring the school system to better serve the neediest families.

Note that under the current Arizona rules, you can only use state money, not local or federal education funding.

That means for a typical parent, an ESA isn’t nearly enough to actually pay for private school tuition. So the real impact of the system isn’t so much to afford more kids the opportunity to go to private school as it is to generate a small financial windfall for people who are homeschooling or sending their kids to private school anyway.
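To see why the math doesn’t work for most families, here is a rough, purely illustrative sketch in Python. The dollar figures are hypothetical placeholders, not actual Arizona funding or tuition amounts; the only rule taken from the program description above is that the ESA equals 90% of the state share of per-pupil funding.

    # Hypothetical figures for illustration only -- not actual Arizona dollar amounts.
    state_per_pupil_funding = 8_000    # assumed state allocation for a qualified student
    private_school_tuition = 12_000    # assumed annual private school tuition

    esa_amount = 0.90 * state_per_pupil_funding   # per the quoted rule: 90% of the state share only
    out_of_pocket = private_school_tuition - esa_amount

    print(f"ESA award:                 ${esa_amount:,.0f}")
    print(f"Private tuition:           ${private_school_tuition:,.0f}")
    print(f"Family pays out of pocket: ${out_of_pocket:,.0f}")

Under those assumed numbers the ESA covers only about 60% of tuition, which is why the windfall mostly flows to families who could already afford private school or who homeschool.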

Sometimes people push bad policy ideas with an appeal that just baffles me, but in this case, I think the appeal to conservatives is very clear and rational. The right likes the idea of regressive tax cuts, and the right is very into helping religious people. And an Arizona-style ESA system drains a modest amount of money out of traditional public schools and cuts checks to a group of people who are mostly rich and mostly religious. It’s a very straightforward form of interest group politics that has nothing to do with trying to improve educational performance. When a state like Massachusetts with a high-performing charter sector refuses to allow expansion of even the best-performing charter networks, that’s pure interest group politics at work. And the current ESA push is the same thing. In equilibrium, the private schools probably just end up charging slightly higher tuition anyway.

Market mechanisms are really good at meeting consumer demands, but the issue in education is that the main thing consumers demand is to get their kid into a school with other smart kids. So in a heavily marketized sector like American higher education, you get a lot of sorting and see schools investing in non-educational programming to try to attract applicants. Charter school performance is incredibly variable from state to state because in the states that regulate them laxly, quality winds up being bad. And in states that have existing private school voucher programs, the results, similarly, are bad:

    In the last few years, a spate of studies have shown that voucher programs in Indiana, Louisiana, Ohio, and Washington D.C. hurt student achievement — often causing moderate to large declines.

    Advocates have pushed back, saying the programs were new and results might improve over time. In three of the four places, that hasn’t happened, at least in math.

    “While the early research was somewhat mixed … it is striking how consistent these recent results are,” said Joe Waddington, a University of Kentucky professor who has studied Indiana’s voucher program. “We’ve started to see persistent negative effects of receiving a voucher on student math achievement.”

This is a challenging topic to discuss because, in the current climate, nobody wants to be paternalistic about anything. But we see across the board that the sovereign consumer does not deliver on the promise of a school system that generates a high average level of instructional quality.

Of course, a “Strange Death of Education Reform” column wouldn’t be complete without a little punching left.

And if the problem with privatizing schools is you end up with too much selection and cream-skimming, a big problem with the current fads in left-wing thinking about education is they totally ignore the needs and sentiments of normal people. Denver had been an education reform city for a long time, but the union-aligned progressive faction took control of the school board and, post-Floyd, implemented a bunch of changes to school discipline that were supposed to serve equity needs. They took police officers out of schools, which left civilian administrators in charge of doing a daily “pat down” of a student with a very troubled record, right up until the point where he shot two of them. You had the school board saying that attempted murder isn’t a good reason to suspend someone from school as long as the shooting happened off school grounds. And more broadly, you had principals saying that they are being pushed to accommodate egregious discipline cases in ways that compromise school safety.

I don’t know that I have the full answer to exactly where the safety/inclusion line needs to be drawn, but the general point here is that while the strength of public schools as a concept is to serve high-need families, for the concept to work, you need to make public schools that people want to attend.

That means an environment that is safe for students, and it also means that you need to think about how your discipline policies impact your ability to recruit and retain good teachers and administrators. If you sacrifice everything on the altar of maximum inclusiveness, middle-class families will flee the school district and/or demand privatization initiatives. Inclusiveness is a good goal, but the conceptual anchor needs to be high-quality public services that most people want to use — then you can include people in that.