Thursday, May 18, 2023

I'm skeptical that powerful AI will solve major human problems

By Matthew Yglesias

Unless it takes over the world — which I would prefer to avoid


It’s pretty easy to see why Los Angeles has spawned the most vicious NIMBYism in California state and local government — the traffic jam situation is really dire.


The most prominent criticism that “One Billion Americans” received was Bill Maher saying my ideas would make traffic jams worse, and I don’t think it’s a coincidence that he’s based in Los Angeles. This is to say that not only are the traffic jams a significant problem in the sense that it’s annoying to be stuck in traffic, but they are the source of incredible downstream public policy problems. Why does LA have the worst homelessness problem in America? It’s the NIMBYism, which is downstream of the traffic jams. Why can’t patriotic centrists get behind an agenda of expanded national immigration and economic growth? Traffic jams.


Sam Altman, the CEO of OpenAI, remarked the other day that in his opinion, the only possible solution to America’s long-term fiscal challenges is to unleash more software development.




This analysis suffers from a number of problems, but I think the biggest one goes back to the traffic jams. After all, it’s unambiguously true that we could shrink the debt:GDP ratio if we allowed more housebuilding and more skilled immigration. But the traffic jams would get worse. Do we need a technological breakthrough to fix this? Will AI help?


I doubt it. Not because AI couldn’t help (it clearly could), but because there are well-known existing technological solutions that aren’t being implemented because the politics is difficult.


And that’s true across many domains of American life — the biggest roadblock isn’t a lack of technology; it’s a lack of implementation and follow-through. So more and more of our innovative energy has been poured into the narrow zone of software development, to the point where people have started using “tech” to mean “computer stuff,” even though flying to the moon, nuclear fission, and many other landmark technological breakthroughs during the era of faster productivity growth had nothing to do with computer stuff.


We have the technology

In a big open area, I have no objection to the commonsense view that you should reduce congestion by building more highways. But the Los Angeles Basin is already very built-up, so there are too few opportunities for building and too many people around to solve the problem through that method. And, of course, if LA were to solve its housing problems through massive shifts in land use policy (which they should), there would be even more cars on the road.


I think it’s important to be clear-sighted about this.


The smart play would be to concentrate the upzoning around the area’s existing and planned Metro stations and in and near its many existing islands of locally walkable neighborhoods. The tax revenue spawned by more housebuilding would lay the groundwork for a more rapid buildout of Metro, and it’s entirely plausible that a YIMBYfied version of Los Angeles would have many fewer vehicle miles traveled per capita than the current version of the city. But in congestion terms, what matters is aggregate VMT, not VMT per capita, and it’s hard to avoid the conclusion that Maher is right: many more people would mean more cars on the road and worse traffic jams.
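The aggregate-versus-per-capita point is easy to check with a toy calculation (the numbers below are invented for illustration, not projections for Los Angeles): even if per-person driving falls meaningfully, enough population growth still pushes total VMT, and therefore congestion, upward.

current_population = 10_000_000        # invented baseline population
current_vmt_per_capita = 20.0          # invented daily vehicle miles per person

population_growth = 1.30               # suppose a YIMBYfied LA adds 30% more people
per_capita_change = 0.85               # while per-person driving falls 15%

aggregate_before = current_population * current_vmt_per_capita
aggregate_after = (current_population * population_growth) * (current_vmt_per_capita * per_capita_change)

print(f"Aggregate VMT change: {aggregate_after / aggregate_before - 1:+.1%}")   # roughly +10.5%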


The good news is that a well-understood solution exists in the form of congestion pricing: charging people for using the roads at crowded times reduces crowding.


The downside to charging for access to the roads, obviously, is that people end up with less money. On the other hand, LA residents are paying a combined retail sales tax of 9.5%, which is really high. A charge high enough to decongest LA’s freeways would generate a lot of money, especially if the population grew, and that money could be used to cut the sales tax dramatically. This would leave people paying the same amount of tax on average as they pay now, except in the new version of the city, the people with above-average tax bills (people who drive more than the average Angeleno) would also reap large benefits in the form of fewer traffic jams. Then, having solved the traffic problem, you could also solve the housing problem. Repeating this in congested or expensive metro areas across the country would be a boon to economic growth.
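To make the revenue-swap arithmetic concrete, here is a back-of-the-envelope Python sketch; every figure in it (the sales base, trip counts, average charge) is an invented placeholder rather than an estimate of actual Los Angeles numbers.

# Toy calculation of how large a sales tax cut a congestion charge could finance.
taxable_retail_sales = 200e9        # assumed annual taxable sales base, in dollars
current_sales_tax_rate = 0.095      # the 9.5% combined rate mentioned above

daily_tolled_trips = 5e6            # assumed charged trips per charging day
average_charge_per_trip = 4.00      # assumed average congestion charge, in dollars
charging_days_per_year = 250

congestion_revenue = daily_tolled_trips * average_charge_per_trip * charging_days_per_year
offsetting_rate_cut = congestion_revenue / taxable_retail_sales

print(f"Congestion revenue: ${congestion_revenue / 1e9:.1f}B per year")
print(f"Revenue-neutral sales tax rate: {current_sales_tax_rate - offsetting_rate_cut:.1%} instead of {current_sales_tax_rate:.1%}")

With these made-up inputs, the charge raises about $5 billion a year, enough to take the sales tax from 9.5% down to roughly 7% without changing the average tax bill.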


Is the real solution AI? In a sense, yes.


If you had a congestion pricing system up and running, then I bet an intelligent agent that monitored citywide traffic conditions and used data to continuously refine the pricing algorithm could really improve it. And of course self-driving cars could improve a lot of aspects of the transportation system as long as they don’t just lead to a massive increase in traffic jams, which is what will happen if we unleash them on crowded cities with no congestion pricing.
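As a rough illustration of what continuously refining the prices might look like, here is a minimal feedback-rule sketch that raises a toll when observed speeds fall below a free-flow target and eases it when traffic moves freely; the target, adjustment rate, caps, and speed readings are all made-up assumptions, not a description of any real system.

# Minimal toll-adjustment feedback loop; every parameter is an illustrative assumption.
TARGET_SPEED_MPH = 45.0   # assumed free-flow threshold for the corridor
ADJUST_RATE = 0.10        # how strongly the toll responds at each update
MIN_TOLL, MAX_TOLL = 0.50, 15.00

def update_toll(current_toll: float, observed_speed_mph: float) -> float:
    """Nudge the toll up when the road is slower than target, down when it is faster."""
    error = (TARGET_SPEED_MPH - observed_speed_mph) / TARGET_SPEED_MPH
    new_toll = current_toll * (1 + ADJUST_RATE * error)
    return min(max(new_toll, MIN_TOLL), MAX_TOLL)

toll = 2.00
for speed in [28, 31, 39, 47, 52]:   # invented speed readings over successive intervals
    toll = update_toll(toll, speed)
    print(f"observed {speed} mph -> toll ${toll:.2f}")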


But in another sense, no, AI can’t help us here. When I asked GPT-4 to write a plan to reduce freeway congestion and generate revenue for Los Angeles County, it correctly suggested congestion pricing.



But everyone who’s looked at this already knows that; we don’t need AI to tell us. We need to do it, which is the hard part.


Solving political problems is hard and important

There was a fad a few years back of clowning tech types who were incredibly invested in solving trivial problems (it’s an app that picks up your dry cleaning!) or hyping useless things (crypto).


I broadly agree with that line of criticism. On the other hand, it was often offered by leftist hater types who seemed a bit weirdly unaware of why so much entrepreneurial energy was being poured into trivial things. If you try to do something more significant, like design a new generation of nuclear reactors that can be mass-produced in a way that will hopefully lower construction costs, you run into a buzzsaw of regulatory opposition. Congress passed a law a few years back directing the NRC to change this and create a clear pathway for licensing microreactors, but the NRC staff responded by creating a new pathway that’s even harder to clear. That reflects bureaucratic conservatism, but also, I think, a sense on the part of the politically appointed NRC commissioners that despite Congress’ actions, few members on the Hill really want to push through the relevant barriers.


Public opinion on nuclear is narrowly supportive, and I would say the political situation reflects that. Both the Biden administration and GOP leaders have taken pro-nuclear public positions, made funding available for nuclear research, helped keep existing plants open, and are generally favorably disposed to nuclear power — but not to rocking the regulatory boat in a way that would actually facilitate a substantial expansion of nuclear power.


Back to Altman’s debt:GDP chart.


Economic growth would be a big help here. We also could use some good old-fashioned fiscal austerity. The problem, again, is that politics is hard.


Not hard in the sense that nobody is smart enough to draw up a plan that raises taxes and cuts spending. The problem isn’t really even that the oxen who’d be gored in such a plan are too politically powerful. The problem is that the precise allocation of the pain makes a big difference, people care a lot about fairness, and no one wants to be a sucker. When the Trump administration set out to enact regressive tax cuts in 2017, the scale of the cuts was limited by moderate Republicans’ concern about the deficit. If the deficit Trump inherited from Obama had been smaller, he would have passed a larger regressive tax cut. If the deficit Trump inherited from Obama had been bigger, the cut would’ve been smaller. That means the considerable deficit reduction that was enacted from 2010 to 2015 simply went to fuel the Trump tax cuts — a consequence that was both predictable and predicted (by me, among others) at the time.


The same is true across the regulatory space.


A carbon tax would make more sense than efforts at “supply side” regulation of the fossil fuel industry. But how do you actually accomplish that politically? I can sketch out what the law would look like, but how do you make the bargain stick? When you talk about economic growth, I think you find that there are huge barriers to accomplishing anything outside of the narrow space of media/software intangibles. But whether we’re talking about the budget or freeway congestion or natural gas pipelines, the reality is that political problems require political solutions, not just technological ones.


There’s a tendency in the AI hype community to posit AI as the solution to stagnation. I’ve read so many non-specific accounts of how AI is going to help us cure all kinds of diseases and do other miraculous things. And I’m sure that applying more computer power can be helpful here. But the field of medicine is hobbled by problems like almost no medications being in the top pregnancy safety tier, including the most commonly used medication for morning sickness. Why is that? Well, it’s because to be classified as Category A, a drug has to have undergone clinical trials in pregnant women, and it’s basically impossible under the current canons of medical ethics to organize such a clinical trial. The pregnancy thing sticks out to me because it seems so dumb. People will (hopefully) continue to have children for a long time, so improving our capacity to provide useful medicine and peace of mind during pregnancy has considerable value over the long horizon. But the “ethical” issues here mean that the research isn’t just slow or difficult — it will never happen. This is one of several areas where I wish I could force the bioethics community to take some remedial philosophy classes because I genuinely don’t understand what framework licenses this indifference to progress.


Maybe in the future we won’t need clinical trials at all because computer simulations will be so convincing? But that would still require policy change. And my whole point here is that just because you can identify a beneficial policy change, that doesn’t mean it will happen.


Loss of control

I oftentimes find it difficult to distinguish between the AI hypemen and the AI doomsayers, in part because of their commonly shared assumption that AI systems will soon become incredibly powerful. A good example of the conjunction of the two is the work of Jeffrey Ladish, who is both very concerned that powerful AIs may cause human extinction and also looking forward to a potential AI-powered utopia wherein, among other things, artificial intelligence will solve problems that cannot be solved by material abundance alone:


But some problems won’t be solved with unlimited health and material abundance. Some people don’t have any friends, and material resources wouldn’t solve that. I have some guesses at solutions here, which I’ll list below, but the meta-solution is that we will have AI systems far wiser and smarter than us, and they can help us generate, test, and implement solutions. 


I expect one of the most significant of these solutions will be an unlimited number of AI therapists. Currently, therapists can sometimes be really effective, but they often aren’t. There’s a huge variance in the skill of therapists. I expect that once we have an abundance of superhuman-level therapists, a huge chunk of social problems will go away. People struggling will always have someone sympathetic to talk to. They’ll always have someone to help them learn the skills that will help them be friends. In the worst case, even if someone can’t find any friends, there will be AI systems to keep them company. It might sound weird, but I don’t think it’s much different than keeping a dog or cat for companionship, except that it might be closer to fulfilling human connection than those animals can provide. In a good world, no one will have to be alone. 


In addition to the task of directly coaching people on their social problems, AI systems will be able to help matchmake people, both for friendship and romance. It can be hard to find someone to date. It can be hard to find new friends. Imagine if there was someone who deeply understood everyone and could connect people who were likely to do well together. I think that’ll solve another huge chunk of social problems. In a good world, people will have help finding people who fit well with them.


And further:


It feels a little embarrassing to admit, but the thought of an AI matchmaker who really gets me, helping me find the love of my life… well, it’s a whole trope but honestly I really want that. I still want agency in the matter - I don’t want a completely arranged match - but I can imagine going to a salon or party where a superintelligent system invites people based on mutual compatibility. That sounds amazing.


Your mileage may vary as to whether you find this to be an optimistic alternative to AI ruin or a quasi-dystopian portrait of humans being relegated to a status similar to that enjoyed by residents of the world’s better zoos and aquariums.


Either way, Ladish does not apply this style of reasoning to the problem of material abundance itself. I think the brute fact is that no matter how useful AI may be to the scientists and engineers of the future, creating a world of material plenty requires political solutions, not just engineering ones. If elevators didn’t exist, people would say that inventing elevators could solve housing scarcity. In the real world, we know that multifamily housing is simply illegal on the vast majority of America’s developed land and that even where it is legal, its scale and quantity tend to be sharply limited.


This means the skillful AI matchmaker/therapist who becomes your indispensable guide to making life’s most intimate and significant decisions would arise significantly before the end of material scarcity. But such a wise and trusted advisor could also help you improve your political opinions and solve these problems. More broadly, once we put powerful artificial intelligence to work on trying to solve our housing, traffic, immigration, budgetary, energy, or biomedical problems, those systems will swiftly reach the same conclusion that many philanthropists ultimately reach — the most powerful lever for solving social problems is manipulating the political system. So what happens then? Rival AIs combat each other for political influence? AIs coordinate and solve all our political problems just as they solve all our romantic problems and humans are totally disempowered? Power-hungry AIs decide to stop wasting the world’s energy resources on maintaining its superfluous human population?


Between hype and ruin

I was, at best, the fourth-best student in my not-very-large high school physics class, so I can’t really lay out in any detail what I think superintelligent agents charged with solving major problems would do.


But what I can tell you is that domain specialists often underrate the extent to which every major problem ultimately traces back to the same nexus of issues related to selfishness, short-sightedness, endemic lack of trust in institutions, the difficulty of making credible bargains, systemic principal-agent problems, status quo bias, and the broad difficulty of making positive-sum policy changes. Now there are probably some possibilities for positive-sum policy change that only a genius-level human or some kind of super-human intelligence could perceive. But I’m quite sure that there are tons of opportunities for positive-sum policy change that normal, moderately informed people are actually very familiar with where the practical obstacle is the politics itself. And while solving all those problems in order to unleash massive prosperity sounds great to me, doing it with artificial intelligence sounds less like an alternative to AI ruin than yet another path that converges on the disempowerment of humanity.


Perhaps we will welcome our new AI overlords and perhaps they will treat us kindly. But either way, my current preference is to put many fewer eggs in the “further advances in Computer Stuff will save us” basket and many more eggs in the “we need to directly address public policy problems that relate to macroscopic objects in physical space” basket.



Reduced optimism about the upside of more and more computer stuff should, I think, help open people’s hearts to the argument for a more cautious approach to the runaway development of AI. Everything in life is some mixture of risk versus reward, but in areas where I think policy is too risk-averse (clinical trials, nuclear fission, airline pilot training), the downside is well-understood and bounded in a way that the downsides to artificial intelligence just aren’t. Frustration with the slow pace of change in these areas is understandable, and I can see why smart, ambitious people who are impatient for economic growth are so attracted to the software domain and its permissionless innovation. But if you think it through rationally, that’s a reason to redouble efforts to change policy around physical reality, not to plow ahead recklessly with software systems that could only solve major problems by finding ways to manipulate and control human society.

