Wednesday, November 29, 2023
Major brands are not only pausing ads on Elon Musk’s X. They’re stepping away from the platform altogether. By Oliver Darcy
Tuesday, November 28, 2023
AI Doomers are worse than wrong - they're incompetent. By Jeremiah Johnson
— Read time: 11 minutes
JEREMIAH JOHNSON
NOV 24, 2023
AI Doomers are worse than wrong - they're incompetent
Even judged on their own terms, AI Doomers are terrible and ineffective
Last week one of the most important tech companies in the world nearly self-destructed. And the entire thing was caused by the wild incompetence of a small slice of ‘effective altruists’.
Other sites have reported the exact series of events in greater detail, so I’m going to just run through the basics. OpenAI is an oddly structured AI company/non-profit that’s famous for its large language models like GPT-4 and ChatGPT as well as image creation tools like DALL-E. Thanks mostly to the sensational debut of ChatGPT, it’s now valued at around $80 billion, and many observers think it could break into the Microsoft/Google/Apple/Amazon/Meta tier of tech giants. But last week, with essentially no warning of any kind, OpenAI’s board of directors fired co-founder and CEO Sam Altman. The board said Altman was not “consistently candid in his communications” with the board, without elaborating further.
The backlash to the board’s decision was nearly immediate. Altman is extraordinarily popular at OpenAI and in Silicon Valley writ large, and that popularity proved durable against the board’s vague accusations. President and chairman Greg Brockman resigned in protest. Giant institutional investors in OpenAI (including Microsoft, Sequoia Capital, and Thrive Capital) began to press behind the scenes for the decision to be reversed. Less than 24 hours after his firing, Altman was in negotiations with the board to return to the company. More than 90% of the company’s workforce threatened to resign if Altman wasn’t reinstated. Microsoft basically threatened to hire Altman, steal all of OpenAI’s employees, and just recreate the entire company themselves.
There were several embarrassing twists and turns. Altman was back but then he wasn’t, then the board tried a desperation merger with rival Anthropic, which was turned down immediately, and the entire time the OpenAI office was leaking rumors like a sieve. Finally, on November 21st, four days after Altman was fired, he was reinstated as CEO and the board members who voted to oust him were replaced. In trying to fire Altman, the board ended up firing themselves.
There are dozens of angles you can take to talk about this story, but the most interesting one for me is how this epitomizes the buffoonery and tactical incompetence of the AI doom movement.
AI-generated image, prompt: “AI fires a CEO, office setting”
It’s unclear exactly why the OpenAI board decided to fire Altman. They’ve specifically denied it was due to any ‘malfeasance’ and at no point has anyone on the board provided any detail about the supposed lack of ‘candid communications’. Some speculate it’s because of a staff letter warning about a ‘powerful discovery that could threaten humanity’. Some think it stemmed from a dispute Altman had with Helen Toner, one of the board members who voted to oust him. Some think that it’s a disagreement about moving too fast in ways that endanger safety.
Whatever the precise nature of the disagreement, one thing is clear: there were two camps within OpenAI - one group of AI doomers laser-focused on AI safety, and one group more focused on commercializing OpenAI’s products. The conflict was between these two camps, with the board members who voted Altman out in the AI doom camp and Altman himself in the more commercial camp. And you can’t understand what happened at OpenAI without understanding the group that believes AI will destroy humanity as we know it.
I am not an AI doomer. I think the idea that AI is going to kill us all is deeply silly, thoroughly non-rigorous and the product of far too much navel gazing and sci-fi storytelling. But there are plenty of people who do believe that AI either will or might kill all of humanity, and they take this idea very seriously. They don’t just think “AI could take our jobs” or “AI could accidentally cause a big disaster” or “AI will be bad for the environment/capitalism/copyright/etc”. They think that AI is advancing so fast that pretty soon we’re going to create a godlike artificial intelligence which will really, truly kill every single human on the planet in service of some inscrutable AI goal. These folks exist. Oftentimes they’re actually very smart, nice and well-meaning people. They have a significant amount of institutional power in the non-profit and effective altruism worlds. They have sucked up hundreds of millions of dollars of funding for their many institutes and centers studying the problem. They would likely call themselves something like ‘AI Safety Advocates’. A less flattering and more accurate name would be ‘AI Doomers’. Everybody wants AI to be safe, but only one group thinks we’re literally all going to die.
I disagree with the ‘AI Doom’ hypothesis. But what’s remarkable is that even if you grant their premise, for all their influence and institutes and piles of money and effort, they have essentially no accomplishments. If anything, the AI doom movement has made things worse by their own standards. It’s one of the least effective, most tactically inept social movements I’ve ever seen.
How do you measure something like that? By looking at the evidence in front of your face. OpenAI’s strange institutional setup (a non-profit controlling an $80B for-profit corporation) is a direct result of AI doom fears. Just in case OpenAI-the-business made an AI that was too advanced, just in case they were tempted by profit to push safety to the side… the non-profit’s board would be able to step in and stop it. On the surface, that’s almost certainly what happened with Sam Altman’s firing. The board members who agreed to fire him all have extensive ties to the effective altruism and AI doom camps. The board was likely uncomfortable with the runaway success of OpenAI’s LLM models and wanted to slow down the pace of development, while Altman was publicly pushing to go faster and dream bigger.
The problem with the board’s approach is that they failed. They failed catastrophically. I cannot emphasize in strong enough terms how much of a public humiliation this is for the AI doom camp. One week ago, true-believer AI safety/AI doom advocates had formal control of the most important, advanced and influential AI company in the world. Now they’re all gone. They completely neutered all their institutional power with an idiotic strategic blunder.
The board fired Altman seemingly without a single thought about what would happen after they fired him. I’m curious what they actually thought was going to happen - they would fire Altman and all the investors in the for-profit corporation would just say “Oh, I guess we should just not develop this revolutionary technology we paid billions for. You’re right, money doesn’t matter! This is a thing that we venture capitalists often say, haha!”.
It seems pretty damn clear that they had no game plan. They didn’t do even basic due diligence. If they had, they’d have realized that every institutional investor, more than 90% of their own employees, and virtually the entire tech industry would back Altman. They’d have realized that firing Altman would cause the company to self-destruct.
But maybe things were so bad and the AI was so dangerous that destroying the company was actually good! This is the view expressed by board member Helen Toner, who said that destroying the company could be consistent with the board’s mission. The problem with Helen Toner’s strategy is that while Helen Toner might have total control over OpenAI, she does not have total control over the rest of the tech industry. When the board fired Altman, he was scooped up by Microsoft within 48 hours. Within 72 hours, there was a standing offer of employment for any OpenAI employee to jump ship to Microsoft at equal pay. And the vast majority of their employees were on board with this. The end result of the board’s actions would be that OpenAI still existed, only it’d be called ‘MicrosoftAI’ instead. And there would be even fewer safeguards against dangerous AI - Microsoft is a company that laid off its entire AI ethics and safety team earlier this year. Not a single post-firing scenario here was actually good for the AI doomer camp. It’s hard to overstate what a parade of dumb-fuckery this was. Wile E. Coyote has had more success against the Road Runner than OpenAI’s board has had in slowing dangerous AI developments.
Sam Altman (left) watches the OpenAI board (right) attempt to oust him
This buffoonish incompetence is sadly typical for AI doomers. For all the worry, for all the effort that people put into thinking about AI doom there is a startling lack of any real achievements that make AI concretely safer. I’ve asked this question before - What value have you actually produced? - and usually I get pointed to some very sad stuff like ‘Here is a white paper we wrote called Functional Decision Theory: A New Theory of Instrumental Rationality’. And hey, papers like these don’t do anything, but what they lack in impact they make up for in volume! Or I’ll hear “We convinced this company to test their AI for dangerous scenarios before release”. If your greatest accomplishment is encouraging companies to test their own products in basic ways, you may want to consider whether you’ve actually done anything at all.
There’s a sense in which I’m being very unfair to AI doom advocates. They do actually have a huge string of accomplishments - the only problem is that they’re accomplishments in the exact opposite direction from their stated goals. If anything, they’ve made super-advanced AI happen faster. OpenAI was explicitly founded in the name of AI safety! Now OpenAI is leading the charge to develop cutting-edge AIs faster than anyone else, and they’re apparently so dangerous the CEO needed to be fired. AI enthusiasts will take this as a win, but it sure is curious that the world’s most advanced AI models are coming from an organization founded by people who think AI might kill everyone.
Or consider Anthropic. Anthropic was founded by ex-OpenAI employees who worried the company was not focused enough on safety. They decamped and founded their own rival firm that would truly, actually care about safety. They were true AI doom believers. And what impact did founding Anthropic have? OpenAI, late in 2022, became afraid that Anthropic was going to beat them to the punch with a chatbot. They quickly released a modified version of GPT-3.5 to the public under the name ‘ChatGPT’. Yes, Anthropic’s existence was the reason ChatGPT was published to the world. And Anthropic, paragons of safety and advocates of The Right Way To Develop AI, ended up partnering with Amazon, making them just as beholden to shareholders and corporate profits as any other tech startup. You will notice the pattern - every time AI doom advocates take major action, they seem to push AI further and faster.
This isn’t just my idle theorizing. Ask Sam Altman himself.
Eliezer Yudkowsky is both the world’s worst Harry Potter fanfiction writer and the most important figure in the AI doom movement, having sounded the alarm on dangerous AI for more than a decade. And Altman himself thinks Big Yud’s net impact has been to accelerate AGI (artificial general intelligence, aka smarter-than-human AI).
Even Yudkowsky himself, who founded the Machine Intelligence Research Institute to study how to develop AI safely, basically thinks all his efforts have been worthless. In an editorial for TIME, he said ‘We are not prepared’, and ‘There is no plan’. He advocated for a total worldwide shutdown of every single instance of AI development and AI research. He said that we should airstrike countries that develop AI, and that he would rather risk nuclear war than have AI developed anywhere on Earth. Leaving aside the lunacy of that suggestion, it’s a frank admission that AI doomers haven’t accomplished anything despite more than a decade of effort.
The upshot of all this is that the net impact of the AI safety/AI doom movement has been to make AI happen faster, not slower. They have no real achievements of any significance to their name. They write white papers, they found institutes, they take in money, but by their own standards they have accomplished worse than nothing. There are various cope justifications for these failures - maybe it would be even worse counterfactually! Maybe firing him and then hiring him back was actually logical by some crazy mental jiu-jitsu! Stop it. It’s embarrassing. The crowd that’s perfectly willing to speculate about the nature of godlike future AIs is congenitally unable to see the obvious thing directly in front of them.
There’s a real irony that AI doom is tightly interwoven with the ‘effective altruist’ world. To editorialize a bit: I consider myself somewhat of an effective altruist, but I got into the movement as someone who thinks stopping malaria deaths in Africa is a good idea because it’s so cost-effective. It pisses me off that AI doomers have ruined the label of effective altruist. Nothing AI doomers do has had the slightest amount of impact. As far as I can tell they haven’t benefited humanity in any real way, even by their own standards. They are the opposite of ‘effective’. At best they are a money and talent drain that directs funding and bright, well-meaning young people into pointless work. At worst they are active grifters.
C'est pire qu'un crime, c'est une faute (“It is worse than a crime, it is a mistake”)
- Charles Maurice de Talleyrand-Périgord
I really wish the AI safety/doom camp would stop and take stock of exactly what it is they think they’re accomplishing. They won’t, but I wish they would. I’d love to see them just separated from the EA movement entirely. I’d love for EA funders to stop throwing money at them. I’d love to see them admit that not only do they not accomplish anything with their hundreds of millions, they don’t even have a proper framework from which to measure their non-accomplishments. Their whole ecosystem is full of sound and fury, but not much else.
When Napoleon executed the Duke of Enghien in 1804, Talleyrand famously commented “It is worse than a crime, it is a mistake”. The AI doom movement is worse than wrong, it’s utterly incompetent. The firing of Sam Altman was only the latest example from a movement steeped in incompetence, labelled as ‘effective altruism’ but without the slightest evidence of effectiveness to back up the name.
Share this post! Or you too might end up as the CEO of OpenAI!
It’s Time to Name Anti-Palestinian Bigotry. By Peter Beinart
Monday, November 27, 2023
The Justice Scalia Mythology that Still Haunts our Politics and our Law. By Eric Segall
Friday, November 24, 2023
Your Decision To Cave To Internet Weirdos And/Or Your Youngest And Most Annoying Staffers Is Unlikely To Age Well. By Jesse Singal
— Read time: 6 minutes
JESSE SINGAL
NOV 23, 2023
PAID
Your Decision To Cave To Internet Weirdos And/Or Your Youngest And Most Annoying Staffers Is Unlikely To Age Well
Think about your legacy!
Last night I got drinks with a friend, let’s call him “Jon,” who is pretty well-connected in politics. He’s had a successful career working for a bunch of politicians, several of whom you have heard of. That’s why, in addition to being a very good guy, he is a fun person to get drinks with. These folks always have stories.
Jon and I both felt, circa 2016, that the whole “man, these college students sure are crazy” thing was overblown. Seven years or so later, there we were, catching up and shaking our heads at just how wrong we’d been. The craziness absolutely spread into liberal institutions and caused countless meltdowns, as documented most comprehensively by Ryan Grim in The Intercept. From 2016 or so on, Jon saw things get worse and worse in progressive politics, and I saw things get worse and worse in mainstream (that is, progressive) journalism, and the similarities... weren’t subtle.
My friend told me that some politicians (including, again, ones you’ve heard of) are now “extremely frustrated and sick of their mostly younger and more radical staffers feeling entitled to dictate the terms of their policy positions and then slam them on social media if they didn't capitulate,” as he summed things up in a follow-up text message. These politicians are being less and less shy about expressing their views about these staffers in various ways. This, he thinks, is a sign of a pendulum swinging back toward relative normalcy after a period of major fervor.
But in some places, the pendulum isn’t quite there yet.
Jon said there’s still a lot of dysfunction in progressive politics, and what frustrated him the most was the extent to which it stymies actual progress out there in the real world. I told him about Freddie deBoer’s “iron law of institutions and the left” idea — in progressive institutions, as in all institutions, people often act in a manner seeking to improve their standing within the organization, rather than in a manner conducive to achieving the organization’s stated goals. Jon certainly had some examples on that front.
I think there’s a basic, important insight here: however pathetic they might act during a moment of panic and recrimination, at the end of the day the leadership of major organizations got where they are because they would like to accomplish things, and because they are capable of playing politics, compromising, and so on. They probably have a limited reserve of patience for staffers demanding these norms be tossed aside in favor of some sort of ill-defined Twitter revolution. That’s particularly true when it comes to younger employees, who have not yet accomplished anything or proven their worth, and who are sometimes — I know this from my conversations with not just Jon but others in the worlds of politics and NGOs — undeniable drags on their organizations that management keenly wishes they could un-hire.
In almost every mainstream organization, in other words, there’s a basic limiting factor on how crazy things can get. It just hasn’t kicked in everywhere yet.
***
On October 27, Helen Lewis wrote in her newsletter, “I’m talking to Hannah Barnes, author of Time to Think, about gender, science and scepticism in Brighton on 9 January.” Specifically, the event was to take place as part of the Brighton Skeptics in the Pub events series, which sounds like it combines several of my leading interests. (I interviewed Barnes about her book here, and we had Lewis on Blocked and Reported here [co-hosting with Katie] and here [joining us for an end-of-year too-online Christmas quiz].)
The event quickly sold out. Which makes sense, because Lewis (The Atlantic) and Barnes (BBC, though she recently decided to move to The New Statesman) are both big names within journalism, and because Time to Think is a book about a very hot subject (youth gender medicine) that was a Sunday Times bestseller. And what better subject for an event hosted by a skeptics’ organization?
Except people got very upset, some of them complained, there was an open letter (I will link to it if I can find it, but no luck so far), and this morning — the morning after my conversation with Jon about meltdowns within progressive organizations — well, you can probably see where this was all headed: “Sorry to say that despite selling out immediately, this event has now been cancelled,” wrote Lewis on Twitter, which some people call X, which is a stupid name. “The organiser offered all kinds of compromises—a trans voice on the panel, a separate rebuttal event, even a statement disassociating the Skeptic Society from our views—but it wasn’t enough.” The event page no longer exists. (Update: Here’s an explanation posted by the organizer.)
This organiZer (sorry, Helen — this is America) acted in a really craven way here. I know it’s risky to become enamored with one theory and then use it to explain everything, because usually things are More Complicated Than That(™), but could you imagine a better illustration of the iron law of institutions and the left? Clearly, an event like this would have been good for the skeptics’ movement, given the subject matter. Clearly, the event was a success — it sold out! — that would bring more attention to this particular group due to the celebrity wattage of the invited guests. And yet this organizer was so worried about his standing within the organization and his broader community — or at least I’d bet a significant amount that that was what fundamentally motivated this — that he had to make the ridiculous move of canceling the event, even though that move was always going to cause a cavalcade of criticism from reasonable people everywhere. (“I set up Oxford Sceptics in the Pub 14 years ago,” tweeted (not Xed) Andy Lewis. “Right now, I am thinking of coming out of retirement to take on the utter clowns that have destroyed the whole concept of public critical thinking meetings.”) Lewis got hold of some other, rather colorful details about the campaign to cancel or disrupt the event, which you can read about here.
I strongly suspect, based on the current trajectory of that great big invisible pendulum, that three or four years hence a cancellation like this will be unthinkable. But in the meantime, leaders of organizations should consider both their futures and their legacies. Is it likely that this man will think back to the time he canceled an event about a best-selling book and say, “Yes, that was a good idea! That promoted the cause of skeptical thinking and critical inquiry”? Seems unlikely!
There’s a bigger, fascinating story here about what happened to large segments of the atheist/skeptic communities, albeit one that has been somewhat told already. It has to do with concepts like “Atheism Plus,” and it can partly explain the astonishing meltdowns of once-critically minded online spaces like Science-Based Medicine. Maybe someday a thorough excavation would make for good BARPod or Singal-Minded fodder.
But for now, all I’ll say is that individuals in leadership roles should keep that pendulum in mind, as hard as that might be to do when you are worried your friends and colleagues — particularly the very online, very angry ones — are mad at you, and might take that anger public. If you can’t make the right choice, maybe leadership just isn’t for you?
Questions? Comments? Proposals for a new movement called Atheism Minus-Plus? I’m at singalminded@gmail.com.
Thankful mailbag. By Matthew Yglesias
— Read time: 16 minutes
Thankful mailbag
Argentina, whether news is pointless, and the trouble with the long-term
Happy Black Friday! I hope you find discounts that bring joy to your life and help reduce the rate of inflation. But if you’re trying to get your holiday shopping done early, there’s really no better gift than the gift of content.
It’s a busy week with lots of travel, so I’m going to keep this intro short and sweet. Onward to the questions.
TheElasticStranger: I find the NYTimes to be irritatingly negative in their coverage of Biden. At the same time, much of their other coverage and opinion could be viewed as annoyingly left-leaning.
I understand the structural reasons for this as you’ve laid out before, so I’ll just ask: if your aim was to stop Donald Trump from being elected, and your only instrument was the NYTimes, how would you change the coverage vs. how it is presently structured for maximum impact? Obviously they could be more positive, but you don’t want to be obvious shills either, and it seems to me they could try broadening their audience to include more moderate voters as well.
Abstracting away a bit from the specifics of the New York Times, I think the key to building an optimal propaganda organ is that you really want the content to be incredibly normie and down-the-middle.
You want a publication that appeals, fundamentally, to moderate and center-right readers. That means really looking incredibly rigorously at your not-so-political content — movie and book reviews, cooking, science, etc. — to studiously excise any hint of bias or anything that would be off-putting to people with conservative sensibilities. Then you want to basically just ignore any story that is primarily about intra-Dem infighting and do a lot of coverage of any story that generates intra-GOP infighting. And in the specific context of the 2024 race, you want to have lots of articles about abortion rights and health care, with plenty of scrutiny of Trump’s policy positions. You’d want articles where frontline Republicans hand-wring about whether Trump’s legal problems will sink them. You’d want stories in which businessmen say “I’m a 100 percent Republican, but these tariffs will make inflation worse.”
Mainstream media tends to do roughly the opposite, letting demographic factors pull the broad tone of their coverage to the left in a way that alienates center-right readers, but keeping their actual coverage of American partisan politics studiously down the middle.
Just some guy: So uhh... Argentina. How do you see that all playing out?
I don’t really have a handle on Javier Milei, who keeps getting shorthanded in the media as a radical libertarian and also as an Argentinian Trump.
This is maybe my small-minded literalism, but whenever I think about this situation, my brain keeps tripping over the fact that these descriptions are wildly at odds with each other.
Trump’s approach to policy, to the extent you can make any sense of it, has always struck me as basically an American form of Peronism — you sideline market economics while also not doing social democracy and instead just try to direct goodies to your favored constituency. I think people are just drawing the Milei/Trump connection based on vibes, and really he’s a more market-oriented guy in his policy aspirations. But the fact that Milei himself seems to encourage these comparisons confuses me.
At any rate, Argentina really could use a dose of right-wing economics. They need to cut spending, and they need market-oriented reforms to raise productivity in non-agricultural sectors. To my way of thinking, it does not seem helpful or constructive to have the basic case for market reform and fiscal discipline be yoked to hard-core libertarian ideology. Milei seems like an extremist who’d be pushing these ideas in any circumstances and who’s probably a less persuasive salesperson than someone more moderate and pragmatic. That said, he won and I hope he manages to get some constructive stuff done.
Will he? Argentina, like a lot of Latin American countries, has a Madisonian political system, so an inexperienced-but-charismatic president with relatively radical views is going to be facing off against a congress that his party doesn’t control. In theory, that system is supposed to force compromise and moderation in a way that’s constructive. In practice, that kind of setup frequently leads to constitutional crisis and democratic breakdown. We’ll see what happens.
Aaron: Why engage in Virginia Plan erasure when you talk about “Madisonian” separation of powers? In all seriousness, I do think that broader understanding of the Virginia Plan—which was a parliamentary system and almost certainly closer to Madison’s ideal institutional setup than the Constitution—would make more Americans receptive to the idea that there are other ways to run a democracy.
I didn’t know much about the Virginia Plan until I read this question, but you are correct that it outlines something much more similar to a parliamentary regime.
But I dunno, Madison gets the credit (or “credit”) for the system we ended up with, so that’s why people call it a Madisonian system.
Kyle: You and other pundits will often explain Trump’s appeal as the result of his comfort staking out moderate positions on certain key issues such as Social Security and abortion. I think this explanation amounts to post hoc rationalization and obscures reality. Yes, by moderating on these issues, he makes himself more palatable to secular working-class voters, other things equal. But this explanation elides all the very unpopular things Trump does, e.g. inciting insurrections, attempting to repeal Obamacare, cutting taxes on the rich, the Access Hollywood tape, etc. It offers no explanation for why these unpopular things don’t move people’s votes much. Any attempt to connect a politician’s success to his actions and stances needs to account for all of them, rather than just cherry-picking the good stances and saying that they explain the outcome, no?
I think it’s important when talking about such things to be clear as to the specific question we’re asking. The starting point with Trump is that this guy is not Modi or AMLO. He’s not even Viktor Orbán in terms of winning elections. There’s mostly nothing to explain about Trump’s “appeal” because he mostly isn’t appealing.
The point that I try to make about the role of his positioning on Social Security and Medicare in the 2016 race is that embracing a nominee who moderated on those stances is sufficient to explain why Republicans were able to beat Hillary Clinton.
A lot of people find it vexing that Mitt Romney, who seems like a smart and honorable guy, lost while Trump, who’s a thug and a scumbag, won. This generates a lot of paradoxical explanations of the true nature of Trump’s appeal, plus a lot of whining about the alleged mistreatment of Romney. The basic truth is that Romney was a much stronger candidate than Trump, but he ran on a much less appealing platform. If Romney had run on Trump’s positions, he would have won; if Trump had run on Romney’s positions, he would have gotten crushed. In general, I think people who talk about American politics are excessively knowledgeable about the micro-details of these campaigns, and they ignore the big picture reality that Democrats have been moving left since 2012 and this drives a lot of changes.
N.N.: What do you make of the idea (as seen in this book and this podcast among many other places) that people should stop reading the news because it takes a huge amount of time, is largely pointless, and makes them unhappy?
My main doubt about this thesis is that I think most people actually aren’t reading the news. But I do think that those who do choose to read the news would benefit from being a little more thoughtful and a little more self-critical about their reading.
Is there a topic you are sincerely interested in learning more about? Then by all means, seek out news on that subject. But most people already know whether they are going to vote Democratic or Republican in 2024 (or in 2028 for that matter) and actually don’t need to stay up to speed on the latest developments in the national campaign. You could either try to learn about something that seems objectively important and under-covered (the civil war in Sudan, for example) or you could try to learn about something that’s of idiosyncratic relevance to your local life (education policy in the community where you live, for example). And if you want to follow national partisan politics because you think it’s entertaining, that’s also okay. People consume media content for fun all the time. But be aware that’s what you’re doing.
That said, while entertainment is fine, it’s also good intellectual discipline to try not to develop false beliefs. So you ought to be at least a little suspicious of writers who you enjoy because everything they tell you is psychologically pleasing. There’s a real skill to that. To “explaining” to people in your ideological niche why every passing event in the news demonstrates their basic correctness about everything. And that kind of content, well, it can make people really happy. But it’s likely to be misleading.
Jonathan Hallam: I wish everything weren’t about Israel and Palestine, but this question cannot not be asked of the author of One Billion Americans, so: What are your thoughts on offering Palestinians US citizenship? Would this be politically easy, as both pro-Palestinian and pro-Israeli, a true win-win? Or is the context such that it would be viewed as lose-lose?
In the book I call for “ruthless pragmatism” on immigration — i.e., try to find any means of securing higher levels of legal immigration that are politically viable.
This idea doesn’t sound viable to me, though if someone has evidence I’d be willing to consider it. One idea I have toyed with is creating a special visa program for Middle Eastern Christians, who suffer from various forms of persecution and who I think might be sympathetic in the eyes of some right-of-center voters and politicians. It wouldn’t need to be all Middle Eastern Christians, but basically the idea would be a special program for some number of them (with some conditions) on top of existing visa programs. There is a substantial Christian minority in Palestine (a minority that is disproportionately likely to want to emigrate), though the larger numbers are in Egypt, Syria, and Lebanon. I dunno if there’d be any juice in that, but I could imagine it.
As a solution to Israel/Palestine, of course “Palestinians should emigrate and enjoy better lives in other wealthier places” is a solution that many people on the far right of Israeli politics believe in. And it’s clearly true that in concrete material terms, Palestinians could go live in the United States or in the immigrant-heavy Gulf states and be better off than they currently are. But the way the parameters of the conflict are defined, that would be a total defeat for the Palestinian cause, so it’s wildly unacceptable.
Scottie J: I’m partially repulsed at my own earnestness here, but I was hoping you could expound on your frequent contention that “doing the right thing is overrated.” My assumption is that what you’re getting at is essentially that being politically expedient leads to better long-term outcomes. Are there any unpopular things that are worth doing because they move the proverbial ball forward? Are there criteria that should be used to evaluate when the policy or vote is worth the political hit?
“Doing the right thing is overrated” ≠ “never do the right thing.”
loubyornotlouby: Can you break down what you feel the public reaction to the OpenAI board’s decision will likely mean for the Effective Altruist / Rationalist movements long term?
Here I think it’s really worth distinguishing between Good Old-Fashioned Effective Altruism (which is about trying to be more effective and empirically rigorous in charitable giving) and the effort to build a movement around “long-termism.”
The idea that we should consider the interests of the future, not just the interests of the present, predates the EA movement and comes up in a wide variety of contexts. A big part of the debate over how to set the social cost of carbon for regulatory purposes, for example, comes down to what “discount rate” you should use when considering the long-term costs of climate change. The standard environmentalist move is to argue for low (or even zero) discount rates while business interests favor higher ones. As a philosophical argument, I think the case for a very low discount rate is pretty ironclad. But as anyone who has ever kicked this idea around in an ethics seminar can tell you, it’s challenging to draw out the practical consequences. One very boring technical issue relates to OMB calculations for cost-benefit analysis, and the Biden administration is, in fact, issuing a new Circular A-4 that mandates less discounting. So in that sense, long-termism is triumphing in the seat of power like never before.
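To make the stakes concrete, here is a minimal sketch of standard exponential discounting, with purely illustrative numbers (a hypothetical $1 trillion climate damage a century from now, not anything drawn from the actual Circular A-4 figures):

```python
# Present value of a hypothetical $1 trillion climate damage incurred
# 100 years from now, under different annual discount rates.
# Illustrative numbers only.

def present_value(future_cost: float, rate: float, years: int) -> float:
    """Standard exponential discounting: PV = FV / (1 + r)^years."""
    return future_cost / (1 + rate) ** years

damage = 1_000_000_000_000  # $1 trillion in damages, 100 years out
for rate in (0.00, 0.01, 0.03, 0.07):
    pv = present_value(damage, rate, 100)
    print(f"discount rate {rate:.0%}: present value ≈ ${pv:,.0f}")
```

Run that and the same trillion-dollar harm is worth the full trillion at a zero rate, about $370 billion at 1 percent, roughly $52 billion at 3 percent, and barely more than $1 billion at 7 percent. Nearly the whole regulatory fight compresses into that one parameter.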
But the problem that you see not just with the recent OpenAI board drama, but with the whole lifecycle of OpenAI, is that it’s really hard to say what the long-term consequences of your actions will be.
OpenAI was originally founded as a nonprofit with heavy EA influence, on the theory that AI would be central to the future of humanity and it was important to develop this technology in an “open” way.
OpenAI’s team then decided that openness was actually bad for the cause of AI safety and ditched that founding commitment to openness.
OpenAI’s team then decided that the nonprofit structure was too limiting and they needed to commercialize to gain access to the level of computing resources they needed, so they did a big restructuring and strategic pivot.
A bunch of OpenAI people and other EAs disagreed with the way OpenAI was handling things and founded Anthropic as a rival, even-more-EA AI lab.
Then a while after that schism, a new internal schism emerged at OpenAI between the CEO and the board (I really recommend this summary of events as superior to what you’ll read in the business press), which led to the dramatic showdown.
The board ultimately decided that reconstituting OpenAI with Sam Altman still in charge and a new board would be better, all things considered, than letting Altman and his team walk to Microsoft.
I think you can make the case that not just the board drama, but every step along this ladder — from founding OpenAI to the Anthropic schism and beyond — has been counterproductive from the standpoint of AI safety. But I also think Altman is completely sincere in his own belief that empowering Sam Altman is the best way to usher in an AI utopia. These questions are just not tractable in the way that estimating which global health programs are most cost-effective is.
Meanwhile, over on the Good Old-Fashioned Effective Altruism side, things seem to be going well. Open Philanthropy just announced a few new programs in areas like developing-country lead poisoning that are badly underfunded and neglected. This is good, important stuff. Obviously if a rogue AI kills everyone in 2037, nobody is going to care very much whether we promoted best practices around car battery recycling. But we have a very high degree of certainty that lead toxicity kills tons of kids and harms many more. And we know that the recycling of car batteries is a big source of that lead. And we know what the best practices are. So if we try to promote policies ensuring safe recycling of car batteries, we may fail, but we will probably make at least some progress and almost certainly not make things worse.
John E: Paul refuses the power to implement the Golden Path, while Leto accepts.
Given the choice, which one would you choose?
Per the above, if I were in Paul’s situation I would talk myself into doubting my prescience and not do it.
Greg Packnett: Are there any policy stakes to the debate about whether the economy is actually good or the stats are misleading? That is, if the anti-Stancil hypothesis is correct (i.e., the economy is actually bad and data showing that people think it's bad are capturing something normal economic statistics are missing), what sorts of errors are policy makers who think the Stancil hypothesis is correct (i.e., the economy is good but people think it's bad because of some combination of partisanship, ideology, and regular everyday ignorance) likely to make?
As a separate question, usually when a false belief about the economy is widely held, there's a lot of money to be made being among the few who are right. So how should an anti-Stancilite invest? For that matter, how should a Stancilite who wants to make money betting that the economy's strength is underrated invest?
I wish that people would talk more clearly in terms of the policy stakes. To a lot of people on the left, it’s clear that they perceive the policy stakes to be that if we admit the current economy is terrible, then we will see the need for dramatic expansion of the welfare state. That’s fine as a thing to believe, but it doesn’t do anything to explain why perceptions of the economy were so much better in 2018 or even 2013 — it’s not like the country had a more robust welfare state back then.
Conversely, conservatives want to say things are terrible and this shows we need to bring back Trump. But they won’t explain why higher tariffs and a much larger budget deficit would make food prices go back down. Mass deportations would clearly make food more expensive.
So in all these cases, it would be more constructive to just talk about which things are problems right now (interest rates) and which are not (unemployment), and try to say why your proposed ideas would be helpful.
In terms of investing, here’s some back-of-the-napkin technical analysis I did, just drawing a big ol’ line from the stock market peak before the Great Recession to today. These numbers are adjusted for inflation.
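A minimal sketch of that calculation, using approximate index and CPI values rather than the exact series behind the chart:

```python
# Back-of-the-napkin real return: a straight line from the pre-Great-Recession
# stock market peak to late 2023, adjusted for inflation.
# All figures below are rough approximations for illustration, not exact data.

nominal_start = 1565.0  # S&P 500 around its October 2007 peak (approximate)
nominal_end = 4550.0    # S&P 500 in late November 2023 (approximate)
cpi_start = 208.9       # CPI-U, October 2007 (approximate)
cpi_end = 307.7         # CPI-U, October 2023 (approximate)
years = 16.1            # elapsed time between the two points

real_growth = (nominal_end / nominal_start) / (cpi_end / cpi_start)
annualized = real_growth ** (1 / years) - 1
print(f"total real growth: {real_growth:.2f}x, annualized: {annualized:.1%}")
# Prints roughly: total real growth: 1.97x, annualized: 4.3%
```

Roughly 4 percent real per year, measured from the worst possible starting point, is not the trajectory of an economy the investment community thinks is broken.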
It seems to me that the investment community, when voting with its money, thinks the economy is basically fine. Stancilism triumphs. If you think people are really suffering out there much worse than Stancil admits, you should probably bet on a recession coming soon and short the market.
EC-2021: How should the liberal arts argue for their value? Arguing for economic value seems probably true, but (1) it's unclear that will remain true, (2) seems to concede that college is about job training which they don't really want to do and (3) doesn't seem to be believed, regardless of truth.
I've said a couple of times that we need more “public intellectuals” to argue for/demonstrate the value proposition of the liberal arts, but I'm wondering if this is actually true.
I’ve said this before, but I think there’s just a big divergence between what most people see as potentially valuable in the liberal arts and what most humanities faculty think is valuable and important.
But any modern society has a lot of educated professionals. Many of them are technical specialists (engineers, scientists, doctors) and some of them aren’t (lawyers, teachers, middle managers), but they form a sort of collective social elite. And I think it’s not that hard to persuade people that over and above the specific skills members of this elite need to do their jobs, it’s good for them to be inculcated with a sense of the values of American civilization. That involves understanding our American political history, but also the history of proto-constitutionalism in England and the classical republics that the founders looked to as inspiration. A particular sense of religious freedom is an important part of the story of America, and while that’s not a sectarian point (the point is religious freedom!), the fact is also that as a historical matter, the American concept of religious freedom develops out of the specific circumstances of the Protestant Reformation.
So you have an important sequence of historical events — from Greece to Rome to “the Dark Ages” and the Renaissance and Reformation and the founding of America. You have a philosophical lineage from Plato and Aristotle to Hobbes and Locke and Mill and Rawls.
And you have literary and artistic cultures that were informed by these historical and intellectual trends and that also informed them. And you have traditionally had a belief that it is important for important people to be broadly educated in these themes. But while I think that kind of traditional broad liberal education would of course involve some exposure to radical critics of Anglo-American liberal capitalism (it’s good to be well-informed) and perhaps even a smattering of instructors who endorse the radical critiques (it’s good to sit in rooms and listen to smart people with ideas you don’t agree with), the current trends on campus are toward an atmosphere where the radical criticism predominates. And as the critical theories themselves would tell you, there’s no way Anglo-American liberal capitalist society is going to sustain generous financial support for institutions whose self-ascribed mission is to undermine faith in the main underpinnings of society.