AI Doomers are worse than wrong - they're incompetent

Even judged on their own terms, AI Doomers are terrible and ineffective

By Jeremiah Johnson · November 24, 2023


Last week one of the most important tech companies in the world nearly self-destructed. And the entire thing was caused by the wild incompetence of a small slice of ‘effective altruists’.


Other sites have reported the exact series of events in greater detail, so I’m going to just run through the basics. OpenAI is an oddly structured AI company/non-profit that’s famous for its large language models like GPT-4 and ChatGPT as well as image creation tools like DALL-E. Thanks mostly to the sensational debut of ChatGPT, it’s now valued at around $80 billion, and many observers think it could break into the Microsoft/Google/Apple/Amazon/Meta tier of tech giants. But last week, with essentially no warning of any kind, OpenAI’s board of directors fired founder and CEO Sam Altman. The board said only that Altman was not “consistently candid in his communications,” without elaborating or providing more detail.


The backlash to the board’s decision was nearly immediate. Altman is extraordinarily popular at OpenAI and in Silicon Valley writ large, and that popularity proved durable against the board’s vague accusations. President and chairman Greg Brockman resigned in protest. Giant institutional investors in OpenAI (including Microsoft, Sequoia Capital, and Thrive Capital) began to press behind the scenes for the decision to be reversed. Less than 24 hours after his firing, Altman was in negotiations with the board to return to the company. More than 90% of the company’s workforce threatened to resign if Altman wasn’t reinstated. Microsoft basically threatened to hire Altman, steal all of OpenAI’s employees and just recreate the entire company themselves.


There were several embarrassing twists and turns. Altman was back, then he wasn’t; the board tried a desperation merger with rival Anthropic, which was turned down immediately; and the entire time the OpenAI office was leaking rumors like a sieve. Finally, on November 21st, four days after Altman was fired, he was reinstated as CEO and the board members who voted to oust him were replaced. In trying to fire Altman, the board ended up firing themselves.


There are dozens of angles you can take to talk about this story, but the most interesting one for me is how this epitomizes the buffoonery and tactical incompetence of the AI doom movement.


[AI-generated image. Prompt: “AI fires a CEO, office setting”]

It’s unclear exactly why the OpenAI board decided to fire Altman. They’ve specifically denied it was due to any ‘malfeasance’, and at no point has anyone on the board provided any detail about the supposed lack of ‘candid communications’. Some speculate it was because of a staff letter warning about a ‘powerful discovery that could threaten humanity’. Some think it stemmed from a dispute Altman had with Helen Toner, one of the board members who voted to oust him. Some think it was a disagreement about moving too fast in ways that endangered safety.


Whatever the precise nature of the disagreement, one thing is clear. There were two camps within OpenAI - one group of AI doomers laser-focused on AI safety, and one group more focused on commercializing OpenAI’s products. The conflict was between these two camps, with the board members who voted Altman out in the AI doom camp and Altman in the more commercial camp. And you can’t understand what happened at OpenAI without understanding the group that believes AI will destroy humanity as we know it.


I am not an AI doomer. I think the idea that AI is going to kill us all is deeply silly, thoroughly non-rigorous and the product of far too much navel gazing and sci-fi storytelling. But there are plenty of people who do believe that AI either will or might kill all of humanity, and they take this idea very seriously. They don’t just think “AI could take our jobs” or “AI could accidentally cause a big disaster” or “AI will be bad for the environment/capitalism/copyright/etc”. They think that AI is advancing so fast that pretty soon we’re going to create a godlike artificial intelligence which will really, truly kill every single human on the planet in service of some inscrutable AI goal. These folks exist. Often they’re actually very smart, nice and well-meaning people. They have a significant amount of institutional power in the non-profit and effective altruism worlds. They have sucked up hundreds of millions of dollars of funding for their many institutes and centers studying the problem. They would likely call themselves something like ‘AI Safety Advocates’. A less flattering and more accurate name would be ‘AI Doomers’. Everybody wants AI to be safe, but only one group thinks we’re literally all going to die.


I disagree with the ‘AI Doom’ hypothesis. But what’s remarkable is that even if you grant their premise, for all their influence and institutes and piles of money and effort, they have essentially no accomplishments. If anything, the AI doom movement has made things worse by its own standards. It’s one of the least effective, most tactically inept social movements I’ve ever seen.


How do you measure something like that? By looking at the evidence in front of your face. OpenAI’s strange institutional setup (a non-profit controlling an $80B for-profit corporation) is a direct result of AI doom fears. Just in case OpenAI-the-business made an AI that was too advanced, just in case they were tempted by profit to push safety to the side… the non-profit’s board would be able to step in and stop it. On the surface, that’s almost certainly what happened with Sam Altman’s firing. The board members who agreed to fire him all have extensive ties to the effective altruism and AI doom camps. The board was likely uncomfortable with the runaway success of OpenAI’s LLM models and wanted to slow down the pace of development, while Altman was publicly pushing to go faster and dream bigger.


The problem with the board’s approach is that they failed. They failed catastrophically. I cannot emphasize in strong enough terms how much of a public humiliation this is for the AI doom camp. One week ago, true-believer AI safety/AI doom advocates had formal control of the most important, advanced and influential AI company in the world. Now they’re all gone. They completely neutered all their institutional power with an idiotic strategic blunder.


The board fired Altman seemingly without a single thought about what would happen after they fired him. I’m curious what they actually thought was going to happen - they would fire Altman and all the investors in the for-profit corporation would just say “Oh, I guess we should just not develop this revolutionary technology we paid billions for. You’re right, money doesn’t matter! This is a thing that we venture capitalists often say, haha!”.


It seems pretty damn clear that they had no game plan. They didn’t even do basic due diligence. If they had, they’d have realized that every institutional investor, more than 90% of their own employees and virtually the entire tech industry would back Altman. They’d have realized that firing Altman would cause the company to self-destruct.


But maybe things were so bad and the AI was so dangerous that destroying the company was actually good! This is the view expressed by board member Helen Toner, who said that destroying the company could be consistent with the board’s mission. The problem with Toner’s strategy is that while she might have had total control over OpenAI, she had no control over the rest of the tech industry. When the board fired Altman, he was scooped up by Microsoft within 48 hours. Within 72 hours, there was a standing offer of employment for any OpenAI employee to jump ship to Microsoft at equal pay. And the vast majority of OpenAI’s employees were on board with this. The end result of the board’s actions would be that OpenAI still existed, only it’d be called ‘MicrosoftAI’ instead. And there would be even fewer safeguards against dangerous AI - Microsoft is a company that laid off its entire AI ethics and safety team earlier this year. Not a single post-firing scenario here was actually good for the AI doomer camp. It’s hard to overstate what a parade of dumb-fuckery this was. Wile E. Coyote has had more success against the Road Runner than OpenAI’s board has had in slowing dangerous AI development.


[Image: Wile E. Coyote running off a cliff. Caption: Sam Altman (left) watches the OpenAI board (right) attempt to oust him]

This buffoonish incompetence is sadly typical of AI doomers. For all the worry, for all the effort people put into thinking about AI doom, there is a startling lack of any real achievements that make AI concretely safer. I’ve asked this question before - what value have you actually produced? - and usually I get pointed to some very sad stuff like ‘Here is a white paper we wrote called Functional Decision Theory: A New Theory of Instrumental Rationality’. And hey, papers like these don’t do anything, but what they lack in impact they make up for in volume! Or I’ll hear “We convinced this company to test their AI for dangerous scenarios before release”. If your greatest accomplishment is encouraging companies to test their own products in basic ways, you may want to consider whether you’ve actually done anything at all.


There’s a sense in which I’m being very unfair to AI doom advocates. They do actually have a huge string of accomplishments - the only problem is that they’re accomplishments in the exact opposite direction from their stated goals. If anything, they’ve made super-advanced AI happen faster. OpenAI was explicitly founded in the name of AI safety! Now OpenAI is leading the charge to develop cutting-edge AIs faster than anyone else, and its work is apparently so dangerous that the CEO needed to be fired. AI enthusiasts will take this as a win, but it sure is curious that the world’s most advanced AI models are coming from an organization founded by people who think AI might kill everyone.


Or consider Anthropic. Anthropic was founded by ex-OpenAI employees who worried the company was not focused enough on safety. They decamped and founded their own rival firm that would truly, actually care about safety. They were true AI doom believers. And what impact did founding Anthropic have? Late in 2022, OpenAI became afraid that Anthropic was going to beat them to the punch with a chatbot. They quickly released a modified version of GPT-3.5 to the public under the name ‘ChatGPT’. Yes, Anthropic’s existence was the reason ChatGPT was published to the world. And Anthropic, paragons of safety and advocates of The Right Way To Develop AI, ended up partnering with Amazon, making them just as beholden to shareholders and corporate profits as any other tech startup. You will notice the pattern - every time AI doom advocates take major action, they seem to push AI further and faster.


This isn’t just my idle theorizing. Ask Sam Altman himself:

[Embedded tweet from Sam Altman on Eliezer Yudkowsky’s net impact on AGI timelines]
Eliezer Yudkowsky is both the world’s worst Harry Potter fanfiction writer and the most important figure in the AI doom movement, having sounded the alarm on dangerous AI for more than a decade. And Altman himself thinks Big Yud’s net impact has been to accelerate AGI (artificial general intelligence, aka smarter-than-human AI).


Even Yudkowsky himself, who founded the Machine Intelligence Research Institute to study how to develop AI safely, basically thinks all his efforts have been worthless. In an editorial for TIME, he said ‘We are not prepared’ and ‘There is no plan’. He advocated for a total worldwide shutdown of every single instance of AI development and AI research. He said that we should airstrike countries that develop AI, and that he would rather risk nuclear war than have AI developed anywhere on Earth. Leaving aside the lunacy of that suggestion, it’s a frank admission that AI doomers haven’t accomplished anything despite more than a decade of effort.


The upshot of all this is that the net impact of the AI safety/AI doom movement has been to make AI happen faster, not slower. They have no real achievements of any significance to their name. They write white papers, they found institutes, they take in money, but by their own standards they have accomplished worse than nothing. There are various cope justifications for these failures - maybe it would be even worse counterfactually! Maybe firing Altman and then hiring him back was actually logical by some crazy mental jiu-jitsu! Stop it. It’s embarrassing. The crowd that’s perfectly willing to speculate about the nature of godlike future AIs is congenitally unable to see the obvious thing directly in front of them.


There’s a real irony that AI doom is tightly interwoven with the ‘effective altruist’ world. To editorialize a bit: I consider myself somewhat of an effective altruist, but I got into the movement as someone who thinks stopping malaria deaths in Africa is a good idea because it’s so cost-effective. It pisses me off that AI doomers have ruined the label of effective altruist. Nothing AI doomers do has had the slightest amount of impact. As far as I can tell they haven’t benefited humanity in any real way, even by their own standards. They are the opposite of ‘effective’. At best they are a money and talent drain that directs funding and bright, well-meaning young people into pointless work. At worst they are active grifters.


C'est pire qu'un crime, c'est une faute


- Charles Maurice de Talleyrand-Périgord


I really wish the AI safety/doom camp would stop and take stock of exactly what it is they think they’re accomplishing. They won’t, but I wish they would. I’d love to see them just separated from the EA movement entirely. I’d love for EA funders to stop throwing money at them. I’d love to see them admit that not only do they not accomplish anything with their hundreds of millions, they don’t even have a proper framework from which to measure their non-accomplishments. Their whole ecosystem is full of sound and fury, but not much else.


When Napoleon executed the Duke of Enghien in 1804, Talleyrand famously commented “It is worse than a crime, it is a mistake”. The AI doom movement is worse than wrong - it’s utterly incompetent. The firing of Sam Altman was only the latest example from a movement steeped in incompetence, labelled as ‘effective altruism’ but without the slightest evidence of effectiveness to back it up.


Share this post! Or you too might end up as the CEO of OpenAI!

