Thursday, February 27, 2025

Tangle - Kash Patel and Dan Bongino to lead FBI.

Tangle by Isaac Saul / Feb 27, 2025


My take.

Reminder: "My take" is a section where I give myself space to share my own personal opinion. If you have feedback, criticism or compliments, don't unsubscribe. Write in by replying to this email, or leave a comment.


Patel leading the FBI is the result of a phenomenon unique to Trump.

Even taking prior criticisms of the FBI and its leadership as valid, both Patel and Bongino are at another level of concerning.

Patel is an ultra-loyalist, Bongino is even more out there, and I’m not optimistic about the FBI under their leadership.

Let’s play a game.


I’m going to share seven quotes. Some of them are real things Kash Patel and Dan Bongino have said. Some of them are made up. Let’s see if you can spot the fake ones.


“We’re blessed by God to have Donald Trump be our juggernaut of justice, to be our leader, to be our continued warrior in the arena.”

“My recommendation is Donald Trump should ignore this [court order]... who is going to arrest him? The marshals? You guys know who the U.S. Marshals work for? The Department of Justice, that is under the — oh yeah, the executive branch. Donald Trump is going to order his own arrest? This is ridiculous.”

“The only thing that matters is power. That is all that matters. ‘No it doesn’t, we have a system of checks and balances.’ Ha! That’s a good one. That’s really funny. We do?”

“The irony about this for the scumbag commie libs is that the cold civil war they’re pushing for will end really badly for them. Libs are the biggest pussies I’ve ever seen and they use others to do their dirty work. Their mommas are still doing their laundry for them as they celebrate tonight that their long sought goal of the destruction of the Republic has been reached. But they’re not ready for what comes next.”

“My entire life now is about owning the libs.”

“And you've got to harness that following that Q [of QAnon] has garnered and just sort of tweak it a little bit. That's all I'm saying. He should get credit for all of the things he has accomplished, because it's hard to establish a movement."

“We’re going to come after the people in the media who lied about American citizens who helped Joe Biden rig presidential elections. We’re going to come after you, whether it’s criminally or civilly. We’ll figure that out. But yeah, we’re putting you all on notice.”

Just kidding. They’re all real.


Quotes 1, 6, and 7 are things Kash Patel said; quotes 2, 3, 4, and 5 are things Dan Bongino said.


It’s not hard to understand how we got here. During Donald Trump’s first term, he surrounded himself with some of the shadiest and most corrupt people in politics. The Paul Manaforts of the world invited questions about his connections to Russia; those questions turned into a media frenzy; that media frenzy drove FBI investigations; those investigations led to a special counsel; that special counsel nearly cost Trump his presidency. 


I’ve written before about the many things we got wrong about Trump and Russia. I don’t want to relitigate them here, but I think Trump deserved to be investigated and also was not guilty of colluding with Russia to win the 2016 election. As I feared at the time, one of the great consequences of the Trump investigation — the reason I desperately wanted the federal government’s probe to be on the up and up in every manner — was the politicized arms race that it set off. Once you open that Pandora’s box, there is no going back — especially not in the American partisan warfare of the 21st century.


Of all the ways the Trump investigation could have gone, our current reality is one of the worst possible iterations. Our politics have only become more polarized since 2016, and Trump just won reelection on a campaign largely centered on personal grievances and promises of revenge. He has no interest in depoliticizing federal institutions like the FBI; he wants to remake them in his mold. He has no interest in leaving anything in the past; he wants payback. He wants to fire every lawyer that was hired under Biden and fire every prosecutor that was involved not just in the “Russia hoax,” but also in prosecuting January 6, a day full of very real crimes. All of these motivations are evident in putting Kash Patel at the head of the FBI.


When Patel was first tapped by Trump, I wrote about a phenomenon I described as the “Trump circularity” — when Trump does some norm-breaking thing (for better or for worse) that puts all of our political footing onto new ground, which he then gets to mold to his own political advantage. 


Kash Patel and Dan Bongino are part of this circularity. Patel, at least, has some relevant experience, but I’m still not thrilled about him leading the bureau. He has openly promised retribution against Trump's political enemies, he’s made his career a loyalty show to Trump, he’s said the figure at the center of the QAnon cult should "get credit for all the things he has accomplished," he hawks dietary supplements to “reverse the vaxx n get healthy,” and he claims he’s going to crack down on leakers and prosecute journalists. He also still will not admit that Joe Biden won the 2020 election, and we found out during his confirmation hearing he has a massive conflict of interest in China. 


Say what you want about James Comey or Christopher Wray (and there’s plenty to criticize), but neither of them is even close to as politically compromised as Patel. They’re not even in the same galaxy. And if the politicization of the FBI is a thing you are worried about and loathe, if you were mad about Comey undermining Hillary Clinton or investigating Trump, or upset Wray’s FBI raided a president’s home, then this is the wrong direction to go. This leads us deeper down the hole.


As for Bongino, well… he is somehow even more out there. Personal disclosure: Soon after Trump came into office in 2017, before he was a famous podcaster, Bongino was constantly spreading easy-to-debunk nonsense on Twitter, and I used to call him out on it. We tangled on social media pretty regularly, arguing and calling each other not-so-nice names. In response, he blocked me. And then I watched his star rise — slowly at first, and then all at once, and now he’s a major celebrity with the online right. Mostly, his fame was driven by the kind of nonsense I used to call him out for.


In this line of work, I’m always conscious of how my readers might view me, and I’m sometimes wary of being too hard on one side of the aisle for consecutive days. We have a politically diverse audience looking for fair takes and a diversity of viewpoints. But in the “my take” section, my promise is not to seek a centrist position or toe the line. Instead, my promise is to be honest, even if it’s inconvenient for me and risky for my business. And the honest truth is that Kash Patel is an alarming FBI director with a smattering of good ideas that, weighed against everything else he’s said and done, completely fail to reassure us that he will act apolitically and with respect for the law. I’m not naive or sycophantic enough about the government to believe the FBI is some deeply ethical, non-political organization; it isn’t, and never has been.


But it just got a lot worse. 


Bongino leading these agents is just hard to fathom. He’s so radical (again, just read a few sample quotes above) and so power hungry that I struggle to imagine what he’ll try to do with so much control. My only hope is that there are still enough ethical and law-abiding agents and lawyers among the FBI’s roughly 38,000 employees to check Patel’s and Bongino’s worst desires. But I can’t say I’m enthusiastic about the odds. 


Tuesday, February 25, 2025

Saving Medicaid Is a Better Democratic Strategy Than Fighting DOGE. Matthew Yglesias

February 25 — Read time: 4 minutes


Not about DOGE, not about Ukraine, not about Kash Patel, not about the president implicitly or explicitly comparing himself to Napoleon or a king. For Democratic leaders in both the House and Senate, the preferred topic of discussion is the fiscal framework currently wending its way through the House of Representatives, where trillions of dollars in tax cuts will be partially offset with cuts to health care and food assistance for the poor.


Even Senate Democrats are working on strategies to defend programs such as Medicaid and SNAP, despite the fact that their Republican counterparts are advancing a separate strategy that largely leaves these programs alone. Budget Chair Lindsey Graham’s resolution is much more modest in scope than the House draft, focused on cutting funds for clean energy in order to put more money into immigration enforcement and the military. His take is that Republicans should get this done fast, and then discuss the larger question of taxes and the safety net later in a separate bill.


Democrats are not really engaging with Graham’s resolution, which they see as a mere vehicle to get to the House’s framework. That may or may not be true. But what is true is that House and Senate leaders from both parties are fundamentally agreed on where Democrats’ strongest ground is: defending the social safety net.


Former President George W. Bush tried to privatize Social Security, and he failed. Former House Speaker Paul Ryan tried to cut Medicare, and he failed. In his first term, President Donald Trump tried to repeal the Affordable Care Act, and he failed. Right now DOGE is generating huge levels of excitement or alarm, depending on who you talk to, but dealing with relatively trivial sums of money.


Medicaid, by contrast, is a genuinely big and expensive program. If you want to enact major tax cuts without rattling bond markets, and have decided that programs such as Social Security and Medicare are off limits, Medicaid is the obvious choice.


[Chart: The 10 Largest Categories in the Federal Budget. If congressional Republicans are serious about cutting spending, it will be hard for them to avoid some popular programs.]


Still, Medicaid itself is popular. Eighty million people currently enjoy Medicaid benefits, and two-thirds of adults say they have some connection to the program through a close family member. What’s more, though many state-level Republicans continue to oppose Medicaid expansion, it has proved popular and durable in states as red as Louisiana, Kentucky and Kansas. Cutting this program is going to be politically costly for Republicans.


That explains why Trump is saying Medicaid “won’t be touched” even though Republicans’ plans call for massive cuts. It also explains why House Republican leaders think the best way forward is to slip Medicaid cuts into a big legislative package crammed with other stuff that their members can easily defend — like an extension of Trump’s 2017 tax cuts. And it explains why Senate Republicans have adopted the reverse strategy — first pass a bill with no Medicaid cuts, putting some points on the board, and then turn to slashing the safety net later.


Finally, it explains why Democrats are eager to skip past everything currently dominating the headlines and talk about Medicaid cuts instead. It’s not just that it’s a good issue for them. It’s a unifying issue for a party that is leaderless and struggling to regain its footing after a gutting defeat. Not only does Senator Bernie Sanders champion Medicaid, but so do red state governors like Andy Beshear of Kentucky and Laura Kelly of Kansas.


Meanwhile, over the last decade Trump has rebuilt the Republican coalition into something that is much more (as a salesman like him might put it) downscale. His 2016 campaign brought large numbers of low-income White voters into the tent. In 2020 and 2024, he posted significant gains with working-class Black and Hispanic voters. The upshot is that a much larger share of SNAP and Medicaid recipients is now voting Republican than in the past. That makes it harder to slash programs they depend on.


The alternative, of course, is for Republicans to slash taxes without offsetting spending cuts. That’s what they ended up doing in 2001, 2003 and 2017. DOGE’s ongoing antics could provide rhetorical cover for this course of action. The actual amount of money being saved is trivial relative to the cost of Republicans’ tax ideas, but there is certainly a lot of public discussion of spending cuts.


The problem is that — unlike in 2001, 2003 or certainly 2017 — the US is now in a fiscal environment where the budget deficit and the inflation outlook are weighing on interest rates. This is hurting consumers looking to buy cars. It’s making it harder for homebuilders to add new supply. And it’s even become a fiscal problem on its own terms. As old bonds roll over into new ones at higher interest rates, spending on debt service is soaring.


Basic budgetary tradeoffs are more real and more important now than they have been at any point in the 21st century. If Republicans want to be the party of large tax cuts — and, by all indications, they very much do — then they will also have to be either the party of large cuts to programs for the poor or the party of higher interest rates for the middle class. Merely pointing this out is the first step on the Democrats’ road to recovery.


 

Brazil Stood Up for Its Democracy. Why Didn’t the U.S.? Quico Toro

For years now, politics in Brazil have been the fun-house-mirror version of those in the United States. The dynamic was never plainer than it became last week, when Brazilian prosecutors formally charged the far-right former President Jair Bolsonaro, along with 33 co-conspirators, with crimes connected to a sprawling plan to overthrow the nation’s democracy and hang on to power after losing an election in October of 2022.

That the charges against Bolsonaro sound familiar to Americans is no coincidence. Bolsonaro consulted with figures in Donald Trump’s orbit in pursuit of his election-denial strategy. But the indictment against Bolsonaro suggests that the Brazilian leader went much further than Trump did, allegedly bringing high-ranking military officers into a coup plot and signing off on a plan to have prominent political opponents murdered.

In this, as in so many things, Bolsonaro comes across as a cruder, more thuggish version of his northern doppelgänger. Trump calculated, shrewdly, to try to retain his electoral viability after his January 6 defeat; Bolsonaro seems to have lacked that impulse control. He attempted so violent a power grab that the institutional immune system tasked with protecting Brazil’s democracy was shocked into overdrive.

The distortion in the mirror is most pronounced with regard to this institutional response. While American prosecutors languidly dotted i’s and crossed t’s, Brazil’s institutions seemed to understand early on that they faced an existential threat from the former president. Fewer than seven months after the attempted coup, Brazil’s Supreme Electoral Court ruled Bolsonaro ineligible to stand for office again until 2030. Interestingly, that decision wasn’t even handed down as a consequence of the attempted coup itself, but of Bolsonaro’s abuse of official acts to promote himself as a candidate, as well as his insistence on casting doubt, without evidence, on the fairness of the election.

The U.S. might have done the same thing. In December 2023, Colorado’s secretary of state refused to allow Trump’s name on the state’s primary ballot, following the state supreme court’s judgment that his role in the events of January 6, 2021, rendered him ineligible to run for president. Trump appealed the legality of the move, and the case came before the U.S. Supreme Court. The justices could have done what their Brazilian counterparts did—ruled that abuses of power and attempts to overturn an election were disqualifying for the highest office of the land. Instead, in March 2024, they voted unanimously to allow Trump to stand.

My home country, Venezuela, faced a roughly analogous situation in 1999, when President Hugo Chávez moved to convene a constituent assembly to rewrite Venezuela’s constitution, which contained no provision for him to do so. Cowed, the supreme court allowed him to go ahead. Venezuela’s then–chief justice, Cecilia Sosa, wrote a furious resignation letter, saying that the court had “committed suicide to avoid being murdered.” The result in Venezuela was the same as that in the United States: The rule of law was dead.

I can’t help but wish that U.S. jurists had shown the nerve of their Brazilian counterparts. In their charging documents against Bolsonaro, Brazil’s prosecutors don’t mumble technicalities: They charge him with attempting a coup d’état, which is what he did. Brazilian law enforcement didn’t tie itself up in knots appointing special counsels; the attorney general, Paulo Gonet, announced the charges himself. The conspiracy “had as leaders the president of the Republic himself and his candidate for vice president, General Braga Neto. Both accepted, encouraged, and carried out acts classified in criminal statutes as attacks on the … independence of the powers and the democratic rule of law,” Gonet said.

Contrast that with the proceduralism at the core of the case against President Trump. After an interminable delay that ultimately rendered the entire exercise moot, Special Counsel Jack Smith charged Trump not for trying to overthrow the government but for “conspiring to obstruct the official proceeding” (that would lead him to lose power) as well as “conspiring to defraud the United States”—a crime so abstract that only a constitutional lawyer knows what it actually means.

In ruling Bolsonaro ineligible to run for office, Brazil’s elections court did not engage in lengthy disquisitions on 19th-century jurisprudence, as the U.S. Supreme Court did in the Colorado case: It said that he had serially abused his power, which is what he did, and which is what renders him unfit for office. This bluntness, this willingness to call a spade a spade, was something the American republic, for all its institutional sophistication, seemed unable to match.

As recently as 2014, one would have been hard-pressed to find anyone willing to forecast that Brazil’s institutions would prove more effective than those of the United States at protecting democracy from populist menace. Maybe Brazilians are just more comfortable with, and accustomed to, holding national leaders to account: The current center-left president, Luiz Inácio Lula da Silva, spent more than two years in prison for corruption after his last stint in power. (Lula was ultimately freed and allowed to stand for office again when courts ruled that the judge in his initial prosecution was biased.) Or maybe it was the speed of response: Rather than waiting months or years to move against the rioters who took over the country’s governing institutions, the Brazilian police started jailing them and investigating the coup conspiracy almost immediately after it took place.

But the biggest difference is that dictatorship is a much more real menace in Brazil, a country that democratized only in the 1980s, than it is in a country that’s never experienced it. Older Brazilians carry the scars, in many cases literal ones, of their fight against dictatorship. This fight for them is visceral in a way it isn’t—yet—for Americans.

Brazil has demonstrated how democracies that value themselves defend themselves. America could have done the same.

There Is No AI Revolution. Ed Zitron

Soundtrack: Mack Glocky - Chasing Cars


Last week, I spent a great deal of time and words framing the generative AI industry as a cynical con where OpenAI's Sam Altman and Anthropic's Dario Amodei have used a compliant media and braindead investors to frame unprofitable, unsustainable, environmentally-damaging and mediocre cloud software as some sort of powerful, futuristic automation.

Yet as I prepared a script for Better Offline (and discussed it with my buddy Kasey, as I often do), I kept coming back to one thought: where's the money?

No, really, where is it? Where is the money that this supposedly revolutionary, world-changing industry is making, and will make?

The answer is simple: I do not believe it exists. Generative AI lacks the basic unit economics, product-market fit, or market penetration associated with any meaningful software boom, and outside of OpenAI, the industry may be pathetically, hopelessly small, all while providing few meaningful business returns and constantly losing money.

I am deeply worried about this industry, and I need you to know why.

On Unit Economics and Generative AI

Putting aside the hype and bluster, OpenAI — as with all generative AI model developers — loses money on every single prompt and output. Its products do not scale like traditional software, in that the more users it gets, the more expensive its services are to run because its models are so compute-intensive.

For example, ChatGPT having 400 million weekly active users is not the same thing as a traditional app like Instagram or Facebook having that many users. The cost of serving a regular user of an app like Instagram is significantly smaller, because these are, effectively, websites with connecting APIs, images, videos and user interactions. These platforms aren’t innately compute-heavy, at least to the same extent as generative AI, and so you don’t require the same level of infrastructure to support the same amount of people. 
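
To make that scaling contrast concrete, here's a minimal back-of-envelope sketch in Python. Every number in it (the fixed costs, the per-user costs) is invented purely for illustration; nothing here is a reported figure.

```python
# Illustrative (hypothetical) unit economics: why a generative AI service
# scales differently from a traditional web app at the same user count.

def annual_serving_cost(users, fixed_cost, cost_per_user):
    """Total yearly cost: fixed infrastructure plus the marginal cost of each user."""
    return fixed_cost + users * cost_per_user

users = 400_000_000  # a ChatGPT-scale user base

# A conventional web app: cheap requests, costs dominated by fixed infrastructure.
web_app = annual_serving_cost(users, fixed_cost=500e6, cost_per_user=0.25)

# A generative AI service: every prompt burns GPU time, so marginal cost
# dominates and total cost grows nearly linearly with the user base.
genai_app = annual_serving_cost(users, fixed_cost=500e6, cost_per_user=5.00)

print(f"web app:   ${web_app / 1e9:.1f}B/yr")    # ~$0.6B/yr
print(f"genai app: ${genai_app / 1e9:.1f}B/yr")  # ~$2.5B/yr
```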

Conversely, generative AI requires expensive-to-buy and expensive-to-run GPUs, both for training models and for running inference on them. These GPUs must be run at full tilt, which shortens their lifespan while consuming ungodly amounts of energy. And surrounding each GPU is the rest of the server, which is usually highly specced, and thus expensive.

These models also require endless amounts of training data, supplies of which have been running out for a long time. While synthetic data might bridge some of the gap, at least in situations where there’s a definitive right and wrong answer (like a mathematical problem), there are likely diminishing returns due to the sheer amount of data necessary to make a large language model even larger — data amounting to more than four times the size of the internet.

These companies also must spend hundreds of millions of dollars on salaries to attract and retain AI talent — as much as $1.5 billion a year in OpenAI's case (before stock-based compensation). In 2016, Microsoft claimed that top AI talent could cost as much as an NFL quarterback to hire, and that sum has likely only increased since then, given the generative AI frenzy.

As an aside: One analyst told the Wall Street Journal that companies running generative AI models "could be utilizing half of [their] capital expenditure[s]...because all of these things could break down." As in it’s possible hyperscalers could spend 50% of their capital expenditures replacing broken stuff.

Though these costs are not a burden on OpenAI or Anthropic, they absolutely are on Microsoft, Google and Amazon.

As a result of the costs of running these services, a free user of ChatGPT is a cost burden on OpenAI, as is every free customer of Google's Gemini, Anthropic's Claude, Perplexity, or any other generative AI company.

Said costs are also so severe that even paying customers lose these companies money. Even the most successful company in the business appears to have no way to stop burning money — and as I'll explain, there's only one real company in this industry, OpenAI, and it is most decidedly not a real business.

OpenAI Spent $9 Billion to Make $4 Billion in 2024, and Compute Alone ($2 Billion to Run Models, $3 Billion to Train Them) Consumes More Than the Entirety of Its Revenue

As a note — I have repeatedly said OpenAI lost $5 billion after revenue in 2024. However, I can no longer in good conscience suggest that it burned “only” $5 billion. It’s time to be honest about these numbers. While it’s fair to say that OpenAI’s “net losses” are $5 billion, it’s time to be clear about what it costs to run this company.

  • 2024 Revenue: According to reporting by The Information, OpenAI's revenue was likely somewhere in the region of $4 billion.
  • Burn Rate: The Information also reports that OpenAI lost $5 billion after revenue in 2024, excluding stock-based compensation, which OpenAI, like other startups, uses as a means of compensation on top of cash. Nevertheless, the more it gives away, the less it has for capital raises. To put this in blunt terms, based on reporting by The Information, running OpenAI cost $9 billion in 2024. The cost of the compute to train models alone ($3 billion) obliterates the entirety of its subscription revenue, and the compute from running models ($2 billion) takes the rest, and then some. It doesn’t just cost more to run OpenAI than it makes — it costs the company a billion dollars more than the entirety of its revenue to run the software it sells before any other costs.
  • OpenAI also spends an alarming amount of money on salaries — over $700 million in 2024 before you consider stock-based compensation, a number that will also have to increase because it’s “growing,” which means “hiring as many people as possible,” and it’s paying through the nose.
  • How Does It Make Money: The majority of its revenue (70+%) comes from subscriptions to premium versions of ChatGPT, with the rest coming from selling access to its models via its API.
    • The Information also reported that OpenAI now has 15.5 million paying subscribers, though it's unclear what level of OpenAI's premium products they're paying for, or how “sticky” those customers are, or the cost of customer acquisition, or any other metric that would tell us how valuable those customers are to the bottom line. Nevertheless, OpenAI loses money on every single paying customer, just like with its free users. Increasing paid subscribers also, somehow, increases OpenAI's burn rate. This is not a real company.

The New York Times reports that OpenAI projects it'll make $11.6 billion in 2025, and assuming that OpenAI burns at the same rate it did in 2024 — spending $2.25 to make $1 — OpenAI is on course to burn over $26 billion in 2025 for a loss of $14.4 billion. Who knows what its actual costs will be, and as a private company (or, more accurately, entity, as for the moment it remains a weird for-profit/nonprofit hybrid) it’s not obligated to disclose its financials. The only information we’ll get will come from leaked documents and dogged reporting, like the excellent work from The New York Times and The Information cited above. 
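
To show where those projections come from, here's the arithmetic spelled out, assuming the reported figures hold and the burn rate stays constant (the small gap with the $14.4 billion above is rounding):

```python
# Back-of-envelope check, in billions of dollars, using the reported 2024
# figures (The Information) and the 2025 revenue projection (The New York Times).

revenue_2024, spend_2024 = 4.0, 9.0
burn_ratio = spend_2024 / revenue_2024
print(f"spent per $1 of revenue in 2024: ${burn_ratio:.2f}")  # $2.25

revenue_2025 = 11.6  # projected
spend_2025 = revenue_2025 * burn_ratio
print(f"projected 2025 spend: ${spend_2025:.1f}B")                # ~$26.1B
print(f"projected 2025 loss:  ${spend_2025 - revenue_2025:.1f}B") # ~$14.5B
```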

It’s also important to note that OpenAI’s costs are partially subsidized by its relationship with Microsoft, which provides cloud compute credits for its Azure service, which is also offered to OpenAI at a discount. Or, put another way, it’s like OpenAI got paid with airmiles, but the airline lowered the redemption cost of booking a flight with those airmiles, allowing it to take more flights than another person with the equivalent amount of points. At this point, it isn’t clear if OpenAI is still paying out of the billions of credits it received from Microsoft in 2023 or whether it’s had to start using cold, hard cash.

Until recently, OpenAI exclusively used Microsoft’s Azure services to train, host, and run its models, but recent changes to the deal mean that OpenAI is now working with Oracle to build out further data centers to do so. The end of the exclusivity agreement is reportedly due to a deterioration of the chummy relationship between OpenAI and Redmond, according to The Wall Street Journal, with the latter allegedly growing tired of OpenAI’s constant demands for more compute, and the former feeling as though Microsoft had failed to live up to its obligations to provide the resources needed for OpenAI to sustain its growth.

It is unclear whether this partnership with Oracle will work in the same way as the Microsoft deal. If not, OpenAI’s operating costs will only go up. Per reporting from The Information, OpenAI pays just over 25% of the cost of Azure’s GPU compute as part of its deal with Microsoft — around $1.30-per-GPU-per-hour versus the regular Azure cost of $3.40 to $4.

On User Numbers

OpenAI recently announced that it has 400 million weekly active users.

Weekly Active Users can refer to any seven-day period in a month, meaning that OpenAI can effectively use any spike in traffic to say that it’s “increased its weekly active users,” because it can choose the best seven-day period in a month. This isn’t to say they aren’t “big,” but these numbers are easy to game.

When I asked OpenAI to define what a “weekly active user” was, it responded by pointing me to a tweet by Chief Operating Officer Brad Lightcap that said “ChatGPT recently crossed 400M WAU, we feel very fortunate to serve 5% of the world every week.” It is extremely questionable that it refuses to define this core metric, and without a definition, in my opinion, there is no way to assume anything other than the fact that OpenAI is actively gaming its numbers.

There are likely two reasons it focuses on weekly active users:

  1. As I described, these numbers are easy to game.
  2. The majority of OpenAI’s revenue comes from paid subscriptions to ChatGPT.

The latter point is crucial, because it suggests OpenAI is not doing anywhere near as well as it seems based on the very basic metrics used to measure the success of a software product.

The Information reported on January 31st that OpenAI had 15.5 million monthly paying subscribers, and immediately added that this was a “less than 5% conversion rate” of OpenAI’s weekly active users — a statement that is much like dividing the number 52 by the letter A. This is not an honest or reasonable way to evaluate the success of ChatGPT’s (still unprofitable) software business, because the actual metric would have to be paying subscribers divided by MONTHLY active users, a number that would be considerably higher than 400 million.

Based on data from market intelligence firm Sensor Tower, OpenAI’s ChatGPT app (on Android and iOS) is estimated to have had more than 339 million monthly active users, and based on traffic data from market intelligence company Similarweb, ChatGPT.com had 246 million unique monthly visitors. There’s likely some crossover, with people using both the mobile and web interfaces, though how big that group is remains uncertain. 

Though every single person that visits ChatGPT.com might not become a user, it’s safe to assume that ChatGPT’s Monthly Active Users are somewhere in the region of 500-600 million.  

That’s good, right? Its actual users are higher than officially claimed? Er, no. First, each user is a financial drain on the company, whether they’re a free or paid user. 

It would also suggest a conversion rate of 2.583% from free to paid users on ChatGPT — an astonishingly bad number, one made worse by the fact that every single user of ChatGPT, regardless of whether they pay, loses the company money.
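
Spelled out, using the 15.5 million paying subscribers reported by The Information and the monthly-user estimate above:

```python
# Conversion-rate arithmetic. The paying-subscriber count is The Information's
# reported figure; the monthly-active-user count is the ~500-600M estimate above.

paying = 15.5e6

wau = 400e6  # OpenAI's claimed weekly active users
print(f"paid / WAU: {paying / wau:.2%}")  # ~3.88%, the source of the 'less than 5%' claim

mau = 600e6  # estimated monthly active users (upper end of the range above)
print(f"paid / MAU: {paying / mau:.3%}")  # ~2.583%, the figure quoted above
```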

It also feeds into a point I’ve repeatedly made in this newsletter, and in my podcast. Generative AI isn’t that useful. If generative AI were genuinely this game-changing technology that makes it possible to simplify your life and your work, you’d surely fork over the $20 monthly fee for unlimited access to OpenAI’s more powerful models. I imagine many of those users are, at best, infrequent, opening up ChatGPT out of curiosity or to do basic things, and don’t have anywhere near the same levels of engagement as with any other SaaS app.

While it's quite common for Silicon Valley companies to play fast and loose with metrics, this particular one is deeply concerning, and I hypothesize that OpenAI choosing to go with Weekly versus Monthly Active Users is an intentional attempt to avoid people calculating the conversion rate of its subscription products. As I will continue to repeat, these subscription products lose the company money.

Mea Culpa: My previous piece focused entirely on web traffic to ChatGPT.com, and did not have the data I now have related to app downloads. Nevertheless, it isn't obvious whether OpenAI is being honest about its weekly active users, because it won't even define how it measures them.

On Product Strategy

  • OpenAI makes most of its money from subscriptions (approximately $3 billion in 2024) and the rest on API access to its models (approximately $1 billion).
  • As a result, OpenAI has chosen to monetize ChatGPT and its associated products in an all-you-can-eat software subscription model, or otherwise make money by other people productizing it. In both of these scenarios, OpenAI loses money.
  • OpenAI's products are not fundamentally differentiated or interesting enough to be sold separately. It has failed — as with the rest of the generative AI industry — to meaningfully productize its models due to their massive training and operational costs and a lack of any meaningful "killer app" use cases.
  • The only product that OpenAI has succeeded in scaling to the mass market is the free version of ChatGPT, which loses the company money with every prompt. This scale isn't a result of any kind of product-market fit. It's entirely media-driven, with reporters making "ChatGPT" synonymous with "artificial intelligence."
    • As a result, I do not believe that generative AI is a "real" industry — which I define as one with multiple competitive companies with sustainable revenue streams and meaningful products with actual market penetration — because it is entirely subsidized by a combination of venture capital and hyperscaler cloud credits.
    • ChatGPT is popular because it is the only well-known product, one that's mentioned in basically every article on artificial intelligence. If this were a "real" industry, other competitors would have similar scale — especially those run by hyperscalers — but as I'll get to later, data suggests that OpenAI is the only company with any significant user base in the entire generative AI industry, and it is still wildly unprofitable and unsustainable.
  • OpenAI's models have been almost entirely commoditized.  Even its reasoning model o1 has been commoditized by both DeepSeek's R1 model and Perplexity's R1 1776 model, both of which offer similar outcomes at a much-discounted price, though it's unclear (and in my opinion unlikely) that these models are profitable to run.
  • OpenAI, as a company, is piss-poor at product. It's been two years and ChatGPT mostly does the same thing as it used to, still costs more to run than it makes, and ultimately does the same thing as every other LLM chatbot from every other generative AI company.
  • Moreover, OpenAI (like every other generative AI model developer) is incapable of solving the critical flaw with ChatGPT, namely its tendency to hallucinate — where it asserts something to be true, when it isn’t. This makes it a non-starter for most business customers, where (obviously) what you write has to be true. 
    • Case in point: A BBC investigation just found that half of all AI-generated news articles have some kind of “significant” issue, whether that be hallucinated facts, editorialization, or references to outdated information. 
    • And the reason why OpenAI hasn’t fixed the hallucination problem isn’t that it doesn’t want to, but that it can’t. Hallucinations are an inevitable side effect of LLMs as a whole.
  • The fact that nobody has managed to make a mass market product built on OpenAI’s models also suggests that the use cases just aren’t there for mass market products powered by generative AI.
  • Furthermore, the fact that API access is such a small part of its revenue suggests that the market for actually implementing Large Language Models is relatively small. If the biggest player in the space only made a billion dollars in 2024 selling access to its models (unprofitably), and that amount is the minority of its revenue, there may not actually be a real industry here.
  • These realities — the lack of utility and product differentiation — also mean that OpenAI can’t raise its prices above the breakeven point, which would also likely make its generative AI unaffordable and unattractive to both business and personal customers. 

Counterpoint: OpenAI has a new series of products that could open up new revenue streams, such as Operator, its “agent” product, and “Deep Research,” its research product.

  • On costs: Both of these products are very compute intensive.
  • On Product-Market Fit:
    • To use Operator or Deep Research currently requires you to pay for ChatGPT Pro, OpenAI’s $200-a-month subscription.
    • As a product, Operator barely works. As I covered a few weeks ago, this product — which claims to control your computer and does not appear to be able to do so consistently — is not even close to ready for prime time, nor do I think it has a market.
    • Deep Research has already been commoditized, with Perplexity and xAI launching their own versions almost immediately.
    • Deep Research is also not a good product. As I covered last week, the quality of writing that you receive from a Deep Research report is terrible, rivaled only by the appalling quality of its citations, which include forum posts and Search Engine Optimized content instead of actual news sources. These reports are neither "deep" nor well researched, and cost OpenAI a great deal of money to deliver.
  • On Revenue
    • Both Operator and Deep Research currently require you to pay for a $200-a-month subscription that loses the company money.
    • Neither product is sold on its own, and while they may drive revenue to the ChatGPT Pro product, as said above, said product loses OpenAI money. 
    • These products are compute-intensive and have questionable outputs, making each prompt from a user both expensive and likely to be followed up with further prompts to get the outputs the user desired. As generative models don't "know" anything and are probabilistically generating answers, they are poor arbiters of quality information.

In summary, both Operator and Deep Research are expensive products to maintain, are sold through an expensive $200-a-month subscription that (like every other service provided by OpenAI) loses the company money, and due to the low quality of their outputs and actions are likely to increase user engagement to try and get the desired output, incurring further costs for OpenAI.

On The Future Prospects for OpenAI

  • A week or two ago, Sam Altman announced the “updated roadmap” for GPT-4.5 and GPT-5.
    • GPT-4.5 will be OpenAI’s “last non-chain-of-thought model,” chain-of-thought being the core functionality of its reasoning models.
    • GPT-5 will be, and I quote Altman, "a system that integrates a lot of our technology, including o3."
      • Altman also vaguely suggests that paid subscribers will be able to run GPT-5 at "a higher level of intelligence," which likely refers to being able to ask the models to spend more time computing an answer. He also suggests that it will "incorporate voice, canvas, search, deep research, and more." 
    • Both of these statements range from vague to meaningless, but I hypothesize the following:
      • GPT-4.5 will be an upgraded version of GPT-4o, OpenAI's foundation model, now codenamed Orion.
      • GPT-5 (which used to be called Orion) could be just about anything, but one thing that Altman mentioned in the tweet is that OpenAI’s model offerings had gotten too complicated, and that it would be doing away with the ability to pick what model you used, gussying this up by claiming this was “unified intelligence.”
      • As a result of doing away with the model picker, I hypothesize that OpenAI will now attempt to moderate costs by picking what model will work best for a prompt — a process it will automate to questionable results.
    • I believe that this announcement is a very bad omen for OpenAI. Orion has been in the works for more than 20 months and was meant to be released at the end of last year, but was delayed due to multiple training runs that resulted in, to quote the Wall Street Journal, "software [that] fell short of the results researchers were hoping for."
      • As an aside, The Wall Street Journal refers to Orion as "GPT-5," but based on the copy and Altman's comments, I believe "Orion" refers to the foundation model. OpenAI appears to be calling a hodgepodge of different other models "GPT-5" now.
      • The Journal further adds that as of December Orion "perform[ed] better than OpenAI’s current offerings, but [hadn't] advanced enough to justify the enormous cost of keeping the new model running," with each six-month-long training run — no matter its efficacy — costing around $500 million. 
      • OpenAI also, like every generative AI company, is running out of high-quality training data necessary to make the model "smarter" (based on benchmarks specifically made to make LLMs seem smart) — and note that "smarter" doesn't mean "new functionality."
      • Sam Altman demoting Orion from GPT-5 to GPT-4.5 suggests that OpenAI has hit a wall with making its next model, requiring him to lower expectations for a model that OpenAI Japan president Tadao Nagasaki had suggested would “aim for 100 times more computational volume than GPT-4,” which some took to mean “100 times more powerful” when it actually means “it will take way more computation to train or run inference on it.”
      • If Sam Altman, a man who loves to lie, is trying to reduce expectations for a product, you should be worried.
    • Large Language Models — which are trained by feeding them massive amounts of training data and then reinforcing their understanding through further training runs — are hitting the point of diminishing returns. In simple terms, to quote Max Zeff of TechCrunch, "everyone now seems to be admitting you can’t just use more compute and more data while pretraining large language models and expect them to turn into some sort of all-knowing digital god."
    • It's unclear what the functionality of GPT-4.5 or GPT-5 will be. Does the market care about an even-more-powerful Large Language Model if said power doesn't lead to an actual product? Does the market care if "unified intelligence" just means stapling together various models to produce outputs?

As it stands, OpenAI has effectively no moat beyond its industrial capacity to train Large Language Models and its presence in the media. It can have as many users as it wants, but it doesn't matter because it loses billions of dollars, and appears to be continuing to follow the money-losing Large Language Model paradigm, guaranteeing it’ll lose billions more.

Is Generative AI A Real Industry?

The Large Language Model paradigm has also yet to produce a successful, mass market product — and no, Large Language Models are not successful or mass market. I know, I know, you’re going to say ChatGPT is huge, we’ve already been through that, but surely, if generative AI were a real industry, there’d be multiple other players with massive customer bases as a result of how revolutionary it was, right?

Right?

Wrong!

Let's look at some estimated numbers from data intelligence firm Sensor Tower (monthly active users on apps) and Similarweb (unique monthly active visitors) for the biggest players in AI in January 2025:

  • OpenAI's ChatGPT: 339 million monthly active users on the ChatGPT app, 246 million unique monthly visitors to ChatGPT.com.
  • Microsoft Copilot: 11 million monthly active users on the Copilot app, 15.6 million unique monthly visitors to copilot.microsoft.com.
  • Google Gemini: 18 million monthly active users on the Gemini app, 47.3 million unique monthly visitors.
  • Anthropic's Claude: Two million (!) monthly active users on the Claude app, 8.2 million unique monthly visitors to claude.ai.
  • Perplexity: Eight million monthly active users on the Perplexity app, 10.6 million unique monthly visitors to Perplexity.ai.
  • DeepSeek: 27 million monthly active users on the DeepSeek app, 79.9 million unique monthly visitors to DeepSeek.com.
    • This figure doesn’t capture DeepSeek’s China-based users, who (at least, on mobile) access the app through a variety of marketplaces. From what I can tell, the DeepSeek app has nearly 10 million downloads on the Vivo store — just one of many Android app marketplaces serving Mainland China, and not even one of the biggest.
    • This isn’t surprising. China is a huge market, and it’s also one that’s incredibly hard for non-Chinese companies to enter, especially when you’re potentially dealing in content that’s incredibly sensitive or prohibited in China. That’s why Western social media and search companies are nowhere to be found in China, and the same is true for AI.
    • For the sake of simplicity, assume that all these numbers mentioned earlier refer to users outside of China, where most — if not all — of the Western-made chatbots are blocked by the Great Firewall. 

To put this in perspective, the entire combined monthly active users of the Copilot, Claude, Gemini, DeepSeek, and Perplexity apps amount to 66 million, or 19.47% of the entire monthly active users of ChatGPT's mobile app. Web traffic slightly improves things (I say sarcastically), with the 161.6 million unique monthly visitors that visited the websites for Copilot, Claude, Gemini, DeepSeek and Perplexity making up 65.69% of all of the traffic that went to ChatGPT.com.

However, I'd argue that including DeepSeek vastly over-inflates these numbers. It’s an outlier, and it’s also a relatively new company that’s enjoying its moment in the sun, basking in the glow of a post-launch traffic spike, and a flood of favorable media coverage. I imagine that when the dust settles in a few months, we’ll get a more reliable idea of its market share and consistent user base. 

Without DeepSeek, the remaining generative AI services made up a total of 39 million monthly active users across their apps, and a grand total of 81.7 million unique monthly web visitors.
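
Those totals and percentages are simple sums over the estimates listed above; here they are reproduced so the arithmetic can be checked:

```python
# Reproducing the market-share sums from the Sensor Tower (app MAU) and
# Similarweb (unique monthly visitors) estimates above, in millions.

apps = {"Copilot": 11, "Claude": 2, "Gemini": 18, "DeepSeek": 27, "Perplexity": 8}
web = {"Copilot": 15.6, "Claude": 8.2, "Gemini": 47.3, "DeepSeek": 79.9, "Perplexity": 10.6}
chatgpt_app, chatgpt_web = 339, 246

print(f"rival apps: {sum(apps.values())}M "
      f"= {sum(apps.values()) / chatgpt_app:.2%} of ChatGPT's app MAU")   # 66M, 19.47%
print(f"rival web:  {sum(web.values()):.1f}M "
      f"= {sum(web.values()) / chatgpt_web:.2%} of ChatGPT.com traffic")  # 161.6M, 65.69%

# With DeepSeek's post-launch spike excluded:
print(f"ex-DeepSeek apps: {sum(apps.values()) - apps['DeepSeek']}M")    # 39M
print(f"ex-DeepSeek web:  {sum(web.values()) - web['DeepSeek']:.1f}M")  # 81.7M
```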

Without ChatGPT, it appears that the entire generative AI app market is a little less than half the size of Pokémon Go at its peak, when it had 147 million monthly active users. While one can say I missed a few apps — xAI's Grok, Amazon's Rufus, or Character.ai — there isn't a chance in hell they cover the shortfall.

These numbers aren't simply piss poor, they're a sign that the market for generative AI is incredibly small, and based on the fact that every single one of these apps only loses money, is actively harmful to their respective investors or owners.

I do not think this is a real industry, and I believe that if we pulled the plug on the venture capital aspect tomorrow it would evaporate.

On API Calls

Another counter to my argument is that API calls are a kind of “hidden adoption” — that there is this massive swell of engaged, happy customers using generative AI that aren’t using any of the major apps, and that the connection to these models is the real secret success story.

This isn’t the case.

OpenAI, as I’ve established, is the largest player in generative AI, making more revenue (roughly $4 billion in 2024, though it lost $5 billion after revenue — again, OpenAI lost $9 billion in 2024) than any other private AI company. The closest I can get to an estimate of how many developers actually integrate its models into their applications is a statement from its October 2024 DevDay, where OpenAI said over three million developers are building apps using its models.

Again, that’s a very fuzzy — and unreliable — metric. I imagine a significant chunk of those developers are hobbyists working on personal projects, or simply playing around with the service out of sheer curiosity, spending a few bucks to write the generative AI equivalent of “Hello World,” and then moving on with their lives. Those developers actually using OpenAI’s APIs in actual commercial projects likely represent a vanishingly small percentage of that three million.

As I’ve discussed in the past, OpenAI’s revenue is heavily weighted toward its subscription business, with licensing access to models like GPT-4o making up less than 30% (around $1 billion) of its revenue, and subscriptions to its premium products (ChatGPT Plus, Teams, Business, Pro, the newly-released Government plan, etc.) making up the majority — around $3 billion in 2024.

My argument is fairly simple. OpenAI is the most well-known player in generative AI, and thus we can extrapolate from it to draw conclusions about the wider industry. In the event that there was a huge, meaningful industry integrating generative AI into distinct products with mass-market consumer adoption, OpenAI’s API business would be doing far, far more revenue.

Let me be a little more specific about why API calls matter.

When a business plugs OpenAI’s models into its apps and a customer triggers a feature that uses it — such as asking the app to summarize an email — OpenAI charges the business both for the prompt (the input) and the result (the output). As a result, where “weekly active users” might be indicative of attention to OpenAI’s products, API calls are far more indicative of consumer and enterprise adoption.
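
As a concrete sketch of that billing model: the caller is metered on both directions of traffic. The per-token rates below are hypothetical placeholders, not OpenAI's actual prices, which vary by model and change over time.

```python
# Sketch of token-metered API billing: the business pays for the prompt (input)
# and the result (output). Rates are HYPOTHETICAL, not OpenAI's published prices.

INPUT_RATE = 2.50 / 1_000_000    # $ per input token (assumed)
OUTPUT_RATE = 10.00 / 1_000_000  # $ per output token (assumed)

def api_call_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of a single API call under the assumed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. summarizing a longish email: ~2,000 tokens in, ~150 tokens out
print(f"${api_call_cost(2_000, 150):.4f} per summary")  # ~$0.0065
```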

To be clear, I acknowledge that there are a lot — a non-specific amount, but a fair amount — of app developers and companies adopting generative AI. However, judging by both the revenue of OpenAI’s developer-focused business and the lack of any real revenue for businesses integrating generative AI, I hypothesize that customers — which include developers integrating OpenAI’s models into both consumer-facing apps and enterprise-focused apps — are not actually using these features that much.

I should also add that OpenAI makes about $200 million a year selling its models through Microsoft, meaning that its API business may be as small as $800 million. Again, this is not profit, it is revenue.

Sidebar: There is, of course, an alternative: that OpenAI is charging way, way less for its models than it should — an argument I made in The Subprime AI Crisis last year — but accepting this argument means that at some point OpenAI will either have to become profitable (it has shown no signs of doing so) or charge the actual cost of operating its unprofitable models.

How Bad Is This?

For Anthropic, It's Pretty Disastrous

The Information reported last week that Anthropic has projected (made up) that it will make at least $12 billion in revenue in 2027, despite making $918 million in 2024 and losing $5.6 billion somehow.

Anthropic is currently raising $2 billion at a $60 billion valuation for a business that loses billions of dollars a year with an app install base of 2 million people and a web presence smaller than some niche hobbyist news outlets.

Based on reporting from The Information two weeks ago, Anthropic made approximately $918 million in 2024 (and lost $5.6 billion), with CNBC reporting that 60-75% of that revenue came from API calls (though that number was from September 2024). In that respect, it’s the reverse of OpenAI — which, itself, points to the relative obscurity of Anthropic and the fact that OpenAI has become accepted as the default consumer entrypoint to generative AI.

This company is not worth $60 billion.

Anthropic has raised $14.7 billion to create an also-ran Large Language Model company that some developers like more than OpenAI, with a competing consumer-facing Large Language Model (Claude) whose install base is maybe 2% of that of the five free-to-play games made by Clash of Clans developer Supercell.

Anthropic, much like OpenAI, has categorically failed to productize its Large Language Model, with the only product it appears to have pushed being Computer Use, a similarly useless AI model that can sometimes successfully do in minutes what you can do in seconds using a web browser.

Anthropic, like OpenAI, has no moat. While it provides chain-of-thought reasoning in its models, that too has been commoditized by DeepSeek. Its models, again like OpenAI, are unprofitable, unsustainable and heavily-dependent on training data that's either running out or has already run out.

Its CEO is also a sleazy conman who, like Sam Altman, continually promises that his company's AI systems will become powerful and autonomous in a way that they have never shown any possibility of becoming.

Any investor in Anthropic needs to seriously consider what it is they're investing in. Anthropic has, other than iterating on its Large Language Model Claude, shown little fundamental differentiation from the rest of the industry.

Anthropic's business, again like OpenAI, is entirely propped up by venture capital and hyperscaler (Google, Amazon) money, and without it would die almost immediately, because it has only ever lost money.

Its products are both unpopular and commoditized, and it lost $5.6 billion last year! Stop dancing around this fact! Stop it!

For Perplexity, Who Cares?

Perplexity, a company valued at $9 billion toward the end of 2024, has eight million people a month using its app, with the Financial Times reporting it has a grand total of 15 million monthly active users for its unprofitable search engine. Perplexity, like every generative AI company, only ever loses money, and its product — generative AI-powered search — is so commoditized that it's actually remarkable the company still exists. 

Other than a slick design, there is little to be excited about here — and 8 million monthly active users is a pathetic, embarrassing number for a company with the majority of its users on mobile.

Aravind Srinivas is a desperate man with questionable intentions who made a half-hearted offer to merge with TikTok in January, and whose product rips off journalists to spit out mediocre content.

Any investor in Perplexity needs to ask themselves — what is it I'm investing in? An unprofitable search engine? An unprofitable Large Language Model company? A company that has such poor adoption of its product that it was prepared to become the shell corporation for TikTok?

Personally, I'd be concerned about the bullshit numbers it keeps making up. The Information reported that Perplexity said it would make $127 million in 2025, and $656 million in 2026.

How much money did it make in 2024? Just over $56 million! Is it profitable? Hell no!

Its product is commoditized, and it makes less than a quarter of the revenue of the Oakland Athletics in 2024, though its app is marginally more popular.

It's time to stop humoring these companies!

For The Hyperscalers, Apocalyptic

The Wall Street Journal reports that Microsoft intends to spend $93.7 billion on capital expenditures in 2025 — or roughly $8,518 per monthly active user on the Copilot app in January 2025. Those figures, however, may already be out of date with Bloomberg reporting the company is cancelling some leases for AI data centers. If true, it would suggest the company is pulling back from its drunken AI spending binge — although it’s not clear to what extent.  

Sidenote: For what it’s worth, Microsoft responded by saying it stands by its original capex plans, although it “may strategically pace or adjust [its] infrastructure in some areas.” Take from that what you will, while also noting that a plan isn’t the same as a definitive commitment, and that the company paused construction on a data center in January that was reportedly intended to support OpenAI. It’s also worth noting that as part of these cuts, Microsoft has pulled back from so-called statements of qualifications (the financial rundowns that say how it intends to pay for a lease, potentially including financing terms), a document that serves as a precursor to future data center agreements. In short, it may have pulled out of further data centers it hadn’t fully committed to.

Google is currently planning to spend $75 billion on capital expenditures, or roughly $4,167 per monthly active user of the Gemini app in January 2025. Sundar Pichai wants Gemini to be "used by 500 million people before the end of 2025," a number so unrealistic that someone at Google should have been fired, and that someone is Sundar Pichai.
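
Both per-user figures are just planned capex divided by January 2025 app users:

```python
# Capex-per-app-user arithmetic from the figures above: planned 2025 capital
# expenditures divided by January 2025 monthly active users of each app.

msft_capex, copilot_mau = 93.7e9, 11e6
goog_capex, gemini_mau = 75e9, 18e6

print(f"Microsoft: ${msft_capex / copilot_mau:,.0f} per Copilot app user")  # ~$8,518
print(f"Google:    ${goog_capex / gemini_mau:,.0f} per Gemini app user")    # ~$4,167
```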

The fact of the matter is that if Google and Microsoft can't make generative AI apps with meaningful consumer penetration, this entire industry is screwed. There really are no optimistic ways to look at these numbers (and yes, I'm repeating from the above):

  • Microsoft Copilot: 11 million monthly active users on the Copilot app, 15.6 million unique monthly visitors to copilot.microsoft.com.
  • Google Gemini: 18 million monthly active users on the Gemini app, 47.3 million unique monthly visitors.

These are utterly pathetic considering Microsoft and Google's scale, especially given the latter's complete dominance over web search and the ability to funnel customers to Gemini. For millions — perhaps billions — Google is the first page they see when they open a web browser. It should be owning this by now. 

47.3 million unique monthly visitors is a lot of people, but considering that Google spent $52.54 billion in capital expenditures in 2024, it's hard to see where the return is, or even see where a return could possibly be.

Google, like most companies, does not break out revenue from AI, though it loves to say stuff like "a strong quarter was driven by our leadership in AI and momentum across the business." As a result of its unwillingness to share hard numbers, all we have to look at are numbers like those I've received from Similarweb and Sensor Tower, and it's fair to suggest that Gemini and its associated products have been a complete flop.

Worse still, it spent $127.54 billion in capital expenditures in 2023 and 2024 combined, with an estimated $75 billion forecast for 2025. What the fuck is going on?

Yes, it is likely making revenue from people running generative AI models on Google Cloud, and yes, it is likely making revenue from forcing AI upon Google Workspace customers. But Google, like every single other generative AI player, is losing money on every single generative AI prompt, and based on these monthly active user numbers, nobody really cares about Gemini.

Actually, I take that back. Some people care about Gemini — not that many, but some! — and it's far more fair to say that nobody cares about Microsoft Copilot despite Microsoft shoving it in every corner of our lives. 11 million monthly active users for its unprofitable, heavily-commoditized Large Language Model app is a joke — as are the 15.6 million unique monthly visitors to its web presence — probably because it does exactly the same shit that every other LLM does.

Microsoft's Copilot app isn't just unpopular, it's irrelevant. For comparison, Microsoft Teams has, according to a post from the end of 2023, over 320 million monthly active users. That’s more than ten times the combined monthly active users of the Copilot app and visitors to the Copilot website, and unlike Copilot, Teams actually makes Microsoft money.

Now, I obviously don't have the numbers on the people that accidentally click the Copilot button in Microsoft Office or on Bing.com, but I do know that Microsoft isn't making much money on AI at all. Microsoft reported in its last earnings that it was making "$13 billion of annual revenue" — a projected number based on current contracts — on its "artificial intelligence products."

Now, I've made this point again and again, but revenue is not the same thing as profit, and Microsoft does not have an "artificial intelligence" part of its earnings. These numbers are cherry-picked from across the entire suite of Microsoft products — such as selling Copilot add-ons to its Microsoft 365 enterprise suite (The Information reported in September 2024 that Microsoft had only sold Copilot to around 1% of its 365 customers), selling access to OpenAI's models on Azure (roughly a billion dollars in revenue), and people running their own models on Microsoft Azure.

For context, Microsoft made $69.63 billion in revenue in its last quarter. $13 billion of annual revenue (NOT profit) is about $3.25 billion in quarterly revenue off of upwards of $200 billion of capital expenditures since 2023.
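To make the run-rate math explicit, here's a quick sketch using only the two figures above. (The share-of-revenue number at the end is my own derived, illustrative comparison, not something Microsoft reports.)

```python
# "Annualized" revenue is a run-rate: take the current pace and multiply
# it out to a year. Working backwards from Microsoft's claimed figure:
annualized_ai_revenue = 13e9                      # Microsoft's claim
quarterly_ai_revenue = annualized_ai_revenue / 4  # ~$3.25B a quarter

last_quarter_revenue = 69.63e9                    # Microsoft's last quarter
ai_share = quarterly_ai_revenue / last_quarter_revenue

print(f"AI is roughly {ai_share:.1%} of Microsoft's quarterly revenue")  # ~4.7%
```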

The fact that neither Gemini nor Copilot has any meaningful consumer penetration isn't just a joke. It should be setting off alarm bells throughout Wall Street. While Microsoft and Google may make money outside of consumer software, both companies have desperately tried to cram Copilot and Gemini down consumers' throats, and they have categorically, unquestionably failed, all while burning billions of dollars to do so.

"BUT ED, WHAT ABOUT GITHUB COPILOT."

According to a report from the Wall Street Journal from October 2023, Microsoft was losing on average more than $20 a month per user on GitHub Copilot, with some users costing it more than $80 a month. Microsoft said a year later that GitHub Copilot had 1.8 million paid customers, which is pretty good, except that, like all generative AI products, it loses money.
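For scale, here's a hypothetical sketch of what those two numbers imply if you assume (and this is an assumption, since the figures are a year apart) that the 2023 per-user losses still held:

```python
# Hypothetical arithmetic only: the $20/month loss is from the WSJ's
# October 2023 report, and the 1.8 million paid customers figure came a
# year later, so combining them is an assumption, not a reported number.
paid_users = 1.8e6
loss_per_user_per_month = 20

annual_loss = paid_users * loss_per_user_per_month * 12
print(f"Implied loss: ~${annual_loss / 1e6:.0f} million a year")  # ~$432 million
```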

I must repeat that Microsoft will have spent over $200 billion in capital expenditures by the end of 2025. In return, it got 1.8 million paying customers for a product that — like everything else I'm talking about — is heavily-commoditized (basically every LLM can generate code, though some are better than others, by which I mean they all introduce security issues into your code, but some produce stuff that’ll actually compile) and loses Microsoft money even when the user pays.

Am I getting through to you yet? Is it working?

On The Prevalence of “AI”

One of the arguments people make is that “AI is everywhere,” but it’s important to remember that the prevalence of AI is not proof of its adoption; it’s proof of companies’ intent to shove it into everything. The same goes for “businesses integrating AI,” which often just means mandating that people dick around with Copilot or ChatGPT.

No, really: KPMG bought 47,000 Microsoft Copilot subscriptions last year (at a significant discount) “to be familiar with any AI-related questions [its] customers may have.” Management consultancy PwC bought 100,000 enterprise subscriptions (becoming OpenAI’s largest customer in the process, as well as its first reseller) and has created its own internal generative AI called ChatPwC that PwC staff absolutely hate.

While you may “see AI everywhere,” integrations of generative AI are indicative of the decision-making of the management behind the platforms and the demands of “the market” more than any consumer demand. Enterprise software is more often than not sold in bulk to managers or C-suite executives tasked less with company operations and more with seeming to be “at the forefront of technology.”

In practical terms, this means there’s a lot of demand to put AI into software, and some demand from the enterprises buying that software to have AI in it, but little evidence to suggest significant user adoption or usage. I’d argue that’s because Large Language Models do not lend themselves to features that provide meaningful business returns.

Where Large Language Models Work

To be clear, and to deal with the “erm, actually” responses, I am not saying Large Language Models have no use cases or no customers.

People really do use them for coding, for generating draft materials, for brainstorming, and for summarizing and searching defined libraries of documents. These are useful, but they are not magical.

These are also — and I do not believe there are any use cases that justify this — not a counterbalance for the ruinous financial and environmental costs of generative AI. It is the leaded gasoline of tech, where the boost to engine performance didn’t outweigh the horrific health impacts it inflicted.

On “Agents”

When a company uses the term “agent,” it is intentionally trying to be deceitful, because the term “agent” means “autonomous AI that does stuff without you touching it.” The problem with this definition is that everybody has used it to refer to “a chatbot that can do some things while connected to a database,” which is otherwise known as a chatbot.

In OpenAI and Anthropic’s case, “agents” refer to a model that controls your computer and performs tasks based on a prompt. This is closer to “the truth,” other than the fact it’s so unreliable as to be disqualifying, and the tasks it succeeds at (like searching on Tripadvisor) are remarkably simple. 
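To make that concrete, here's a minimal, hypothetical sketch of what most “agent” products amount to under the hood: a chatbot loop with a couple of predefined tool calls. Every name in it is made up for illustration; I'm not describing any specific vendor's implementation.

```python
# A minimal, hypothetical sketch of a typical "agent": a chatbot that can
# call a few predefined functions against a database. Nothing autonomous
# is happening here; every name below is invented for illustration.

def lookup_order(order_id: str) -> str:
    # Stand-in for a real database or API call.
    return f"Order {order_id}: shipped"

TOOLS = {"lookup_order": lookup_order}

def agent(user_message: str) -> str:
    # 1. The model decides which tool to call (stubbed as a canned choice,
    #    because that decision is the only part the LLM actually provides).
    decision = {"tool": "lookup_order", "args": {"order_id": "12345"}}
    # 2. Plain old code runs the chosen function.
    result = TOOLS[decision["tool"]](**decision["args"])
    # 3. The model wraps the result in conversational text.
    return f"I checked for you. {result}."

print(agent("Where's my order?"))
```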

Next time you hear the term “agent,” actually look at what the product does. 

On Artificial General Intelligence

Generative AI is probabilistic, and Large Language Models do not “know” anything, because they are guessing what the next part of a particular output would be based on an input. They are not “making decisions.” They are probability machines, which in turn makes them only as reliable as probability can be, and as conscious — no matter how intricate a system may be or how much infrastructure is built — as a pair of dice. 
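If that sounds abstract, here's a toy sketch of the mechanism with an invented four-word vocabulary and invented probabilities. Real models do this over tens of thousands of tokens with probabilities computed by a neural network, but the principle (weighted dice) is the same:

```python
import random

# Toy illustration of next-token prediction: given some context, a model
# assigns a probability to every candidate token and samples one. The
# vocabulary and probabilities below are invented for illustration.

context = "the cat sat on the"
next_token_probs = {
    "mat": 0.55,
    "sofa": 0.25,
    "roof": 0.15,
    "moon": 0.05,
}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())

# No knowledge, no decision: just weighted dice.
print(context, random.choices(tokens, weights=weights, k=1)[0])
```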

We do not understand how human intelligence works, and as a result it’s laughable to imagine we’d be able to simulate it. Large Language Models do not create “artificial intelligence” — they are the most powerful parrots in the world, trained to respond to stimuli with what they guess is the correct answer.

In simpler terms, imagine you built a mechanical arm that threw a bouncy ball down a hallway, and got really, really good at dialing it in so that the ball followed a fairly exact trajectory. Would you consider the arm intelligent? How about the ball?

The point I am making is that Large Language Models — a cool concept with some interesting things they can do — have been used as a cynical marketing vehicle to raise money for OpenAI by lying about what they’re capable of doing, starting with calling them “artificial intelligence.” 

No, Really, Where's The Money?

Revenue is not the same as profit.

I'll say it again — revenue is not the same as profit.

And even then, Google, Amazon, and (to an extent) Microsoft, the companies making the biggest investments in AI, do not want to state what that revenue is. I hypothesize that the reason they do not want to disclose it is that it’s pretty god damn small.

It is extremely worrying that so few companies are willing to directly disclose their revenue from selling services that are allegedly revolutionary. Why? Salesforce says it closed “200 AI related deals” in its last earnings. How much money did it make? Why does Google get away with saying it has “growing demand for AI” without clarifying what that means? Is it because nobody is making that much money? 

Sidebar: I can find — and I’ve really looked! — one company that appears to be making a profit from generative AI: Turing, a consultancy that helps generative AI companies find people to train their models, which made $300 million in revenue in 2024 and reached an indeterminate amount of profitability.

While Microsoft may “disclose” that it “made $13 billion in AI revenue,” that’s annualized — projected based on current contracts rather than booked revenue — and it does not break out the specific line items, as one would if said line items were not going to make the markets say “hey, what the fuck?”

Put aside whatever fantastical beliefs you may have about the future and tell me, right now, what business use case exists that justifies burning hundreds of billions of dollars, damaging our power grid, hurting our planet, and stealing from millions of people?

Even if you can put troublesome things like “morals” or “the basic principles of finance” aside, can AI evangelists not see that their dream is failing? Can they not see that nothing is really happening? That generative AI, at best, can be kind of cool yet mostly sucks and comes at an unbearable moral, financial and environmental cost? Is any of this really worth it?

And where exactly does this end? Do you truly, gun to your head, your life contingent on the truth leaving your lips, believe that this goes much further than you see today?

Do you not see that this kind of sucks? Do you not see that generative AI runs contrary to the basic tenets of what makes science fiction cool? It doesn’t make humans better; it reduces their work to stagnant, unremarkable slop in every way it can, erodes the cognition of those who come to rely on it, and costs hundreds of billions of dollars and a return to fossil fuels for some reason.

It isn’t working. The users aren’t there. The revenue isn’t there. The best time to stop this was two years ago, and the next best time is as soon as humanly possible.

I have said in the past that generative AI is a group delusion, and I repeat that claim today. What you are seeing in the news is not the “success” of the artificial intelligence industry, but a runaway narrative created and sustained by Sam Altman and OpenAI.

What you are watching is not a revolution, but a repetitious public relations campaign for one company that accidentally timed the launch of ChatGPT with a period of deep desperation in big tech, one so profound that it will likely drag half a trillion dollars’ worth of capital expenditures along with it.

This bubble will only burst when either the markets or the hyperscalers accept that they have chased their own tails toward oblivion. There is no justification for any of the capital expenditures related to generative AI — we are approaching the limit of what the transformer-based architecture can do, if we haven’t already reached it. No amount of beating off about test-time compute and connecting Large Language Models to other Large Language Models is going to create a new use case for this technology, and even if it did, it’s unlikely that it ever makes enough money to make it profitable.

I will keep writing this stuff until I’m proven wrong. I do not know why more people aren’t worried about this. The financials are truly damning, the user numbers so small as to be insignificant, and the costs so ruinous that they will likely cost tens of thousands of people their jobs (and at least one hyperscaler CEO theirs, though, admittedly, I’m less upset about that) and inflict damage on tech valuations that may rival the dot-com bust.

And if the last point feels distant to you, ask yourself: What’s in your retirement savings? That’s right. Google and Microsoft, and hundreds of other companies that will be hurt by the contagion of an AI bubble imploding, just as they were in the 2008 financial crash, when the failure of the banking system trickled down into the wider economy. 

I should also not be the person saying this, or at least I should not be the first. These numbers are horrifying, and I have no idea why nobody else is worried. There is no industry here. There is no money. There is no proof that this will ever turn into a real industry, and far more proof that it will cost more money than it will ever make in perpetuity. 

OpenAI and Anthropic are not real companies — they are free-riders, living on venture-backed welfare for an indeterminate amount of time because the entire tech industry has agreed to rally around the world’s most unprofitable software. And like any free rider that doesn’t actually produce anything, when the money goes away, they’re fucked. 

Seriously, why are investors funding OpenAI? Do they seriously believe it’s necessary to let Sam Altman and OpenAI continue to burn $5 billion or more a year on the off chance he’s able to create something that’s…alive? Profitable? What’s the endpoint here? How many more billions? Where is the fucking money, Sam Altman? Where is the god damn money?

Because generative AI is OpenAI. The consumer adoption of this software has completely failed, and appears to be going nowhere fast. ChatGPT is sustained entirely on deranged, specious hype drummed up by a media industry that thinks it’s more remarkable to write down the last lie Sam Altman told than to say that OpenAI has lost $9 billion in the last year and intends to more than double that number in 2025 for absolutely no reason.

It is time to stop humouring OpenAI, and time to start directly stating that it is a bad business without a meaningful product. The generative AI industry does not exist without OpenAI, and thus this company must justify its existence.

And let’s be abundantly clear: OpenAI cannot exist any further without further venture capital investment. This company has absolutely no path to sustain itself, no moat, and loses so much money that it will need more than $50 billion to continue in its current form.

I don’t know how I’m wrong, and I have sat and thought a great deal about how I might be. I can find no compelling arguments. I don’t know what to do but tell you what I think, and why I think that way, and hope that you, the reader, understand a little bit more about what I think is going on.


I’ll leave you with one thought — and one particular thing that bothers me about generative AI.

Regular people, for the most part, do not seem to want this. While there are occasional people I’ll meet who use ChatGPT to rewrite part of an email, most of the people I meet feel like AI was forced into their lives. 

With that in mind, I believe that Apple is radicalizing millions of people against generative AI by forcing them to reckon with the terrible summaries, awful suggested texts and horribly-designed user interface elements of Apple Intelligence. 

Something about generative AI has caused the hyperscalers to truly lose it, and the intrusion of generative AI into both Microsoft Office and Google Docs has turned just about everybody I know in the business world against it. 

The resentment boiling against this software is profound because the tech industry has become desperate and violative, showing such contempt for their customers that even Apple will force an inferior experience upon them to please the will of the Rot Economy and the growth-at-all-cost mindset of the markets. 

Let’s be frank: nobody really needs anything generative AI does. Large Language Models hallucinate too much to be truly reliable, a problem that will require entire new branches of mathematics to solve, and their most common consumer-facing functions like summarizing an article, “practicing for a job interview,” or “write me a business plan” are not really things people need or massively benefit from, even if these things weren’t ruinously expensive or damaging to the environment. 

I believe regular people are turning on the tech industry thanks to its frenzied attempts to make us all buy into its latest bad idea.

Yet it isn’t working. Consumers don’t want this shit. They’re intrigued by the idea, then mostly bounce off of it immediately once they see what it can (or can’t) do. This software is being forced on people at scale by corporations desperate to seem futuristic without any real understanding as to why they need it, and whatever use cases may exist for Large Language Models are dwarfed by how utterly unprofitable this whole fiasco is.

I want you to remember the names Satya Nadella, Tim Cook, Mark Zuckerberg, Sam Altman, Dario Amodei and Sundar Pichai, because they are the reason that this farce began and they must be the ones who are blamed for how it ends.