Confessions of an AI Skeptic, Part 4 (of 5)

(Part 1, Part 2, Part 3)

Last time out, the discussion focused on AI’s appetites for compute power and energy – appetites difficult to capture with any available superlative.  All that compute power costs money.  So does the electricity to run the computers and the infrastructure needed to keep them cool.  To run LLMs like ChatGPT, AI companies spend so profligately they could embarrass a U.S. congressman during budget negotiations.  And like the spending of our own government (assuming you’re in the U.S.), the spending on AI is unsustainable.

Where Are the Profits?

Since generative AI burst onto the scene with the public release of the now commonly used AI platforms, the hype has spun out of control.  Google CEO Sundar Pichai even went so far as to say that AI is a more profound technological development than fire or electricity (a smart-aleck yet astute commenter on YouTube asked what happens to AI when we cut off the electricity – touché).  Part of me thinks he truly believes that ridiculous assertion, given his delivery.  But a more cynical side of me thinks he and others are feeding the hype cycle to keep the investment dollars coming in – dollars desperately needed to keep the AI train rolling.

One recent financial example is instructive as to why they need the hype to keep investors interested.  Chipmaker NVIDIA, the premier maker of the processors used for AI workloads, recently committed to invest $100 billion in OpenAI (through 2030), the creators of ChatGPT.  For the same period, OpenAI committed to buying $300 billion of cloud compute from Oracle.  Meanwhile, Oracle committed to buying $40 billion worth of chips from NVIDIA.  Now, I’m not exactly a business titan, but by my back-of-the-envelope math, Oracle is the only company making out in this arrangement – assuming the deals ever complete.  OpenAI, in the first half of 2025, had $4.3 billion in revenue but still posted a $13.5 billion loss.  That comes on the heels of a $5 billion loss in 2024.  Not exactly moving in the right direction.  How, then, will OpenAI come up with the $300 billion for the deal with Oracle?  Investors aren’t going to absorb losses forever.
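The back-of-the-envelope math is easy to sketch.  Using only the committed figures quoted above (a rough sanity check of the circular money flow, not a financial model):

```python
# Committed figures from the deals described above, in USD billions.
nvidia_to_openai = 100   # NVIDIA's investment in OpenAI (through 2030)
openai_to_oracle = 300   # OpenAI's cloud-compute purchases from Oracle
oracle_to_nvidia = 40    # Oracle's chip purchases from NVIDIA

# Net position of each party if every commitment completes:
net = {
    "NVIDIA": oracle_to_nvidia - nvidia_to_openai,   # receives 40, pays 100
    "OpenAI": nvidia_to_openai - openai_to_oracle,   # receives 100, pays 300
    "Oracle": openai_to_oracle - oracle_to_nvidia,   # receives 300, pays 40
}
winners = [company for company, n in net.items() if n > 0]
print(net)      # {'NVIDIA': -60, 'OpenAI': -200, 'Oracle': 260}
print(winners)  # ['Oracle']
```

Only Oracle comes out net positive on these commitments – everyone else is counting on future revenue that, as the loss figures above show, does not yet exist.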

The problem with AI companies is that they cannot bring in the revenue to cover their costs.  Right now, OpenAI allows free access to ChatGPT, but with some fairly strict limits.  There is a paid plan for $20/month, and while it is not as limited as the free version, it has limits nonetheless.  This revenue structure doesn’t come close to covering their costs.  This video speculates that OpenAI would have to raise prices to $500/month to cover them.  Who the heck is going to pay $500/month for ChatGPT??  That is not a path to profitability, and this cost structure is not exclusive to OpenAI.

Now, one could point to a company like Amazon, which took a long time to report a profit (in large part because it was pumping revenue back into investments).  But all the while Amazon was losing money, it was making customers happy, selling products customers wanted, and providing unparalleled convenience.  Is AI making customers happy?  According to this MIT report, 95% of companies – 95%!!! – are seeing no return on their AI investments.

Fact is, AI is in a huge bubble right now.  Even OpenAI boss Sam Altman – in a rare display of honesty rather than ridiculous hype – thinks AI is due for an implosion.  Another fact is that AI is a very long way from being profitable.  That’s hardly a surprise, given AI’s insatiable appetite for compute and the power and infrastructure to run it, a lot of unhappy customers, and little if any agreement on what makes a good business case for AI.  Read this for a good overview of where things stand.  If you want a video explanation, watch this.

Much of this bubble was preventable if the tech bros and others hadn’t hyped AI to the moon without knowing whether they could deliver.  The first problem is the “I” in AI, as discussed in previous installments of this series.  What we call AI is most decidedly not intelligent, general or otherwise.  Further, it was hyped without full disclosure of its appetite for resources, and thus money.  Maybe if, instead of calling it AI, they had simply called their software by its true name – large language models (LLMs) – and admitted that it mimics human intelligence without truly being intelligent, the ridiculous sums of money would never have been invested, and maybe “progress” would have been slower.  But we also wouldn’t have this bubble that is going to burst and cause a lot of people to take a bath – and not the kind that leaves them feeling clean and refreshed.  For a lot of people, it’s going to get ugly.

The financial status of AI and the bubble created around it reveal what might be the most critical barrier to all the hyped-up predictions, whether those predictions are doom and gloom or naïve utopianism – money.  AI is expensive.  Very expensive.  The money needed to provide the compute resources, energy, and other utilities is staggering, and because of the limits of computer technology I discussed previously, this picture is not going to improve.  For companies like OpenAI to become profitable, they would have to raise prices on their paid plans by an order of magnitude.  And that, of course, would lead to a rapid decline in usage and worsen OpenAI’s already bleak financial picture.

Even worse (for AI companies), AI has, unlike most technologies, become more expensive as it has advanced.  OpenAI trained its GPT-3 model for about $4.6 million.  Its next big advancement, GPT-4, was trained for an estimated $80 million to $100 million.  GPT-5 (which was supposed to be a huge advancement over GPT-4 – it was anything but) was trained for an estimated $1.25 billion to $2.5 billion.  Thus, training GPT-4 cost 1-2 orders of magnitude more than GPT-3, while training GPT-5 cost roughly another order of magnitude more than GPT-4 (and nearly three orders of magnitude more than GPT-3).
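The orders-of-magnitude claims above are easy to check.  Using the estimated cost figures just quoted (and only those figures):

```python
import math

# Estimated training costs quoted above, in USD; GPT-4 and GPT-5 are ranges.
gpt3 = 4.6e6
gpt4_lo, gpt4_hi = 80e6, 100e6
gpt5_lo, gpt5_hi = 1.25e9, 2.5e9

def oom(new, old):
    """Orders of magnitude between two costs: log10 of their ratio."""
    return math.log10(new / old)

oom_4_vs_3 = (oom(gpt4_lo, gpt3), oom(gpt4_hi, gpt3))      # ~(1.2, 1.3)
oom_5_vs_4 = (oom(gpt5_lo, gpt4_hi), oom(gpt5_hi, gpt4_lo))  # ~(1.1, 1.5)
oom_5_vs_3 = (oom(gpt5_lo, gpt3), oom(gpt5_hi, gpt3))      # ~(2.4, 2.7)

print(oom_4_vs_3, oom_5_vs_4, oom_5_vs_3)
```

GPT-4 came in at roughly 1.2-1.3 orders of magnitude over GPT-3, GPT-5 at roughly another order over GPT-4, and GPT-5 at nearly three orders over GPT-3 – each generation roughly ten times (or more) costlier than the last.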

Even if a compute technology more suitable for AI existed, adopting it would take a paradigm shift at the most fundamental level – away from silicon-based computing and the von Neumann architecture that underlies virtually every computer, from the room-sized ENIAC of yesteryear to the smartphone you carry in your pocket today.  With trillions of dollars invested in the current paradigm, nobody is going to be eager to suddenly abandon it and shift to a new one.  Practically speaking, it would be impossible.  But that’s a moot point anyway, because no such alternative paradigm exists.

AI is, quite simply, financially unsustainable.  And the prospects for that changing any time soon are virtually nil.

The AI Apocalypse Will Not Be Televised (Because it’s Not Going to Happen)

The discussion of AI’s financial picture leads me to another product of AI hype – the doomsday scenarios in which everybody loses their jobs and the world is ruled by a few evil megacorporations that have all the money, while the rest of us live as feral beings scrounging for scraps and barely scraping by on universal basic income (UBI).  Scenarios that, subjected to any scrutiny, are quickly exposed as beyond ridiculous.

Let’s put this to the test with a hypothetical.  Assume that here in the U.S., AI, in a relatively short time (maybe 5 years or so), eliminated 125 million jobs – a little over 75% of the approximate number of people currently employed in this country.  Such a loss of jobs would trigger a massive economic depression, one that would absolutely dwarf the Great Depression of the 1930’s (when U.S. unemployment peaked around 25%).  With so many people out of work, revenues to all businesses, evil megacorporations included, would plummet.  In such an economic shock, many businesses would close.  And recall from above and from previous installments of this series the insatiable appetite of AI for compute resources, power, and thus money.  With collapsing revenues, how are corporations (assuming they can even stay in business) going to pay the huge costs of running all the AI necessary to replace the employees who were put out of work?  And even if all this AI could be kept running, where would the demand for its output come from when 75% of people are jobless?  Why would any business pay for AI to produce all that output when there will never be enough consumers to cover its staggering costs in the first place?
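The scale of the hypothetical is worth making concrete.  A quick sketch, assuming roughly 161 million employed Americans (an approximation on my part, consistent with the “a little over 75%” figure above, not an official statistic):

```python
# Rough arithmetic behind the hypothetical above.
employed = 161e6          # assumed approximate U.S. employment
jobs_eliminated = 125e6   # jobs eliminated in the hypothetical

share_eliminated = jobs_eliminated / employed
great_depression_peak = 0.25  # peak U.S. unemployment rate in the 1930's

print(f"Share of jobs eliminated: {share_eliminated:.0%}")
print(f"Multiple of Great Depression peak: {share_eliminated / great_depression_peak:.1f}x")
```

That works out to roughly three times the worst unemployment rate in American history – an economic shock with no precedent to draw on.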

A lot of people, when presented with the above scenario, invoke the magical UBI that will suddenly appear, without ever examining the underlying assumptions – so let’s do that now.  The U.S. is already $39 trillion – with a ‘t’ – in debt, at spending levels far lower than those that would be required to finance UBI.  With 75% of the workforce out of work, tax revenues would collapse.  The federal government wouldn’t be able to finance many of its most basic functions, much less spend at its current levels – about $7 trillion for the most recent fiscal year.  Supporting UBI would require the federal government to issue debt at rates that dwarf the present.  But who is going to buy that debt in a crushing economic depression, when corporate revenues are in the tank and collapsing tax revenues dim the prospects of repayment?

But can’t we increase taxes on the rich?  Oh sure, we can, but it still won’t come anywhere near enough to support UBI over the long term.  If you confiscated all the wealth (not just income, but every last penny of wealth) of the richest 100 Americans, you would net about $3.27 trillion – less than half our current annual spending.  Expanding that list to the Forbes 400, total wealth adds up to about $6.6 trillion – still less than one year of annual spending.  And again, this is total wealth, not annual income, which is far smaller.
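The wealth-confiscation arithmetic, using only the figures quoted above:

```python
# Figures from the paragraph above, in trillions of USD.
annual_federal_spending = 7.0   # most recent fiscal year
top_100_wealth = 3.27           # total wealth of the 100 richest Americans
forbes_400_wealth = 6.6         # total wealth of the Forbes 400

# How many years of current federal spending each one-time confiscation buys:
years_from_top_100 = top_100_wealth / annual_federal_spending       # ~0.47
years_from_forbes_400 = forbes_400_wealth / annual_federal_spending  # ~0.94

print(f"Top 100: {years_from_top_100:.2f} years of spending")
print(f"Forbes 400: {years_from_forbes_400:.2f} years of spending")
```

Even the most aggressive one-time confiscation imaginable funds the government at *current* levels for under a year – and UBI would require far more than current levels, every year.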

Conclusion?  There is no way to finance UBI.

Another conclusion?  There is no way to finance the attendant resources that would have to be consumed to support the widespread deployment of AI necessary to replace such a large number of jobs.  A massive, short-term replacement of hundreds of millions of jobs with AI is doomed to collapse not only the economy but also its own prospects for ever succeeding.  An AI employment apocalypse that eats all our jobs would end up eating itself, and in very short order.

This isn’t to say that AI will never replace any jobs.  How many it will replace over the long haul remains to be seen.  It just means that simple economic reality, especially when paired with the economics of running AI, makes the AI-will-take-your-job apocalypse scenario so self-limiting that it’s a non-starter.  Think of it as an airplane so overloaded that it’s too heavy to get off the ground.

More likely, AI will augment a lot of jobs.  But it’s simply too unreliable and too expensive to replace jobs en masse. 

This doesn’t mean we are out of the woods.  The coming AI apocalypse may instead be the bursting of the AI bubble and the collateral economic damage.  That scenario is far more plausible – and likely – than AI replacing everybody’s job.

And returning to something hinted at above, AI isn’t always the most reliable thing in the world.  In fact, it can be wildly unreliable at times – too much so to risk replacing a human.  We’ll discuss that in the next installment.
