Confessions of an AI Skeptic, Part 5 (of 5)

(Part 1, Part 2, Part 3, Part 4)

So far, I’ve discussed the lack of any real intelligence in AI, the fact that it’s a compute hog of the highest order, and as a result, also a power/infrastructure hog and a money hog.  But at least, for the enormous price of all that compute/power/infrastructure, AI gives us accurate information and quality answers on a regular basis, right?  Right??

Um, not so fast.

AI Follies

You’d hope that, for all the resources AI consumes, it would at least be reliable.  And at times, it can provide some quality answers.  But at other times, the information it returns is just bonkers.  A bad query, one that is too long, or one that generates too much information can end up causing AI to hallucinate like a Grateful Dead fan in Haight-Ashbury circa Summer 1967.

Take, for example, this recent incident: lawyers submitted a motion, generated by AI, that cited nine different cases.  The problem?  Eight of the nine cases were complete fiction.  They didn’t exist, and were wholly the creation of the AI that prepared the motion.  The lawyers were sanctioned.  And despite this, the same thing has happened numerous times since, as other lawyers have failed to learn from the mistakes of their predecessors.

OpenAI has a transcription tool called Whisper, which is used in hospitals.  It is known to hallucinate, inserting words or even entire phrases into transcripts that were never actually uttered.  Do you want to take medical advice based on a hallucination-riddled transcript?  I’ll pass.

The Chicago Sun-Times once used AI to generate a summer reading list of 15 books.  We cannot fault anyone for not completing the list, since only 5 of the 15 were actual books; the rest were pure fabrications.  The fabrications were, however, attributed to real authors.  One wonders whether those authors could collect real royalty checks from the AI-run accounting departments of their publishers.

Go play around with one of the prominent AI chatbots yourself.  Give it a few off-kilter queries and see what it does.  With a little effort, you can make it hallucinate too. 

How worried about your job should you be when AI makes such massive mistakes?  And if you are thinking about replacing workers with AI, should you?  Should you rely on a tool that makes stuff up out of thin air?  You wouldn’t hire a person who does that, so why would you pay for a machine to do the same thing?

Even worse, some people are using AI as a therapist, a girlfriend/boyfriend, and so on.  Those are terrible, awful, no good, very bad uses of AI, and NOBODY should do any of those things.  If you think AI is actually conscious and your friend, you couldn’t be more wrong, and I urge you to stop right away and take a hard look at what’s under the AI hood.  AI is not your therapist, it’s not your friend, and it’s not even a conscious entity.  Seek help – human help.

But What About All the Neat Stuff I’ve Heard AI Can Do?

When you look at these hallucinations, you wonder why there is so much hype about AI taking over everything.  Some of it has to do with other feats that AI has accomplished.  However, one needs to look beyond the surface to understand why.  Take chess, for example – AI has been known to defeat some of the world’s best chess players.  But is that because it actually thinks through the game?  No.  It’s because it uses computational brute force and database access.  Using various mathematical algorithms, the AI calculates probabilities and also accesses databases of opening and endgame moves.  Meanwhile, the human opponent is limited to what’s inside their head.
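The brute-force point can be made concrete with a toy game rather than chess itself.  The sketch below is my own illustration (not any chess engine’s actual code): it “plays perfectly” at a simple take-away game purely by enumerating every possible line of play, with nothing resembling understanding anywhere in it.

```python
# Toy illustration of brute-force game search (not a chess engine).
# Rules: players alternately remove 1-3 stones from a pile; whoever
# takes the last stone wins. The "engine" wins by exhaustively
# searching every continuation -- enumeration, not insight.

from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones: int) -> bool:
    """True if the player to move can force a win with `stones` left."""
    # Try every legal move; if any leaves the opponent in a losing
    # position, this position is winning.
    return any(not can_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones: int):
    """Return a move that leaves the opponent losing, or None."""
    for take in (1, 2, 3):
        if take <= stones and not can_win(stones - take):
            return take
    return None  # every move loses against perfect play

print(best_move(10))  # prints 2 -- leaves opponent at 8, a losing position
```

Scale the same idea up with pruning, opening books, and endgame tables, and you get superhuman chess without a single moment of actual thought.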

But here’s another aspect of the AI-chess nexus.  Chess has predictable, stable rules.  It has a very sturdy and well-defined framework, which suits it well to the type of training that is performed on AI models.  But the real world, the world we live in, is messy and not always stable, with the rules changing all the time.  And when edge cases are encountered, they require actual thought – not merely predictions based on massive numbers of matrix multiplications until some convergence is reached.  When the street you drive down unencumbered every day suddenly has road construction, you need to actually think your way through how to get through safely.  AI can be good with things that have a solid, well-defined, and unchanging framework.  But with fluid and changing circumstances that require nuanced thinking, AI falls apart very fast.

So is AI Good for Anything?

Yeah, I’ve been pretty tough on AI.  I hate all the hype from the tech bros, especially snake-oil salesmen like Sam Altman and Dario Amodei (of Anthropic, maker of Claude AI), who have massively oversold AI’s capabilities while spreading irrational fears of an AI jobs apocalypse in which everybody ends up out of work.  But that thing we call AI does have some uses, and it’s not going to go away.

I’ve used a few AI tools in my line of work (patents).  And some of them are excellent, very useful tools.  Are they perfect?  No, not even close.  Would I trust them to fully finish a patent application or a response to the US Patent and Trademark Office during the prosecution of a patent application?  No way.  What these tools are great at is augmenting my efforts; they are wholly inadequate for fully replacing them.  The tools are particularly useful for sifting through reams of documentation and rapidly finding the metaphorical needle in a haystack.  And sometimes, these tools provide some extra insight into a topic that I hadn’t really thought about.

I’ve heard of others in various lines of work who have similar tools at their disposal and express opinions similar to those here.  What all of these tools have in common is that they are not directed at, nor trying to emulate, anything approaching general intelligence.  Instead, they are narrowly focused on a particular area of inquiry, limited to certain tasks where they can produce mostly repeatable results.  Some of them use what are called small language models (SLMs), as opposed to the LLMs underlying AI chatbots like ChatGPT, Claude, Microsoft Copilot, and so forth.

These limited-focus tasks are where I think the current generative AI may become legitimately useful and make its biggest impact.  Even AI chatbots can sometimes be useful for limited inquiries in carrying out technical research.  Problems that can be reduced in some manner to computational mathematics will lend themselves well to the current AI regime.

But revolutionary?  Something that is going to eliminate some huge percentage of jobs, become self-aware, and go Skynet on us?  That’s not happening, particularly not with the current trajectory of AI development and the computer hardware technology upon which it runs.  You can train an AI model with lots of data, amounts that are incomprehensible to most of us.  But you can give it neither wisdom nor common sense.  You certainly cannot give it emotion.  And therefore, you can’t ever make it truly intelligent.

Wrapping Up

When I started this series, I originally intended to write only a single installment about AI.  As I wrote, though, I found I had more and more to say about the topic, especially amid the onslaught of ridiculous AI hype, both utopian and dystopian.  I still have more to say, but I’ve said enough for now.  And over the time I have been writing these installments, cracks have begun to appear in the armor of the AI hype machine.

One of the cracks is the admission by Sam Altman that GPT-5 did not result in AGI.  There is more open talk about there being an AI bubble and about the financial headwinds the industry is facing.  Michael Burry, the investor made famous by short-selling the U.S. housing market during the bubble that popped in 2008, has made a $1 billion bet (in the form of short positions) that the AI bubble will also pop.  People are waking up.

Those who are not waking up usually base their belief in the eventual omnipotence of AI (for good, evil, or both) on the idea that technology advances linearly, if not exponentially.  But such is not the case.  In mathematical terms, technologies usually advance somewhat logarithmically, i.e., big gains in the early days, followed by diminishing returns on later advances.  Think of other technologies, such as household appliances.  Early on, the mere appearance of appliances such as dishwashers, refrigerators, etc., represented big technological leaps.  But as time has gone on, the rate of improvement in the basic functioning of these machines has slowed to a crawl – so much so that their manufacturers add all kinds of other technological bells and whistles to make people think they are improving, even though such improvements are at best incremental (and small increments at that).  You can apply this same idea to many things, such as automobiles, aircraft, smartphones, and so on.
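The diminishing-returns point can be sketched numerically.  Assuming, purely for illustration, a logarithmic capability curve (the numbers below are made up, not measurements of any real technology), each tenfold increase in effort buys only a roughly constant increment of capability – exponentially more input for linear gains:

```python
# Illustrative sketch of logarithmic technology advancement.
# The curve is an assumption for the sake of the example, not data.
import math

def capability(effort: float) -> float:
    """Toy logarithmic curve: capability grows as log2(1 + effort)."""
    return math.log2(1 + effort)

# Effort grows 10x per row, yet capability climbs by a roughly
# constant step each time: ever more resources for the same gain.
for effort in (1, 10, 100, 1000, 10000):
    print(f"effort {effort:>6} -> capability {capability(effort):6.2f}")
```

Flip the curve around and the cost story appears: on a curve like this, each additional unit of capability demands roughly ten times the effort of the last one.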

The same trajectory applies to the generative AI that has burst onto the scene over the last several years.  The early improvements between successive models were huge.  But as the release of GPT-5 illustrated, those improvements are slowing.  And as discussed in earlier installments, there are hard, physical limits on how far the current generative AI can advance.

Again, I don’t think AI is simply going to disappear.  It’s here to stay.  Nor do I think AI will be useless, as it will definitely result in some very useful tools.  But to the utopians that think AI will lead to a labor-free future in which you are simply provided sufficient income for merely existing?  Well, I’ve got disappointing news for you.  And to the dystopians who think that AI is going to lead to a bleak, cyberpunk future where most of us eke out an existence while under the rule of corporate overlords?  You can probably breathe a little easier.

For both the utopian and dystopian scenarios, a huge dose of skepticism is warranted.  In fact, skepticism is warranted for most of the area between these two extremes.  For all the amazing things AI seems to do, it isn’t magic and it isn’t truly intelligent, no matter how well it seems to mimic intelligence.  It’s just a tool.  Humans, however, are still irreplaceable.  People who are dazzled by the AI hype forget that at their own peril.  Let’s hope it doesn’t imperil the rest of us.
