
Confessions of an AI Skeptic, Part 5 (of 5)

(Part 1, Part 2, Part 3, Part 4)

So far, I’ve discussed the lack of any real intelligence in AI, the fact that it’s a compute hog of the highest order, and as a result, also a power/infrastructure hog and a money hog.  But at least, for the enormous price of all that compute/power/infrastructure, AI gives us accurate information and quality answers on a regular basis, right?  Right??

Um, not so fast.

AI Follies

You’d hope that, for all the resources AI consumes, it would at least be reliable.  And at times, it can provide some quality answers.  But at other times, the information it returns is just bonkers.  A bad query, one that is too long, or one that generates too much information can end up causing AI to hallucinate like a Grateful Dead fan in Haight-Ashbury circa Summer 1967.

Take for example this recent incident: lawyers submitted a motion that cited nine different cases, with the motion itself being generated by AI.  Problem?  Of the nine cases cited, eight were complete fiction.  They didn’t exist, and were wholly the creation of the AI that prepared the motion.   The lawyers were sanctioned.  And despite this, it’s happened numerous times since then, as other lawyers have failed to learn from the mistakes of their predecessors.

OpenAI has a transcription tool called Whisper, which is used in hospitals.  It is known to hallucinate, inserting words or even entire phrases into text that were never actually uttered.  Do you want to take medical advice based on a hallucination-riddled transcript?  I’ll pass.

The Chicago Sun-Times once used AI to generate a summer reading list of 15 books.  We cannot fault anyone for not completing the list, since only 5 of the 15 were actual books, while the rest were pure fabrications.  However, the fabrications were attributed to real authors.  One wonders if they could get real royalty checks from the AI-run accounting department of their publishers.

Go play around with one of the prominent AI chatbots yourself.  Give it a few off-kilter queries and see what it does.  With a little effort, you can make it hallucinate too. 

How worried about your job should you be when AI makes such massive mistakes?  And if you are thinking about replacing workers with AI, should you?  Should you rely on a tool that makes stuff up out of thin air?  You wouldn’t hire a person who does that, so why would you pay for a machine to do the same thing?

Even worse, some people are using AI as therapists, girlfriends/boyfriends, and so on.  Those are terrible, awful, no good, very bad uses of AI, and NOBODY should do any of those things.  If you think AI is actually conscious and your friend, you couldn’t be more wrong, and I urge you to stop right away and take a hard look at what’s under the AI hood.  AI is not your therapist, and it’s not your friend.  It’s not even a conscious entity, and if you think otherwise, you are badly mistaken.  Seek help – human help.

But What About All the Neat Stuff I’ve Heard AI Can Do?

When you look at these hallucinations, you wonder why there is all of this hype about how AI is going to take over everything.  Some of it has to do with other feats that AI has accomplished.  However, one needs to look beyond the surface to figure out why.  Take chess, for example – AI systems have been able to defeat some of the world’s best chess players.  But is that because the machine actually thinks through the game?  No.  It’s because it uses computational brute force and database access.  Using various mathematical algorithms, the AI calculates probabilities and accesses databases of opening and endgame moves.  Meanwhile, the human opponent is limited to what’s inside their head.
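
To make the "brute force, not thought" point concrete, here is a minimal sketch of game-tree search, the kind of exhaustive calculation a chess engine leans on.  Chess itself is far too big to show here, so this uses a toy game (Nim: take 1 to 3 stones, whoever takes the last stone wins) as a stand-in; the principle is the same — enumerate moves, recurse, pick the best outcome — and at no point does the program "understand" anything.

```python
# Brute-force game-tree search on a toy game (Nim), as an illustration only.
# A chess engine does the same kind of enumeration, just on a vastly larger tree
# and with opening/endgame databases bolted on.

def can_force_win(stones: int) -> bool:
    # The player to move wins if some legal move leaves the opponent
    # in a position from which they cannot force a win.
    if stones == 0:
        return False  # the previous player took the last stone and already won
    return any(not can_force_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones: int) -> int:
    # Pick a winning move if one exists; otherwise just take one stone.
    for take in (1, 2, 3):
        if take <= stones and not can_force_win(stones - take):
            return take
    return 1

for pile in range(1, 9):
    print(f"{pile} stones -> take {best_move(pile)} "
          f"(current player can force a win: {can_force_win(pile)})")
```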

But here’s another aspect of the AI-chess nexus.  Chess has predictable, stable rules.  It has a very sturdy and well-defined framework, which suits it well to the types of training that are performed on AI models.  But the real world, the world we live in, is messy, not always stable, with the rules changing all the time.  And when edge cases are encountered, they require actual thought – not merely predictions based on massive numbers of matrix multiplications until some convergence is reached.  When the street you drive down unencumbered every day suddenly has road construction, you need to actually think your way through how to get through safely.  AI can be good with those things that have a solid, well-defined, and unchanging framework.  But with fluid and changing circumstances that require nuanced thinking, AI falls apart very fast.

So is AI Good for Anything?

Yeah, I’ve been pretty tough on AI.  I hate all the hype from the tech bros, especially snake-oil salesmen like Sam Altman and Dario Amodei (of Anthropic, maker of Claude AI), as these guys have massively oversold AI’s capabilities while spreading irrational fears of the AI jobs apocalypse with everybody ending up out of work.   But that thing we call AI does have some uses, and it’s not going to go away.

I’ve used a few AI tools in my line of work (patents).  And some of them are excellent tools that are very useful.  Are they perfect?  No, not even close.  Would I trust them to fully finish a patent application or a response to the US Patent and Trademark Office during the prosecution of a patent application?  No way.  What these tools are great at is augmenting my efforts.  They are wholly inadequate for fully replacing them, though.  The tools are particularly useful in sifting through reams of documentation, rapidly finding the metaphorical needle in a haystack.  And sometimes, these tools provide some extra insight into a topic that I hadn’t really thought about.

I’ve heard of others in various lines of work, with similar tools at their disposal, expressing opinions similar to those expressed here.  The commonality of all of these tools is that they are not directed to, nor trying to emulate, anything approaching general intelligence.  Instead, they are narrowly focused on a particular area of inquiry, limited to certain tasks where they can produce mostly repeatable results.  Some of them use what are called small language models (SLMs), as opposed to the LLMs underlying AI chatbots like ChatGPT, Claude, Microsoft Copilot, and so forth.

These limited-focus tasks are where I think the current generative AI may become legitimately useful and make its biggest impact.  Even AI chatbots can sometimes be useful for limited inquiries in carrying out technical research.  Problems that can be reduced in some manner to computational mathematics will lend themselves well to the current AI regime.

But revolutionary?  Something that is going to eliminate some huge percentage of jobs, become self-aware, and go Skynet on us?  That’s not happening, particularly not with the current trajectory of AI development and the computer hardware technology upon which it runs.  You can train an AI model with amounts of data that are incomprehensible to most of us.  But you can give it neither wisdom nor common sense.  You certainly cannot give it emotion.  And therefore, you can’t ever make it truly intelligent.

Wrapping Up

When I started this, I originally intended to write only a single installment about AI.  As I wrote, though, I found I had more and more to say about the topic, especially amid the onslaught of ridiculous AI hype, both utopian and dystopian.  I still have more to say, but I’ve said enough for now.  And over the time I have been writing these installments, cracks have begun to appear in the armor of the AI hype machine.

One of those cracks is the admission by Sam Altman that GPT-5 did not result in AGI.  There is more open talk about an AI bubble and the financial headwinds the industry is facing.  Michael Burry, the investor made famous by short-selling the U.S. housing market during the bubble that popped in 2008, has made a $1 billion bet (in the form of short selling) that the AI bubble will also pop.  People are waking up.

Those who are not waking up usually base their belief in the eventual omnipotence of AI (for good, evil, or both) on the idea that technology advances linearly, if not exponentially.  But such is not the case.  In mathematical terms, technologies usually advance somewhat logarithmically, i.e., big gains in the early days, followed by diminishing returns in later advances.  Think of other technologies, such as household appliances.  Early on, the mere appearance of appliances such as dishwashers, refrigerators, etc., represented big technological leaps.  But as time has gone on, the rate of improvement in the basic functioning of these machines has slowed to a crawl, so much so that their manufacturers add all kinds of technological bells and whistles to make people think they are improving, even though such improvements are at best incremental (and small increments at that).  You can apply this same idea to many things, such as automobiles, aircraft, smartphones, and so on.

The same trajectory applies to the generative AI that has burst onto the scene in the last several years.  The early improvements in successive models were huge and significant.  But as the release of GPT-5 illustrated, such improvements are slowing.  And as discussed in earlier installments, there are hard physical limits on how far the current generative AI can advance.

Again, I don’t think AI is simply going to disappear.  It’s here to stay.  Nor do I think AI will be useless, as it will definitely result in some very useful tools.  But to the utopians who think AI will lead to a labor-free future in which you are simply provided sufficient income for merely existing?  Well, I’ve got disappointing news for you.  And to the dystopians who think that AI is going to lead to a bleak, cyberpunk future where most of us eke out an existence under the rule of corporate overlords?  You can probably breathe a little easier.

For both the utopian and dystopian scenarios, a huge dose of skepticism is warranted.  In fact, skepticism is warranted for most of the area between these two extremes.  For all the amazing things AI seems to do, it isn’t magic and it isn’t truly intelligent, no matter how well it seems to mimic intelligence.  It’s just a tool.  Humans, however, are still irreplaceable.  People who are dazzled by the AI hype forget that at their own peril.  Let’s hope it doesn’t imperil the rest of us.

Confessions of an AI Skeptic, Part 3 (of 5)

(Part 2 can be found here)

(Part 1 can be found here)

One thing you never saw in any of the movies of the Terminator franchise (featuring some of the most menacing villains in all of sci-fi) was any of the various models having to stop for a recharge.  I can’t really blame James Cameron for that.  How cinematically compelling would it have been if the Cyberdyne Systems Model 101, portrayed by Arnold Schwarzenegger, had to spend a significant amount of downtime recharging its battery?  Yet, if we were seeking a realistic portrayal of such a machine, it would have had to spend most of its time recharging.  And escaping the Terminator?  Keep running, because its battery is going to be dead in very short order.

All of this is another way of saying that AI is a resource hog.  It is a voracious consumer of power and compute resources like the world has never seen.  The U.S. federal government looks positively judicious with taxpayer funds when compared to the way AI consumes resources.  However, what we call AI is bumping up against some hard physical limits, limits which present a Mt. Everest-sized obstacle to scaling.

A Compute Hog:

When a computer runs a program, it executes instructions, and in particular, machine-level instructions, most often generated by a compiler that translates high-level language code into something the processor can understand.  The programs you run day-to-day on your PC, your laptop, or that computer you carry in your pocket called a “phone” can consume billions of processor cycles, where a cycle is, roughly speaking, the execution of an instruction.  But those software programs don’t even scratch the surface of what modern AI consumes.

Each of the tokens we mentioned in Part 2 places demands on a processor.  How much?  A prompt that generates about 100 tokens in OpenAI’s GPT-4 model (the latest model is now GPT-5) can consume between 50 and 100 teraflops.  “Flops” in this context are floating-point operations, where floating-point is a type of data computer systems work with (basically a number that includes a mantissa and an exponent, digitally represented); the related rate, FLOPS, measures such operations per second.  Tera means trillion.  Also keep in mind that a prompt to an LLM involves two phases – a prefill phase (where the text you entered is broken down into tokens and processed) and a decode phase (where the LLM generates tokens in response to your prompt).  So, for a relatively small prompt-and-answer, an LLM can consume between 50 and 100 trillion floating-point operations.  Now consider longer conversations with an LLM.  These can easily run into the thousands of teraflops or more.
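
If you want to sanity-check numbers like these yourself, here is a back-of-the-envelope sketch.  A common rule of thumb for transformer models is roughly two floating-point operations per model parameter per token; the parameter count below is purely an assumption for illustration, since OpenAI has not published GPT-4’s size.

```python
# Rough FLOP estimate for one LLM reply, using the common rule of thumb of
# ~2 floating-point operations per model parameter per generated token.
# The parameter count is an assumed, illustrative figure, not a published one.

def forward_pass_flops(active_params: float, tokens: int) -> float:
    return 2.0 * active_params * tokens

assumed_active_params = 280e9    # hypothetical model size, for illustration only
tokens_generated = 100

flops = forward_pass_flops(assumed_active_params, tokens_generated)
print(f"~{flops / 1e12:.0f} trillion floating-point operations "
      f"for a {tokens_generated}-token answer")
# With these assumptions: ~56 trillion operations, in the same ballpark as the
# 50-100 teraflop figure quoted above.
```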

Because of the astronomical amount of computing power AI workloads consume, the heavy lifting is done in data centers having the requisite capacity.  Modern data centers include row upon row of servers, each with a number of GPUs.  As an aside, “GPU” stands for graphics processing unit, and while such processors were originally designed for graphics workloads, they are massively parallel and thus particularly well-suited for AI workloads.  Some computers that process AI workloads also use a more specialized chip called a tensor processing unit, or TPU (which, unlike a GPU, is specifically designed for AI workloads).  In addition to all the GPUs/TPUs, each server also includes a large amount of memory, the capacity of which is measured in terabytes.

In a sense, we’ve come full circle with computing.  Up until the 1970’s, we used to think of computers as room-sized behemoths, which they were.  That was the amount of space required to run the computing workloads of the time.  It was the advent of microprocessors and Moore’s Law (which is now deader than Francisco Franco) that started to shrink the size of computers down to something you can put on your desk or even carry in your pocket.  But now, with AI workloads, we are back up to gargantuan sizes again, with whole data centers that dwarf the large computers of yesteryear.  And we’re there because that’s the kind of space required to implement computing setups that can run compute-hogging AI. 

A Power Hog:

It doesn’t take a leap of imagination to realize that the requirement of that much computing power necessitates the consumption of a lot of electrical power.  But how much is a lot?  For this part, I turned to AI itself to tell me how much power it might use, and lacking any sense of modesty, it spit the answer right out.  It gave me the assumption of 750 gigaflops per token (750 billion floating-point operations), with about 0.00001 kWh (kilowatt-hours) per token based on typical GPU/TPU energy usage (doesn’t sound like much, so far, does it?  You just wait …).  The number of flops and the energy consumed scale linearly with token count.  Thus, a query that produces 1,000 tokens would use, under this scenario, 0.01 kWh.  Converting kilowatt-hours to watt-hours, that’s 10 Wh – i.e., enough energy to power a 10-watt LED bulb for an entire hour.  That’s for one very small conversation (compare that to what a human brain can do in an hour, running on about 14 watts of power).
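
To make the arithmetic explicit, here is a short sketch using the per-token figure quoted above.  Since that figure is itself an AI-provided estimate, treat the output as illustrative, not measured data.

```python
# Energy arithmetic for the scenario described above. The per-token figure is
# the assumed estimate quoted in the post, not a measured value.

KWH_PER_TOKEN = 0.00001   # assumed energy per generated token (0.01 Wh)

def query_energy_wh(tokens: int) -> float:
    return KWH_PER_TOKEN * tokens * 1000.0   # convert kWh to Wh

for tokens in (100, 1_000, 10_000):
    wh = query_energy_wh(tokens)
    print(f"{tokens:>6} tokens -> ~{wh:6.1f} Wh "
          f"(a 10 W LED bulb for {wh / 10:.1f} hours)")
```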

It’s not hard to see how some AI conversations use more power than Clark Griswold’s Christmas lights.

And yet, we’re not done.  So far, we’ve only talked about the energy consumed by the computers themselves.  Thanks to the Laws of Why We Can’t Have Nice Things (sometimes referred to as “the Laws of Thermodynamics”), using that much compute power and thus that much electricity means a lot of excess heat is generated.  Something must be done about that heat, otherwise the computers in these data centers won’t run long before all the electronics are fried like a chicken in the kitchen of your local KFC (btw, Original Recipe >> Extra Krispy). 

We need to bring in cooling water, and lots of it.  That requires pumps to move the water in and then to move it out.  Some data centers also utilize large refrigerant systems to circulate cool air as well.  There has been some improvement on this front. Old data centers had about 30-40% energy overhead for cooling, while newer data centers have about 10-20% overhead.  Nevertheless, that’s still a lot of energy.

A recent story serves as an illustrative anecdote regarding AI energy consumption.  The story, linked here, refers to a planned AI data center in the state of Wyoming, one that will consume five times as much electricity as all the residents of Wyoming combined.  Not merely more energy, but five times more.  Not merely a few residents, but all the residents of the state.

All that physical space, all that compute power, and all that energy, and yet this AI is still not intelligent.  It still can’t think, and it requires multiple orders of magnitude more energy to accomplish many of the same things humans can do.  Sure, it’s particularly well-suited for computational mathematics, more so than humans, but that’s not thinking, that’s just number crunching.  And of course, it took humans to design computers to be good at such things – humans that have, in their own skulls, a brain that can do amazing things running on a mere 14 watts of power (or, in an hour, 14 Wh).  And with that 14 Wh, we have consciousness and true intelligence.

The Wall:

Above, I wrote that AI faces a Mt. Everest-sized obstacle to scaling.  But more accurately, AI is racing headlong into a wall, one that will kill scaling.

Let’s return to Moore’s Law, which was mentioned above.  The idea behind Moore’s Law was the product of Intel co-founder Gordon Moore, who postulated that the number of transistors that could be packed onto a given area of silicon would double roughly every two years (often quoted as every 18 months).  And for decades, that held true.  It’s because of Moore’s Law that you can carry in your pocket, running off a battery, computing power equivalent to a room-sized computer of the 1970’s.  But you can only get so small (sorry, Steve Martin).

When transistor feature sizes were in the thousands, then the hundreds, and even the tens of nanometers, the progress of packing more functionality onto the same chip area marched onward, largely unabated.  But on the most advanced chips now – such as the GPUs/TPUs that run AI workloads – the smallest feature sizes are in the single-digit nanometers.  You know what sits just below that scale?  Atoms.  Yes, atoms, the fundamental building blocks of all matter: a silicon atom is only a fraction of a nanometer across, so a single-digit-nanometer feature is only a few dozen atoms wide.  And you know what that means?  It means you have run into yet another wall.  You are not going to build transistors smaller than atoms.  That is a hard, non-negotiable physical limitation.  And that means the end of Moore’s Law.

Furthermore, the top clock speeds for chips haven’t increased meaningfully for about 15 years now.  The maximum speed at which an execution unit in one of these chips can execute instructions is therefore also facing a hard limit, owing to heat and the material properties of the silicon upon which such chips are fabricated.

Now, you will still get some denialists saying Moore’s Law is not dead, and they will point to chips that use vertical stacking.  But that’s not packing more transistors into a given area; that’s just using the vertical dimension to create more area.  Moore’s Law only works if individual transistors themselves can get smaller, and with the smallest feature sizes bumping up against atomic dimensions, that is no longer possible.  Moore’s Law has been dead for at least a decade.

The denialists might also opine that there is some other technology on the horizon that will transcend the limitations placed on transistor sizes, while remaining vague about what those technologies are.  Some might cite different materials for chipmaking.  But most of these materials have some sort of fatal flaw.  Take for example graphene – the wonder material that is effectively a flat sheet of carbon atoms.  Graphene has been used to make transistors in laboratories, and those transistors can operate at significantly higher clock speeds (at least an order of magnitude more) while having much better heat-dissipation properties than silicon.  But there is a huge problem – graphene lacks something known as a bandgap.  Without getting into device physics, we’ll simplify things by saying the lack of a bandgap means that such a transistor can never fully turn off, thereby making it useless as a switch, and therefore useless as the basis for a digital computer.

Analog computing is another technology championed by some.  And while it can be very useful in certain applications (it can almost instantaneously do the large matrix multiplications that hog compute cycles in the digital domain), it nevertheless suffers from the limitations inherent to all analog circuits.  Analog circuits are more susceptible to noise and error cascading, and they lack the precision needed for many workloads.  Analog computing circuits are also much larger than the digital circuits of GPUs/TPUs.

Quantum computers are the great hope for some, but we are a long way from a practical quantum computer.  Meanwhile, they are currently very error prone, of limited stability, and require cryogenic cooling (meaning hundreds of degrees below zero, and that’s true whether you are talking in Fahrenheit or Celsius).  There are questions as to whether they could provide any advantage over the current computing paradigm for many workloads.  Most of the promise is in specialized workloads, but until we get practical, reliable quantum computers, we can do no more than speculate.

So the upshot of the above is that AI as we know it has, due to the various physical limits discussed above, run headlong into a wall.  And that wall is only the one imposed by physics.  We haven’t talked about financial limits yet.  If you think AI is a compute hog and a power hog, wait until you find out how much of a money hog it is.  The U.S. government has nothing on AI when it comes to burning through cash.

Confessions of an AI Skeptic, Part 2 (of 5)

(Part 1 can be found here)

Last time, the discussion focused largely on what happens at the circuit level of a computer system, and whether, starting from that, intelligence and consciousness could arise.  For this installment, I wanted to delve a little more into how we define intelligence.  Much of the hype surrounding AI is that we are soon going to see AGI – artificial general intelligence – as well as ASI – artificial super intelligence.  My skepticism remains solid: neither of these milestones will be achieved with current computing architectures, if ever.

What is AI Doing?

Instead of focusing on the circuit level, it’s instructive to go a few rungs up the abstraction ladder and discuss what happens when one sends a prompt to an LLM, or large language model (which forms the basis of most of the well-known AI chatbots today – ChatGPT, Google Gemini, Grok, etc.).  This is a somewhat simplified explanation, but it’s enough to obtain a basic understanding.

When you send a prompt to, say, ChatGPT, the words of that prompt are broken down into tokens.  These tokens can be full words, chunks of words (sub-words), or even symbols.  These tokens are then turned into numbers (token IDs), and those numbers are mapped to numerical vectors that can have thousands of dimensions, in a process called embedding.  The numerical vectors are then fed into a stack of transformer layers, where many, many matrix multiplications are performed.  Since a matrix multiplication comprises many individual (scalar) multiplications, the number of total multiplications becomes astronomical.  In other words, it’s doing a mountain of math under the hood.

Each step involves multiplying huge grids of numbers together, and every one of those multiplications expands into millions or even billions of tiny arithmetic operations. Processing a single word can require hundreds of billions of multiplications and additions. To put that in perspective, if you sat with a calculator and did one multiplication every second, it would take you thousands of years to do what the model does in a fraction of a second for just one word.

In doing these kajillion multiplications, the AI model is predicting the next word, based on weights applied during said multiplications.  After all these multiplications are done, the resulting numbers are turned back into words for display on your computer screen.
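
To make that concrete, here is a deliberately tiny sketch of the pipeline just described: tokens become numbers, numbers become vectors, vectors get multiplied by matrices, and the result is turned back into a token.  The vocabulary, dimensions, and random matrices below are toy stand-ins, not anything resembling a real model; the point is only that every step is ordinary arithmetic.

```python
# A toy sketch of the LLM pipeline described above. Real models use billions of
# learned parameters and many transformer layers; the tiny random matrices here
# exist only to show that every step is plain number crunching.

import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["the", "cat", "sat", "on", "mat"]     # toy vocabulary
D = 8                                          # toy embedding dimension

embedding = rng.normal(size=(len(VOCAB), D))   # one vector per token
weights   = rng.normal(size=(D, D))            # stand-in for transformer layers
unembed   = rng.normal(size=(D, len(VOCAB)))   # maps back to vocabulary scores

def next_token(prompt: list[str]) -> str:
    ids = [VOCAB.index(w) for w in prompt]           # tokens -> numbers
    vecs = embedding[ids]                            # numbers -> vectors
    hidden = np.tanh(vecs @ weights)                 # "transformer": matrix math
    logits = hidden.mean(axis=0) @ unembed           # pool and score the vocabulary
    probs = np.exp(logits) / np.exp(logits).sum()    # softmax -> probabilities
    return VOCAB[int(np.argmax(probs))]              # most likely next token

print(next_token(["the", "cat", "sat"]))   # prints some vocab word; it's all just math
```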

While the operations described above may be algorithmically new, from the perspective of computers, the individual operations – namely, the multiplications – are nothing new at all.  Electronic computers of all kinds have been doing multiplications since they’ve existed.  This isn’t confined to your desktop or the room-sized behemoths of yesteryear, but also includes the pocket calculators that people like myself relied upon during engineering school before things like smartphones were as ubiquitous as they are today.

There are a couple of upshots to the above.  The first is that, while an LLM like ChatGPT may appear to understand language, in reality it does no such thing at all.  It just crunches numbers.  And not only that, the computer doesn’t even know it’s crunching numbers – refer back to the first installment – the number crunching is just the basic switching circuits of the computer system being driven between logic 1 and logic 0 – high voltage and low voltage – really, really, really fast.

So if the computer doesn’t understand language, and doesn’t even know it’s crunching numbers to mimic the understanding of language, can it be considered intelligent?  If the answer is no, then how will computers become intelligent by simply making bigger, more computationally intensive models? 

How do you Define Intelligence?

This is a trickier question than it may appear.  We can recognize intelligence, to be sure, and we can ponder and debate what exactly the term means.  But defining it with precision, drawing a hard line between intelligent and not intelligent?  That’s a much more difficult task.

We define humans in general as being intelligent (not to be confused with being wise).  And yet we still have a hard time drawing that line between what is intelligence and what is not, despite most of us being pretty sure that computers running AI haven’t yet reached intelligence.

And that’s the rub.  The people trying to create artificial general intelligence (AGI) – or any intelligence at all in computers – are trying to solve it as an engineering problem.  But engineering problems require well-defined solutions.  If you want to put a satellite into an orbit with a perigee of 150 miles above the Earth’s surface and an apogee of 160 miles, the solution is well-defined.  If you want to design an amplifier circuit that takes an input signal with an amplitude of 2 volts and outputs a corresponding signal with an amplitude of 12 volts, we know how to do that because, again, the solution is well-defined.  There may be different ways to get to the same solution, but having a firm definition of the solution provides a framework and a guide for getting there.

This is true even for some engineering problems that we haven’t solved, like nuclear fusion.  We know what man-made nuclear fusion will look like, in terms of inputs and outputs, should we ever get there.  But that illustrates another point: even when we know what the solution looks like, it can be maddeningly difficult to achieve.

With intelligence, general or otherwise, we can’t even agree on a definition.  Not even AI’s biggest proponents can agree on a definition of intelligence, much less on what would constitute true AGI.  What they all have in common is that they are trying to find an engineering solution to something that is essentially a philosophical problem.  And because the definition of intelligence is essentially philosophical, it will continue to defy an engineering solution.

So far, we’ve spent a lot of time talking about intelligence, the difficulty in defining what intelligence is, and stating why I believe computers running AI workloads are not even remotely intelligent.  What hasn’t been discussed so far will be the topic of the next installment – the rapacious appetite of AI in terms of resources.

Before I go, however, I’ll note that Apple has published a paper about AI entitled “The Illusion of Thinking.”  If you want to dig a little deeper, it can be found here.

Confessions of an AI Skeptic, Part 1 (of 5)

Artificial Intelligence, or AI, is all the rage these days.  “It’s going to eliminate all of our jobs!!”  “It’s going to become more intelligent than humans!!!” “It’s going to become sentient and turn into Skynet!!” 

Pffffft.

It’s not going to do any of those things.  Not even close.

Now don’t get me wrong – AI (and note – only the ‘A’ part of that is accurate) is here to stay.  And it’s going to lead to some very powerful tools, some of which can be very useful.  Of course, it will also lead to some tools that are not so useful.  And it will be misused and abused, which might be its most frightening prospect.

But if you are worried about Skynet, I’m here to tell you, don’t – The Terminator is a great action movie but not much more.  Nor should you worry about AI eliminating all the jobs, a notion that can be dispensed with in multiple ways, including with simple arithmetic.  We should once and for all dispense with the idea that AI will become conscious.  Similarly, the notion that AI exhibits true intelligence should also be tossed in the wastebasket.  To understand why, we’ll start at the point where the rubber meets the road (or where software meets hardware) in computers.

The 1’s and 0’s of Artificial “Intelligence”

When I observe certain people hyping AI, namely those with a technical background, I notice they are mostly software engineers or programmers.  Many of these software engineers are extremely intelligent, and can make a computer do things – through programming – that I (also a technical person, but with a hardware/circuit orientation) could never dream of doing.  Nevertheless, many of these AI-hypesters have a huge gap in their understanding of how computers actually work.  Their interactions with computers are through high-level programming languages, several layers of abstraction away from what is happening at the hardware/software interface.  Because of that, they are only vaguely aware, at best, of the hard physical limits of computing.

For the non-technical, a little explanation is warranted here.  Almost all software programming – and AI is software – is done using what are called high-level languages – Python, Perl, C, … and for those of you who are old geezers (as am I), Fortran, Basic, Pascal, etc.  High-level programming languages are essential, as the practically infinite variety of software we use today would not be possible without them.  But the processor in your computer system cannot understand these languages directly – it needs what is known as a compiler (or, for some languages, an interpreter) that translates the high-level language program into machine language that the computer understands.  And ultimately that means, in the digital computer systems we use, it gets converted into 1’s and 0’s.

But even the 1’s and 0’s are somewhat of an abstraction – the processors used in computer systems are electronic circuits, and as such work with voltages and currents that represent these 1’s and 0’s, rather than with the digits themselves.  Thus, in the chips used to implement computer systems, these 1’s and 0’s are represented by corresponding voltages – e.g., a “high” voltage for a logic 1, and a “low” (or no) voltage for a logic 0.  I’m not going to delve into the actual circuits that achieve this (they are relatively simple), other than to say you can think of them as 2-position switches.  A single switch in this analogy generates a logic 1, or high voltage, in one position, and a logic 0, or low voltage, in the other.  These switches, constructed using transistors, can be combined to form logic gates, and logic gates can be combined to form even more complex structures.  But at the heart of it all, at the lowest level, all you have are a bunch of switches that produce the voltage levels and corresponding binary logic levels.
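
For readers who want to see the ladder from switches to gates to something useful, here is a conceptual sketch.  It is not a circuit simulator, and the functions below are illustrative stand-ins for real transistor circuits; the point is simply that everything above this level is built from elements whose only job is to output a 1 or a 0.

```python
# A toy model of transistors-as-switches combined into logic gates.
# Conceptual illustration only, not a circuit simulator.

def nand(a: int, b: int) -> int:
    # Two series "switches" pulling the output low only when both are on.
    return 0 if (a and b) else 1

# Every other gate can be built from NAND alone.
def not_(a: int) -> int:         return nand(a, a)
def and_(a: int, b: int) -> int: return not_(nand(a, b))
def or_(a: int, b: int) -> int:  return nand(not_(a), not_(b))
def xor_(a: int, b: int) -> int: return or_(and_(a, not_(b)), and_(not_(a), b))

# A half adder: the first rung on the ladder from switches to arithmetic.
def half_adder(a: int, b: int) -> tuple[int, int]:
    return xor_(a, b), and_(a, b)   # (sum bit, carry bit)

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} + {b} -> sum, carry = {half_adder(a, b)}")
```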

Just about every computer system you own – from your smartphone, to your tablet, your laptop, your desktop – has billions of transistors, and thus billions of switches.  And they are nothing even remotely like neurons in the human brain.  Putting more of them together doesn’t turn them into neurons either.

Hey, I Came Here to Read About AI, not this Switch Stuff!

Ok, so you ask now, “if this essay is about AI, then where is he going with all this switch stuff?”  Where I’m going with this is to show you what AI – indeed, what any software – does at the fundamental circuit level.  At the circuit level, it is, depending on the input voltage, making the output voltage change between a high voltage and a low voltage – between a logic 1 and a logic 0.  In the circuits used in the chips of a computer system, this switching behavior can occur billions of times per second.  Multiply that by billions of these switching circuits, and you’ve got a whole lotta switching going on.  And in AI computing workloads?  You have orders of magnitude more switching than the most processor-taxing game your kid runs on his gaming PC.

But can true intelligence (much less consciousness) arise from this mere switching behavior, having billions of circuits switch between a logic 1 and logic 0 (a high voltage and a low voltage) billions of times per second?  Digital computers have been operating this way for decades now.  There is nothing remotely intelligent about the way they function.  Simply adding more switches and making them switch faster and faster doesn’t move the ball even a nanometer closer to intelligent behavior, because the transistors used to create these switches are not neurons, and never will be neurons.  They’re just switches.  On or off.  Logic 1 or logic 0.  Putting more of these switches together into a more complex structure does not suddenly make them into neurons.  And because of this, computers will continue to understand language and human thought the way a radio understands music, i.e., not at all, because they have no such capability of “understanding.”

If someone disagrees with me, and truly believes that AI can be truly intelligent and can truly become conscious, I’d love to hear their explanation as to how we are going to get there by making more of these switching circuits and making them switch faster.  I’m all ears.  All I’ve ever heard from those who believe AI will become some sort of machine messiah (nod and wink to my prog-rock friends) are pure underpants-gnomes-level leaps of logic.  As AI gets “better,” real intelligence and consciousness will just magically occur, they believe.

What an absolute load of bull-shinola.

The only surefire way I know to make electronic computers truly intelligent is this: convince God to “miracle” intelligence into computers.  If God wants computers to be intelligent, then by God (sorry) they will be.  But absent that, there is no other way.  Not with the computing systems we have now, not with CMOS switching circuits even in the billions of trillions, not with simply manipulating voltages to make 1’s and 0’s.  Ever more complex software programs – even what is called AI – aren’t going to suddenly cause intelligence, much less consciousness, to spring forth from silicon or some other substrate that may be used in the future.  If that’s all it took, we’d be there by now.

If you want to explore the topic of intelligence in man-made machines (or our inability to accomplish that), you can also explore Kurt Gödel’s incompleteness theorems.  I’m not going to get into the discussion about that here, other than to note that when Gödel came up with these theorems, it freaked him out a little bit as he thought he might have proven the existence of God.  But that’s pushing the limits of this discussion, so you’re on your own here.

Intelligent, sentient computers of the electronic variety make for great science fiction.  HAL from Arthur C. Clarke’s 2001: A Space Odyssey is one of Sci-Fi’s most memorable characters.  My personal favorite – Mike, from Robert A. Heinlein’s The Moon is a Harsh Mistress – is another one that seeped into the consciousness of many Sci-Fi aficionados.  But those computers are fiction, and intelligent electronic computers like them will remain so, absent divine intervention.

Notice I said “electronic computers.”  Biological computers are also a thing, and they can be very intelligent.  And better yet, there is a way to make intelligent biological computers – it’s very old tech, a time-tested technique known as “having babies.”  But that’s also another discussion.

“But hey, you didn’t address it taking all our jobs and all the other things AI is going to do, good and bad!”  This piece is getting kind of longish, but I will return with more confessions of my AI skepticism, and soon.  Or, as another AI character once said, “I’ll be back.”