All posts by Erik Heter

In my professional life I am a patent agent, writing and prosecuting patent applications in the field of electrical engineering for high-tech corporate clients. In my home life I am a husband and a father of one son, football fan (the American kind, that is), a reader of history and many other things (avidly when time permits) and a lover of music (progressive rock in particular), among other things. I'm also a former submariner in the U.S. Navy.

Confessions of an AI Skeptic, Part 5 (of 5)

(Part 1, Part 2, Part 3, Part 4)

So far, I’ve discussed the lack of any real intelligence in AI, the fact that it’s a compute hog of the highest order, and, as a result, also a power/infrastructure hog and a money hog.  But at least, for the enormous price of all that compute/power/infrastructure, AI gives us accurate information and quality answers on a regular basis, right?  Right??

Um, not so fast.

AI Follies

You’d hope that for all the resources AI consumes, it would at least be reliable.  And at times, it can provide some quality answers.  But at other times, the information it returns is just bonkers.  A bad query – one that is too long, or one that generates too much information – can end up causing AI to hallucinate like a Grateful Dead fan in Haight-Ashbury circa Summer 1967.

Take for example this recent incident: lawyers submitted a motion that cited nine different cases, with the motion itself being generated by AI.  Problem?  Of the nine cases cited, eight were complete fiction.  They didn’t exist, and were wholly the creation of the AI that prepared the motion.   The lawyers were sanctioned.  And despite this, it’s happened numerous times since then, as other lawyers have failed to learn from the mistakes of their predecessors.

OpenAI has a transcription tool called Whisper which is used in hospitals.  It is known to hallucinate, inserting words or even entire phrases into text that were never actually uttered.  Do you want to take medical advice based on a hallucination-riddled transcript?  I’ll pass.

The Chicago Sun-Times once used AI to generate a summer reading list of 15 books.  We cannot fault anyone for not completing the list, since only 5 of the 15 were actual books, while the rest were pure fabrications.  However, the fabrications were attributed to real authors.  One wonders if they could get real royalty checks from the AI-run accounting department of their publishers.

Go play around with one of the prominent AI chatbots yourself.  Give it a few off-kilter queries and see what it does.  With a little effort, you can make it hallucinate too. 

How worried about your job should you be when AI makes such massive mistakes?  And if you are thinking about replacing workers with AI, should you?  Should you rely on a tool that makes stuff up out of thin air?  You wouldn’t hire a person who does that, so why would you pay for a machine to do the same thing?

Even worse, some people are using AI as a therapist, a girlfriend/boyfriend, and so on.  Those are terrible, awful, no good, very bad uses of AI, and NOBODY should do any of those things.  If you think AI is actually conscious and your friend, you couldn’t be more wrong; I urge you to stop right away and take a hard look at what’s under the AI hood.  AI is not your therapist, and it’s not your friend.  It’s not even a conscious entity, and if you think otherwise, you are badly mistaken.  Seek help – human help.

But What About All the Neat Stuff I’ve Heard AI Can Do?

When you look at these hallucinations, you wonder why there is all this hype about AI taking over everything.  Some of it has to do with other feats AI has accomplished.  However, one needs to look beyond the surface to figure out why.  Take chess, for example – chess engines have defeated some of the world’s best players.  But is that because the machine actually thinks through the game?  No.  It’s because it uses computational brute force and database access.  Using various mathematical algorithms, it calculates probabilities and accesses databases of opening and endgame moves.  Meanwhile, the human opponent is limited to what’s inside their head.
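The brute-force search described above can be sketched in a few lines.  This is a hypothetical toy, not a real engine: plain minimax over a tiny hand-made game tree, where the leaf numbers stand in for position evaluations.  Real engines search billions of positions and consult opening/endgame databases, but the principle is the same – exhaustive calculation, not understanding.

```python
def minimax(node, maximizing):
    """Exhaustively search a game tree; no judgment, just calculation."""
    if isinstance(node, int):      # leaf: a position's evaluation score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Each inner list is the set of replies available after one of our moves.
# These scores are made up purely for illustration.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))
```

The machine "plays well" only because it has ground through every branch; swap in a position the tree doesn't cover and it has nothing to say.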

But here’s another aspect of the AI-chess nexus.  Chess has predictable, stable rules.  It has a very sturdy and well-defined framework, which suits it well to the type of training performed on AI models.  But the real world, the world we live in, is messy, not always stable, with the rules changing all the time.  And when edge cases are encountered, they require actual thought – not merely predictions based on massive numbers of matrix multiplications until some convergence is reached.  When the street you drive down unencumbered every day suddenly has road construction, you need to actually think your way through how to get through safely.  AI can be good with those things that have a solid, well-defined, and unchanging framework.  But with fluid and changing circumstances that require nuanced thinking, AI falls apart very fast.

So is AI Good for Anything?

Yeah, I’ve been pretty tough on AI.  I hate all the hype from the tech bros, especially snake-oil salesmen like Sam Altman and Dario Amodei (of Anthropic, maker of Claude AI), as these guys have massively oversold AI’s capabilities while spreading irrational fears of the AI jobs apocalypse with everybody ending up out of work.   But that thing we call AI does have some uses, and it’s not going to go away.

I’ve used a few AI tools in my line of work (patents).  And some of them are excellent, very useful tools.  Are they perfect?  No, not even close.  Would I trust them to fully finish a patent application, or a response to the U.S. Patent and Trademark Office during the prosecution of a patent application?  No way.  What these tools are great at is augmenting my efforts.  They are wholly inadequate for fully replacing them, though.  The tools are particularly useful for sifting through reams of documentation and rapidly accelerating the finding of a metaphorical needle in a haystack.  And sometimes, these tools provide some extra insight into a topic that I hadn’t really thought about.

I’ve heard others in various lines of work, with similar tools at their disposal, express opinions similar to those expressed here.  What all of these tools have in common is that they are not directed toward, nor trying to emulate, anything approaching general intelligence.  Instead, they are narrowly focused on a particular area of inquiry, limited to certain tasks where they can produce mostly repeatable results.  Some of them use what are called small language models (SLMs), as opposed to the LLMs underlying AI chatbots like ChatGPT, Claude, Microsoft Copilot, and so forth.

These limited-focus tasks are where I think the current generative AI may become legitimately useful and make its biggest impact.  Even AI chatbots can sometimes be useful for limited inquiries in carrying out technical research.  Problems that can be reduced in some manner to computational mathematics will lend themselves well to the current AI regime.

But revolutionary?  Something that is going to eliminate some huge percentage of jobs, become self-aware, and go Skynet on us?  That’s not happening, particularly not with the current trajectory of AI development and the computer hardware technology upon which it runs.  You can train an AI model with lots of data, amounts that are incomprehensible to most of us.  But you can give it neither wisdom, nor common sense.  You certainly cannot give it emotion.  And therefore, you can’t ever make it truly intelligent.

Wrapping Up

When I started this, I originally intended to write only a single installment about AI.  As I wrote though, I found I had more and more to say about the topic, especially in the onslaught of ridiculous AI hype, both utopian and dystopian.  I still have more to say, but I’ve said enough for now.  And over the time I have been writing these various installments, cracks have begun to appear in the armor of the AI hype machine.    

One of the cracks is the admission by Sam Altman that GPT-5 did not result in AGI.  There is more open talk about there being an AI bubble and the financial headwinds the industry is facing.  Michael Burry, the investor made famous by short-selling the U.S. housing market during the bubble that popped in 2008, has made a $1 billion bet (in the form of short selling) that the AI bubble will also pop.  People are waking up.

Those who are not waking up usually base their belief in the eventual omnipotence of AI (for good, evil, or both) on the idea that technology advances linearly, if not exponentially.  But such is not the case.  In mathematical terms, technologies usually advance somewhat logarithmically, i.e., big gains in the early days, followed by diminishing returns in future advances.  Think of other technologies, such as household appliances.  Early on, the mere appearance of appliances such as dishwashers, refrigerators, etc., represented big technological leaps.  But as time has gone on, the rate of improvement in the basic functioning of these machines has slowed to a crawl – so much so that their manufacturers add all kinds of technological bells and whistles to make people think they are improving, even though such improvements are at best incremental (and small increments at that).  You can apply this same idea to many things, such as automobiles, aircraft, smartphones, and so on.
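The diminishing-returns shape described above is easy to see numerically: on a logarithmic curve, each equal step forward yields a smaller gain than the one before it.  The numbers here are purely illustrative, not a model of any particular technology.

```python
import math

# Gain from one more unit of "time" at successive points on a log curve:
# the early step is a big jump, the later steps are barely perceptible.
gains = [math.log(t + 1) - math.log(t) for t in (1, 10, 100, 1000)]
print([round(g, 4) for g in gains])
```

The first increment dwarfs the rest – big early leaps, then a long crawl.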

The same trajectory applies to the generative AI that has burst onto the scene in the last several years.  The early improvements in successive models were huge.  But as the release of GPT-5 illustrated, such improvements are slowing.  And as discussed in earlier installments, there are hard, physical limits on how far the current generative AI can advance.

Again, I don’t think AI is simply going to disappear.  It’s here to stay.  Nor do I think AI will be useless, as it will definitely result in some very useful tools.  But to the utopians that think AI will lead to a labor-free future in which you are simply provided sufficient income for merely existing?  Well, I’ve got disappointing news for you.  And to the dystopians who think that AI is going to lead to a bleak, cyberpunk future where most of us eke out an existence while under the rule of corporate overlords?  You can probably breathe a little easier.

For both the utopian and dystopian scenarios, a huge dose of skepticism is warranted.  In fact, skepticism is warranted for most of the area between these two extremes.  For all the amazing things AI seems to do, it isn’t magic and it isn’t truly intelligent, no matter how well it seems to mimic intelligence.  It’s just a tool.  Humans, however, are still irreplaceable.  People who are dazzled by the AI hype forget that at their own peril.  Let’s hope it doesn’t imperil the rest of us.

Confessions of an AI Skeptic, Part 4 (of 5)

(Part 1, Part 2, Part 3)

Last time out the discussion was focused on AI’s appetites for compute power and energy – appetites that are difficult to describe with any available superlative.  All that compute power costs money.  So does the electricity to run the computers and the infrastructure necessary to keep them cool.  And to run LLMs like ChatGPT, AI companies are such profligate spenders that they could embarrass a U.S. congressman during budget negotiations.  Also, like the spending of our own government (assuming you’re in the U.S.), the spending for AI is unsustainable.

Where Are the Profits?

Since generative AI burst onto the scene and the commonly used AI platforms were made available to the public, the hype has gotten out of control.  Google CEO Sundar Pichai even went so far as to say that AI was a more profound technological development than fire or electricity (a smart-aleck yet astute commenter on YouTube asked what happens to AI when we cut off the electricity – touché).  Part of me thinks he truly believes that ridiculous assertion, given his delivery.  But a more cynical side of me thinks he and others are engaging in the hype cycle to keep the investment dollars coming in – dollars that are desperately needed to keep the AI train rolling.

One recent financial example is instructive as to why they need the hype to keep investors interested.  Chipmaker NVIDIA, the premier maker of the processors used for AI workloads, recently committed to invest $100 billion in OpenAI (through 2030), the creators of ChatGPT.  For the same period, OpenAI committed to buying $300 billion of cloud compute from Oracle.  Meanwhile, Oracle committed to buy $40 billion worth of chips from NVIDIA.  Now, I’m not exactly a business titan here, but by my back-of-the-envelope math, Oracle is the only company making out in this deal – assuming this deal ever completes.  OpenAI, in the first half of 2025, had $4.3 billion in revenue, but still had a $13.5 billion loss.  That comes on the heels of a $5 billion loss in 2024.  Not exactly moving in the right direction.  How, then, will OpenAI come up with the $300 billion for the deal with Oracle?  Investors aren’t going to lose money forever.
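As a sanity check on that back-of-the-envelope math, here is the circular deal reduced to a simple cash-flow tally.  This is a deliberate simplification – it ignores equity stakes, delivery schedules, and the value of the chips and compute actually received – but it shows who ends up holding the committed dollars.

```python
# The three commitments described above, as (payer, payee, dollars).
flows = [
    ("NVIDIA", "OpenAI", 100e9),  # NVIDIA's planned investment in OpenAI
    ("OpenAI", "Oracle", 300e9),  # OpenAI's cloud-compute commitment
    ("Oracle", "NVIDIA", 40e9),   # Oracle's chip purchases from NVIDIA
]

net = {}
for payer, payee, amount in flows:
    net[payer] = net.get(payer, 0) - amount
    net[payee] = net.get(payee, 0) + amount

for company, cash in net.items():
    print(f"{company}: {cash / 1e9:+.0f} billion")
```

Oracle comes out hundreds of billions ahead in committed dollars; OpenAI is the one that has to find the money.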

The problem with AI companies is they cannot bring in the revenue to cover their costs.  Right now, OpenAI allows free access to ChatGPT, but with some fairly strict limits.  There is a paid plan for $20/month, and while not as limited as the free version, it has limits nonetheless.  This revenue structure doesn’t even come close to covering the costs.  This video speculates that OpenAI would have to raise prices to $500/month to cover its costs.  Who the heck is going to pay $500/month for ChatGPT??  That is not a path to profitability, and this cost structure is not exclusive to OpenAI.

Now one could point to a company like Amazon, which took a long time to report a profit (in large part because they were pumping revenues back into investments).  But all the while Amazon was losing money, they were making customers happy, selling products customers wanted and providing unparalleled convenience.  Is AI making customers happy?  According to this MIT report,  95% of companies – 95%!!! – are not seeing any returns on their AI investments. 

Fact is, AI is in a huge bubble right now.  Even OpenAI boss Sam Altman – in a rare display of honesty rather than ridiculous hype – thinks AI is due for an implosion.  And another fact is that AI is a very long way from being profitable.  That’s hardly a surprise, with AI’s insatiable appetite for compute and the power and infrastructure to run it, a lot of unhappy customers, and little if any agreement on what makes a good business case for AI.  Read this for a good overview of where we’re at.  If you want a video explanation, watch this.

Much of this bubble was preventable if the tech bros and others hadn’t hyped AI to the moon without knowing if they could deliver.  The first problem is the “I” in AI, as discussed in previous installments of this series.  What we call AI is most decidedly not intelligent, general or otherwise.  Further, it was hyped without full disclosure regarding its appetite for resources, and thus, money.  Maybe, instead of calling it AI, they could have simply called their software by its true name – large language models (LLMs) – and said it mimics human intelligence although it isn’t truly intelligent.  They probably wouldn’t have had the ridiculous sums of money invested in it, and maybe “progress” would have been slower.  But they also wouldn’t have this bubble that is going to burst and cause a lot of people to take a bath, and not the kind that leaves them feeling clean and refreshed.  For a lot of people, it’s going to get ugly.

The financial status of AI and the bubble created around it reveal what might be the most critical barrier to all the hyped-up predictions, whether such predictions are doom and gloom or naïve utopianism – money.  AI is expensive.  Very expensive.  The money needed to provide the compute resources, energy, and other utilities is staggering, and because of the limits of computer technology I discussed previously, this picture is not going to improve.  For companies like OpenAI to become profitable, they would have to raise the prices of their paid plans by an order of magnitude.  And that, of course, would lead to a rapid decline in usage and worsen OpenAI’s already bleak financial picture.

Even worse (for AI companies) is that AI has, unlike most technologies, become more expensive as it has advanced.  OpenAI trained their GPT-3 model for about $4.6 million.  Their next big advancement, GPT-4, was trained for an estimated $80 million – $100 million.  GPT-5 (which was supposed to be a huge advancement over GPT-4 – it was anything but) was trained for a price estimated between $1.25 billion and $2.5 billion.  Thus, the training of GPT-4 was one to two orders of magnitude more costly than GPT-3, while training GPT-5 cost yet another order of magnitude over GPT-4 (and nearly three orders of magnitude more than GPT-3).
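Using the cost figures above (the GPT-4 and GPT-5 numbers are public estimates, not disclosed figures; midpoints are used where a range was given), the order-of-magnitude jumps work out as follows, taking log base 10 of the cost ratios:

```python
import math

# Estimated training costs from the text, in dollars.
costs = {"GPT-3": 4.6e6, "GPT-4": 90e6, "GPT-5": 1.875e9}

def orders_of_magnitude(a, b):
    """How many powers of ten separate cost b from cost a."""
    return math.log10(b / a)

print(f"GPT-3 -> GPT-4: {orders_of_magnitude(costs['GPT-3'], costs['GPT-4']):.1f}")
print(f"GPT-4 -> GPT-5: {orders_of_magnitude(costs['GPT-4'], costs['GPT-5']):.1f}")
print(f"GPT-3 -> GPT-5: {orders_of_magnitude(costs['GPT-3'], costs['GPT-5']):.1f}")
```

Each generation costs roughly twentyfold more to train than the last – the opposite of the cost curve most maturing technologies follow.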

Even if a compute technology more suitable for AI were in existence, adopting it would take a paradigm shift at the most fundamental level – away from silicon-based computing and the von Neumann computer architecture that underlies virtually every computer from the room-sized ENIAC of yesteryear up to the smartphone you carry in your pocket today.  With trillions of dollars invested in that compute paradigm, nobody is going to be eager to suddenly abandon it and shift to a new one.  Practically speaking, it would be impossible.  But that’s a moot point anyway, because there is no such other paradigm.

AI is, quite simply, financially unsustainable.  And the prospects for that changing any time soon are virtually nil.

The AI Apocalypse Will Not Be Televised (Because it’s Not Going to Happen)

The discussion about the financial picture of AI leads me to another topic that is the product of AI hype – the doomsday scenarios where everybody loses their jobs and the world is ruled by a few evil megacorporations that have all the money while the rest of us live as feral beings looking for scraps while barely scraping by on universal basic income (UBI).  Scenarios that, if subject to any scrutiny, are quickly exposed as being beyond ridiculous.

Let’s put this to the test with some hypotheticals.  Let’s assume here in the U.S. that AI, in a relatively short time (maybe 5 years or so), was able to eliminate 125 million jobs – a little over 75% of the approximate number of people currently employed in this country.  Such a loss of jobs would result in a massive economic depression, one which would absolutely dwarf the Great Depression of the 1930’s (when U.S. unemployment peaked around 25%).  With so many people out of work, revenues to all businesses, evil megacorporations included, would plummet.  In such an economic shock, many businesses would close.  And recall from the above and previous installments of this series the insatiable appetite of AI for compute resources, power, and thus money.  With collapsing revenues, how are corporations (assuming they can even stay in business) going to pay for the huge costs associated with running all the AI necessary to replace those employees that were put out of work?  And even if all this AI could be kept running, where would the demand come from for the output when 75% of the people are jobless?  Why is any business going to pay for AI to produce all that output when they will never have enough consumers to pay for its staggering costs in the first place?

A lot of people, when presented with the above scenario, bring up the magical UBI that will suddenly appear, without ever examining the underlying assumptions – so let’s do that now.  The U.S. is already $39 trillion – with a ‘t’ – in debt, with spending levels that are far lower than those that would be required to finance UBI.  With 75% of the workforce out of work, tax revenues will collapse.  The federal government wouldn’t be able to finance many of its most basic functions, much less spending at its current levels, which is about $7 trillion for the most recent fiscal year.  The spending to support UBI would require the federal government to issue debt at rates that dwarf the present.  But who is going to buy that debt in a crushing economic depression, when corporate revenues are in the tank and collapsed tax revenues reduce the prospects of repayment?

But can’t we increase taxes on the rich?  Oh sure, we can, but it still won’t be anywhere near enough to support UBI over the long term.  If you took all the wealth (not just income, but every last penny of wealth) from the richest 100 Americans, you would net about $3.27 trillion, which is less than half our current annual spending.  Expanding that list to the Forbes 400, the amount of wealth adds up to about $6.6 trillion – still less than one year of annual spending.  And again, this is total wealth, not annual income, which is significantly less.
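A quick check of that arithmetic, using the figures above (the wealth totals are estimates, and federal spending is rounded to the most recent fiscal year):

```python
top_100_wealth = 3.27e12    # total wealth of the richest 100 Americans (est.)
forbes_400_wealth = 6.6e12  # total wealth of the Forbes 400 (est.)
annual_spending = 7e12      # approximate annual U.S. federal spending

# Even confiscating every penny funds less than one year of current spending,
# and it is a one-time take -- wealth, not recurring income.
print(f"Top 100:    {top_100_wealth / annual_spending:.0%} of one year")
print(f"Forbes 400: {forbes_400_wealth / annual_spending:.0%} of one year")
```

And that is before adding a single dollar of UBI on top of existing spending.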

Conclusion?  There is no way to finance UBI.

Another conclusion?  There is no way to finance the attendant resources that must be consumed to support the widespread deployment of AI necessary to replace such a large number of jobs.  A massive, short-term replacement of hundreds of millions of jobs with AI is doomed to collapse not only the economy but also its own prospects for ever being successful.  An AI employment apocalypse that eats all our jobs will end up eating itself, and in very short order.

This isn’t to say that AI will never replace any jobs.  And how many it will replace over the long haul remains to be seen.  It just means that simple economic reality, especially when paired with the economic realities of running AI, make the AI-will-take-your-job apocalypse scenario one that is so self-limiting that it’s a non-starter.  Think of it as an airplane that is so overloaded that it weighs too much to get off the ground.

More likely, AI will augment a lot of jobs.  But it’s simply too unreliable and too expensive to replace jobs en masse. 

This doesn’t mean we are out of the woods.  The coming AI apocalypse may be the bursting of the AI bubble and the collateral economic damage.  That one is far more plausible – and likely – than AI replacing everybody’s job.   

And returning to something hinted at above, AI isn’t always the most reliable thing in the world.  In fact, it can be wildly unreliable at times, too much so to risk replacing a human.  We’ll discuss that in the next installment.

Confessions of an AI Skeptic, Part 3 (of 5)

(Part 2 can be found here)

(Part 1 can be found here)

One thing you never saw in any of the movies of the Terminator franchise (featuring some of the most menacing villains in all of sci-fi) was any of the various models having to stop for a recharge.  I can’t really blame James Cameron for that.  How cinematically compelling would it have been had the Cyberdyne Systems Model 101 portrayed by Arnold Schwarzenegger had to spend a significant amount of downtime recharging his battery?  Yet, if we were seeking a realistic portrayal of such a machine, it would have had to spend most of its time recharging.  And escaping the Terminator?  Keep running, because his battery is going to be dead in very short order.

All of this is another way of saying that AI is a resource hog.  It is a voracious consumer of power and compute resources like the world has never seen.  The U.S. federal government looks positively judicious with taxpayer funds when compared to the way AI consumes resources.  However, what we call AI is bumping up against some hard physical limits, limits which present a Mt. Everest-sized obstacle to scaling.

A Compute Hog:

When a computer runs a program, it executes instructions – in particular, machine-level instructions, most often generated by a compiler that translates high-level language code into something the processor can understand.  The programs you run day-to-day – on your PC, your laptop, or that computer you carry in your pocket called a “phone” – can consume billions of processor cycles, where a cycle is the execution of an instruction.  But those software programs don’t even scratch the surface of what modern AI consumes.

Each of the tokens we mentioned in Part 2 places demands on a processor.  How much?  A prompt to an LLM that generates about 100 tokens in OpenAI’s GPT-4 model (the latest model is now GPT-5) can consume between 50 and 100 teraflop.  “Flop” in this context means floating-point operations, where floating-point is a type of data computer systems work with (basically a number that includes a mantissa and an exponent, digitally represented).  Tera means a trillion.  Trillion.  Also keep in mind that a prompt to an LLM includes two phases – a prefill phase (where the text you entered is broken down into tokens) and a decode phase (where the LLM generates tokens in response to your prompt).  So, for a relatively small prompt-and-answer, an LLM can consume between 50 and 100 trillion floating-point operations.  Now consider longer conversations with an LLM.  These can easily run into the thousands of teraflop or more.
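Those figures imply roughly half a trillion to a trillion floating-point operations per generated token.  A minimal sketch under that assumption (these per-token numbers are rough public estimates, not measurements, and vary enormously by model):

```python
# Per-token compute implied by the 50-100 teraflop / ~100-token figure above.
FLOP_PER_TOKEN = 1.0e12  # upper end: ~1 trillion floating-point ops per token

def total_flop(tokens, per_token=FLOP_PER_TOKEN):
    """Total floating-point operations for a response of the given length."""
    return tokens * per_token

short_answer = total_flop(100)    # a small prompt-and-answer
long_chat = total_flop(10_000)    # a long, multi-turn conversation
print(f"{short_answer:.0e} flop vs {long_chat:.0e} flop")
```

A long conversation lands in the ten-thousand-teraflop range – for a single user, in a single sitting.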

Because of the astronomical amount of computing power AI workloads consume, the heavy lifting is done in data centers having the requisite amount.  Modern data centers include row upon row of servers, each with a number of GPUs.  As an aside, “GPU” stands for graphics processing unit, and while such processors were originally designed for graphics workloads, they are massively parallel and thus particularly well-suited for AI workloads.  Some computers that process AI workloads also use a more specialized chip called a tensor processing unit, or TPU (which, unlike a GPU, is specifically designed for AI workloads).  In addition to all the GPUs/TPUs, each server also includes a large amount of memory, the capacity of which is measured in terabytes.

In a sense, we’ve come full circle with computing.  Up until the 1970’s, we used to think of computers as room-sized behemoths, which they were.  That was the amount of space required to run the computing workloads of the time.  It was the advent of microprocessors and Moore’s Law (which is now deader than Francisco Franco) that started to shrink the size of computers down to something you can put on your desk or even carry in your pocket.  But now, with AI workloads, we are back up to gargantuan sizes again, with whole data centers that dwarf the large computers of yesteryear.  And we’re there because that’s the kind of space required to implement computing setups that can run compute-hogging AI. 

A Power Hog:

It doesn’t take a leap of imagination to realize that the requirement of that much computing power necessitates the consumption of a lot of electrical power.  But how much is a lot?  For this part, I turned to AI itself to tell me how much power it might use, and lacking any sense of modesty, it spit the answer right out.  It gave me the assumption of 750 gigaflop per token (750 billion floating-point operations), with about 0.0001 kWh (kilowatt-hours) per token based on typical GPU/TPU energy usage (doesn’t sound like much so far, does it?  You just wait …).  The number of floating-point operations and the energy consumed scale linearly with token count.  Thus, a query that produces 1000 tokens would use, under this scenario, 0.1 kWh.  That’s 100 Wh – i.e., enough energy to power a 10-watt LED bulb for ten hours.  That’s for one fairly small conversation (compare that to what a human brain can do in those same ten hours, running on about 14 watts of power).
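That linear scaling is simple to put into code.  The 0.0001 kWh/token figure is the chatbot’s own rough estimate, so treat the outputs as order-of-magnitude only:

```python
KWH_PER_TOKEN = 0.0001  # rough estimate quoted above; varies widely by model

def query_energy_wh(tokens):
    """Energy for a query, in watt-hours (1 kWh = 1000 Wh)."""
    return tokens * KWH_PER_TOKEN * 1000

# The 1000-token query from the text, and a longer multi-turn conversation.
print(query_energy_wh(1000), "Wh")
print(query_energy_wh(50_000), "Wh")
```

At 50,000 tokens – not unusual for a long back-and-forth – you are in kilowatt-hour territory for a single conversation.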

It’s not hard to see how some AI conversations use more power than Clark Griswold’s Christmas lights.

And yet, we’re not done.  So far, we’ve only talked about the energy consumed by the computers themselves.  Thanks to the Laws of Why We Can’t Have Nice Things (sometimes referred to as “the Laws of Thermodynamics”), using that much compute power and thus that much electricity means a lot of excess heat is generated.  Something must be done about that heat, otherwise the computers in these data centers won’t run long before all the electronics are fried like a chicken in the kitchen of your local KFC (btw, Original Recipe >> Extra Krispy). 

We need to bring in cooling water, and lots of it.  That requires pumps to move the water in and then to move it out.  Some data centers also utilize large refrigerant systems to circulate cool air.  There has been some improvement on this front: old data centers had about 30-40% energy overhead for cooling, while newer data centers have about 10-20% overhead.  Nevertheless, that’s still a lot of energy.

A recent story serves as an illustrative anecdote regarding AI energy consumption.  The story, linked here, refers to a planned AI data center for the state of Wyoming, one that will consume five times as much electricity as all the residents of Wyoming combined.  Not merely more energy, but five times more.  Not merely a few residents, but all residents of the state.

All that physical space, all that compute power, and all that energy – and yet this AI is still not intelligent, it still can’t think, and it requires multiple orders of magnitude more energy to accomplish many of the same things humans can do.  Sure, it’s particularly well-suited for computational mathematics, more so than humans, but that’s not thinking, that’s just number crunching.  And of course, it took humans to design computers to be good at such things – humans that have, in their own skulls, a brain that can do amazing things running on a mere 14 watts of power (or, in an hour, 14 Wh).  And with that 14 Wh, we have consciousness and true intelligence.

The Wall:

Above, I wrote that AI faces a Mt. Everest-sized obstacle to scaling.  But more accurately, AI is racing head on into a wall, one that will kill scaling.

Let’s return to Moore’s Law, which was mentioned above.  The idea behind Moore’s Law was the product of Intel’s Gordon Moore, who postulated that the number of transistors on a given unit area of silicon would double every 18 months.  And for decades, that was true.  It’s because of Moore’s Law that you can carry in your pocket computing power, run off a battery, that is equivalent to a room-sized computer of the 1970’s.  But you can only get so small (sorry, Steve Martin). 
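To see why that cadence mattered, just compound it: a doubling every 18 months (the cadence cited above) is a growth factor of two raised to the number of doubling periods elapsed.

```python
def moore_growth_factor(years, months_per_doubling=18):
    """Transistor-count multiplier after the given number of years."""
    return 2 ** (years * 12 / months_per_doubling)

# Three decades at one doubling every 18 months is 20 doublings:
# roughly a million-fold increase for the same chip area.
print(f"{moore_growth_factor(30):,.0f}x")
```

That million-fold compounding is what shrank a 1970s room-sized machine into your pocket – and it is exactly the compounding that stops once features reach atomic scale.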

When transistor feature sizes were in the thousands, then the hundreds, and even the tens of nanometers, the progress of packing more functionality onto the same chip area marched onward, largely unabated.  But on the most advanced chips now – such as the GPUs/TPUs that run AI workloads – the smallest feature sizes are in the single-digit nanometers.  You know what else has a size measured in single-digit nanometers?  Atoms.  Yes, atoms, the fundamental building block of all matter.  And you know what that means?  It means you have run into yet another wall.  You are not going to build transistors smaller than atoms.  That is a hard, non-negotiable physical limitation.  And that means the end of Moore’s Law.

Furthermore, top clock speeds for chips haven’t increased for about 15 years now.  The maximum speed at which an execution unit in one of these chips can execute instructions is therefore also facing a hard limit, due to the material properties of the silicon upon which such chips are fabricated.

Now you will still get some denialists saying Moore’s Law is not dead, and they will point to chips that use vertical stacking.  But that’s not packing more transistors into a given area; that’s just using the vertical dimension to create more area.  Moore’s Law only works if individual transistors themselves can get smaller, and with the smallest feature sizes bumping up against atomic dimensions, that is no longer possible.  Moore’s Law has been dead for at least a decade.

The denialists might also opine that there is some other technology on the horizon that will transcend the limitations placed on transistor sizes, while remaining vague about what those technologies are.  Some might cite different materials for chipmaking.  But most of these materials have some sort of fatal flaw.  Take for example graphene – the wonder material that is effectively a flat sheet of carbon atoms.  Graphene has been used to make transistors in laboratories, and those transistors can operate at significantly higher clock speeds (at least an order of magnitude higher) while dissipating heat much better than silicon.  But there is a huge problem – graphene lacks something known as a bandgap.  Without getting into device physics, we’ll simplify things by saying the lack of a bandgap means that such a transistor can never fully turn off, making it useless as a switch, and therefore useless as the basis for a digital computer.

Analog computing is another technology championed by some.  And while it can be very useful in certain applications (it can almost instantaneously do the large matrix multiplications that hog compute cycles in the digital domain), it nevertheless suffers from the limitations of all analog circuits: it is more susceptible to noise and error cascading, and it lacks the precision needed for many workloads.  Analog computing circuits are also much larger than the digital circuits of the GPUs/TPUs.

Quantum computers are the great hope for some, but we are a long way from a practical quantum computer.  Meanwhile, they are currently very error prone, of limited stability, and require cryogenic cooling (meaning hundreds of degrees below zero, and that’s true whether you are talking Fahrenheit or Celsius).  There are questions as to whether they could provide any advantage over the current computing paradigm for many workloads.  Most of the promise is in specialized workloads, but until we get practical, reliable quantum computers, we can do no more than speculate.

So the upshot of the above is that AI as we know it has run headlong into a wall imposed by physical limits.  But those are only the physical limits – we haven’t talked about the financial ones yet.  If you think AI is a compute hog and a power hog, wait until you find out how much of a money hog it is.  The U.S. government has nothing on AI when it comes to burning through cash.

Confessions of an AI Skeptic, Part 2 (of 5)

(Part 1 can be found here)

Last time, the discussion focused largely on what happens at the circuit level of a computer system, and whether, starting from that, intelligence and consciousness could arise.  For this installment, I wanted to delve a little more into how we define intelligence.  Much of the hype surrounding AI is that we are soon going to see AGI – artificial general intelligence – as well as ASI – artificial super intelligence.  I remain firmly skeptical that either of these milestones will ever be achieved, certainly not with current computing architectures.

What is AI Doing?

Instead of focusing on the circuit level, it’s instructive to go a few rungs up the abstraction ladder and discuss what happens when one sends a prompt to an LLM, or large language model (the technology underlying most of the well-known AI chatbots today – ChatGPT, Google Gemini, Grok, etc.).  This is a somewhat simplified explanation, but it’s enough to obtain a basic understanding.

When you send a prompt to, say, ChatGPT, the words of that prompt are broken down into tokens.  These tokens can be full words, chunks of words (sub-words), or even symbols.  Each token is then mapped to a numerical vector that can have thousands of dimensions, in a process called embedding.  The vectors are then fed through the transformer layers, where many, many matrix multiplications are performed.  Since a matrix multiplication comprises many individual (scalar) multiplications, the number of total multiplications becomes astronomical.  In other words, it’s doing a mountain of math under the hood.

Each step involves multiplying huge grids of numbers together, and every one of those multiplications expands into millions or even billions of tiny arithmetic operations. Processing a single word can require hundreds of billions of multiplications and additions. To put that in perspective, if you sat with a calculator and did one multiplication every second, it would take you thousands of years to do what the model does in a fraction of a second for just one word.
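The calculator comparison is easy to sanity-check.  Using an assumed figure of 100 billion operations per word (my own illustrative number, in the “hundreds of billions” range mentioned above):

```python
# Back-of-envelope check of the "thousands of years" claim.
ops_per_word = 100e9            # assumed multiply-adds for one word
ops_per_second_by_hand = 1.0    # one operation per second on a calculator
seconds_per_year = 60 * 60 * 24 * 365

years_by_hand = ops_per_word / ops_per_second_by_hand / seconds_per_year
print(f"{years_by_hand:,.0f} years")   # on the order of a few thousand years
```

So yes: a single word of output really does correspond to millennia of by-hand arithmetic.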

In doing these kajillion multiplications, the AI model is predicting the next word, based on weights applied during said multiplications.  After all these multiplications are done, the resulting numbers are turned back into words for display on your computer screen.
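Put together, the whole loop – tokens to vectors to matrix math to a predicted next word – can be caricatured in a few lines.  This is a toy sketch of my own, with random numbers standing in for learned weights and a single matrix standing in for the transformer; real models use thousands of dimensions and billions of learned parameters:

```python
import random

random.seed(0)  # deterministic "weights" for this toy example
vocab = ["the", "cat", "sat", "mat"]   # a four-word toy vocabulary
dim = 4                                # real models: thousands of dimensions

# Made-up embedding vectors and one made-up weight matrix (assumptions).
embed = {w: [random.gauss(0, 1) for _ in range(dim)] for w in vocab}
W = [[random.gauss(0, 1) for _ in range(len(vocab))] for _ in range(dim)]

def next_token(prompt_tokens):
    """Embed the last token, do the matrix math, pick the top-scoring word."""
    vec = embed[prompt_tokens[-1]]                        # embedding lookup
    logits = [sum(vec[i] * W[i][j] for i in range(dim))   # matrix multiply
              for j in range(len(vocab))]
    return vocab[logits.index(max(logits))]               # greedy "prediction"

print(next_token(["the", "cat"]))   # prints whichever word scores highest
```

Nothing in that loop understands anything; it is arithmetic on numbers derived from text, which is the point.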

While the operations described above may be algorithmically new, from the perspective of computers, the individual operations – namely, the multiplications – are nothing new at all.  Electronic computers of all kinds have been doing multiplications since they’ve existed.  This isn’t confined to your desktop or the room-sized behemoths of yesteryear, but also includes the pocket calculators that people like myself relied upon during engineering school before things like smartphones were as ubiquitous as they are today.

There are a couple of upshots to the above.  The first is that while an LLM like ChatGPT may appear to understand language, in reality it does no such thing at all.  It just crunches numbers.  And not only that, the computer doesn’t even know it’s crunching numbers – refer back to the first installment – the number crunching is just the computer’s basic switching circuits being made to switch between logic 1 and logic 0 – high voltage and low voltage – really, really, really fast.

So if the computer doesn’t understand language, and doesn’t even know it’s crunching numbers to mimic the understanding of language, can it be considered intelligent?  If the answer is no, then how will computers become intelligent by simply making bigger, more computationally intensive models? 

How Do You Define Intelligence?

This is a trickier question than it may appear.  We can recognize intelligence, to be sure – exemplified by the fact that we can ponder and debate what exactly the term means.  But defining it with precision, drawing a hard line between intelligent and not intelligent?  That’s a much more difficult task.

We define humans in general as being intelligent (not to be confused with being wise).  And yet we still have a hard time drawing that line between what is intelligence and what is not, despite most of us being pretty sure that computers running AI haven’t yet reached intelligence.

And that’s the rub.  The people trying to create artificial general intelligence (AGI) – or any intelligence at all in computers/AI, are trying to solve it as an engineering problem.  But engineering problems require well-defined solutions.  If you want to put a satellite into an orbit with a perigee of 150 miles above the Earth’s surface and apogee of 160 miles, the solution is well-defined.  If you want to design an amplifier circuit that can take an input signal with an amplitude of 2 volts and output a corresponding signal having an amplitude of 12 volts, we know how to do that because, again, the solution is well defined.  There may be different ways to get to the same solution, but having a firm definition of the solution provides a framework and a guide for getting there. 

This is true even for some engineering problems that we haven’t solved, like nuclear fusion.  We know what man-made nuclear fusion will look like, in terms of inputs and outputs, should we ever get there.  But that illustrates another point: even when we know what the solution looks like, it can be maddeningly difficult to achieve.

With intelligence, general or otherwise, we can’t even agree on a definition.  Not even AI’s biggest proponents agree on what intelligence is, much less what would constitute true AGI.  What they all have in common is that they are trying to find an engineering solution to something that is essentially a philosophical problem.  And because the definition of intelligence is essentially philosophical, it will continue to defy an engineering solution.

So far, we’ve spent a lot of time talking about intelligence, the difficulty in defining what intelligence is, and stating why I believe computers running AI workloads are not even remotely intelligent.  What hasn’t been discussed so far will be the topic of the next installment – the rapacious appetite of AI in terms of resources.

Before I go, however, Apple has published a paper about AI entitled “The Illusion of Thinking.”  If you want to dig a little deeper, it can be found here.

Confessions of an AI Skeptic, Part 1 (of 5)

Artificial Intelligence, or AI, is all the rage these days.  “It’s going to eliminate all of our jobs!!”  “It’s going to become more intelligent than humans!!!” “It’s going to become sentient and turn into Skynet!!” 

Pffffft.

It’s not going to do any of those things.  Not even close.

Now don’t get me wrong – AI (and note – only the ‘A’ part of that is accurate) is here to stay.  And it’s going to lead to some very powerful tools, some of which can be very useful.  Of course, it will also lead to some tools that are not so useful.  And it will be misused and abused, which might be its most frightening prospect.

But if you are worried about Skynet, I’m here to tell you, don’t – The Terminator is a great action movie but not much more.  Nor should you worry about AI eliminating all the jobs, a notion that can be dispensed with in multiple ways, including with simple arithmetic.  We should once and for all dispense with the idea that AI will become conscious. Similarly, the notion that AI exhibits true intelligence should also be tossed in the wastebasket.  To understand why, we’ll start at the point where the rubber meets the road (or where software meets hardware) in computers.

The 1’s and 0’s of Artificial “Intelligence”

When I observe certain people hyping AI, namely those with a technical background, I notice they are mostly software engineers or programmers.  Many of these software engineers are extremely intelligent, and can make a computer do things – through programming – that I (also a technical person, but with a hardware/circuit orientation) could never dream of doing.  Nevertheless, many of these AI-hypesters have a huge gap in their understanding of how computers actually work.  Their interactions with computers are through high-level programming languages, several layers of abstraction away from what is happening at the hardware/software interface.  Because of that, they are only vaguely aware, at best, of the hard physical limits of computing.

For the non-technical, a little explanation is warranted here.  Almost all software programming – and AI is software – is done using what are called high-level languages – Python, Perl, C, … and for those of you who are old geezers (as am I), Fortran, Basic, Pascal, etc.  High-level programming languages are essential, as the practically infinite variety of software we use today would not be possible without them.  But the processor in your computer system cannot understand these languages directly – it needs what is known as a compiler that translates (“compiles”) the high-level language program into machine language that the computer understands.  And ultimately that means, in the digital computer systems we use, it gets converted into 1’s and 0’s. 
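Python makes this translation easy to peek at.  Strictly speaking, Python compiles to bytecode for an interpreter rather than to native machine language, but it illustrates the same idea: the friendly high-level line is turned into a sequence of primitive instructions.  A small sketch:

```python
import dis

def f(a, b):
    return a * b + 1   # one friendly high-level line

# Show the lower-level instructions the machine (here, Python's virtual
# machine) actually executes for that one line.
dis.dis(f)
```

Each of those primitive instructions is, in turn, ultimately carried out by the 1’s-and-0’s circuitry described next.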

But even the 1’s and 0’s are somewhat of an abstraction – the processors used in computer systems are electronic circuits, and as such work with voltages and currents that represent these 1’s and 0’s, rather than working with the digits themselves.  Thus, in the chips used to implement computer systems, these 1’s and 0’s are represented by corresponding voltages – e.g., a “high” voltage for a logic 1, and a “low” (or no) voltage for a logic 0.  I’m not going to delve into the actual circuits as to how this is achieved (although they are relatively simple), other than to say you can think of these circuits as 2-position switches.  A single switch in this analogy can generate a logic 1, or high voltage in one position, and a logic 0, or low voltage in another position.  These switches, constructed using transistors, can be combined to form logic gates, and logic gates can be combined to form even more complex structures.  But at the heart of it all, at the lowest level, all you have are a bunch of switches that produce the voltage levels and corresponding binary logic levels.
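The switch-to-gate-to-structure layering described above can be mimicked in software.  In this toy sketch of mine, 1 and 0 stand in for the high and low voltages, and everything is built from a single NAND arrangement of switches – which is enough to construct any digital logic:

```python
def nand(a, b):
    """The basic building block: outputs 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

# Logic gates built from nothing but NANDs...
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def xor_(a, b): return nand(nand(a, nand(a, b)), nand(b, nand(a, b)))

# ...combined into a more complex structure: a half-adder, the seed of
# the arithmetic circuits that do all that AI number crunching.
def half_adder(a, b):
    return xor_(a, b), and_(a, b)   # (sum bit, carry bit)

print(half_adder(1, 1))   # 1 + 1 = binary 10, i.e. sum 0, carry 1
```

However elaborate the structure gets, it is still nothing but switches flipping between two voltage levels.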

Just about every computer system you own – from your smartphone, to your tablet, your laptop, your desktop – has billions of transistors, and thus billions of switches.  And they are nothing even remotely like neurons in the human brain.  Putting more of them together doesn’t turn them into neurons either.

Hey, I Came Here to Read About AI, not this Switch Stuff!

Ok, so you ask now, “if this essay is about AI, then where is he going with all this switch stuff?”  Where I’m going with this is to show you what AI – indeed, what any software – does at the fundamental circuit level.  At the circuit level, it is, depending on the input voltage, making the output voltage change between a high voltage and a low voltage – between a logic 1 and a logic 0.  In the circuits used in the chips of a computer system, this switching behavior can occur billions of times per second.  Multiply that by billions of these switching circuits, and you’ve got a whole lotta switching going on.  And in AI computing workloads?  You have orders of magnitude more switching than the most processor-taxing game your kid runs on his gaming PC.

But can true intelligence (much less consciousness) arise from this mere switching behavior, having billions of circuits switch between a logic 1 and logic 0 (a high voltage and a low voltage) billions of times per second?  Digital computers have been operating this way for decades now.  There is nothing remotely intelligent about the way they function.  Simply adding more switches and making them switch faster and faster doesn’t move the ball even a nanometer closer to intelligent behavior, because the transistors used to create these switches are not neurons, and never will be neurons.  They’re just switches.  On or off.  Logic 1 or logic 0.  Putting more of these switches together into a more complex structure does not suddenly make them into neurons.  And because of this, computers will continue to understand language and human thought the way a radio understands music, i.e., not at all, because they have no such capability of “understanding.”

If someone disagrees with me, and truly believes that AI can be truly intelligent and can truly become conscious, I’d love to hear their explanation as to how we are going to get there based on making more of these switching circuits and making them switch faster.  I’m all ears.  All I’ve ever heard from those that believe AI will become some sort of machine messiah (nod and wink to my progrock friends) are pure underpants gnomes-level leaps of logic.  As AI gets “better,” real intelligence and consciousness will just magically occur, they believe.

What an absolute load of bull-shinola.

The only surefire way I know to make electronic computers truly intelligent is this: convince God to “miracle” intelligence into computers.  If God wants computers to be intelligent, then by God (sorry) they will be.  But absent that, there is no other way.  Not with the computing systems we have now, not with CMOS switching circuits even in the billions of trillions, not with simply manipulating voltages to make 1’s and 0’s.  Ever more complex software programs – even what is called AI – aren’t going to suddenly cause intelligence, much less consciousness, to spring forth from silicon or some other substrate that may be used in the future.  If that’s all it took, we’d be there by now.

If you want to explore the topic of intelligence in man-made machines (or our inability to accomplish that), you can also explore Kurt Gödel’s incompleteness theorems.  I’m not going to get into the discussion about that here, other than to note that when Gödel came up with these theorems, it freaked him out a little bit as he thought he might have proven the existence of God.  But that’s pushing the limits of this discussion, so you’re on your own here.

Intelligent, sentient computers of the electronic variety make for great science fiction.  HAL from Arthur C. Clarke’s 2001: A Space Odyssey is one of Sci-Fi’s most memorable characters.  My personal favorite – Mike, from Robert A. Heinlein’s The Moon is a Harsh Mistress – is another one that seeped into the consciousness of many Sci-Fi aficionados.  But those computers are fiction, and intelligent electronic computers like them will remain so, absent divine intervention.

Notice I said “electronic computers.”  Biological computers are also a thing, and they can be very intelligent.  And better yet, there is a way to make intelligent biological computers – it’s very old tech, a time-tested technique known as “having babies.”  But that’s also another discussion.

“But hey, you didn’t address it taking all our jobs and all the other things AI is going to do, good and bad!”  This piece is getting kind of longish, but I will return with more confessions of my AI skepticism, and soon.  Or, as another AI character once said, “I’ll be back.”

Abiding Wisdom from a Lunatic Soul:  Our Latest Interview with The Duda, aka Mariusz

Mariusz Duda, or as we call him around these parts, The Duda, never stays still for very long.  From various solo projects and his band Riverside, it seems that he always has something going on.  The fruits of his most recent labor, The World Under Unsun by Lunatic Soul, can be found in your trick-or-treat bag upon its release tomorrow, Halloween 2025. 

Ahead of The Duda’s latest release, we were fortunate once again to catch up with him and talk for a while.  We spent plenty of time discussing the new album, but also delved into the fate of Lunatic Soul itself, the creative process, and some of the other future possibilities for The Duda’s artistic output.

I can tell you it was a great conversation, but you would be better served to just read on and see for yourself as you dig into your Halloween candy.  So let’s get on with it.

SoC:     In a recent Facebook post, you said the current album is somewhat of a prequel to Walking on a Flashlight Beam.  Can you elaborate on both albums and the connection between the two?

MD:     Oh my goodness. I’m not sure if we have time for that, but long story short: the whole concept is called The Circle of Life and Death. There are six main albums, and we have two additional ones. The albums are placed on a circle, and three of them are on the side of death, and three of them are on the side of life. Okay, this one is definitely on the side of life, because this story is about the journey, about the hero who dies. He travels to the afterlife, then he revives, going back to life. And then he dies again, and then goes back to life, and so on.  He is just in a loop. Okay. That is why on The World Under Unsun, there is a song which is called Loop of Fate. Anyway, I wanted something about the loop. And there’s the thing, that he wants to escape from this loop, and that’s the main plot of the whole album.

In general, if you listen in the proper order, it’s like this: the main character dies at the beginning of Lunatic Soul I, then Lunatic Soul II, then Through Shaded Woods, and then he goes over to the side of life. We have Fractured, we have Under the Fragmented Sky – some of the most depressing ones. At this point in the plot, the main character, you know, jumps off the cliff into the waves, as you can see on the prophecy. He dies in the water.  It’s basically a story about reviving and changing.

The thing is, The World Under Unsun is post‑Fractured, which is why the first song sounds like Fractured, and it is a prequel to Walking on a Flashlight Beam.  That is why the waves you hear at the start of this album are the same waves you hear at the beginning of Walking on a Flashlight Beam. Everything is connected. We don’t have to go into all the details, but the main character is an artist who always has a choice: does he want to remember his previous life or not? He always chooses to lose his memories, which is why he is forgotten in the whole world.

On the Impressions album, there is the song “Gravestone Hill” which reveals the main character’s choice. Imagine you’re an artist: you don’t want to lose your memories because you want to remember the best things you created and develop them across lives. However, you always remember how you die—that’s the problem. He asks himself, should I be afraid of the waves this time again? Long story short, he’s in a circle. One time, when he’s on the side of life, he realizes the sun doesn’t look like the sun anymore;  something has changed and the world becomes darker and darker. It’s like Back to the Future II when Marty doesn’t belong in the place he knows. The album reflects that feeling. The title The World Under Unsun reflects the hero’s mental state: he doesn’t feel well, he’s in a toxic relationship and wants to leave it. The whole process of trying to get out of this place is on the album.

SoC: Is that kind of a metaphor for something in your life as an artist?

MD:  I guess there is always something connecting the fiction with the truth.  I usually use music as a form of therapy, and the fiction is always mixed with fragments of my personal life. I don’t keep an exact ratio, you don’t need to know the exact percentage of that mixture.  That’s only my own thing.

SoC: Do you even know it?

MD: I know it (laughs).  It’s like, in one song, there is 16% of my personal life and 84% fiction  – I’m just joking.  No, but I try to balance it in a proper way.

SoC: Shifting gears, when you come up with a concept, how do you decide on the style of music that is the foundation?  For example, on the previous Lunatic Soul, Through Shaded Woods, it was very folky.  This one is more electronic.  So what is it that drives you that says “I’m going to go this way with the music”?

MD:  I guess I started this project mostly to fill it with my favorite genres. If you, uh, think about it, it’s always connected with ambient, cinematic kind of stuff, a bit of electronic music, folk oriental things, and rock, maybe a bit of a metal type of thing.  So that’s it. This is the whole Lunatic Soul.  And I think the new album shows the entire range of genres, because you can find all these elements in the music.  And then there were the albums that were more oriental than the others, like Lunatic Soul 1 and 2, more condensed like that. Then there was Fractured, which was more electronic. And Through Shaded Woods was more folky, more organic.

Yeah. I just wanted to, you know – some albums should have their own identities. On one album I was really close to one border; on another album I was really close to another border. But it’s more about this connection between electronic music, folk oriental stuff and rock.

I believe that the new album is more um, rock oriented or even alternative oriented. I don’t know. There’s more distortion. With some exceptions, of course. And it’s dark.

When I start doing an album, I always start from the story, the cover, the title. It’s just like writing the script of a movie that you want to direct, or preparing the concept for a book that you want to write.  And this is what I do. I don’t keep coming up with the ideas first and then go, oh, maybe I should do something with that. No, I’m just telling stories. I’m just creating the stories. And then I always want to make them a bit different from the others. So I said, okay, this one should be more electronic because it’s about a fractured world, and if it’s fractured, there are lots of sharp objects. So I see this more like electronic stuff.

And if it comes from the green color connected with woods, trees, organic stuff, let’s make it more organic or folky. So everything starts with the, you know, the title, the main vibe. And I’m just following this and that’s it.

SoC:  So you did say that, you know, out of the 8 Lunatic Soul albums, 6 of them are telling the story. Which ones are not part of the story?

MD: All albums are part of the story, but the Impressions album that was released after Lunatic Soul 1 and 2, and Under the Fragmented Sky, are sort of like the bonus material for the main albums.  Impressions is something connected with Lunatic Soul 1 and 2, and the three of them are kind of connected. And Under the Fragmented Sky – these are the leftovers from Fractured. The bonus for Through Shaded Woods was already on the album; I didn’t do a separate release because I had it ready on the album at that time. Yeah.  The leftovers for Impressions and Under the Fragmented Sky were not ready yet, so that’s why I released those later on.

And this time, I didn’t want to do another bonus material. I wanted to create the classic double album, for people who have time to listen to music these days.

SoC:  So you set out to create the double album?

MD: Yes, from the very beginning. It was very important for me, because I first wanted to fill the gaps with all these, you know, answers to the questions, speaking of the plot, the story. And also I wanted to show all the genres, and I just thought that if I did, you know, a 50-minute-long album with all this electronic, folky, oriental stuff, it would be too intense, too much for it to be a pleasure to listen to. So I just said, maybe if I put more space here, more space there, and extend this, it will be more natural. We don’t have to be in a rush. You know, we can create something longer.

So someone can tell me, these days it’s really hard to release records that long, because people don’t have time to listen to them. Then don’t listen to them!

The album is for the people who have time to listen to music. So I don’t care if it is 40 minutes or 90. But on the other hand, the previous album, Through Shaded Woods, was 39 minutes, so come on, I know how to do short albums as well.

SoC:  Well, also, Riverside ADHD was only what, 47?

MD: 44

SoC: Getting back to the concept of the present album, I know part of it’s the story, but do you think some of it is kind of informed by what is going on in the world? There’s a lot of turmoil going on. Is that affecting your character or affecting how you’re writing these things?

MD: Um, actually, I always have 3 layers in terms of writing lyrics. The 1st layer is the fiction, the story that I’m coming up with. The 2nd is my personal life, my personal experience. The 3rd is, uh, what’s going on all over the world. You know, the outside world. It has to be important sometimes because, you know, I don’t know.  Let’s say, if there is a war, I don’t want to record an album that doesn’t fit this whole situation. Yeah. Well, um, all these layers are blended in a sort of way. I try to avoid political subjects.  However, from time to time, I do this. ID Entity [the most recent Riverside album] was full of that. But it was also very direct. Yeah. The difference between Riverside and Lunatic Soul is that Lunatic Soul is more metaphysical, more like, you know, an inner journey. And um, uh, that is why I didn’t want to write about, like, social media, for instance. It’s more about life, death, love, loneliness, solitude, some mystical things.

And there’s one song, uh, which is called Torn in Two. I have to admit, I wrote the lyrics after, um, the results of the presidential election in Poland.  So, yeah, that one was inspired by, let’s say, something that was outside rather than inside.

SoC:  Ok, I’ve seen some things online – word is that this is the last Lunatic Soul album – is that true?  If so, what drove the decision to make this the last album?

MD: There’s a beauty of PR, you know, the beauty of public relations. This is the last chapter of the story, right? Yeah. So sometimes I don’t have to add that this is the last chapter, the last loop. “This is the last Lunatic Soul” just sounds much more powerful in the news. But I agree, this is the last Lunatic Soul in that form. And if I bring Lunatic Soul back to life, it will be a different kind of form. It’s like, for instance, King Crimson. That’s a good example with the lineups, the vibes, the moods, the approach to music. So maybe it will not be the same. And in the next life it will have electric guitar, because, uh, that’s very important information for the many people who don’t know Lunatic Soul: this is the project without the electric guitar. I wanted to distinguish it from Riverside. I didn’t want to have those, you know, David Gilmour kind of solos.  In music, I made these limits mostly to try to make myself innovative and creative in different areas. So that’s the reason for that.

SoC:  So what you’re saying then is we might see something else that has the title of Lunatic Soul, but it’s not necessarily going to be within this story.

MD:  Maybe. This story of life and death is done. So, uh, I’m not sure if, for instance, Lunatic Soul will exist somewhere else, or if I would change everything: “Now I will talk about social media.” No, no. The thing is, this kind of music from different kinds of dimensions, I like it very much. I would like to continue this vibe, but it will definitely be a different kind of story.

SoC:  So that leads me to: What’s next? Might there be another Lunatic Soul album, but outside this story, or might there be another Mariusz Duda solo album or something like that?

MD:  I don’t know yet. The cycle is now completed. The last song of the last, let’s say, Lunatic Soul of this cycle is The New End. It’s a reference to the first song on the first album, which was The New Beginning. Yeah. Can you begin with the new end? This is something that I always use; for instance, in Riverside, we had After, Before, Lost and Found, The Night Before, and now we have The New Beginning and The New End. The circle is closed. What’s next? We will see what the future brings.

SoC: Do you think you’ll do more cycles? You did that with the first three  Riverside albums and then the second three could be kind of a cycle as well. And then Lunatic Soul here has been a cycle. So you really want people to listen for a long time!

MD: During the pandemic, I also did some kind of, like, very initial electronic project, something just called the Lockdown Trilogy. Oh, yeah, that’s right. That was another trilogy. That’s some weird, experimental, instrumental music. Probably, yes. It’s always nice. It’s just like, you know, I like series. Yeah. I’m a huge movie fan and I’m inspired by the cinema. I always cherish, you know, when the director or the creator or the scriptwriter has something like, you know, five parts of something, four parts of something: Rocky, Rambo, Back to the Future, Star Wars, whatever. Harry Potter. It’s cool.  It’s simply cool, because on the shelf it looks nice and it’s a part of something bigger, right?

SoC: I remember when I talked to you another time, after Wasteland [the Riverside album of the same name], you’d been kind of inspired by the spaghetti westerns, and the music had a lot of spaciousness in the sound.

MD: We had the trilogy, right? So probably, yes! All right. Probably yes.

SoC: Okay, well, looking forward to it. How do you record something when you’re, like, the only player? You’re the only guy doing an instrument. So how do you manage to have all those tracks playing in your head and then get them down on tape, or recorded somehow?

MD: I have guests on this album, on drums, on saxophone, and on the guitar. So I didn’t play everything by myself, but most of the things I did by myself, yeah. It started with my love for electronic music when I was a kid, and my love for keyboards with a sequencer. It’s the fact that I started to compose the songs by myself, having the sequencer at home.

So I just, you know, started from drums [vocalizes drum sounds].  Okay. And now the bass [vocalizes bass sounds].  Okay, we’ve got it. Now keyboards [vocalizes keyboard sounds]. And I always did the same, and my sequencer is in my brain. And when I create, I always put down some layers, you know. I don’t have to use keyboards anymore. I go to the studio and see those layers, and if there’s something that I can play by myself, I do it by myself. If I want to achieve something else, then I ask someone to make it.

So that’s the thing, you know, that’s the problem with me. Sometimes I find it like a virtue, sometimes it’s a flaw, because I’m not that kind of guy who just takes an instrument and plays for fun. I always had to create something, you know, like something bigger. But I liked it. I got used to this, and of course, it’s cool stuff. Doing these stories is just like writing books, you know; when I see that Stephen King wrote another book, I say, my goodness, I need to record another album. That is why I don’t like this system that’s going on in the music business. Like, you release an album, and then you have to go out for two years playing live shows because you need to earn money. What about the art of creation? What happened? Why, in the 70s, for Christ’s sake, were people releasing albums every year, or why did the Beatles release two albums every year? What’s going on? Why can’t I be in the studio all the time? Because I have to play shows.

So I’m being a rebel, and sometimes I prefer to be more in the studio. I know that some people say, yeah, but what about the money? If you do lots of stuff in the studio, you can have the money as well. I don’t want to have the house with the pool in the suburbs. I’m pretty happy with my average life. It’s just the fact that I don’t have these crazy urges inside of me. So that’s the thing, that I truly love doing stuff and recording. And as I said before, this is like my therapy. It helps me to not take pills. So I have to do this.

SoC: Well, that’s a good therapy. I’m glad you’re doing that because we can enjoy that much more than we would enjoy you taking pills!

MD: Thank you!

SoC: I guess we’ll wrap it up. Do you have any other projects lined up or anything you’re thinking about doing after this?

MD: And you know, I was thinking I’d maybe go back to my electronic world, but this time, after this album, I feel that I have an urge inside me: I want to go back to songs. I want to go back to something organic, especially in the AI days. I’m not sure if I want to continue these, you know, instrumental projects, because AI can make that. But I think it still struggles with the basic, classic, normal songs that come from your heart. So I want to focus on the classic structures of the songs and, well, we’ll see under which name it would be.

But with Riverside, I wanted to take a break now, especially from the live shows, which is really important for me. I turned 50 this year, so I kind of deserve to make time to rethink something. It always has to be this way, like, I don’t know, lockdown or COVID forcing us to stop. Yeah. So I forced myself to just stop. But I probably want to create something new, and maybe, as we said, the new shape of Lunatic Soul will appear. Okay.

In Poland, I do some kind of promotional meetings, meet-and-greet kind of stuff. I should do that in more countries, but, you know, this is another limit. I talk about the album, and I’m going to play a few tracks acoustically as well. So maybe that will be the transition to something new. We’ll see.

SoC: Thanks again for talking to us, and I hope to hear from you again someday very soon.

MD: Thank you so much, Erik, for your time. Wish you all the best. Thank you very much. Bye bye.

Political Beats – Yes!!

Our founder, Brad Birzer, recently did a two-part episode of National Review’s music podcast, Political Beats. If you are not familiar, this podcast usually features a guest and a discussion of a particular band.
For this two-parter, Brad and the regular panel discuss the career of progressive rock giants Yes, album by album. I’ve conversed with Brad in a group chat about the episode, and he liked my comments enough to ask me to present them here. As such, here they are, unedited save for a few interjections.

First comment, after listening fully to Part 1 and a little bit of Part 2 (in italics; my additional interjections in brackets):

Hi Brad – I just finished listening to the first Yes episode and have listened up through the discussion on GFTO in the second episode. I loved the discussion on TFTO, and I think “beautiful failure” is an apt description, although I would also add it was a necessary failure. They found their limits on that album because they tested those limits, and I think that allowed them to be more concise and focused with their next two albums. [Tales from Topographic Oceans was Yes’s most ambitious album, and to paraphrase what Jon Anderson said about it, it was the meeting of high ideals and low energy. It certainly has some brilliant music on it but also has a lot of mindless noodling. Most of the panel thought the first and last pieces of the album – The Revealing Science of God and Ritual, respectively – were the best pieces. For my money, it’s actually the second piece, The Remembering, which holds together best (although even it suffers a little from needless padding). On that note, I think the bass playing in that piece is brilliant, often subtle and understated (not often a Chris Squire trademark), and Squire said as much, that he was proud of it, in YesStories by Tim Morse.]

I also liked the observation that at times on TFTO, they were fitting the art to the format instead of just letting it flow organically. That’s one reason I’m not as down on the digital formats as some are today, because it essentially removes such restraints and allows the artist to just create without having to adapt the art to the format. I think Gazpacho’s Night is a great example of that, as I just don’t think it would flow anywhere near as well if it had to be adapted to (and possibly compromised by) the LP format. [In line with the discussion above, I think a lot of the problem with TFTO was directly related to this observation. Multiple panelists stated this album could have been better with some editing, but such editing within the limitations of the LP format would have been much more difficult.]

I would have been a slightly dissenting voice in the GFTO discussion with regard to Awaken, which I think is pure, magical, utter freakin’ brilliance; even in a catalog that includes Close to the Edge, it’s my favorite Yes composition. The production, the dynamics of the piece, the playing, the shifts in mood … all of that adds up to me as just an incredible musical journey that leaves me satisfied every time I hear it, and yet wanting more of it at the same time. [This was my biggest dissent with the panel. Not that they disrespected Awaken, but they certainly didn’t see it the way I do. Progressive rock (particularly, symphonic progressive rock) was often described as the fusion of rock and classical music, and this piece more than any exemplifies that fusion in its best form to my ears. The tone and timbre of the instrumentation here (especially with the harp and the church organ) really give it a classical feel in a way that exceeds even that of Close to the Edge. The crescendo that consumes the second half of the piece, beginning with a few quiet plucks of the harp by Anderson, is brilliant, slowly and patiently building to a powerful conclusion. Give it another try. On the other hand, I loved that they all showed so much love to Parallels, my second favorite song on this album, which features incredible playing (and interplay) among Howe’s guitar, Squire’s bass, and Wakeman’s keyboards. I had a lot more to say about this album some years ago on Progarchy; that piece can be found here.]

Will let you know what I think of the rest of it when I’m finished. Really looking forward to the discussions of Drama and 90125.

Second comment, after listening to Part 2:

Finished the second episode now. Definitely enjoyed the discussion and agreed with a majority of the takes. After Magnification, the only Yes album that has interested me is Fly From Here: Return Trip, because of the Drama connection. Drama, BTW, might be my favorite Roger Dean cover. I love the album, although I will admit that the overselling of “Yes” on Tempus Fugit wore on me after a while (but instrumentally, it’s an incredible song). [That’s about my only issue at all with Drama, which is a great album in its own right. I share the sentiments of others on the panel who wonder what might have been had that lineup continued.]

Thought the observation that some of the ideas on Tormato were good ideas poorly executed was a good one. My pick for that would be Onward, which I actually liked much better on Keys to Ascension, when Howe brought in the nylon string guitar in place of the electric of the studio version. [Onward is one of many pieces by bands I love that seem to come off better live than in the studio, and Howe’s nylon string guitar on the KTA version is the reason why here. Gates of Delirium is another Yes piece I like better live than in the studio due to some production issues (although the Steven Wilson remaster seemed to fix most of those).]

As for Release Release, I’ve always preferred a cover by Shadow Gallery (from the tribute album Tales from Yesterday) to the original studio version, as it has the punch that the original was lacking. [That song just needed to rock more. While Howe was superbly versatile in many styles of guitar, he didn’t seem to have an affinity for the kind of bone-crunching power chords that song needed, or at least he saved that for Machine Messiah on the next album.]

Like you and the rest of the panel, I was pretty disappointed with Big Generator; other than Shoot High Aim Low, it was pretty forgettable. Trivia note: I heard a Rabin interview in which he stated that Love Will Find a Way was a song he had originally written for Stevie Nicks, but the rest of the band wanted to keep it for themselves. [Yeah, what a disappointment after 90125. On the other hand, I loved the discussion of 90125, and was happy that nobody on the panel was such a prog snob that they dismissed the album as other prog snobs are wont to do. Sure, it was a lot different from their previous work, but it was undoubtedly Yes, and it was the kind of reinvention that only a band like Yes could pull off in such spectacular fashion.]

If you’re a Yes fan and haven’t listened to this two-part episode, I strongly recommend you do so. You won’t be sorry!

90125 at … 40??

In the immortal words of Ferris Bueller, life moves pretty fast. In this case, it was 10 years that came at us fast – for it was 10 years ago that I wrote the piece linked below about one of the seminal albums of the 1980s. Those 10 years have allowed for additional perspective to develop.

If anything, my appreciation for this album has only grown. As the original piece notes, 90125 brought in scores of new fans, both of Yes the band and of the genre of prog in general. In the latter area, I would be hard pressed to name an album whose ripples had more of an effect than 90125. Moving Pictures from Rush might give it a run for its money, but that’s the only one I can name that’s really in the same ballpark. 90125 attracted millions of fans who would have had no reason to pay attention to the genre and who now are aficionados of the same.

Many people (myself most definitely included) love to talk about albums that had a lasting impact. Sgt. Pepper’s by The Beatles is certainly one that gets a lot of ink spilled, as does Led Zeppelin IV and Pink Floyd’s Dark Side of the Moon. And by Yes themselves, Close to the Edge is often cited as an album whose impact has continued to resonate long past its release date. Now, 40 years after its release, I think it’s time we put 90125 on the same shelf. And now, let’s move on to the piece itself to learn some of the reasons why.

The Caravel and the Starship

Prior to the 15th century, European maritime adventures were primarily limited to coastal navigation outside the Mediterranean Sea.  In the 15th century, spearheaded by Prince Henry the Navigator, the Portuguese developed a new type of ship called the caravel.  The caravel had capabilities beyond other sailing ships of the day, and because of its design, was capable of voyages on the open ocean.  On August 3rd, 1492, the caravels Niña and Pinta, along with the carrack Santa Maria, departed from Palos de la Frontera, Spain, heading westward into the Atlantic Ocean.  On October 12th, they made landfall on an island that is now part of the Bahamas.  Months later, the Niña sailed into the port of Lisbon with news of the discovery.  It was an epochal moment.  The world has never been the same.

Today, on the Gulf Coast of South Texas, the world witnessed the first launch of the caravel of the Space Age.  Starship, boosted by the Super Heavy first stage (the largest, most powerful rocket ever built), cleared the pad and roared into the skies over the Gulf of Mexico.  While the flight did encounter what Elon Musk refers to as a “rapid unscheduled disassembly,” one should not view this test as a failure.  This is particularly true when considering the iterative engineering process of SpaceX – and its “move fast, break things” ethos.  The flight hit several important milestones while also yielding valuable data that SpaceX engineers will use to further refine the design, fix flaws, and get the next iteration of this rocket on the pad within a few months.  Keep in mind that SpaceX is the same company that now has over 100 consecutive successful propulsive landings of the Falcon 9 booster – many of those boosters re-used multiple times.  There was a time when the “smart” people said such a thing was not even possible.  And yet, here we are – propulsive landings of the Falcon 9 first stage are nearly as routine as successful airplane landings.  When a company has a track record like that, it’s foolish to bet against them.

Why is Starship significant? Just as the caravel was designed to carry people across the oceans of Earth, Starship was designed to carry people across the oceans of empty space.  And just as the caravel took many to the New World, the motivation for designing Starship was the same, with Mars being the prime target (a variant will also take astronauts back to the moon).  It will be entirely reusable, capable of returning to the world from which its journey started, just as the Niña did.  No other such crewed spacecraft currently exists or has ever existed. Starship will be the first.  It will also further reduce launch costs.  Falcon 9 can already put approximately the same amount of payload into the same orbit as the Space Shuttle could – but at 1/20th of the cost.  A fully operational Starship promises at least another order of magnitude reduction in that cost.  Thus, in both cost and capability, Starship will be the vehicle that truly opens the final frontier, not just for the few astronauts who can meet NASA’s exacting standards, but for ordinary people.  When Starship lands on Mars with humans on board, it will be every bit as epochal as the moment when Columbus realized the significance of his discoveries.

Like the 1960s, we live in tumultuous times.  But also like the 1960s, we live in exciting times, certainly when it comes to advances in spaceflight.  Whereas the previous era was driven by governments and the impetus of the Cold War, the advances of the present era are being driven by the private sector, and without many of the non-technical limitations of the former era.  While looking at some of the goings-on in the world today is rather depressing, the world of spaceflight is as exciting as it has been at any time since the build-up to Neil Armstrong’s call of “Tranquility Base here – the Eagle has landed.”

To be sure, there is a long way to go, as the ending of today’s test flight attests.  But I am more confident than ever that we will see Starship take humans to Mars, and maybe even beyond; that we will see the first trickle of a migration that was once as inconceivable as the migrations to the New World were in 1491.  What an incredible time to be alive.

Godspeed, Starship.

Riverside … on Riverside (Drive, that is)

Riverside at Come and Take It Live, Austin, TX, February 22, 2023

Members of the band helpfully direct concertgoers to the venue

Sometime in the mid-to-late 00s, I was surfing the internet looking for new music.  I happened upon this Polish band named Riverside that was creating a lot of buzz in the prog community.  I ended up purchasing their second album, and have been a fan ever since.  Unfortunately, the chance to see them never seemed to materialize, as what little touring they did in the U.S. never seemed to be near my home.  That almost changed in February 2022, when Riverside had a show scheduled here in Austin.  But almost as quickly as it was scheduled, it was canceled for some reason.  They promised on Facebook they would make it on the next tour, and I crossed my fingers.  And almost a year to the day after their originally scheduled show, they delivered on that promise.

Appearing at a venue with one of the most Texas names ever, Come and Take It Live (which, serendipitously, is located on East Riverside Drive in Austin), the band put on a two-hour show that was just about flawless.  The setlist was quite interesting; just as there is such a thing as a concept album, I suppose this show could be called a concept concert.  The band performed six of the seven songs off of their latest album, ID. Entity (I’m Done With You being the lone exception).  A number of other songs dovetailed nicely with the theme of ID. Entity.  These songs included the show opener #Addicted (from Love, Fear, and the Time Machine), Left Out and Egoist Hedonist (from Anno Domini High Definition), and We Got Used to Us (from Shrine of New Generation Slaves). Outside of that, the only two songs that didn’t really fit in thematically with the rest of the set were O2 Panic Room (from Rapid Eye Movement) and Conceiving You (from Second Life Syndrome).

The performances were as excellent as one would expect from this group of musicians, delivered with high energy and intensity.  The deliveries of Egoist Hedonist and Left Out were especially powerful, both including jams that extended them well beyond their studio counterparts.  Mariusz Duda, in addition to being a great player, was engaging with the audience, and proved to be every bit the cool guy I had the good fortune of interviewing three times during my days at Progarchy.  The Duda indeed abides.

The other musicians were in top form as well.  I continue to be impressed with Maciej Meller’s ability to play the parts of Piotr Grudziński with the right balance between faithfulness to the original and his own individual style.  Michał Łapaj was in the zone all show long, playing to the high standards prog fans expect of their keyboard heroes.  And Piotr Kozieradzki did not disappoint on drums.

In addition to enjoying the show myself, I managed to introduce Riverside to a friend and co-worker I brought along, one who is as much of a prog-head as I.  He left impressed, and was enticed enough by the lyrics of ID. Entity to spend $100 on a special edition of the album that included the main disc, the bonus disc, a 5.1 surround sound disc, vinyl-sized artwork, and a booklet.  That’s a pretty nice way to start a journey of discovery of the Riverside catalog.  I’m kind of envious that he’s going to get to hear all their music for the first time.

It’s a few days after the show as I write this, but I’m still buzzing.  Their performance was so good, so tight, so energetic, and just so much fun.  There are a few other Riverside fans that contribute to this site, and a few more that read it.  So if their tour manages to stop close by, I highly recommend you go see them.  You will not be disappointed.