Confessions of an AI Skeptic, Part 3 (of 5)

(Part 2 can be found here)

(Part 1 can be found here)

One thing you never saw in any of the movies of the Terminator franchise (featuring some of the most menacing villains in all of sci-fi) was any of the various models having to stop for a recharge.  I can’t really blame James Cameron for that.  How cinematically compelling would it have been if the Cyberdyne Systems Model 101 portrayed by Arnold Schwarzenegger had to spend significant downtime recharging his battery?  Yet a realistic portrayal of such a machine would have it spending most of its time plugged in.  And escaping the Terminator?  Just keep running; his battery would be dead in very short order.

All of this is another way of saying that AI is a resource hog.  It is a voracious consumer of power and compute resources like the world has never seen.  The U.S. federal government looks positively judicious with taxpayer funds when compared to the way AI consumes resources.  However, what we call AI is bumping up against some hard physical limits, limits which present a Mt. Everest-sized obstacle to scaling.

A Compute Hog:

When a computer runs a program, it executes instructions – in particular, machine-level instructions, most often generated by a compiler that translates high-level language code into something the processor can understand.  The programs you run day to day on your PC, your laptop, or that computer you carry in your pocket called a “phone” can consume billions of processor cycles, where, roughly speaking, a cycle corresponds to the execution of an instruction.  But those programs don’t even scratch the surface of what modern AI consumes.

Each of the tokens we mentioned in Part 2 places demands on a processor.  How much?  A prompt that generates about 100 tokens in OpenAI’s GPT-4 model (the latest model is now GPT-5) can consume between 50 and 100 teraflops.  “Flops” in this context are floating-point operations, floating-point being a type of data computer systems work with (basically a number represented digitally as a mantissa and an exponent); “tera” means trillion.  Also keep in mind that a prompt to an LLM involves two phases – a prefill phase (where the text you entered is broken down into tokens) and a decode phase (where the LLM generates tokens in response to your prompt).  So, for a relatively small prompt-and-answer, an LLM can consume between 50 and 100 trillion floating-point operations.  Now consider longer conversations with an LLM.  These can easily run into the thousands of teraflops or more.
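As a rough sanity check on these numbers, there is a widely used rule of thumb that generating one token costs about 2 × N floating-point operations for a model with N parameters.  The 500-billion-parameter model size below is an assumption for illustration (OpenAI has not published GPT-4’s size), but it reproduces the 50–100-trillion figure:

```python
# Back-of-envelope FLOP count for an LLM response, using the common
# "2 * parameters per generated token" rule of thumb.  The
# 500-billion-parameter model size is an assumption for illustration,
# not a published figure for any specific model.

PARAMS = 500e9                   # assumed parameter count
FLOPS_PER_TOKEN = 2 * PARAMS     # ~1 trillion operations per token

def total_flops(tokens_generated):
    """Total floating-point operations to decode the given token count."""
    return FLOPS_PER_TOKEN * tokens_generated

print(f"{total_flops(100) / 1e12:.0f} trillion ops")     # short 100-token answer
print(f"{total_flops(10_000) / 1e12:.0f} trillion ops")  # long conversation
```

At one multiplication per second by hand, that 100-token answer alone (about 10^14 operations) would take over three million years – which puts the data-center hardware in perspective.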

Because of the astronomical amount of computing power AI workloads consume, the heavy lifting is done in data centers with the requisite capacity.  Modern data centers include row upon row of servers, each with a number of GPUs.  As an aside, “GPU” stands for graphics processing unit; while such processors were originally designed for graphics workloads, they are massively parallel and thus particularly well-suited for AI workloads.  Some computers that process AI workloads also use more specialized chips called tensor processing units, or TPUs (which, unlike GPUs, were designed specifically for AI workloads).  In addition to all the GPUs/TPUs, each server also includes a large amount of memory, with capacity measured in terabytes.

In a sense, we’ve come full circle with computing.  Up until the 1970’s, we used to think of computers as room-sized behemoths, which they were.  That was the amount of space required to run the computing workloads of the time.  It was the advent of microprocessors and Moore’s Law (which is now deader than Francisco Franco) that started to shrink the size of computers down to something you can put on your desk or even carry in your pocket.  But now, with AI workloads, we are back up to gargantuan sizes again, with whole data centers that dwarf the large computers of yesteryear.  And we’re there because that’s the kind of space required to implement computing setups that can run compute-hogging AI. 

A Power Hog:

It doesn’t take a leap of imagination to realize that much computing power necessitates the consumption of a lot of electrical power.  But how much is a lot?  For this part, I turned to AI itself to tell me how much power it might use, and lacking any sense of modesty, it spit the answer right out.  It gave me the assumption of 750 gigaflops per token (750 billion floating-point operations), with about 0.00001 kWh (kilowatt-hours) per token based on typical GPU/TPU energy usage (doesn’t sound like much so far, does it?  You just wait …).  The number of flops and the energy consumed scale linearly with token count.  Thus, a query that produces 1000 tokens would use, under this scenario, 0.01 kWh.  Converting units, that’s 10 Wh – i.e. enough energy to power a 10-watt LED bulb for an entire hour.  That’s for one very small conversation (compare that to what a human brain can do in an hour, running on about 14 watts of power).
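The arithmetic is simple enough to spell out.  The per-token energy figure below is an assumption chosen so that a 1000-token exchange comes to the 10 Wh used in this discussion; real per-token costs vary widely by model and hardware:

```python
# Energy arithmetic for one LLM exchange.  The per-token figure is an
# assumed estimate (chosen to yield ~10 Wh for 1000 tokens), not a
# measured value for any particular model.

KWH_PER_TOKEN = 0.00001   # assumed energy per generated token

def query_energy_wh(tokens):
    """Energy for one exchange, converted from kWh to watt-hours."""
    return KWH_PER_TOKEN * tokens * 1000

bulb_watts = 10
wh = query_energy_wh(1000)
print(f"1000 tokens ≈ {wh:.1f} Wh ≈ {wh / bulb_watts:.1f} h of a {bulb_watts} W LED bulb")

# For comparison: a human brain running for an hour at ~14 W uses 14 Wh,
# and does vastly more with it.
brain_wh = 14
print(f"brain, one hour: {brain_wh} Wh")
```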

It’s not hard to see how some AI conversations use more power than Clark Griswold’s Christmas lights

And yet, we’re not done.  So far, we’ve only talked about the energy consumed by the computers themselves.  Thanks to the Laws of Why We Can’t Have Nice Things (sometimes referred to as “the Laws of Thermodynamics”), using that much compute power and thus that much electricity means a lot of excess heat is generated.  Something must be done about that heat, otherwise the computers in these data centers won’t run long before all the electronics are fried like a chicken in the kitchen of your local KFC (btw, Original Recipe >> Extra Krispy). 

We need to bring in cooling water, and lots of it.  That requires pumps to move the water in and out.  Some data centers also utilize large refrigerant systems to circulate cool air.  There has been improvement on this front: older data centers had about 30-40% energy overhead for cooling, while newer ones are down to about 10-20%.  Nevertheless, that’s still a lot of energy.
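That overhead is usually expressed as PUE (Power Usage Effectiveness): total facility power divided by the power that actually reaches the computing equipment.  A 30-40% overhead corresponds to a PUE of roughly 1.3-1.4, and 10-20% to roughly 1.1-1.2.  A quick sketch, using a hypothetical 10 MW IT load:

```python
# Cooling/overhead cost expressed via PUE (Power Usage Effectiveness):
# total facility power / power delivered to the computing equipment.
# The 10 MW IT load below is hypothetical.

def facility_power_kw(it_load_kw, pue):
    """Total power a facility must draw to support a given IT load."""
    return it_load_kw * pue

it_load_kw = 10_000   # hypothetical 10 MW of servers
for label, pue in [("older facility (PUE 1.4)", 1.4),
                   ("newer facility (PUE 1.1)", 1.1)]:
    overhead = facility_power_kw(it_load_kw, pue) - it_load_kw
    print(f"{label}: {overhead:,.0f} kW goes to cooling and overhead")
```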

A recent story serves as an illustrative anecdote regarding AI energy consumption.  The story, linked here, concerns a planned AI data center in the state of Wyoming, one that will consume five times as much electricity as all the residents of Wyoming combined.  Not merely more energy, but five times more.  Not merely a few residents, but all the residents of the state.

All that physical space, all that compute power, and all that energy, and yet this AI is still not intelligent.  It still can’t think, and it requires multiple orders of magnitude more energy to accomplish many of the same things humans can do.  Sure, it’s particularly well-suited for computational mathematics, more so than humans, but that’s not thinking, that’s just number crunching.  And of course, it took humans to design computers to be good at such things – humans that have, in their own skulls, a brain that can do amazing things running on a mere 14 watts of power (or, in an hour, 14 Wh).  And with that 14 Wh, we have consciousness and true intelligence.

The Wall:

Above, I wrote that AI faces a Mt. Everest-sized obstacle to scaling.  More accurately, AI is racing head-on into a wall, one that will kill scaling.

Let’s return to Moore’s Law, mentioned above.  It is named for Intel co-founder Gordon Moore, who observed that the number of transistors that could be packed into a given area of silicon doubled roughly every two years (a cadence often quoted as 18 months).  And for decades, that held true.  It’s because of Moore’s Law that you can carry in your pocket, running off a battery, computing power equivalent to a room-sized computer of the 1970’s.  But you can only get so small (sorry, Steve Martin).
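The power of that compounding is easy to demonstrate.  The sketch below uses the two-year doubling of Moore’s revised 1975 formulation (the oft-quoted 18-month figure overshoots), starting from the Intel 4004’s roughly 2,300 transistors in 1971 – and it lands within an order of magnitude of real chips for five decades, which is exactly why the law’s death matters so much:

```python
# Compounding implied by Moore's Law: transistor counts doubling every
# two years, starting from the Intel 4004's ~2,300 transistors (1971).

def projected_transistors(year, start_count=2300, start_year=1971,
                          months_per_doubling=24):
    """Project a transistor count forward from a known starting chip."""
    doublings = (year - start_year) * 12 / months_per_doubling
    return start_count * 2 ** doublings

print(f"{projected_transistors(2001):.1e}")  # vs. Pentium 4's ~42 million
print(f"{projected_transistors(2021):.1e}")  # vs. Apple M1 Max's ~57 billion
```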

When transistor feature sizes were in the thousands, then the hundreds, and even the tens of nanometers, the march of packing more functionality onto the same chip area continued largely unabated.  But on the most advanced chips now – such as the GPUs/TPUs that run AI workloads – the smallest feature sizes are down to single-digit nanometers.  You know what else lives at that scale?  Atoms.  Yes, atoms, the fundamental building blocks of all matter: a silicon atom is only about 0.2 nanometers across, which means the smallest features are now just a few dozen atoms wide.  And you know what that means?  It means you have run into yet another wall.  You are not going to build transistors smaller than atoms.  That is a hard, non-negotiable physical limitation.  And that means the end of Moore’s Law.

Furthermore, top clock speeds for chips haven’t increased for about 15 years now.  The maximum speed at which an execution unit in one of these chips can execute instructions is therefore also facing a hard limit, imposed by the material properties of the silicon on which such chips are fabricated.

Now you will still get some denialists saying Moore’s Law is not dead, and they will point to chips that stack transistors vertically.  But that’s not packing more transistors into a given area; that’s just using the vertical dimension to create more area.  Moore’s Law only works if individual transistors themselves keep getting smaller, and with the smallest feature sizes bumping up against atomic dimensions, that is no longer possible.  Moore’s Law has been dead for at least a decade.

The denialists might also opine that some other technology on the horizon will transcend the limits on transistor size, while remaining vague about what those technologies are.  Some might cite different materials for chipmaking.  But most of these materials have some sort of fatal flaw.  Take graphene – the wonder material that is effectively a flat sheet of carbon atoms.  Graphene has been used to make transistors in laboratories, and those transistors can operate at significantly higher clock speeds (at least an order of magnitude higher) while dissipating heat far better than silicon.  But there is a huge problem – graphene lacks something known as a bandgap.  Without getting into device physics, we’ll simplify things by saying that the lack of a bandgap means such a transistor can never fully turn off, making it useless as a switch, and therefore useless as the basis for a digital computer.

Analog computing is another technology championed by some.  And while it can be very useful in certain applications (it can almost instantaneously perform the large matrix multiplications that hog compute cycles in the digital domain), it suffers from the limitations of all analog circuits: it is more susceptible to noise and error cascading, and it lacks the precision many workloads require.  Analog computing circuits are also much larger than the digital circuits of GPUs/TPUs.

Quantum computers are the great hope for some, but we are a long way from a practical quantum computer.  Meanwhile, they are very error prone, of limited stability, and require cryogenic cooling (meaning hundreds of degrees below zero, and that’s true whether you are talking Fahrenheit or Celsius).  There are questions as to whether they could provide any advantage over the current computing paradigm for many workloads.  Most of the promise lies in specialized workloads, but until we have practical, reliable quantum computers, we can do no more than speculate.

So the upshot of the above is that AI as we know it has, due to the various physical limits discussed, run headlong into a wall.  But that wall is only the physical one – we haven’t talked about financial limits yet.  If you think AI is a compute hog and a power hog, wait until you find out how much of a money hog it is.  The U.S. government has nothing on AI when it comes to burning through cash.

Interview: Rylee McDonald of ADVENT HORIZON

This morning, I had the great and grand pleasure of interviewing Rylee McDonald of Advent Horizon. We talked for about 35 minutes. You’ll see–though Rylee is a young guy–he is fully immersed in prog and new wave. And, he’s just as kind and insightful and brilliant as I expected after hearing the lyrics to his latest album, FALLING TOGETHER. Please support these guys! They’re the real deal.

Here’s the interview:

To order the new album, please go to Band Wagon USA!

Confessions of an AI Skeptic, Part 2 (of 5)

(Part 1 can be found here)

Last time, the discussion focused largely on what happens at the circuit level of a computer system, and whether intelligence and consciousness could arise from that.  For this installment, I wanted to delve a little more into how we define intelligence.  Much of the hype surrounding AI is that we are soon going to see AGI – artificial general intelligence – as well as ASI – artificial super intelligence.  I remain firmly skeptical that either milestone will ever be achieved, certainly not with current computing architectures.

What is AI Doing?

Instead of focusing on the circuit level, it’s instructive to go a few rungs up the abstraction ladder and discuss what happens when one sends a prompt to an LLM, or large language model (the technology underlying most of the well-known AI chatbots today – ChatGPT, Google Gemini, Grok, etc.).  This is a somewhat simplified explanation, but it’s enough to obtain a basic understanding.

When you send a prompt to, say, ChatGPT, the words of that prompt are broken down into tokens.  These tokens can be full words, chunks of words (sub-words), or even symbols.  Each token is assigned a number (an ID), and each ID is then mapped, in a process called embedding, to a numerical vector that can have thousands of dimensions.  The vectors are then fed through transformer layers, where many, many matrix multiplications are performed.  Since a matrix multiplication comprises many individual (scalar) multiplications, the total number of multiplications becomes astronomical.  In other words, it’s doing a mountain of math under the hood.
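The pipeline above can be shown in miniature.  The vocabulary, dimensions, and random weights below are invented for illustration – real models use vocabularies of roughly 100,000 tokens, vectors with thousands of dimensions, and dozens of stacked layers – but the steps are the same:

```python
import numpy as np

# Toy version of the LLM front end: text -> token IDs -> embedding
# vectors -> one matrix multiplication.  Everything here is scaled down
# and the weights are random; it only illustrates the data flow.

vocab = {"the": 0, "cat": 1, "sat": 2}   # tiny assumed vocabulary
embed_dim = 4                            # real models: thousands

rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(len(vocab), embed_dim))
layer_weights = rng.normal(size=(embed_dim, embed_dim))

token_ids = [vocab[w] for w in "the cat sat".split()]  # tokenization
vectors = embedding_table[token_ids]                   # embedding lookup
hidden = vectors @ layer_weights                       # one matrix multiply

# Each output entry is a dot product of embed_dim multiply-adds, so even
# this toy step performs 3 * 4 * 4 = 48 scalar multiplications.  Scale
# the dimensions up to realistic sizes and the count explodes.
print(hidden.shape)   # (3, 4)
```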

Each step involves multiplying huge grids of numbers together, and every one of those multiplications expands into millions or even billions of tiny arithmetic operations. Processing a single word can require hundreds of billions of multiplications and additions. To put that in perspective, if you sat with a calculator and did one multiplication every second, it would take you thousands of years to do what the model does in a fraction of a second for just one word.

In doing these kajillion multiplications, the AI model is predicting the next word, based on weights applied during said multiplications.  After all these multiplications are done, the resulting numbers are turned back into words for display on your computer screen.
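That last step – numbers back into words – works by scoring every vocabulary entry and picking from the result.  A minimal sketch, with an invented vocabulary and invented scores (real models draw them from the matrix multiplications just described):

```python
import numpy as np

# Sketch of next-token selection: the model emits one score ("logit")
# per vocabulary entry, softmax converts the scores into probabilities,
# and the next token is chosen (greedily, in this sketch).  The
# vocabulary and logits are invented for illustration.

def softmax(logits):
    shifted = np.exp(logits - np.max(logits))   # subtract max for stability
    return shifted / shifted.sum()

vocab = ["the", "cat", "sat", "mat"]
logits = np.array([1.0, 0.5, 0.2, 3.0])   # assumed raw model outputs

probs = softmax(logits)
next_token = vocab[int(np.argmax(probs))]
print(next_token)   # mat
```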

While the operations described above may be algorithmically new, from the perspective of computers, the individual operations – namely, the multiplications – are nothing new at all.  Electronic computers of all kinds have been doing multiplications since they’ve existed.  This isn’t confined to your desktop or the room-sized behemoths of yesteryear, but also includes the pocket calculators that people like myself relied upon during engineering school before things like smartphones were as ubiquitous as they are today.

There are a couple of upshots to the above.  The first is that while an LLM like ChatGPT may appear to understand language, in reality it does no such thing.  It just crunches numbers.  And not only that, the computer doesn’t even know it’s crunching numbers – refer back to the first installment – the number crunching is just the basic switching circuits of the computer system flipping between logic 1 and logic 0 – high voltage and low voltage – really, really, really fast.

So if the computer doesn’t understand language, and doesn’t even know it’s crunching numbers to mimic the understanding of language, can it be considered intelligent?  If the answer is no, then how will computers become intelligent by simply making bigger, more computationally intensive models? 

How do you Define Intelligence?

This is a trickier question than it may appear.  We can recognize intelligence, to be sure – the very fact that we can ponder and debate what exactly the term means demonstrates as much.  But defining it with precision, drawing a hard line between intelligent and not intelligent?  That’s a much more difficult task.

We define humans in general as being intelligent (not to be confused with being wise).  And yet we still have a hard time drawing that line between what is intelligence and what is not, despite most of us being pretty sure that computers running AI haven’t yet reached intelligence.

And that’s the rub.  The people trying to create artificial general intelligence (AGI) – or any intelligence at all in computers/AI, are trying to solve it as an engineering problem.  But engineering problems require well-defined solutions.  If you want to put a satellite into an orbit with a perigee of 150 miles above the Earth’s surface and apogee of 160 miles, the solution is well-defined.  If you want to design an amplifier circuit that can take an input signal with an amplitude of 2 volts and output a corresponding signal having an amplitude of 12 volts, we know how to do that because, again, the solution is well defined.  There may be different ways to get to the same solution, but having a firm definition of the solution provides a framework and a guide for getting there. 

This is true even for some engineering problems that we haven’t solved, like nuclear fusion.  We know what man-made nuclear fusion will look like, in terms of inputs and outputs, should we ever get there.  But that illustrates another point: even when we know what the solution looks like, it can be maddeningly difficult to achieve.

With intelligence, general or otherwise, we can’t even agree on a definition.  Not even AI’s biggest proponents can agree on a definition of intelligence, much less on what would constitute true AGI.  What they all have in common is that they are trying to find an engineering solution to what is essentially a philosophical problem.  And because defining intelligence is essentially philosophical, it will continue to defy an engineering solution.

So far, we’ve spent a lot of time talking about intelligence, the difficulty in defining what intelligence is, and stating why I believe computers running AI workloads are not even remotely intelligent.  What hasn’t been discussed so far will be the topic of the next installment – the rapacious appetite of AI in terms of resources.

Before I go, however: Apple has published a paper about AI entitled “The Illusion of Thinking.”  If you want to dig a little deeper, it can be found here.

Confessions of an AI Skeptic, Part 1 (of 5)

Artificial Intelligence, or AI, is all the rage these days.  “It’s going to eliminate all of our jobs!!”  “It’s going to become more intelligent than humans!!!” “It’s going to become sentient and turn into Skynet!!” 

Pffffft.

It’s not going to do any of those things.  Not even close.

Now don’t get me wrong – AI (and note – only the ‘A’ part of that is accurate) is here to stay.  And it’s going to lead to some very powerful tools, some of which can be very useful.  Of course, it will also lead to some tools that are not so useful.  And it will be misused and abused, which might be its most frightening prospect.

But if you are worried about Skynet, I’m here to tell you, don’t – The Terminator is a great action movie but not much more.  Nor should you worry about AI eliminating all the jobs, a notion that can be dispensed with in multiple ways, including with simple arithmetic.  We should once and for all dispense with the idea that AI will become conscious.  Similarly, the notion that AI exhibits true intelligence should also be tossed in the wastebasket.  To understand why, we’ll start at the point where the rubber meets the road (or where software meets hardware) in computers.

The 1’s and 0’s of Artificial “Intelligence”

When I observe certain people hyping AI, namely those with a technical background, I notice they are mostly software engineers or programmers.  Many of these software engineers are extremely intelligent and can make a computer do things – through programming – that I (also a technical person, but with a hardware/circuit orientation) could never dream of doing.  Nevertheless, many of these AI hypesters have a huge gap in their understanding of how computers actually work.  Their interactions with computers are through high-level programming languages, several layers of abstraction away from what happens at the hardware/software interface.  Because of that, they are only vaguely aware, at best, of the hard physical limits of computing.

For the non-technical, a little explanation is warranted here.  Almost all software programming – and AI is software – is done using what are called high-level languages – Python, Perl, C, … and for those of you who are old geezers (as am I), Fortran, Basic, Pascal, etc.  High-level programming languages are essential, as the practically infinite variety of software we use today would not be possible without them.  But the processor in your computer system cannot understand these languages directly – it needs what is known as a compiler that translates (“compiles”) the high-level language program into machine language that the computer understands.  And ultimately that means, in the digital computer systems we use, it gets converted into 1’s and 0’s. 

But even the 1’s and 0’s are somewhat of an abstraction – the processors used in computer systems are electronic circuits, and as such work with voltages and currents that represent these 1’s and 0’s, rather than working with the digits themselves.  Thus, in the chips used to implement computer systems, these 1’s and 0’s are represented by corresponding voltages – e.g., a “high” voltage for a logic 1, and a “low” (or no) voltage for a logic 0.  I’m not going to delve into the actual circuits as to how this is achieved (although they are relatively simple), other than to say you can think of these circuits as 2-position switches.  A single switch in this analogy can generate a logic 1, or high voltage in one position, and a logic 0, or low voltage in another position.  These switches, constructed using transistors, can be combined to form logic gates, and logic gates can be combined to form even more complex structures.  But at the heart of it all, at the lowest level, all you have are a bunch of switches that produce the voltage levels and corresponding binary logic levels.
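The switch-to-gate progression described above can even be sketched in a few lines of code.  Treating each switch output as a 1 or a 0, NAND is the classic “universal” gate: every other logic function – including arithmetic – can be built by wiring NANDs together, which is how a handful of simple switch types scale up into an entire processor:

```python
# Switches to gates, in miniature.  Each function models a logic gate's
# truth table; everything below NAND is built purely out of NANDs.

def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    """Adds two bits: returns (sum bit, carry bit), built entirely from NANDs."""
    return xor_(a, b), and_(a, b)

print(half_adder(1, 1))   # (0, 1): binary 1 + 1 = 10
```

Chain enough half adders (plus a carry path) and you have the arithmetic units doing all those multiplications in the AI chapters above – still nothing but switches underneath.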

Just about every computer system you own – from your smartphone, to your tablet, your laptop, your desktop – has billions of transistors, and thus billions of switches.  And they are nothing even remotely like neurons in the human brain.  Putting more of them together doesn’t turn them into neurons either.

Hey, I Came Here to Read About AI, not this Switch Stuff!

Ok, so you ask now, “if this essay is about AI, then where is he going with all this switch stuff?”  Where I’m going is to show you what AI – indeed, what any software – does at the fundamental circuit level.  At the circuit level, it is, depending on the input voltage, making the output voltage change between a high voltage and a low voltage – between a logic 1 and a logic 0.  In the circuits used in the chips of a computer system, this switching behavior can occur billions of times per second.  Multiply that by billions of these switching circuits, and you’ve got a whole lotta switching going on.  And in AI computing workloads?  You have orders of magnitude more switching than the most processor-taxing game your kid runs on his gaming PC.

But can true intelligence (much less consciousness) arise from this mere switching behavior – billions of circuits switching between a logic 1 and logic 0 (a high voltage and a low voltage) billions of times per second?  Digital computers have been operating this way for decades now.  There is nothing remotely intelligent about the way they function.  Simply adding more switches and making them switch faster and faster doesn’t move the ball even a nanometer closer to intelligent behavior, because the transistors used to create these switches are not neurons, and never will be.  They’re just switches.  On or off.  Logic 1 or logic 0.  Putting more of these switches together into a more complex structure does not suddenly make them into neurons.  And because of this, computers will continue to understand language and human thought the way a radio understands music – that is, not at all, because they have no capacity for “understanding.”

If someone disagrees with me, and truly believes that AI can be truly intelligent and can truly become conscious, I’d love to hear their explanation as to how we are going to get there by making more of these switching circuits and making them switch faster.  I’m all ears.  All I’ve ever heard from those who believe AI will become some sort of machine messiah (nod and wink to my prog-rock friends) is pure underpants-gnomes-level leaps of logic.  As AI gets “better,” real intelligence and consciousness will just magically occur, they believe.

What an absolute load of bull-shinola.

The only surefire way I know to make electronic computers truly intelligent is this: convince God to “miracle” intelligence into computers.  If God wants computers to be intelligent, then by God (sorry) they will be.  But absent that, there is no other way.  Not with the computing systems we have now, not with CMOS switching circuits even in the billions of trillions, not with simply manipulating voltages to make 1’s and 0’s.  Ever more complex software programs – even what is called AI – aren’t going to suddenly cause intelligence, much less consciousness, to spring forth from silicon or some other substrate that may be used in the future.  If that’s all it took, we’d be there by now.

If you want to explore the topic of intelligence in man-made machines (or our inability to accomplish that), you can also explore Kurt Gödel’s incompleteness theorems.  I’m not going to get into the discussion about that here, other than to note that when Gödel came up with these theorems, it freaked him out a little bit as he thought he might have proven the existence of God.  But that’s pushing the limits of this discussion, so you’re on your own here.

Intelligent, sentient computers of the electronic variety make for great science fiction.  HAL from Arthur C. Clarke’s 2001: A Space Odyssey is one of Sci-Fi’s most memorable characters.  My personal favorite – Mike, from Robert A. Heinlein’s The Moon is a Harsh Mistress – is another one that seeped into the consciousness of many Sci-Fi aficionados.  But those computers are fiction, and intelligent electronic computers like them will remain so, absent divine intervention.

Notice I said “electronic computers.”  Biological computers are also a thing, and they can be very intelligent.  And better yet, there is a way to make intelligent biological computers – it’s very old tech, a time-tested technique known as “having babies.”  But that’s also another discussion.

“But hey, you didn’t address it taking all our jobs and all the other things AI is going to do, good and bad!”  This piece is getting kind of longish, but I will return with more confessions of my AI skepticism, and soon.  Or, as another AI character once said, “I’ll be back.”

The Pineapple Thief on Insideout

InsideOutMusic announces signing of progressive art-rock group The Pineapple Thief

 

North American tour dates revealed for Nov/Dec 2026

Photo credit: Martin Bostock

InsideOutMusic is pleased to announce the signing of progressive art-rock group The Pineapple Thief.  Founded in 1999 by Bruce Soord, the band has long been one of the genre’s most successful and accomplished outfits, releasing 16 studio albums and touring worldwide.  The band, consisting of Soord (vocals, guitars), Jon Sykes (bass), Steve Kitch (keyboards), and Gavin Harrison (drums), is working on a new album for release in late 2026.

Bruce Soord comments: “Joining Inside Out is a definitive milestone for The Pineapple Thief. Having spent the past year developing new material, it became clear that Inside Out is the perfect home for our next musical journey. We are energised by this new partnership and can’t wait to reveal what we’ve been working on!”

Thomas Waber, head of InsideOutMusic, adds: “We are extremely excited to welcome The Pineapple Thief to the InsideOutMusic family. As longtime followers of the band, it feels like the right time to be working together, and we can’t wait to help bring their new material into the world.”

The Pineapple Thief recently announced the following festival shows in Europe:

June 25th  Istanbul TK – Zorlu PSM – with The Gathering
June 27th  Cornwall GB – Morvala Festival of Arts
July 3rd  Joensuu FIN – Ilovaari Festival
July 4th  Helsinki FIN – CoolHead Live
July 16th  Bronnoysund NO – Rootsfestivalen
Aug. 2nd  Manchester GB – Radar Festival

Now, the band is revealing headline tour dates across North America.  Tickets go on sale Friday, April 17th at 10am local time.

Nov. 17th  Washington DC – Howard Theater
Nov. 19th  Philadelphia PA – Theatre of Living Arts
Nov. 20th  New York City NY – Gramercy Theatre
Nov. 21st  Somerville MA – Somerville Theater
Nov. 22nd  Quebec City QC – Capitole
Nov. 24th  Montreal QC – Beanfield Theatre
Nov. 25th  Toronto ON – Danforth Music Hall
Nov. 27th  Chicago IL – House of Blues
Nov. 28th  Cleveland OH – House of Blues
Nov. 29th  St. Louis MO – Delmar Hall
Dec. 1st  Dallas TX – The Bomb Factory
Dec. 3rd  Denver CO – Summit
Dec. 5th  Phoenix AZ – Crescent Ballroom
Dec. 6th  San Diego CA – The Observatory North Park
Dec. 8th  Los Angeles CA – The Bellwether
Dec. 9th  San Francisco CA – August Hall
Dec. 11th  Seattle WA – Neptune Theater
Dec. 12th  Portland OR – Revolution Hall
Dec. 13th  Vancouver BC – Hollywood Theater

More news to come…

Roger Simon’s Emet: A Spiritual Thriller

Roger Simon is a retired Hollywood screenwriter and novelist. I’ve read his previous novel, The GOAT, about an older man who makes a deal with the devil and becomes the greatest tennis player of all time. It’s very funny and thought-provoking at the same time.

His recently released book, Emet, is more serious. It’s a thriller/fantasy tale told through the eyes of a rabbi, Benjamin Golub. He lives in Nashville, TN, and is a rabbi for a synagogue there. As a fellow Nashvillian, I really enjoyed Simon’s references to real-life locations in our city, as well as his accurate representation of its culture.

One day in 2023, a tornado rips through Nashville, and Benjamin goes to his synagogue’s storm shelter. Also sheltering with him are his wife, Maya, their tweenage grandson, Menahem (Max), and good friends Ed Ristic and Tamara Klein. Tamara is grieving the death of her 23-year-old niece, Allison, who was brutally murdered while out jogging. She blames herself, because she urged Allison to come visit her in Nashville after a difficult breakup. Ed runs a coffeeshop, The Orphanage, and has befriended Tamara. He also fancies himself as a sculptor.

The morning after the tornado blows through, they go outside to take stock of things. The backyard of the synagogue is a mess of fallen trees and mud. Ed has already been there, shoveling mud away from the back wall, and he has created a big pile of mud that is vaguely humanoid. Ed jokes that he could make a statue out of it. Maya reminds Benjamin that it looks like a statue they saw in Prague of a Golem – a mythical being that was brought to life by a Rabbi Loew around 1600. He took some inanimate materials to fashion a humanoid and engraved the word Emet on its forehead, which means “truth”. The legend is that Rabbi Loew used the Golem to protect the Jews living in Prague from pogroms, but it soon got a mind of its own and ended up causing more harm than good.

Max, who is a mathematical and technological prodigy, is fascinated by the story. He is staying with his grandparents because he got in trouble at school and was suspended.

He had responded to a teacher asking what his pronouns were with “I identify as a donkey. He/haw.” Some of his classmates laughed, but the teacher didn’t think it was funny, and things went south from there.

Simon, Roger. EMET (pp. 32-33). Green Hills Books. Kindle Edition.

Later that evening, unable to sleep, Benjamin gets up and goes out to look at the pile of mud Ed made. You can guess what he does next: he takes a stick and engraves Emet where it looks like a forehead is.

To continue reading my review, click here.

George Schuyler’s Black No More

Imagine a black scientist discovered a way to turn black people into white people. What would happen to American society? That is the premise of George Schuyler’s 1931 novel, Black No More. It is very funny and very disturbing at the same time, portraying the extreme racism of early 20th century America in all its horror and absurdity.

To continue reading my review, click here.

Happy 25th Birthday, Burning Shed!


Thank You


This month, (almost) unbelievably, marks 25 years of Burning Shed.
 
We’d like to issue a heartfelt thank you for your support over the many years and provide some insight into the company and what our plans are for this anniversary year.
 
‘The Shed’ emerged out of an idea Tim Bowness had for an idealistic online / on-demand label. Peter Chilvers – one of Tim’s musical partners – was experienced in the then mysterious world of e-commerce. Pete Morgan, the final piece of the Burning Shed jigsaw, was running Noisebox, a record label and duplication company (that dealt with releases by Tim and Steven Wilson’s band No-Man).
 
Over several intense gatherings (fuelled by eggs, chips, beans and the milkiest of coffees), a plan was hatched. After six months of trying to convince a bank that the notion of selling CDs from a website wasn’t witchcraft, that plan was in motion.
 
Tim brought in the music, Peter created the coding and Pete ‘The Morganiser’ took charge of logistics.
 
Initially, the idea was to issue elegantly packaged, cost-effective CDR releases – designed by Carl Glover – that allowed artists to experiment and, crucially, generate a little income from their endeavours.
Luckily, the first releases – including albums by No-Man, Bass Communion, Roger Eno and Hugh Hopper – proved to be more successful than anticipated and Burning Shed rapidly evolved. Soon the CDRs became CDs and, via word of mouth, the company was hosting official stores for artists and labels including Robert Fripp, King Crimson, Stewart & Gaskin / Hatfield & The North, Jethro Tull, XTC, Kscope Records and many others (including, of course, No-Man and Porcupine Tree).
 
Peter Chilvers left in 2008 to work with Brian Eno, but Tim and Pete persisted, building the company up. 25 years on, the Shed is driven by the same instincts as it was at the very beginning.
 
As a “run by artists for artists” company, we try to ensure that the musicians and labels we deal with receive as much money as they can and that deals and accounting are transparent. There are no hidden costs or binding contracts. The idea has always been to release and help globally distribute great music at reasonable prices in the best way possible (to make sure it arrives in perfect condition and on time).
 
To celebrate our 25th anniversary, from April until next March we’ll be bringing you special releases, merchandise and giveaways including more raffle winners each month.
 
We’re also putting on a number of events throughout the year, starting with three co-headline gigs by Tim Bowness with Butterfly Mind plus Bruce Soord & Jon Sykes (The Pineapple Thief):
 
Sun 24 May – Liverpool, Philharmonic Music Room
Fri 29 May – Bath Fringe Festival at Rondo Theatre
Sat 20 June – London, The 100 Club
 
Ticket links are available via https://timbowness.co.uk/live/
 
Looking forward, we’re in a much more complicated world. When we started, it was relatively easy: shipping involved a jiffy bag, a label and a stamp. Selling online is now more complex, with electronic customs declarations, tariffs, Brexit, GPSR, GDPR and more. From operating out of the corner of Pete’s office at Noisebox, we now have a warehouse and a truly superb team of people making sure everything runs smoothly.
 
None of this would have been possible without the support of all the artists and labels we have worked with over the years. Most importantly, it would not have worked without you, our customers.
 
We know there are many other places to buy music from, so that makes it all the more special that you continue to order from us. Some of you have been with us since the very beginning, some of you have just found us. We are extremely grateful to every one of you, old and new.
 
Thank you for supporting what we do.
 
Here’s to the next 25 years!
 
Tim and Pete

Crockett White’s West End: All The King’s Men, updated

I have lived in Nashville, TN, practically all of my life. My parents moved here from Milwaukee, WI, when I was less than a year old. My father was hired in 1961 by Vanderbilt University to start up its Materials Science Department in the Engineering School. Even though I could consider myself a “native” Nashvillian (especially when you take into account the thousands of California refugees who have moved here recently), I have never felt like I truly am one. It’s a cliché that Nashville is a “big city with a small town feel”, but it’s true. There’s a relatively small circle of everyone who’s anyone, and they all know each other. Still, I managed to keep up with local politics and society gossip through reading the two newspapers, The Tennessean and The Nashville Banner.

Crockett White is a former reporter for The Tennessean, and he obviously spent his career learning all about the skeletons in the closets of Nashville’s prominent families. He utilized that inside knowledge to write West End, a thinly-veiled fictional account of John Jay Hooker’s run for the Senate in the early 70s. Hooker was a gifted politician who truly had charisma. That word gets thrown around a lot, but very few humans possess it. Hooker had it – even his political opponents acknowledged his gift for connecting with and inspiring practically every person he came in contact with.

To continue reading, click here.

The Branford Marsalis Quartet in Concert: Four Masters Playin’ Tunes

Hitting the St. Cecilia Music Center stage 20 years on from his last visit (and 40 years on from when I first heard him live with his brother Wynton, then with Sting), sax legend Branford Marsalis seemed relieved to have safely made it to Grand Rapids, just one night after two shows further north in snowbound Traverse City. (“Turned right at Cadillac and — whoa!! Where’s Santa?!?”)

But any fear that Marsalis’ tight quartet had been shaken by their brush with a spring blizzard soon vanished; loose and comfortable as their leader teased drummer Justin Faulkner about being the “birthday boy”, they were also focused and ready to play. With a flourish, pianist Joey Calderazzo launched into his postbop workout “The Mighty Sword” — and instantly, the band was in the moment, bringing the sold-out audience with them. Off the knotty head statement, Calderazzo built a two-handed solo to a simmering climax (both his legs were moving, too) that Branford took higher with volcanic soprano licks; meanwhile bassist Eric Revis pushed the pulse onward as Faulkner rolled and tumbled around and across his kit. On the edge of fully free expression, yet always locked into the underlying groove and listening hard to each other, the Quartet’s interplay was riveting and undeniable.

Keith Jarrett’s “‘Long As You Know You’re Living Yours” was up next. A funky highlight of Jarrett’s 1974 album Belonging (which the Quartet covered in full last year for Blue Note), it brought out a rambunctious streak in Branford, progressing from rhythmic subtones to frenetic sheets of sound; Calderazzo answered with deft, deeply swinging gospel. Which then dramatically transitioned into the rich lyricism of his “Conversation in the Ruins”, as both he and Marsalis took wing above Revis and Faulkner’s subdued, flickering rhythms.

Then, the history lesson. With Branford namechecking songwriter Fred Fisher (born Alfred Breitenbach in Germany before he emigrated to the USA), the Quartet timeslipped back to the primal years of jazz with “There Ain’t No Sweet Man That’s Worth the Salt of My Tears” (made famous by bandleader Paul Whiteman with Bix Beiderbecke on cornet and Bing Crosby singing). Everyone soloed to powerful effect — Marsalis crooning on soprano, Revis gracefully, purposefully walking the bass, Faulkner delighting with a dynamic feast of accents and colors. It was only later that I realized: as bland, as polite – even as patronizing – as this music seems in retrospect, 100 years ago, it was on the cutting edge of American pop culture. Why not take it out for a spin today and see what happens?

“Why not?” turned out to be the throughline of everything the Marsalis Quartet did onstage, always leavened with affection for and attention to the music’s potential and each other. As the night went on, the crowd tuned into it, too: how Jarrett’s melancholic “Blossom” was elevated by Revis’ rhapsodic feature and Calderazzo and Branford’s insistent quotes from “Happy Birthday to You” (said one-upmanship bringing hysterical guffaws from Faulkner); how, nudged by the group’s thoughtful probing, Jimmy McHugh’s “On the Sunny Side of the Street” morphed from a hesitation shuffle through stop time to flat-out rock and back again.

And then, coming to an impasse onstage, Marsalis and Calderazzo asked the audience for multiple shows of hands: “Monk or Ellington?” (Branford after that vote: “Ellington wins. Ellington always wins.”) “Up or down?” (Up.) Which yielded a loping, speedy “It Don’t Mean A Thing (If It Ain’t Got That Swing)” as the last tune — and, announced as for the benefit of the “young musicians” from local high schools in the audience, a downtempo take on the same tune as the encore! Both ways, Branford smoked, Calderazzo swung, Revis flowed — but each drew on varying parts of their vocabulary, to vastly different effect. Though working in the same vein, Faulkner well and truly went to town throughout; his creatively minimalist solo choruses for the encore (first brushes on snare with quarter-note kicks, then entirely on floor tom, ranging from warm caresses to rim cracks to meaty thuds) proved an enticing riot of colors and syncopations. The standing ovations that followed each version were both earned and inevitable.

This lineup of the Branford Marsalis Quartet has worked together for more than a decade. As friend and fellow blogger Cedric Hendrix has observed, that’s rare in jazz circles; the consistent result, whether on record or live, is spectacular internal chemistry – which in turn provides extraordinary opportunities for the music to truly breathe, scaling ever-increasing heights of freshness, invention and resonance. To witness all that, generated by four masters at play, bringing a century’s worth of music to spectacular, technicolor life — well, it’s an experience I’m glad I shared with 600+ others last night!

— Rick Krueger

Music, Books, Poetry, Film