"AI" for Christians


In grad school I minored in Artificial Intelligence, gaining some proficiency, but not enough for a graduate degree in it. We were told that AI is a computer doing things that, if a human did them, would be called intelligent. I think that is still the working definition behind the same label applied to what we see today. Under the hood it is very different from what I studied. Four decades ago AI tried to model the processes in human thinking. The better we got at it, the harder it became, and eventually researchers abandoned that line of work entirely. We now call that time the "AI winter."

What passes for AI today is modelling how we imagine the individual neurons in the human brain work. It's not very close, but the results are amazing. Nobody knows how thinking happens in the human brain -- we sort of understand how neurons work, and we know what regions of the brain do what kinds of thinking, but it's a gray fog in between. Similarly, the AI developers have a simplistic model for computational neurons ("30 lines of C code") thought to be fairly close to the biological neurons, and when they put millions of these things together with a trillion or more connections and give the resulting network thousands of iterations of training, what comes out looks intelligent, but nobody knows exactly how that came about. If one person thought for one second about each of a trillion neuron connections, it would take over 300 centuries to review one iteration of the whole network (a trillion seconds is more than 31,000 years). Training these things takes thousands of the fastest computers made, running for six months or more.
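
For the curious, here is roughly what such a computational neuron looks like. This is my own illustrative sketch in C, not anybody's production code; the names are made up for the example. The whole trick is a weighted sum of the inputs pushed through a simple nonlinearity, and everything else in a neural net is wiring millions of these together.

    #include <stddef.h>

    /* One computational "neuron": weight each input, add a bias,
       squash the sum through a nonlinearity. The weights and bias
       are what training adjusts; ReLU is one common choice of
       nonlinearity (sigmoid is another). */
    typedef struct {
        const float *weights;   /* one weight per input connection */
        float bias;
        size_t n_inputs;
    } Neuron;

    static float relu(float x) { return x > 0.0f ? x : 0.0f; }

    float neuron_fire(const Neuron *n, const float *inputs) {
        float sum = n->bias;
        for (size_t i = 0; i < n->n_inputs; i++)
            sum += n->weights[i] * inputs[i];
        return relu(sum);
    }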

Instead, they imagine that their machines are duplicating Darwinian evolution, the religion ("believing what you know ain't so") hammered into impressionable young minds in every public school in the country: All manner of system complexity can -- and did! -- arise from the mindless accumulation of random events, which therefore constitutes a natural law that can be harnessed to produce artificial system complexity ("AI"), and (so the theory goes) that is what they have done. The developers actually believe this nonsense. Sort of.

The fact is that the neo-Darwinistic hypothesis has never worked in the Real World, and scientific experiments have disproved it, but that fact goes against the American National Established Religion, and is therefore suppressed. Hints of it persist; see my essay "Biological Evolution: Did It Happen?"

The software structures they use in Large Language Model (LLM) "AI" do not occur in nature; they are carefully crafted to (sort of) comply with the neo-Darwinistic hypothesis. Little is known about the connections between neurons in intelligent human brains, but we have mathematics to explain the kinds of system complexity that are computable, and the structure of all software neural nets (NNs: the underlying technology in all modern "AI") is linear from input to output, which constitutes a "Finite State Machine," the lowest and simplest of four levels in the "Chomsky Hierarchy" of computability. Human intelligence is capable of thinking as (or above, but we don't have any way to measure it) a "Turing Machine," which is the highest of those four levels, and the best-understood computer software operates at the second or third level. LLM "AI" is not even as intelligent as most computer programs. I know this because I got my PhD in this stuff.
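
A standard textbook illustration of the gap between the bottom of that hierarchy and the next level up: a Finite State Machine has only a fixed, finite set of states and no other memory, so it can count parity but it cannot match nested parentheses to arbitrary depth, because that requires unbounded memory. The sketch below shows both in C (these are the usual classroom examples, not anything from the LLM literature); the second function is already beyond what any pure FSM can do on all inputs, and that is a theorem, not an engineering limitation.

    #include <stdbool.h>

    /* A Finite State Machine: a fixed set of states, no other memory.
       This one accepts strings with an even number of 'a' characters,
       a "regular" language at the bottom of the Chomsky Hierarchy. */
    bool fsm_even_a(const char *s) {
        int state = 0;                  /* 0 = even so far, 1 = odd */
        for (; *s; s++)
            if (*s == 'a') state = 1 - state;
        return state == 0;
    }

    /* One level up: balanced parentheses need an unbounded counter
       (in general, a stack), so no FSM recognizes this language. */
    bool balanced_parens(const char *s) {
        long depth = 0;
        for (; *s; s++) {
            if (*s == '(') depth++;
            else if (*s == ')' && --depth < 0) return false;
        }
        return depth == 0;
    }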

So why does LLM seem so intelligent? The current issue of Christianity Today invited several authors to explore the topic, and they hint at the problem without actually challenging the fundamental difference between natural intelligence and so-called "AI". All the authors trying to address this issue lack education in computational theory, so they accept at face value the claims of the developers, who should know better, but apparently do not. Or maybe they are lying to us, but I doubt it. They really believe what they are telling us, because they don't understand why it cannot be so. I'll never forget how one of my grad school professors dismissed what I was trying to say: "Tom, it's a theorem." He was right. And this also is a theorem. It cannot work as advertised, not in the general case.

Google searches the internet for every website that links a word or phrase to some other website, and ranks them by the number of such links. It's a popularity contest: the more links that point to a page, the better its score and the more likely a search using that word or phrase will turn it up in the top ten or 100 hits. The LLM is trained on the entire Wikipedia site and the entire GitHub site and the entire Reddit site, without any ranking or curating. ChatGPT trains something like a trillion parameters on more than a billion lines of text, so it is perfectly capable of memorizing the entire training corpus. It generates its output from random numbers filtered by the learned parameters (per their understanding of the neo-Darwinian hypothesis), but what comes out is not new intelligence, only a regurgitation of the memorized text, slightly randomized after the fact, but not very much, because otherwise it fails the training restrictions.

It is intelligent, the creation of intelligent human minds uploaded to the internet, but it is not recognized as a copy, because there are over a billion sentences memorized from the training data and nobody has ever seen more than a tiny fraction of it, and then always and only the same popular lines that Google returns. Occasionally somebody recognizes their own work, but they are so bamboozled by the "AI" claims of the developers that they think the machine intelligence is ratifying their own intelligence by re-creating the same ideas. They say as much in printed magazines that I read. I don't know for a fact that this is what's happening inside the machine -- the developers certainly are not considering it as a possibility -- but it fits both the available data and the math; their explanation does not.
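
"Random numbers filtered by the learned parameters" means something like the sketch below: for each output word, the trained parameters assign a probability to every candidate token, and a random draw picks among them. The probs[] array here is hypothetical (a real model computes it from billions of parameters), but the mechanism is this simple. High-probability tokens, the ones seen most often in the training text, win nearly every time, which is why the output tracks the memorized corpus so closely.

    #include <stdlib.h>

    /* Pick one token from a learned probability distribution.
       probs[] must sum to 1; rand() supplies the randomness that
       gets "filtered" by the trained parameters. */
    int sample_token(const float *probs, int n_tokens) {
        float r = (float)rand() / (float)RAND_MAX;  /* uniform in [0,1] */
        float cum = 0.0f;
        for (int i = 0; i < n_tokens; i++) {
            cum += probs[i];
            if (r <= cum) return i;
        }
        return n_tokens - 1;        /* guard against rounding error */
    }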

I got paid well to write software that solves the easy (for me) problems and neglects the hard ones that only humans (like myself) can solve, and I was thinking of that skill when my prof put me down. I can write a program that will do the same thing as, and as well as (but with no hallucination), the current "AI", while using a lot less computing power (and thus making less atmospheric carbon, if you care about such things), and it probably wouldn't take me much longer than it took these LLM researchers to make their "AI" work, but it would disprove the neo-Darwinian hypothesis (at least for LLM-implemented "AI"), so it would not get any funding. Mine would be way cheaper than LLM, but not cheap. A billion lines of internet text takes a lot of computing to process, no matter what the algorithm.

Natural language has syntax and sentence-structure restrictions, but far fewer restrictions on ideological content, so pretty much any goofy idea we can think of is already up there on the internet and memorized by the "AI" engines. Computer code is much harder to get right, and therefore it is easier to devise tests that disprove the machine intelligence, so perhaps the house of cards will collapse when people start asking the hard questions about so-called "generated code."

Or maybe the developers will double down and find more investor $millions to carbonate the atmosphere and devise better ad hoc solutions (like the way I write my programs), and "AI" will make fewer "hallucinations" and eventually become the Biblical AntiChrist. But it's still all a hoax, a creation of the Father of All Lies, and maybe Christians should stay away. Or maybe they should be in there adopting all this nonsense in the futile hope of winning some of the unbelievers in that industry to Christ. But that decision is "above my pay grade."

Tom Pittman
First draft, 2025 July 30