ChristianityToday on AI


It seems a lot of public voices are chiming in on the new "Doomsday Machine" (AI). While the ChristianityToday version is not the Chicken Little panic I see elsewhere, I still could have hoped for better.

First and most important: while engaging the Devil on his own turf is not a lost cause -- there is only one Truth, and the Devil does not have it -- preaching to the choir (last I looked, CT is not an evangelistic magazine) should always start with God: "In the beginning God created," and "In the beginning was the Word, and the Word was God..."
 

The Secular Religion

The Established (government-funded) Religion ("believing what you know ain't so," or more precisely, the set of beliefs and obligations non-negotiably defined to be "TRUE" for its adherents despite any objective evidence to the contrary) in the United States holds that all manner of system complexity can and did come about by the undirected accumulation of random events. That includes human intelligence. It is considered to be a Law of Nature, taught in every government-funded school and evangelistically held to be non-negotiably TRUE by every person engaged in modern "Artificial Intelligence" research and development. And being a Law of Nature means that this principle can be harnessed to recreate intelligence and consciousness as an artifice of human activity. They really believe that, never mind that the Entropy Law of physics argues otherwise, and all human experience supports the laws of physics over against the Darwinian hypothesis, whenever and wherever it is tested. You don't hear about those scientific results because the Establishment Religion (read: the government) cannot tolerate any open expression of any contradiction to what is non-negotiably "TRUE".
 

The God Version

The Christian perspective starts with God as a Designer Who created an incredibly complicated computer program (DNA) and a chemical computer to execute that program, and then a couple of days later created Adam (the Hebrew word 'Adam' means "human" everywhere else in Scripture) "in His Image" -- whatever that means (we are not told, but it probably includes the ability to construct secondary creations able to do some human-like things such as moving around and thinking) -- and here we are. Man (Adam) rebelled, and not much later God observed that "nothing the [humans] plan to do will be impossible for them" [Gen.11:6]. God put a stop to that activity, and the effects of that stoppage persist to this day, except that all of technology is now done in English (I would say "except in France and China" but they look to us for leadership). The thinking among technologists is very much like that in Babel. Maybe God will confuse their language again, and maybe not; we are not told. I can imagine several different scenarios where God just sits back and watches us destroy civilization -- think: "Dark Ages" -- leaving only the Remnant; perhaps North America and Europe get nuked, leaving the world set up for the great Wars of the Apocalypse (I read the last chapter: the USA is not in it). Whatever.

So much for theology.
 

How Modern "AI" Works

If you want to debate the Devil on his own turf, you should at least understand how his machines work.

When I went back to grad school, the rule was that PhD students had to meet a breadth requirement -- many schools call it a "minor" -- and I chose Artificial Intelligence, which (we were told) is behavior that, if a human were to do it, would be called intelligent. The focus of AI research at that time was inferential logic, best expressed in proving mathematical theorems, both in the abstract (solved not long after I graduated) and also in real-world "common sense" (which kept getting harder until they gave up). My own specialty was in Computational Linguistics, turning ideas into (programming) language, and then those programs into machine language, but I never completely lost interest in the progress of AI.

The "Good Old-Fashion AI" (GOFAI, pronounced "go-fye") of four decades ago ceased to be funded by the Establishment Church (US Government, see above) and was replaced by the modern Neural Net (NN) version, which is ("deep") a large number of layers, each layer consisting of a vast array of artificial "neurons" where each neuron sums the products of the outputs from the previous layer times individual weighting factors, then if it's above a certain threshold, passes the result (times its own weighting factors) to each neuron in the next layer. The program to do this is a tiny "30 lines of C," but all the intelligence of the NN lies in those weighting factors -- the latest GPT program is said to be over a trillion such numbers (called "parameters"). The artificial neuron functions more or less the way we understand human nerve cells to work, but their arrangement is strictly linear, front to back, input to output (except during training, when there is a secondary feedback path propagating the reward of correct results back to the neurons that led to it, also linear). Also, the training given to the NN (mostly undirected data off the internet) is not at all like the training given to human brains (which is often inferential and mostly tied to moral consequences).

The AI NN starts out completely random (it cannot learn anything otherwise) and it is fed vast amounts of tagged data -- for example, images labelled "cat" or "dog" or "house" -- and the random NN runs its calculation, and any deviation of the output from the label is fed back through to adjust the weights, hundreds or thousands of times until it gets the result right. That might not happen, and if so the initial weights are re-randomized (or something like that) and the training is restarted. I guess the more skillful AI programmers are better at getting good results without restarting the training process, which is very compute-intensive: if you cared about climate change, you could look at this process as a major contributor to atmospheric carbon.
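
For the technically curious, the feedback step looks something like this toy sketch of mine (a single artificial neuron, made-up data and learning rate): the weights start out random, and every pass nudges them a little toward the tagged answer. The real systems do essentially this across billions of weights and billions of examples, which is where the enormous compute cost comes from.

    /* Toy sketch of the training feedback: one neuron learning the tagged
       answers for logical AND.  Weights start random and are nudged toward
       the label on every pass.  Data and learning rate are made up. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        double x[4][2]  = {{0,0},{0,1},{1,0},{1,1}};   /* inputs */
        double label[4] = {0, 0, 0, 1};                /* tags: the "right" answers */
        double w[2], bias, rate = 0.1;

        w[0] = (double)rand()/RAND_MAX;                /* start completely random */
        w[1] = (double)rand()/RAND_MAX;
        bias = (double)rand()/RAND_MAX;

        for (int pass = 0; pass < 1000; pass++) {      /* hundreds or thousands of passes */
            for (int k = 0; k < 4; k++) {
                double out = w[0]*x[k][0] + w[1]*x[k][1] + bias;
                double err = label[k] - out;           /* deviation from the tag */
                w[0] += rate * err * x[k][0];          /* feed the deviation back */
                w[1] += rate * err * x[k][1];          /* into the weights */
                bias += rate * err;
            }
        }
        printf("learned weights %f %f, bias %f\n", w[0], w[1], bias);
        return 0;
    }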

Anyway, there is no inferential ability in the NN, only these weighting factors applied to initial data (pixels or whatever) and to all the intermediate internal results. The human brain is known to have certain regions pre-wired to do certain processes, like face recognition, and while the human eye is receiving a continuous stream of visual data whenever the eyes are open, much of that is pre-processed to give extra attention to movement or to small dark objects against a light background (which, to a frog, would be lunch). Other parts of the human brain are involved in inference connecting different sensations -- including thoughts inferred from reading or hearing instruction. There is no such mechanism in the artificial NNs, nor is it considered necessary.

I tell my students, "The computer is a marvelous device, it does just exactly what you tell it to do, no more and no less, even if you did not want to tell it to do that." The NN computer is no different, but the program it was given to do is mostly defined by the tagged images.

A "generative" NN (that would be the "G" in "GPT") learns from strings of letters in text instead of image pixels, and the "tag" in each case is (so they tell us) the next letter, or (in GPT) word fragments. It thus learns which letters or word fragments constitute a valid English (or whatever language) sentence, and which sequences are not. There is nothing in this training that tells the computer what these letters and words mean or sound like or anything like that, only the probability of appearing together.

The "P" in "GPT" is "pre-trained", which is a billion lines of Reddit text, and/or all the text in Wikipedia, or all the programs in GitHub. To this the programmers add some directed training to weed out racial slurs and other undesirable content. I suppose (I never saw that they said anywhere) they also added some code to start their engine up based on the initial human input, probably picking out keywords like Google does with search strings. It's easy enough for a computer program to figure out what are keywords in Reddit and Wiki entries.

The "T" is "transformer," which is less clear, perhaps several NNs working end to end the way they describe for "adversarial NNs". In any case it is certainly some variation on the NN scheme, because when you are a hammer, the whole world is a nail, and these guys are NN experts. My hammer is programming theory and practice, and I know there are theoretical reasons why NNs creatively generating new code is not possible. But nobody cares what I think.
 

Creativity

[These next two paragraphs are a wee bit more technical, feel free to skim...]

The important thing to understand is that the NN-based system is a linear machine (called a "Finite-State Machine" or FSM), taking input, doing some sequential processing (no iteration) on that data, then spitting out the derived output. The NN has no hands or feet, no sense of smell or feeling, no emotions, no input other than the text it was trained on (including whatever tags they added to the subsequent training) plus your prompt string, no way to know what any of the words mean in the real world, only what words appeared in what sequence in the training data. The versions of GPT that do drawings were trained on drawings from the internet, so they know which blocks of color and shape go together with which descriptor words, and they can reproduce that.
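
For readers who want to see what "finite-state" means, here is a toy FSM of my own devising: a fixed handful of named states, one pass over the input from front to back, and an output that depends only on the state it ends in. Nothing is remembered beyond the current state.

    /* Toy finite-state machine: decides whether the input is a simple signed
       decimal number.  A fixed handful of states, one pass over the input,
       no memory beyond the current state. */
    #include <stdio.h>
    #include <ctype.h>

    enum { START, SIGNED, DIGITS, REJECT };

    int accepts(const char *s) {
        int state = START;
        for (; *s; s++) {                    /* strictly one pass, front to back */
            switch (state) {
            case START:  state = (*s == '+' || *s == '-') ? SIGNED
                               : isdigit((unsigned char)*s) ? DIGITS : REJECT; break;
            case SIGNED:
            case DIGITS: state = isdigit((unsigned char)*s) ? DIGITS : REJECT; break;
            default:     break;              /* once rejected, stay rejected */
            }
        }
        return state == DIGITS;              /* output depends only on the final state */
    }

    int main(void) {
        printf("%d %d %d\n", accepts("-42"), accepts("1984"), accepts("4+2"));
        return 0;
    }

The NN's weights play roughly the role that the transition rules play here: the data flows through once, and whatever is not encoded in them simply is not there.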

I'm an expert on computer programming -- that's what my PhD is about -- and I know that you cannot write a computer program by choosing (based on probability) what letter or word comes next: you must get the variable declarations correct, and you must balance the parentheses and braces correctly. Yet GPT claims to compose new programs, programs that look like what a teaching assistant at some university would upload to GitHub for his class to see. An FSM cannot write that kind of program; it has no understanding of matching parentheses in pairs. Particular programs have particular combinations of operations in a particular order. But an FSM can copy text exactly. If the GPT team used a compiler to validate the generated programs (by disallowing bad generated code), and if the sequences GPT is trained on extend to a paragraph or more, then GPT could simply spit out snatches of code it has already seen on the internet, the only creativity being that of the human who wrote and uploaded the original code in the training data. That also works for chat text, but it's harder to detect.
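
To see the difference, compare the FSM sketch above with this one (again my own toy illustration): checking that parentheses balance requires a counter that can grow as deep as the nesting goes, which is precisely the unbounded memory a machine with a fixed, finite set of states does not have.

    /* Balanced-parenthesis check: needs a counter (in general, a stack) whose
       value grows with the nesting depth, so no fixed, finite set of states
       can do this for arbitrarily deep input.  Toy illustration only. */
    #include <stdio.h>

    int balanced(const char *s) {
        long depth = 0;                      /* the unbounded memory an FSM lacks */
        for (; *s; s++) {
            if (*s == '(') depth++;
            else if (*s == ')' && --depth < 0) return 0;  /* close with no open */
        }
        return depth == 0;                   /* every open paren must be closed */
    }

    int main(void) {
        printf("%d %d %d\n", balanced("(a*(b+c))"), balanced("(a*(b+c)"), balanced(")("));
        return 0;
    }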

Does it look creative? Of course it does! The humans who uploaded those images and sentences and paragraphs and programs either created them themselves, or else copied them from somebody who did. There is no evidence at all that what you are seeing as GPT "generated" is anything other than an exact copy of what some other human created. Remember, a billion lines of text in Reddit and 4 billion words in the Wiki corpus and maybe another billion or more lines of code on GitHub is an astronomical amount of source text from which GPT can copy and "generate" small parts without the plagiarism being recognized.

The only way to prove that happened is to search the entire training data for the "generated" text. The GPT folks won't do that; they believe you are seeing computer creativity, not copies, so why should they waste time attempting to prove otherwise? A copyright lawyer with a big budget might do the search, and I expect that to happen in the next five or ten years. But we cannot know until it happens. Statutory damages per copy could bankrupt the GPT for-profit corporation and have a chilling effect on future AI research. It will certainly make the developers look like idiots. Sin does that to the perpetrators. God said so.
 

Writing About AI

Because nobody -- not even the "AI" developers themselves -- knows what's really going on inside the "AI" engines, asking them to explain it is futile. Some of them, seeing the danger of unexplained machines running amok, have started to think about how to do "Explainable AI", but their proposed solution is to train yet another NN to generate text that looks like an explanation, which (like GPT) is nothing more than a probabilistic association of keywords in the text, totally unrelated to what is actually going on inside the computer making the supposedly "intelligent" decisions. Hmm, that was a couple of years ago; maybe they already figured out it isn't working.

So here we have two articles about "AI" in the October CT. If you step back a little and look at it cross-eyed, it's not hard to see that what GPT is doing is not all that different from what these two authors (and many of their colleagues in other publications) have done: faced with a topic they do not understand, they quote other sources. There are two differences: first, GPT doesn't bother to give attribution to its source material, and second, the sources the human writers quote also don't understand what they are telling us about. It's sort of funny, because Bonnie Kristian -- her subtitle: "Be not conformed to the pattern of a robot" -- explicitly deprecated what she and her staff colleague did.

A large part of the problem is that we (human beings) do not understand the nature of human "intelligence" or consciousness, or even the "soul." God did not tell us how those things work, and if He had, we probably still would not understand it. At least I have a workable metaphor for this disconnect: like the poet Robert Browning, I often write computer programs that only I and God understand, and after a while, only God does. It is foolish for people to pontificate on what they do not understand. There was a time when nobody knew how to move people safely through the air the way birds fly, and traveling faster than a horse was deemed impossible. We -- at least the "AI" researchers -- are still pretty much in that position with respect to analytical thinking.

I started out to say we should pay attention to Ms. Kristian's warning, but that doesn't start to become useful for at least a couple decades, and probably much longer than that. Actually, we cannot see that far into the future, but the technology right now is moving in the opposite direction, away from machines that might make her concern worth our attention.

Let me make it perfectly clear: We already have -- and have had for several decades -- the technology to build machines able to do the work of most people alive today. The reason that has not happened yet is that people with suitable training can do amazingly complicated jobs, mostly without thinking very hard at all. Anything we can tell a person how to do, we can also tell ("program") a computer to do. But even a teenager just entering the workforce has over a decade of continuous learning; nothing like that is available to neural nets. GOFAI was working on it, but bogged down on the vastness of the prospect. The "Large Language Model" of GPT is attempting to leverage the internet to replace what failed in GOFAI, but it lacks the computational complexity of the human brain to make that work. The result is that their GPT has orders of magnitude more speed and memory than any human alive, but the intelligence of an earthworm. They have been so bamboozled by the apparent success of creating grammatically correct (but meaningless) sentences that they are unlikely to fix the root problem any time soon. The money they have already spent will be a deterrent to anybody restarting with a better (more expensive) platform.

We already have machines that can guide a car to stay within its lane -- high school students can (and did, see the video here) do that -- or build a publishable abstract from a technical paper. Interviewing people with expert knowledge or unique experiences and knowing what questions to ask is too hard for GPT today; not even human reporters get that right every time (so there's nothing to train the NN on). Many book authors (and novelists like Louis L'Amour) write only one unique book, then publish it multiple times with different titles and slight variations in the content. Often a novelist or cartoonist retires or dies, and the publisher gets another writer to continue the trademark (usually with lower quality). GPT is pretty good at that kind of copying, but we have copyright laws to prevent anyone other than the original author or publisher from doing it, and it still takes a lot of training to get the style right. The programming and training cost probably exceeds the salary of the person doing the copying. Simpler tasks like keeping a car in its lane add only a few thousand dollars to the price of the car after amortizing the development over thousands of vehicles.

Basically, it's much easier and cheaper to tell a person "Just do it!" than to design and program a machine to cope with all the variant issues that any thinking person can figure out or ask for help with. The cost to design and build a machine smart enough to do what most people are able to do (and are doing today) must be amortized over a very large number of jobs before it can be less than paying a person to do the job. Right now that works for transportation and for certain factory jobs, and perhaps cocktail party chat (think: ChatGPT) where what is being said has no significance to the listener, and occasionally for the receptionist on a telephone answering system, but mostly not otherwise. Even for the job of our two CT authors, the machine is not yet smart enough to know whom to quote, because (today) only human authors can write with a particular purpose in mind, such as bringing a comforting Christian message to the readers. Computers could do that (by quoting human writers, as did Kristian and Lucky) but some humans must write the original text to be quoted, because computers are nowhere near that level of intelligence, not today and probably not ever. Besides, the development cost of the machine to replace Kristian and Lucky and a dozen other Christian magazine writers like them would exceed the combined salaries of all of them a thousand times over. The technology already exists to make computers that smart in a generalized way, so that particularizing them to do a Christian magazine writer's job might cost only thousands (not millions and billions) of dollars, but the research dollars are going in the opposite direction, and the developers are unlikely to recover from that blunder in the next couple of decades.

Oh, there's also the problem of employment: what will all those people the machines replaced do to earn enough to pay the higher prices that all those machines make things cost? "Let them eat cake" is not a solution. That's a moral question the hardware and software people are not considering. For now, it is irrelevant.

In my lifetime -- and probably that of anybody reading this -- Ms. Kristian's warning is and remains hypothetical and futuristic. American technology is currently headed in the opposite direction from what made us the world leader, and while there are no obvious contenders today likely to replace us in the foreseeable future, God only knows.

And what, you may ask, propelled the USA into world leadership in technology and finance? I read it in ChristianityToday, but you need to know what to look for, starting with Nancy Pearcey's book, Soul of Science, where she shows definitively that modern science began within the moral absolutes of the Christian faith and nowhere else, ever. When you understand that, it's a small step to understand the implications of the February 2014 CT cover story: not only did Africa make significant economic and cultural advances as a result of the Protestant Reformation, but so did all of northern Europe, and then the USA; our culture was built on a 500-year heritage of reading the Bible in our own language, and on the moral absolutes taught in the Bible but not from the pulpits in other Christian heritages (and increasingly also not in Evangelical churches in America: witness the decline of STEM education in the USA). That world leadership is going away, and GPT is a product of the decline: what is called "AI" looks far more impressive than it really is.
 

The Bottom Line

What should you expect to learn from this page? Three points:
1. What got called "AI" in the last 25 years is not, and cannot be, intelligent. They could go back to the technology that existed before that, which had the potential for intelligence, but the people with the biggest investment in today's "AI" technology are religiously disinclined to do that.

2. Stupid people using stupid machines can cause harm to innocent bystanders; no smart machines needed. Don't wait for "intelligent" machines to arrive before you start thinking that there may be a problem here.

3. Bonnie Kristian's advice ("Be not conformed to the pattern of a robot") sounds good, and I myself have tried to do this most of my life, but it is neither practical nor even possible most of the time for most people in the Real World, as Kristian illustrates for us in her own article.


Tom Pittman
Rev. 2023 November 4