Artificial Intelligence:

A Guide for Thinking Humans

by Melanie Mitchell


Melanie Mitchell is a remarkable person. Most women breaking into a male-dominated profession feel the need to become True Believers, and for 15 (out of 16) chapters she does a pretty good imitation. Yet...

I once heard it said of philosophy professors that when they teach the history of their profession, they convince you of the correctness of each new system in turn, then demolish it and build the next phase on its ashes. Mitchell does that with artificial intelligence (AI), dating back to the beginnings of the computer era. The later phases -- Convolutional (Deep) Neural Networks and Reinforcement Learning -- are not so much demolished as shown to be oversold. She leaves you still thinking she is a True Believer, yet honest enough to show you their dirty laundry.

Some of it, anyway. Take the word "convolution," which any dictionary will tell you means twisty or complicated. In biological contexts it refers to the twisty folds on the surface of the brain. In mathematics the term refers to an operation that combines two functions by sliding one across the other and summing their products at each offset, so that the result is totally unintuitive. They don't say that, but it's the only way you can understand what they do say. As applied to neural nets (NNs), however, Mitchell defines the term:

A convolution is a very simple operation -- essentially a multiplication between the inputs to the unit and their weights [and then summing the products]
This is no different in kind from what a non-"convolutional" NN does, except that the same small set of weights is slid across the whole image and there are more layers -- usually signalled in the publications I have seen by the word "Deep". The sole purpose of redefining a word that means "complicated" to mean a simple multiply+add operation is to give the (false) impression that this simple multiply+add operation is the basis of arbitrary complexity, eventually resulting in intelligence currently encoded only in the folds of the surface of a human brain. My experience is that people who explain what they are doing by redefining a common English word to mean something totally different are fundamentally dishonest (Darwinists do this by equating "evolution" with "change" some of the time, then switching unannounced back to "evolution" = descent from a common ancestor). I do not believe Mitchell is complicit in this deception, but as a woman she merely accepts uncritically what the intentional deceivers have told her. Yes, women do that more than men; it's built into their value system.
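To see just how simple the operation is, here is a minimal sketch in plain Python with NumPy -- my illustration, not Mitchell's, and the toy sizes and names are arbitrary:

    # A "convolution" as the NN people use the word: multiply a small
    # patch of inputs by one unit's weights and sum the products.
    import numpy as np

    def convolve2d(image, kernel):
        # Slide the kernel over the image; each output value is just
        # sum(patch * weights) -- a multiply+add, nothing more.
        kh, kw = kernel.shape
        oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
        out = np.zeros((oh, ow))
        for i in range(oh):
            for j in range(ow):
                out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
        return out

    rng = np.random.default_rng(0)
    image = rng.random((6, 6))        # toy 6x6 "image"
    kernel = rng.random((3, 3))       # one unit's 3x3 weights
    print(convolve2d(image, kernel))  # a 4x4 grid of weighted sums

Nothing in that deserves a word that means "twisty."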

Me, I came into this AI topic by a more circuitous route. My minor in graduate school was AI, back in the days of what Mitchell ridicules as "Good Old-Fashioned AI" (GOFAI, pronounced "Go-Fye"), which I subsequently told my university students "is a fraud," and I devoted my own career and teaching to carefully designed software solutions to real problems, including compilers. Four years ago I offered to mentor a computer day camp for high school programmers, in which they could implement a part of what subsequently became a reasonably successful autonomous vehicle project (until the students rebelled). Students do that. The first year a small group decided they wanted to do the same thing I was proposing (recognize pedestrians in a live video feed) using NNs instead of the designed solution I knew from experience was achievable in the time they had. Being a professional, I figured I needed to better understand what they were attempting. After trying (and failing) to get a simple NN to work on the NIST database of handwritten numerals -- Mitchell devotes chapter 2 to her own success on the same database: she was proud of a recognition rate similar to what I considered failure -- I began looking for what exactly is wrong with NNs in general, whereas Mitchell, with a much bigger investment in the whole topic, is still trying to be a True Believer, or at least comes off that way in the early chapters of her book. So on page 39 she admits that she did not attempt to figure out how her NN handwritten digit recognition program worked, whereas I spent rather more time figuring out why mine (and NNs in general) did so poorly, compared to a designed program I could have written and debugged in half the time, and which I expect could achieve better than 99% accuracy on the same data. Maybe I will still write that program, but I had more useful things to do at the time. It's not a hard program to write, nothing like the difficulty of tuning the parameters of an NN, which Mitchell admits is something like magic.
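For flavor, here is the skeleton of a designed (non-NN) recognizer -- not the program I have in mind, just plain nearest-neighbor template matching, with made-up data standing in for the real digit images:

    # A designed, inspectable digit classifier: compare the unknown image
    # against stored examples and take the label of the closest match.
    # No training runs, no parameter tuning, no magic.
    import numpy as np

    def classify(unknown, templates, labels):
        # Summed squared pixel difference against every stored template.
        diffs = templates - unknown
        dists = np.sum(diffs * diffs, axis=(1, 2))
        return labels[np.argmin(dists)]

    rng = np.random.default_rng(0)
    templates = rng.random((100, 28, 28))   # stand-ins for 28x28 digit images
    labels = rng.integers(0, 10, size=100)  # their digit labels
    unknown = rng.random((28, 28))
    print(classify(unknown, templates, labels))

Every step of it can be traced in a debugger, which is exactly what you cannot do with a few thousand tuned weights.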

Anyway, about halfway through that first summer program, the students had achieved remarkable success and were now looking for ways to improve their software's performance. They scoured the internet for computer vision theory and had questions that I was not prepared to answer, so Dr. Mitchell, a professor at the PSU campus hosting our program and also the mother of one of the students, came and gave an explanation even more obscure than the internet papers the students were reading. By then I was able to explain to them what they needed to know, and I guess that improved my status in their eyes. Ultimately, the ad-hoc design I told them about at the beginning (and they already had working) performed on par with the detailed mathematical stuff they were reading about, and far faster, so that is what they demonstrated at the final show-and-tell. The other group, working on a much simpler problem, had nothing running to demonstrate at the end of the four weeks. You can watch the video of both presentations (see "Summer Videos"). In my brief encounter with Melanie, I did not know how heavily she was invested in NNs, only that she was a Computer Science prof at PSU.

So here I am with good reason to be hostile to NNs, which are the only thing even considered under the moniker "AI" today, yet I'm reading a book about AI written by a person who makes her living doing AI, so I'm looking for the flaws in what she describes. And they are there; she admits to them honestly! The only time you can really trust what a person says about something is when they tell you the negatives in what they believe in. Other people may be telling the truth, but you have no way of knowing, apart from outside corroboration, the kind that Mitchell gives you in this book, which makes her own profession look bad.

Well, not really. In the final chapter she distinguishes herself and her research from the masses of DCNN snake oil (my term, not hers). She does not tell us exactly how she expects to find a way to make computers understand by analogy and metaphor, like humans and very unlike any of the present so-called "artificial intelligence," which, on the next-to-last page, she approvingly quotes somebody calling "stupidity" (a word I have been using for a couple of years now -- see "WIRED Admits Flaws in NNs" -- long before I knew of her book).

As a programmer, I need to know how things work -- I tell my students they cannot tell a computer how to do something (which is the essence of programming) if they don't care how things work -- but Mitchell (like most women in this industry) has different priorities. She tells how "adversarial" attacks on DNN image processing can make the computer caption an image of three cupcakes as "A dog and a cat are playing with a frisbee," or how a fancy pair of glasses can fool existing face-recognition software into identifying the man wearing them as a female celebrity, without any concern for how they pulled that off.
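For the record, the attacks reported in the literature are usually built by nudging every pixel a tiny step in whichever direction increases the network's error -- the so-called fast gradient sign method, which is not how Mitchell explains it (she doesn't). A sketch, where loss_gradient is a hypothetical stand-in for whatever computes the network's loss gradient:

    # Sketch of the standard gradient-based adversarial attack.
    # `loss_gradient` is hypothetical: whatever computes d(loss)/d(pixel)
    # for the network under attack.
    import numpy as np

    def fgsm_attack(image, loss_gradient, epsilon=0.01):
        # Move each pixel by +/- epsilon toward higher loss, then clip
        # back into the valid 0..1 pixel range.
        grad = loss_gradient(image)
        return np.clip(image + epsilon * np.sign(grad), 0.0, 1.0)

    # Toy demonstration with a made-up gradient function.
    rng = np.random.default_rng(1)
    fake_gradient = lambda img: rng.standard_normal(img.shape)
    cupcakes = rng.random((224, 224, 3))      # stand-in for the photo
    fooled = fgsm_attack(cupcakes, fake_gradient)
    print(np.max(np.abs(fooled - cupcakes)))  # at most epsilon per pixel

The perturbation is invisible to a human eye, yet the network's answer flips, which tells you how little "catness" the network ever had.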

If you abandon the Darwinist hypothesis -- which Mitchell shows no sign of doing -- then this becomes a simple matter of programming by design. Humans see a cat in a picture because the image meets certain requirements of "catness." The DNN has no concept of catness or dogness at all, only that of the thousands of pictures it has been trained on, all of them have some number of pixels (few or many, it makes no difference to the NN) that are the same across all the pictures labelled "cat" and different for all pictures with a different label. An adversary (to use Mitchell's term) need only know which pixels those are and set only those pixels to match the pseudo-catness the DNN is looking for. You could determine what the DNN is looking for by tracing the high-ranking nodes back from the outputs, or much more simply by running a single pass over the (public) data set all these researchers used, looking for single pixels shared by all and only images labelled "cat." In fact you could "train" your DNN in that same single pass, and forgo all those thousands of training runs. But then it would also be vulnerable to the same kind of assault as the widely touted DNNs. Mitchell didn't say any of this. She probably didn't even think it; she still believes the fundamental Darwinist lie that all manner of complexity arises from random chaos by time, chance, and natural causes. This particular adversarial attack can be defeated by using non-public training data, but that would be very expensive and still not safe.
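Here is what that single pass might look like -- my sketch of the idea, not anything Mitchell or the researchers published, and it assumes the images have been binarized into 0/1 pixel arrays:

    # One pass over the labelled data: find pixel positions whose value is
    # constant across every "cat" image and different in every other image.
    # Those are the pixels an adversary would set.
    import numpy as np

    def shared_pixels(cat_images, other_images):
        first = cat_images[0]
        constant_in_cats = np.all(cat_images == first, axis=0)
        absent_in_others = np.all(other_images != first, axis=0)
        return constant_in_cats & absent_in_others

    rng = np.random.default_rng(2)
    cats = rng.integers(0, 2, size=(50, 28, 28))     # toy binarized "cats"
    others = rng.integers(0, 2, size=(200, 28, 28))  # toy non-cats
    print(np.argwhere(shared_pixels(cats, others)))  # empty on random noise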

Perhaps Mitchell's own research will turn up some way for computers to understand analogy and metaphor the way humans do, but she offers little optimism about her success. I further believe that DNNs will not be a part of any solution she finds, although she neither confirmed nor denied any such hope. DNNs are merely a cute trick for optimizing a large set of weights inefficiently, a lot slower than a well-designed program could do it, and unrelated to biological intelligence. Mitchell did not say any of that, although she did use the word "trick" to describe them.

The book opens with a Prologue in which she quotes Douglas Hofstadter, author of Goedel, Escher, Bach: an Eternal Golden Braid -- or more succinctly "GEB" (pronounced "GEE-EE-BEE") -- claiming to be "terrified" of AI. It closes with that terror repeated, not at true AI, which she does not expect to happen in her lifetime, but at the vulnerable and (dare I call it? she did not) fraudulent stuff that passes for AI today, because it is given more power over people's lives than the data warrants. I'd have to agree.

People should read this book. It's a warning to the people in the industry that there are deep flaws in what they are selling to the public, and it is also a warning to the rest of us not to buy that snake oil. She could have done better, but maybe she couldn't. The AI people are not the only ones selling snake oil to the public. Read the book, it's a good start.

Tom Pittman
2021 July 17a