(Important) Notes


TinyBasic was my first foray into the public software market. Mine wasn't the first TinyBasic, nor the most widely used, but it was arguably the most famous. Everybody remembers using Tom Pittman's TinyBasic, even if they never had a computer it ran on. I think that's because of all the people doing TinyBasic, I was the only one who did it as a commercial product. It wasn't even shareware: you had to pay $5 up front to get it. I recorded the serial number with every name and address.

Back to top of main essay


I first got the Mac in 1984 as a way to do my dissertation, which required some special characters not on your average daisy-wheel printer. Mac fonts were in software, which meant I could do anything, and I did -- after I got past the initial hurdle of closed software. MacsBug (the low-level debugger) and the font editor and a terminal program written in Basic were all I needed to get going. As a programmer I already recognized that the Mac was the wave of the future, a way to leap over the current PC technology and build a business writing software for it. But it was too complete. It was 18 months after I got the Mac before I could think of anything that didn't already exist. The PC was naked, so if you wanted it to do anything at all, you had to buy third-party software (which tended to be of inferior quality, especially on the PC), so there was ample opportunity to write a better version. Not so the Mac. Everything I needed, it already did, and did well.

Eventually I decided that a screen saver would be useful. There were third-party screen savers out there, but as I said, they were of inferior quality. You had to DO something to invoke them. If I got called away from the computer, I might not be able to come back and invoke the screen saver once I realized I'd be gone more than a minute. These third-party programs also took over the computer, so if you had a long slow program running (I have lots of long slow runs), it either burned its image onto the tube or stopped. Most people wear a wristwatch, but I don't, so I like to look at a wall clock. MenuClock serves that purpose today, but not very many people have noticed that it's faster to read an analog clock than a digital one. To read a digital clock you need to do mental arithmetic on the numbers, while with an analog clock you capture the relative time intuitively just from the position of the hands. Why do you think people still wear analog watches, when digital is so much more accurate and cost-effective? Analog is easier to glance at quickly.

Anyway, I put all these things together and produced a system extension (they called them INITs then) which loaded first, before anything else, and took over the alternate screen hardware that Apple had put in for games to use for fast animation. Nobody else ever used it, so I did. AutoBlack just sat there in the background, using almost no system resources except 22K of alternate screen memory and a few cycles to see if the mouse had moved or a key had been pressed. If neither had happened, it just switched to the alternate screen buffer without telling anybody. Instant black, and whatever program was running kept on running, displaying whatever it was displaying on the regular screen. Move the mouse or press a key and AutoBlack switched the hardware back, and there was the program still doing its thing, with no delay to redraw (some programs didn't know how to redraw, but with AutoBlack they didn't need to; nothing was lost).
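The idle-watching logic AutoBlack used is simple enough to sketch. Here is a minimal model in Python; the real thing was a Pascal INIT poking Mac screen hardware, so the `Screen` class, the `tick` polling function, and the `IDLE_LIMIT` threshold here are all illustrative assumptions of mine, not the actual implementation:

```python
IDLE_LIMIT = 300  # ticks of inactivity before blanking (assumed threshold)

class Screen:
    """Stand-in for the screen hardware with its alternate (blank) buffer."""
    def __init__(self):
        self.alternate = False  # True = blank alternate buffer is displayed

    def switch(self, alternate):
        self.alternate = alternate

def tick(screen, mouse_moved, key_pressed, idle_ticks):
    """Called periodically: track inactivity, swap buffers as needed.
    Returns the updated idle counter."""
    if mouse_moved or key_pressed:
        if screen.alternate:
            screen.switch(False)  # restore the real screen: instant, no redraw
        return 0
    idle_ticks += 1
    if idle_ticks >= IDLE_LIMIT and not screen.alternate:
        screen.switch(True)  # show the blank buffer; the program keeps drawing
    return idle_ticks
```

The key design point survives the simplification: the running program never knows the swap happened, so nothing has to redraw when the user comes back.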

AutoBlack was written in Pascal. The whole Mac system software was originally written in Pascal, but the Apple Pascal compiler didn't run on the Mac, so you had to buy an expensive Lisa to compile for the Mac. I write compilers, so I wrote a tiny Pascal compiler that actually ran on the Mac and generated native code. AutoBlack was one of the first serious programs I compiled on my compiler. It was one of the first programs compiled on the Mac for the Mac, but nobody knew that.

Like TinyBasic, I offered it for $5, but this time it was true shareware: you could download it from any bulletin board, and if you liked it, pay the $5. I had an obnoxious startup message that spoiled people's startup picture. People who paid could ask how to eliminate the startup message. I guess it didn't bother most people, because almost half of the $5 payments I got for AutoBlack came after they bought a new, larger Mac that didn't have the alternate screen hardware, so AutoBlack didn't work at all. They were hoping for an upgrade, but I couldn't do it. When System 7 came out, I did a more conventional screen saver and called it AutoBlack2, but System 7 was buggy and it crashed a lot. System 7.5 finally got the bugs fixed, but by then the market was gone.

AutoBlack2 still works in MacOS/9, but OS-Ex is no longer a Mac system, so it is no longer functional there. Most of us who prefer the real Mac operating system to eunuchs are not planning that final upgrade. If you too are staying with the Mac instead of switching over to OS-Ex, you can download AutoBlack2 free.

Back to top of main essay


When Bill Atkinson wrote HyperCard, he called it a "software erector set." That was a very descriptive name. People loved it. People who were afraid of programming languages began writing programs without even realizing they were programming. It was the Macintosh answer to Basic. Atkinson knew some of the more arcane things HyperCard could do (like scripts which write scripts), and said "There will never be a HyperCard compiler." In a way he was right. I taught compiler design at a midwestern university at the time, so I took that as a challenge. Mostly I succeeded. The parts of HyperCard that cannot be compiled I didn't compile; I just sent them back to HyperCard to interpret. The compiled code ran as HyperCard plug-ins (called "XCMD"s), so HyperCard was always there, and I could get away with it. It's an incredibly powerful concept.

MacWorld magazine ran a HyperCard stack contest in the summer of 1987, and I wrote my compiler as an entry to that contest. The philistines at MacWorld didn't understand what they were seeing, or maybe it was too buggy (I wrote it in two weeks to meet the deadline), so I didn't win. But one of the guys in our local Mac club thought it was really neat, so he mentioned it to a HyperCard stack publisher who nearly drooled into the phone when I told him what I had. Another year of rewriting to make it a commercial product, and CompileIt! was born. It eventually won a 5-mouse rating from MacUser magazine.

Apple never understood HyperCard, and started charging for what should really have come free with the computer. As a result, the market for HyperCard add-on products dried up, and my publisher stopped promoting it (but 1-800-888-7667 might still get through to whoever is selling it). If you use HyperCard, CompileIt is still useful and very powerful. If you don't, maybe you should look into it. I write all my new software in CompileIt. Really. CompileIt was written in HyperCard from the start (I never used any other programming language on it) and it now compiles itself. That's how powerful it is, which was my point in doing it in the first place.

Back to top of main essay


Some ten or twelve years ago, Apple started thinking about where to go with the Macintosh product line, how to beat the PC in the numbers game. The academics at Berkeley and Stanford had come up with the idea that Reduced Instruction Set Computers (RISC) were inherently faster than the complex-instruction-set (CISC) designs that had driven microprocessors ever since the 8080, and IBM had an incredibly fast workstation which more or less followed that design philosophy: the RS/6000 had twice as many instructions as the 68000 (68K) family used in the Mac, but fewer than the IBM mainframes, so I guess that counts as "reduced." Apple got together with IBM and Motorola (which made the 68K chips) and worked out a three-way consortium to put the RS/6000 instruction set into a chip that Motorola could manufacture, with Apple providing a consumer market to get the chip volume up. Thus was born the PowerPC.

IBM had learned with the introduction of the S/360 thirty years earlier that customers must be able to run their legacy code on the new machines or they won't sell, so Apple planned a 68K emulator to run on the PowerPC. It was a very good emulator, but emulation is also one of my specialties, so I thought maybe I could improve on its performance. Apple management was rather snotty at the time and refused to give me any help at all, so I had to buy a new PowerMac on the open market and reverse-engineer the emulator. This means I probably know as much about that emulator as anybody inside Apple except the programmer who wrote it. He still denies it, but there's a password in the ROM (his name and birthdate) for getting past the protection. I found the password.

My prototype emulator ran four times faster than Apple's -- on small programs; large programs actually ran slower. It took me a while to figure out why, which is mostly that RISC is not all that much faster than the CISC architecture on real programs. The reasons are complex; I explained them in a paper at HotChips a couple of years ago. The bottom line is that CISC Pentium technology is no slower than RISC PowerPC technology, as we have all seen for the last decade. Two other developers had the same idea about the same time I did, and it took them rather longer than they expected to get their products working well enough. So my emulator was not commercially viable, and I had to scrap the whole year's work.

One of the things Apple left out of their 68K emulator was floating point (the hardware that does scientific-notation arithmetic). It was almost impossible to make it exactly duplicate the 68K operation, and because some Macs did not have the floating point unit (FPU) anyway, Apple just left it out of their emulator. There were software replacements that were quite slow, but at least the programs that insisted on the hardware being there could be fooled into running. Programmers can do better, and true Mac programmers do, but these programs were mostly PC and Unix ports.

Anyway, I noticed in rummaging around inside Apple's emulator that they had left in the hooks for FPU emulation, but just never connected anything to the hooks. It didn't take me long to build a reasonable FPU emulator using Apple's own numeric routines and connect it to their hooks. I figured it would enhance the market value of my emulator, and when that went down in flames, I found I could isolate just the FPU emulation and plug it into Apple's emulator. It ran 20 times faster than the shareware model already out there. So now it's a commercial product called PowerFPU. Most of the programs that needed it have long since been converted over to PowerPC native code, so there's not much of a market for it any more. The 68K part of PowerFPU is written in CompileIt!

Back to top of main essay


Post-modern thinking notices that science limits itself to a certain way of thinking about the world, and that this worldview was favored by a class of "dead white males" and not by third-world cultures, nor by females even within that European culture, and then supposes that the reason for this is power politics. This is a bit short-sighted, given that those non-European cultures had every opportunity to develop their own science and technology, but did not (see The Soul of Science by Nancy Pearcey).

The scientific way of thinking submits to the real world out there. It does not invent it, but only invents ways of working within the constraints of that objective world. This is similar to the attitude in the Christian worldview, which submits to and works within the constraints set by an objective God. In a polytheistic (and to a lesser degree, atheistic) worldview, if you don't like these rules, go find or make up different rules. If that doesn't work, it's because a stronger god is impeding you, not because your rules are just plain wrong. This leads to a feeling of victimization rather than mastery of your own destiny. It creates ethnic divisions and wars rather than science and technology.

Classical mechanics holds to a repeatable cause-and-effect relationship for physical laws, so that celestial motions are indeed predictable from the laws of gravity and mathematics, and engineering works for building big and complex artifacts, both static (buildings and bridges) and dynamic (cars and airplanes). With very small objects the rules change, and quantum mechanics better explains things. Modern electronics must take quantum theory into consideration, but cause-and-effect relationships are still repeatable, and we can still engineer complex artifacts like computers and TV sets that work reliably. Otherwise you couldn't be reading this.

Somewhere along the line the strangeness of quantum theory got adopted into philosophy and lost its cause-and-effect relationships. Or maybe the philosophers and teachers just stopped talking to the scientists and engineers and listened instead to the eastern religions. Or maybe they just got resentful of the effective power that the scientists and engineers wielded because they knew how things really work. Wherever it came from, the result is called "post-modern" and its main tenet is that there are no absolutes. It's flat wrong. Even in quantum mechanics there are absolutes: absolute zero is the total absence of thermal energy. We can't get there, but we can get arbitrarily close. Black holes are similar, in that absolutely nothing comes back out. The speed of light is another absolute. There are mathematical theories that posit ways around these absolutes, but they are just mathematical theories; there is no hard physical evidence for them the way there is physical evidence for the speed of light and absolute zero.

If you carry post-modernism to its logical conclusion (and they do), then even science is thought to be just somebody's power game. Because so much of physics now works with mathematical models instead of molecules and electrons (that is, experimental data), it begins to look like everything works that way. It doesn't. The stuff that engineers do is still driven by the real world out there, not by detached mathematical models. Computer games and science fiction and movies and (increasingly) the news media can invent any "facts" they want, and who's to say they are wrong? But engineering and farming and piloting airplanes do not work that way. If you fly a real airplane into the side of a mountain, you don't just press Start and take off again. If you don't treat the soil with care and knowledge, pretty soon the crops stop growing and people starve (or buy their food from the people who do follow the rules).

In the real world people cannot live without absolutes. The government of a country as rich as the USA may be able to add up funny money and pay for programs without collecting enough revenue to do it, and even the citizens can run up huge credit card debts, but 2+2 still equals 4. The devil will get paid, and sooner rather than later. You still have to hold a job that gives you a regular paycheck if you want to pay the mortgage and buy food and drive a nice car. These are not power trips, they are the real world.

On top of all this, young people are increasingly becoming disenchanted with science and technology (it's so limiting!) and preferring political science and business majors. Business does not succeed on just business majors. Sooner or later somebody has to make a product that other people are willing to pay for. That works the same in the government. Unless we keep the flow of good minds into technology, there will be no technology. Technology requires an absolutist approach to the real world.

Back to Science in main essay


An artist or an author -- any creative person -- has an inherent and natural right to say what happens to his work of art. This is captured in copyright law, because it serves the public good to encourage creative people to do their thing by making sure they get paid for their work (if they want to be). This is true also of the creative activity called programming, even if most of us sell our rights for money. Every work of art, every creation, has an owner, and that owner has the right to do anything he wants with what he owns (within any limits set by higher authority, such as government). Of course, if and to the extent there is a god who created some or all of the physical universe, that god owns what he created. We Americans properly reject the idea of one person being owned by another, precisely because we are all equal. The painting is not equal to the painter, so it's not the same. The painting is obligated to conform to the wishes of its owner (the painter). That is the nature of ownership, and ownership flows from creation.

Back to Science in main essay


Rather than bore you with incomprehensible math, let's look at what entropy really means in the information domain.

The First Law of Thermodynamics (in the mass/energy domain) states that mass/energy is neither created nor destroyed. The First Law in the information domain states that information is neither created nor destroyed -- in the same sense. Recall that information is measured in bits. If you look at the memory of a computer, it is also measured in bits (or bytes, which are just 8 bits each). If you buy a 64MB computer, it does not magically acquire another 5MB of RAM after you run it a couple weeks; you have to add memory chips. The same is true of the hard drive.

The Second Law states that in a closed system, the energy available for doing work tends to decrease. Similarly, the meaningful information tends to decrease. This means that if you unplug your computer's modem and take your hands off the keyboard and mouse (stop typing), the computer will stop acquiring new information. In fact its information will tend to decrease. The available information is measured by how small the file is after you run it through an optimal disk compression program. A file of 200 zero bytes can be compressed to two bytes: one zero byte plus a byte holding the count of how many duplicates there are. Any time the computer copies data from one part of its memory or disk to another, it potentially loses information. The First Law says we won't get any new memory to store it in, so whatever was there before must be erased (lost), and the new copy can be compressed down to a few bits saying how big it is and where it came from.
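The zero-byte example above is just run-length encoding, which is easy to show concretely. This toy compressor in Python is my own sketch, not any particular disk compression program; it encodes each run as a count byte followed by the value byte, so a run of 200 zeros collapses to exactly two bytes as described:

```python
def rle_compress(data: bytes) -> bytes:
    """Encode as (count, value) byte pairs; runs longer than 255 are split
    because the count must fit in one byte."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def rle_decompress(data: bytes) -> bytes:
    """Expand (count, value) pairs back into the original bytes."""
    out = bytearray()
    for count, value in zip(data[::2], data[1::2]):
        out += bytes([value]) * count
    return bytes(out)

# rle_compress(bytes(200)) -> b'\xc8\x00' (a count byte of 200, then a zero)
```

Note that already-varied data compresses poorly under this scheme, which is the point of the measurement: the compressed size approximates how much distinct information the file actually holds.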

Note that the injection of energy into the computer is irrelevant to its information content (except for dynamic RAM, which needs a little bit of energy from time to time to keep from erasing everything). We are counting bits, not ergs. The plug in the wall socket gets only energy, not information. You inject information into the system by typing, moving and clicking the mouse, communicating over the modem, or inserting a floppy disk or CD in the slot. Without those activities, it is a truly closed system in the information domain.

Another very interesting closed system in the information domain is the earth's biosphere. DNA is a digital code, measurable in bits and (in principle) compressible, so the available information can be measured by removing the duplications. Apart from "acts of God" there is no new information coming into the system. The Second Law thus tells us that naturalistic evolution is not possible, that we should be seeing gradual extinctions (loss of information) but never any new species. And indeed that is what is observed in nature. In the 140 years since the publication of Darwin's Origin of Species we have seen hundreds (maybe thousands) of documented extinctions and not one documented new species.

The usual response from the scientific establishment to this analysis is that the earth is not a closed system, because it is receiving energy from the sun. But as with the computer, the injection of energy into the earth is irrelevant to its information content. The energy is necessary to sustain the information that is already there, but energy is not information.

If the Second Law were not valid in the information domain, then computer software developers could save huge costs by turning on the computer and letting it randomly generate programs which could then be extracted and sold, just as in the middle ages people tried to build perpetual motion machines that could supply all their energy requirements without fuel. It's not possible, and every programmer knows it. We get paid fabulous salaries precisely because it's not possible. Programmers inject information into the system.

What about so-called evolutionary software, which mutates and changes itself? If you look closely, you see that it's all done with smoke and mirrors. Every evolutionary program is in fact carefully designed by a very smart programmer to behave as if it were learning, but only a small part of its data actually mutates; the program itself is carefully protected from mutation, and the mutation that is allowed is carefully controlled to remain within the limits that the programmer decided to permit. This is exactly the same variability within limits that we also see in biological mutations. This has now been documented in the case of viral mutations, which happen rapidly enough that they can be studied.
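That separation -- a fixed program with mutation confined to designated data within designed limits -- can be made concrete in a few lines. This toy evolutionary loop is my own illustration (the fitness target, the bounds, and the population numbers are arbitrary assumptions); notice that only the genome numbers ever change, while the fitness function, the selection rule, and the `mutate` clamp are all fixed in advance by the programmer:

```python
import random

LOW, HIGH = 0.0, 10.0  # mutation is confined to this programmer-chosen range
TARGET = 7.0           # the goal is likewise chosen by the programmer

def fitness(genome):
    """Higher (closer to zero) is better: total distance from the target."""
    return -sum(abs(g - TARGET) for g in genome)

def mutate(genome, rng):
    """Perturb one gene, clamped so it can never leave [LOW, HIGH]."""
    g = list(genome)
    i = rng.randrange(len(g))
    g[i] = min(HIGH, max(LOW, g[i] + rng.uniform(-1.0, 1.0)))
    return g

def evolve(generations=500, size=20, rng=None):
    """Fixed selection loop: keep the best half, refill with mutants."""
    rng = rng or random.Random(0)
    pop = [[rng.uniform(LOW, HIGH) for _ in range(3)] for _ in range(size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: size // 2]  # selection rule: never itself mutated
        pop = survivors + [mutate(rng.choice(survivors), rng)
                           for _ in range(size - size // 2)]
    return max(pop, key=fitness)
```

However long you run it, the genomes only wander within the box the programmer drew; the appearance of "learning" is the designed selection rule doing its job.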

Random data is the antithesis of information. This was made very clear by Claude Shannon, who was at the same time very careful to define his information slightly differently from what I called "available information" above. The focus of Shannon's research was transmitting information over a noisy channel, so that people could hear each other on long-distance phone calls. He was successful. A million monkeys randomly hitting keys on a million typewriters might indeed type all the works of Shakespeare, but there is no information there until you throw away all the pages that are not Shakespeare. The information is injected into the system by the process of discarding the noise. Directed selection adds information; randomness removes it.
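The monkeys-and-Shakespeare point is easy to demonstrate in code: a random generator yields nothing usable until a selector that already contains the target discards everything else. In this sketch (the alphabet, the target, and the retry cap are all assumed for illustration), every bit of the "found" text is really carried in by the `target` the filter was given, not by the random source:

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz "  # an assumed 27-key typewriter

def monkey(rng, length):
    """A monkey types `length` random keys."""
    return "".join(rng.choice(ALPHABET) for _ in range(length))

def select_until(target, rng, max_tries):
    """Discard random pages until one matches the target exactly.
    The selector must already know the target -- that is where the
    information comes from, not from the randomness."""
    for tries in range(1, max_tries + 1):
        if monkey(rng, len(target)) == target:
            return tries
    return None  # gave up: randomness alone produced nothing usable
```

Even a two-character target takes hundreds of tries on average, and the cost grows exponentially with the target's length, which is why it is the filter, not the monkeys, doing the work.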

So is Natural Selection the genie in the bottle, adding information to the biosphere? Not by its usual formulation, which says it is purposeless. Natural Selection is in fact just another form of randomizing. One biologist went so far as to call it "survival of the luckiest."

Back to Science in main essay


Atheism is fundamentally self-contradictory. The only way you could know that there is no God would be to search the entire universe (all at once, because otherwise the evidence for God might be moving around behind you). If you succeeded in doing that, you would BE God.

Many people recognize the illogic of claiming that there is no God, so they retreat to a kind of functional atheism, or pseudo-agnosticism. True agnosticism says "I don't know" but recognizes that if there is a God (or gods), they might make moral demands on us which we must meet. So the true agnostic is morally bound to investigate every credible claim to theophany. You cannot long remain an honest agnostic: you will either become a theist (by encountering God) or else revert to the illogic of atheism (possibly while denying it). The whole point of the exercise is to evade those external moral demands; these putative agnostics are unwilling to do the search lest they find God.

Back to Axiom3 in main essay


The Qur'an is different from the Jewish Tanakh and the Christian Bible in that it offers no miracles, no violations of natural law, to prove itself. I understand that Islam has some miracle stories, but they are much later than the time of the Prophet. We have only the word of Mohammed that any of his Recitations are from God. Much of the Jewish and Christian scripture, by contrast, consists of historical documents describing public miracles.

Back to Miracles in main essay


The Book of Mormon contains historical narratives (including some miracles), but all of it is (like the Qur'an above) filtered through a single person without any corroborating miracles. How do we know that Joseph Smith accurately translated the golden plates received from the angel Moroni? We don't. Unlike the other three religious traditions, we do not have the text in its original language to verify the quality of the translation. We don't even know whether to believe his story that it came from the angel Moroni. We have a single affidavit signed by three people saying that they saw the golden plates, but they could not read the contents, nor can anybody else, because those plates are no longer available for examination. No copies were made, only one unchecked "translation" by one person -- who also made the mistake of "translating" an Egyptian funerary document (this was before the decipherment of hieroglyphics was generally known) into something unrelated to its now-known actual contents, thus discrediting all his work.

Translation is hard to do unless you know both languages -- hey, it's hard to do even when you do know both languages. At the time of Mohammed there were already numerous translations of the Jewish and Christian holy books, and these translations showed the usual (typically minor, but who's to say?) discrepancies, so it is forbidden to publish the Qur'an in translation unless it is accompanied by the original text in Arabic. At the time Joseph Smith received his revelation in upstate New York, people knew about Egyptian hieroglyphics, but nobody there knew how to translate them. The Rosetta stone, containing the same decree in both Greek and Egyptian, eventually made translation possible: starting with the names of the kings, the archaeologists were able to figure out the Egyptian alphabet, and from there the meanings of the other words. But Joseph Smith knew none of this when he "translated" a papyrus found with an Egyptian mummy. He even copied the original symbols into the margins of his text, making it easy to match them to the characters on the papyrus.

Back to Miracles in main essay

Back to The Historical Resurrection in main essay

Rev. 00 Dec.20