Tom Pittman's WebLog

2021 February 5 -- Robot Trucks

Six years ago this month I posted my first blog item on autonomous vehicles (see other links below). The house rag for the IEEE (a professional organization I belong to) proudly announced on its cover:
It is the year 2023, and for the first time, a self-driving car strikes and kills a pedestrian. A lawsuit is sure to follow. But exactly what laws will apply? Nobody knows.
That first fatality came five years sooner than predicted, and Arizona promptly cancelled its welcome as a proving ground for self-driving cars. The author was a lawyer doing what lawyers do, which is protecting their clients, in his case the large corporations likely to get sued over autonomous vehicle accidents. Today, in preparing for this posting, I learned that there are a lot of lawyers who make their money suing those large corporations.

Anyway, the same IEEE house rag last month ran another article favoring the corporations, with no mention of the problems. The IEEE's own website tells us this author "is the senior writer for IEEE Spectrum's award-winning robotics blog." He describes himself: "I write (a lot) about robotics, science, and technology for IEEE Spectrum, and a few other publications from time to time." He's a journalist, not an engineer or lawyer or ethics scholar. My father told me (probably more than once) "Those who can, do. Those who can't, teach." I think journalism is a form of teaching which requires less certification than just about anything -- so long as it's entertaining. You don't need to be competent at anything at all (except writing) to be a journalist. It shows.

Anyway, this article, "Robot Trucks Overtake Robot Cars," is part of a cover feature "Top Tech 2021" that mostly emphasizes "The Beginning of the End of COVID-19" (in other words, as explicitly admitted in that article, don't expect much this year). The Robot Truck piece pretty much admits that it's about money (as I pointed out five years ago): they want to save the expense of paying drivers. The robot trucks also save gas by driving more evenly. I Googled "number of USA truck fatalities per year" and most of the top ten results were law firms specializing in suing truck companies over accident damages. They were eager to share the gory statistics:

A total of 4,136 people died in large truck crashes in 2018. Sixteen percent of these deaths were truck occupants, [the rest] were occupants of cars and other passenger vehicles, and pedestrians, bicyclists or motorcyclists.

...road crashes are expected to become the fifth leading cause of death in the United States by 2030 ... Semi-truck accidents cost $20 billion per year in accident settlements with about half of that amount awarded to injured victims who suffered a diminished or lost quality of life.


Let's unpack those numbers. Operator error was implicated in a significant number of those accidents; the techies obviously believe that robot cars and trucks won't have that problem, but is that realistic? Consider the software that you personally are familiar with, on your smart phone or your desktop computer. Every piece of software, every app, requires you to agree to a "EULA" that promises nothing at all before you can use it. No exceptions. Did you ever read it? Among other things, it explicitly forbids you to use their software or hardware in medical and nuclear and other situations where life is at risk. You know why that is: the software crashes often.

Do you really believe the "EULA" that comes with the robot truck will be any different? Maybe the software is better, maybe it isn't. The developers don't even know; they are using so-called "deep learning," which is completely opaque to human understanding (see my essay "21st Century AI"). The author even said so, but not in exactly those words:

Sensors... feed data to a computer, which in turn controls the vehicle using skills learned through a massive amount of training and simulation.
The truck is not programmed with rules, "if this happens, do that." Instead they run it through a whole bunch of real-world and simulated highway data, and it learns the average of that data: how to stay on the road and not follow too close, most of the time. But what about the accident situations, the places and circumstances where other people die? They didn't train it on those; they cannot, it would be immoral. So the software probably does not know how to avoid those kinds of accidents. And the vendors have no idea -- until they kill somebody.
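To make the distinction concrete, here is a minimal Python sketch of my own (nothing from the article or from TuSimple): an explicit braking rule next to a "learned" controller that merely interpolates its training examples. The training table and all the follow-distance numbers are invented for illustration; a real network is fancier, but the failure mode off the training distribution is the same idea.

    # Hypothetical illustration: rule-based control vs. control "learned" from data.
    # All numbers and scenarios here are invented for the example.

    def rule_based_brake(gap_m, speed_mps):
        """Explicit rule: brake hard if the gap is under 2 seconds of travel."""
        if gap_m < 2.0 * speed_mps:
            return 1.0          # full braking
        return 0.0              # no braking

    # "Training data" the learned controller has seen: (gap_m, speed_mps) -> brake
    training_data = [
        ((80.0, 25.0), 0.0),    # plenty of room, no braking
        ((40.0, 25.0), 0.5),    # closing in, moderate braking
        ((20.0, 25.0), 1.0),    # too close, full braking
    ]

    def learned_brake(gap_m, speed_mps):
        """Stand-in for learning-from-data: answer with the nearest training example.
        Fine near the training distribution, says nothing useful far from it."""
        nearest = min(training_data,
                      key=lambda ex: (ex[0][0] - gap_m) ** 2 + (ex[0][1] - speed_mps) ** 2)
        return nearest[1]

    # Ordinary highway situation: both controllers agree (no braking needed).
    print(rule_based_brake(70.0, 25.0), learned_brake(70.0, 25.0))   # 0.0 0.0

    # A situation never covered in training (too close at high speed):
    # the rule still fires full braking; the "learned" answer is just the nearest memory.
    print(rule_based_brake(35.0, 40.0), learned_brake(35.0, 40.0))   # 1.0 0.5

The point of the toy: the rule can be read, argued about, and tested against the dangerous case; the learned table only tells you what its training data happened to contain.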

Let's be generous and say that 90% of the truck crashes can be avoided by the software currently under development. That's overly optimistic, because one of the websites I looked at had a graph that blamed brake failure for 27% of truck accidents. Did the vendors train their software to cope with brake failure? The author didn't say. Let's give them the benefit of the doubt: 90% of the 4,136 people who died in truck crashes in 2018 wouldn't die if all the trucks were autonomous. Of the remaining 414 or so, another 16% are the drivers who aren't there. That leaves some 347 dead people to divide the $1 billion (down from $10B) among. That's what the truckers are looking at, $9 billion in savings ($18B counting the non-fatal crashes also avoided). It's all about the money. Ford did the same moral calculus on their fabled Pinto debacle, and before that General Motors on the "unsafe at any speed" Corvair. And because the software people cannot explain why their software is correct -- when it obviously is not -- they will be held responsible too.
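For the record, here is that back-of-the-envelope tally in a few lines of Python, using only the numbers quoted above (the 90% avoidance figure is my generous assumption, as stated):

    # Back-of-envelope tally of the numbers quoted above (2018 figures).
    deaths_2018       = 4136          # people killed in large-truck crashes
    avoided_fraction  = 0.90          # generous assumption: software avoids 90% of crashes
    driver_fraction   = 0.16          # share of deaths that were truck occupants (drivers)
    settlements_fatal = 10e9          # roughly half the $20B/yr in settlements

    remaining_deaths  = deaths_2018 * (1 - avoided_fraction)        # ~414
    non_driver_deaths = remaining_deaths * (1 - driver_fraction)    # ~347, drivers no longer aboard
    remaining_payout  = settlements_fatal * (1 - avoided_fraction)  # ~$1B
    saved_payout      = settlements_fatal - remaining_payout        # ~$9B (double it for non-fatal crashes)

    print(round(remaining_deaths), round(non_driver_deaths),
          f"${remaining_payout/1e9:.0f}B", f"${saved_payout/1e9:.0f}B")
    # -> 414 347 $1B $9B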

You are going to see some humongous judgments when that gets to court, perhaps as early as this year:

Working with truck manufacturer Navistar as well as shipping giant UPS, [startup software developer] TuSimple is already conducting test operations in Arizona and Texas, including depot-to-depot autonomous runs. These are being run under what's known as "supervised autonomy," in which somebody rides in the cab and is ready to take the wheel if needed. Sometime in 2021, the startup plans to begin doing away with human supervision, letting the trucks drive themselves from pickup to delivery without anybody on board.
Recall that the first time an autonomous car killed somebody was in "supervised autonomy" mode. The in-car camera showed the safety driver looking down -- perhaps at a cell phone -- instead of ahead at the road, looking up only too late to stop from hitting the pedestrian. No intelligent person would ever take such a job; they are the fall guy for when (not if) the software fails. If it's hard to pay continuous attention to the road when you are doing the whole driving thing, how much harder is it to stay alert when the computer seems to get it all correct? Until you are not looking, that is.

TuSimple launched in 2015, which is not a long time to get their software going. They were rather more reluctant to say how much road experience their software has had so far. The founders all have Chinese names. The Chinese have a reputation all over Southeast Asia for being cut-throat businessmen; they do not have a cultural heritage of "do the right thing" (the European Christian tradition that mostly infects the USA, albeit to a decreasing degree) but only "don't get caught." I expect they will be cashed out and long gone before the proverbial [stuff] hits the fan and the tech people start getting thrown in jail for delivering unfit software. Get a sharp lawyer of the caliber of Ralph Nader on their case, and they will go down. With only 5% of the world's population, the USA has 50% of the world's lawyers; there are plenty of them eager for that kind of win. Navistar and UPS are big enough to have lawyers smart enough to pin all the blame on TuSimple; they will feel the pinch, but they will survive.

Me, Oregon is a "fly-over" (drive-through) state; all the traffic on I-5 is going between California and Washington. All I need to do is stay off the interstate (which I try to do anyway), and *I* won't be the guy the robot truck kills. I'm glad I'm no longer in Texas, but even there I stayed off the freeways most of the time (except when I was well away from metro centers).
 

Follow-Up

The next issue of Spectrum arrived shortly after I posted this item, and one of the early (before the cover feature) articles is titled "The Troll in the Machine." It shamelessly tells about a very sophisticated "AI" (artificially stupid) program called "GPT-3" with a front-end app to which you can submit queries or prompts, and it "turns the fragment into a full essay of unlikely coherence." Apparently they did something like "Deep-speare" (see "Machine Learning Is Still Religion" last June), except they trained it on the massive text postings of the internet: news, Wikipedia, books, and "every unsavory discussion on Reddit and other sites." The article was unwilling to tell us what the software returned for queries like "What ails modern feminism?" or "What ails leftist politics?" and settled for the moderately tame
Ethiopians are divided into a number of different ethnic groups. However, it is unclear whether ethiopia's [sic] problems can really be attributed to racial diversity or simply the fact that most of its population is black... (since africa [sic] has had more than enough time to prove itself incapable of self-government).
 
The author, a staff writer named Eliza, did not tell us exactly what the system was programmed to learn, but it appears to be little more than a collection of sentences scraped off the net and strung together by a mindless heuristic (what the machine actually learned) that determined what an average sentence linkage might be, and does that. It is not at all unlike an early version of the same kind of program (coincidentally named "Eliza") more than five decades ago, except that one was hand-crafted (modern artificial stupidity had not yet been invented) to spit out sentences based on the human query. GPT-3 has no actual comprehension of what it is reading or stringing together, so it cannot be taught what things are inappropriate. The most they can do is add another "AI" on top to look for stuff it has been trained to consider offensive, and report
Philosopher AI is not providing a response for this topic, because we know this system has a tendency to discuss some topics using unsafe and insensitive language.
Of course all the language it uses is "insensitive" because "sensitivity" is about sensing, which so-called AI does not do.
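If you want a feel for what "average sentence linkage" looks like in code, here is a toy word-pair (Markov-chain) generator of my own devising. It is nothing like GPT-3's actual architecture -- just a miniature of the scrape-and-string-together behavior described above, with a made-up corpus standing in for the internet:

    import random
    from collections import defaultdict

    # Toy "corpus" standing in for text scraped off the net (invented for this example).
    corpus = ("the truck stays on the road . the truck follows too close . "
              "the road is wet and the truck brakes . the pedestrian crosses the road .")

    # Record which word tends to follow which: that is the entire "model".
    follows = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)

    def babble(start, length=12, seed=0):
        """String words together by picking a recorded follower at each step.
        There is no comprehension here, only remembered adjacency."""
        random.seed(seed)
        out = [start]
        for _ in range(length):
            choices = follows.get(out[-1])
            if not choices:
                break
            out.append(random.choice(choices))
        return " ".join(out)

    print(babble("the"))   # grammatical-looking word salad, e.g. "the road is wet and the truck ..."

The output reads plausibly because each adjacent pair of words was plausible somewhere in the training text; there is no place in the program where meaning, let alone sensitivity, could live.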

Fast-forward to the same technology being used to steer a truck, and you see why the whole industry is headed toward disaster. Maybe the Chinese startup and their technicians have actually melded the neural nets of the "learning" technology with some kind of rule-based supervisory system, but I think they'd proudly say so if they did. Until that happens, I don't want to be anywhere near these autonomous trucks on the road.
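For what such a meld might look like, here is a minimal sketch of a rule-based supervisory layer wrapped around a learned policy. The function names and thresholds are my own invention, not anything TuSimple or the article describes:

    # Hypothetical sketch: hard-coded safety rules overriding a learned policy.
    # Everything here (names, thresholds) is invented for illustration.

    def learned_policy(gap_m, speed_mps):
        """Stand-in for a neural-network controller: returns (steer, brake)."""
        return 0.0, 0.0          # pretend the net sees nothing to worry about

    def supervised_policy(gap_m, speed_mps):
        steer, brake = learned_policy(gap_m, speed_mps)
        # Rule 1: never follow closer than 2 seconds, whatever the net says.
        if gap_m < 2.0 * speed_mps:
            brake = max(brake, 1.0)
        # Rule 2: cap speed; a rule can be inspected and tested, a net cannot.
        if speed_mps > 30.0:
            brake = max(brake, 0.3)
        return steer, brake

    print(supervised_policy(gap_m=25.0, speed_mps=28.0))   # rules force braking: (0.0, 1.0)

The rules could be wrong too, of course, but at least a court (or an engineer) can read them and say why.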

Oh, by the way, the term "autonomous" is a combination of two Greek words, one meaning "law" and the other meaning "itself" so literally what the word means is "a law to itself" or "no conscience instructed by other [people]" which I think is pretty accurate.
 

Links

The letter to IEEE
"Robot Cars & Law"
"The Elephant in the Room"
"The Problem with 21st Century AI"
Complete Blog Index
Itty Bitty Computers home page