RoboCars

The letter to IEEE Spectrum I didn't write, or maybe I wrote it and they didn't publish it...

Reference: http://spectrum.ieee.org/static/special-report-trusting-robots

After seeing the movie RoboCop, I was disappointed that you gave no real attention to your title question, "Can we trust robots?" It's a real question, and the honest answer runs against the pressure so obvious in Nathan Greenblatt's opinion column five months earlier. He is disclosed there to be a lawyer for a firm that brags -- not in Spectrum, although you should have said so, but elsewhere on the internet -- about serving the legal interests of the large corporations looking to put robotic cars on the road. In the American adversarial legal system, that means they care about the interests of the victims killed by robocars only to the extent that they can minimize the financial risk to their clients in court. His main point was that robocars should not be deemed more liable than persons in similar circumstances, but he conveniently neglected to observe that people who drive their cars into pedestrians go to jail for vehicular manslaughter. What does it mean for a car to go to jail? Nothing at all; just buy a new car. The motivation is very different.

In RoboCop, the CEO of the corporation promoting robot killing machines was villainous. It happens. Ford made a financial calculation to put unsafe Pintos on the road. Before them, General Motors made the same calculation with the Corvair.

The answer to your title question is, and necessarily must be: no more than I trust the people who programmed the robot. The Killer Robot article came close to saying that: "Do we trust ourselves enough to trust robots?" Maybe that's a good way to answer it for military robots, because the people getting killed are "them," not "us." With cars it's more like asking the Jihadis (or the mothers and children of Iran, as in RoboCop) whether they trust the robots. Of course they don't: they didn't design the robots; the Enemy across the sea did. The CEO of GM or Ford or Google may not be "an enemy across the sea," but he's not my next-door neighbor, either. People don't get to be CEOs of huge corporations by caring more about insignificant people than about making money.

Robot cars probably can be made safer than manually driven vehicles -- certainly safer than those here in Texas -- but that is exceedingly difficult (as you pointed out) and therefore expensive. Did you ever read the EULA you implicitly accepted with your smartphone or desktop computer software? It promises nothing at all! If the software crashes and people die, that's your problem, not theirs. So why would anyone bother putting effort into making it safer? There's no money in it, and no personal Get-Out-of-Jail-Free card to earn (they effectively have one already). When we start to hold the programmers and their managers personally accountable for their defective products, then they will be motivated to put in that effort -- even if it drives up the price of the car to the consumer (and it should).
 

Links

"Robot Cars & Law"
"Robot Trucks"
"The Elephant in the Room"
"The Problem with 21st Century AI"
Complete Blog Index
Itty Bitty Computers home page