The ethical car is a deathtrap

Non-nerds are talking about machine ethics because Google’s driverless cars show up in a New Yorker essay:

Within two or three decades the difference between automated driving and human driving will be so great you may not be legally allowed to drive your own car, and even if you are allowed, it would be immoral of you to drive, because the risk of you hurting yourself or another person will be far greater than if you allowed a machine to do the work.

That moment will be significant not just because it will signal the end of one more human niche, but because it will signal the beginning of another: the era in which it will no longer be optional for machines to have ethical systems. Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.

Heady stuff, right?! We must not be allowed to drive our cars because machines can drive them better. We’ll get standards for car ethics from a government agency.

I’m reading Bruce Schneier’s excellent “Liars and Outliers”, so let’s look at this from a security perspective. If you mandate ethics for machines, I understand that as a single set of rules that every car has to obey. We would have to decide, as a society, whether it is better to save ourselves or kill the forty kids, and then come up with an algorithm a machine could run that we could all agree on.
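
Just to make that concrete, here’s a toy version of what a single mandated rule might look like. The rule and its inputs are invented for illustration; the point is only that it would be one fixed, deterministic function that every car runs:

    # A hypothetical mandated ethics rule: one fixed function every car
    # must run. The trade-off itself is whatever society agrees on;
    # this particular rule is made up for the sake of the example.
    def mandated_decision(occupants_at_risk: int, bystanders_at_risk: int) -> str:
        return "swerve" if bystanders_at_risk > occupants_at_risk else "keep going"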

The trouble is, this isn’t how ethics works in our brains. Our minds aren’t algorithms; they’re a quorum. We make decisions by having different parts of the brain shout their opinions. Self-preservation shouts. Pity shouts. Whichever shouts loudest wins, and that’s how we decide to swerve off the bridge. Or that’s how we end up living with the decision afterwards.
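
A toy sketch of that quorum, with made-up modules and made-up weights:

    import random

    # A toy "quorum" model: independent modules shout with varying
    # strength and the loudest voice in the moment wins. The module
    # names and numbers are invented; the point is that the output
    # isn't replicable from one run to the next.
    def quorum_decision() -> str:
        shouts = {
            "self_preservation": random.gauss(0.7, 0.3),  # argues "keep going"
            "pity": random.gauss(0.6, 0.4),               # argues "swerve"
        }
        loudest = max(shouts, key=shouts.get)
        return "keep going" if loudest == "self_preservation" else "swerve"

Run it twice and you may get two different answers, which is fine for a brain and useless in a courtroom.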

So this isn’t how you’d get a machine to make a decision. You need a replicable algorithm, something you can hold up in court to limit liability. And the problem with that is you now have a system, and systems are hackable. Hackable systems get hacked.

What happens when every choice your car makes is predictable? People don’t respond to situations in any regular way; we’re wildly variable. Your ethical car, by design, will respond to situations predictably, and predictable responses are easy to manipulate. So expect those situations to be hacked. Expect people to steer your car’s ethical responses to their own ends, and you won’t get a say, because you aren’t as trustworthy as the car.
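
To put the two sketches together: if the mandated rule is deterministic and public, an attacker doesn’t need to touch your car at all. They only need to stage something the sensors will misread. Continuing the toy mandated_decision() rule from above:

    # Because the rule is fixed and publicly known, whoever controls what
    # the perception system reports controls the outcome, e.g. a cardboard
    # cutout that registers as a bus full of children.
    spoofed_bystanders = 40   # what the sensors are tricked into seeing
    print(mandated_decision(occupants_at_risk=1,
                            bystanders_at_risk=spoofed_bystanders))
    # -> "swerve", every single time, regardless of what is actually there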

But this is kind of a smokescreen, isn’t it? The article isn’t really about driverless cars having to make decisions. A driverless car can’t weigh a bus full of kids against its owner’s life. So what is this article really about?