Self-Driving Go-to-Market Strategy and Morality

#1

I wanted to add the topic of morality to this discussion.

It seems like there have been many people who criticize the Tesla strategy on a moral basis, arguing that the release of ADAS software that might kill someone if they don’t pay close enough attention is unconscionable.

They completely discount the moral issues related to letting people kill themselves and others in cars.

They also seem to heavily discount the value of saving people from injury relative to the cost of killing someone. So even if you could save 10 people by killing one, no traditional automaker CEO would choose that route.

Lastly, it seems that this could become an important “moat” that becomes increasingly hard to cross after the first company succeeds. In other words, you can justify killing a few people if it will save a lot of people and is the only way to do that quickly. It’s much harder to justify killing a few people once there exists technology that would prevent that, and you’re just trying to improve your business position.

My thoughts in more detail below.

#2

I studied philosophy in university, so I have a good understanding of moral philosophy and ethical theory.

If you subscribe to utilitarianism, “the view that the morally right action is the action that produces the most good”, then you probably support the idea that a self-driving car that is 3x safer than the average human driver should be deployed, since it will save lives. Not everyone subscribes to utilitarianism; some subscribe to other ethical theories that say you should strictly follow certain principles even if it does more harm than not following them. For example, you should not kill an innocent security guard protecting a hydro dam in order to break in and prevent a flood that would kill many people.

Things get more complicated and murky when you get into rule utilitarianism. There are various formulations of rule utilitarianism, but the one I like the most is: by strictly following certain ethical rules (such as “don’t kill people”, “don’t steal”, “don’t pollute”, or “honour deals”), we end up producing the most good overall, even if in some particular instances it would produce more good not to follow the rule. One reason for this is that people can easily rationalize what they’re tempted to do, or hand wave away ethical objections. “For the greater good” is an ominous phrase (to my ears) because it’s associated with atrocities committed with bad justifications. Hard rules are necessary because humans too easily justify breaking them.

However, there may be extreme circumstances in which it is better to break ethical rules (e.g. injuring or killing an innocent security guard in order to stop the mad scientist inside the facility from unleashing his virus on the world). We just have to be really careful about when we make exceptions, and why.

A lot of the common objections to utilitarianism can be addressed with rule utilitarianism, or by looking at long-term, big picture consequences. We might save lives in the immediate term if we killed someone and gave their organs to dying people, but as a policy doing this wouldn’t produce the most good (e.g. it would probably create fear and distrust of the medical system), and if it’s not a policy then it has to be illegal — the law can’t arbitrarily make an exception, otherwise the preventative effects of laws would be diminished.

Similar to rule utilitarianism, in which following rules with only rare exceptions is seen as producing the most good, virtue ethics — which is ordinarily seen as an opposing ethical theory to utilitarianism — can be seen by someone who adheres to utilitarianism as the way to produce the most good. If striving to embody virtues such as empathy, courage, honesty, and so on — as virtue ethics prescribes — produces the most good, then people who adhere to utilitarianism should practice virtue ethics. Just as rule utilitarianism is a logical extension of utilitarianism, virtue ethics can be seen as a logical extension of utilitarianism. This is an unconventional view, but I think it makes perfect sense. This is my own idea, although someone else might have thought of it first.

Related to the debate between utilitarianism and other ethical theories, another question in moral philosophy is whether there is a moral difference between doing harm and allowing harm. If you merely allow someone to get killed by a human driver because you didn’t deploy a self-driving car, is that as bad as deploying a self-driving car that makes an error and kills someone? On the face of it, utilitarianism — which tells us to do whatever produces the most good — doesn’t draw any moral distinction between doing harm and allowing harm, since the difference between doing and allowing doesn’t affect the amount of harm or good produced.

As with moral rules in general, however, there may be good reasons for us to treat doing harm and allowing harm differently. For instance, maybe it makes us more cautious and thoughtful about our actions. We might be less likely to overestimate how much long-term or big picture harm we can prevent by doing immediate harm to someone. In other words, we might make the “for the greater good” mistake less often.

With self-driving cars, we will be able to statistically measure the rate and severity of crashes, so we can avoid the peril of overestimating our positive impact: we can measure the long-term, big-picture harm we prevent against the harm caused when self-driving cars make errors.
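Purely as an illustration of the kind of statistical comparison this enables, here is a minimal sketch in Python. The fatality rate and annual mileage below are assumptions chosen for illustration, not real measurements, and the 3x safety factor is the hypothetical figure used earlier in the thread.

```python
# Hypothetical comparison of expected fatalities: human drivers vs. a
# self-driving system assumed to be 3x safer. All numbers are illustrative
# assumptions, not real data.

human_fatality_rate = 1.3e-8   # assumed fatalities per mile driven by humans
safety_multiplier = 3          # assumed: self-driving is 3x safer
sdc_fatality_rate = human_fatality_rate / safety_multiplier

miles_per_year = 3e12          # assumed total miles driven per year

human_deaths = human_fatality_rate * miles_per_year
sdc_deaths = sdc_fatality_rate * miles_per_year

print(f"Expected deaths with human drivers:     {human_deaths:,.0f}")
print(f"Expected deaths with self-driving cars: {sdc_deaths:,.0f}")
print(f"Expected lives saved per year:          {human_deaths - sdc_deaths:,.0f}")
```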

Also, with self-driving cars, no human is directly doing harm. You aren’t grabbing the steering wheel and deciding to ram into a vehicle. How, exactly, is it different for Tesla to deploy robotaxis that will be driven by software and kill people than it is for Toyota to sell cars that will be driven by humans and kill people? How are Toyota’s hands clean but Tesla’s dirty? If you know that the rate of human error is non-zero, then in a sense by selling cars you are killing people. Similarly, if you know that the rate of software error is non-zero, then in a sense by deploying robotaxis you are killing people. But only in the same sense in both cases. In both cases, you know the consequences of your actions.

If it is acceptable for Toyota et al. to sell cars that they know will kill people, it should be acceptable for Tesla et al. to deploy robotaxis that they know will kill people.

I find a helpful way to reframe the conversation is to ask people: would you rather have a 20% chance of being killed by a self-driving car or a 60% chance of being killed by a human? Does it really matter to you so much whether it’s a machine or a human, or does it just matter whether you live or die? I don’t think people actually care more about deaths caused by machine error than by human error. They are just worried machines will be more dangerous than humans, and that proper safety precautions aren’t being taken.

#3

“Would you rather have a 20% chance of being killed by a self-driving car or a 60% chance of being killed by a human?”

Yes, I agree: framed this way, almost all “consumers” would choose the lower chance. However, I would contend that almost all corporations would choose “not by a self-driving car of ours” even if it were 99% to 1%.

  • Rahul