Some say that achieving self-driving cars is a moral imperative, but the case is murkier than it seems. The most notable of the prognosticators about driverless cars who simultaneously leverages the vaunted moral imperative label is Elon Musk, who invokes it as an explication of his efforts at Tesla. He is not the only one saying so; others have also used that expression of faith or belief in the future of self-driving cars.
What, though, does it actually mean to assert that the goal of achieving self-driving driverless cars is in fact a moral imperative?
Let’s unpack the claim.
Defining Self-Driving Driverless Cars
If there is going to be a moral imperative about something, we ought to first at least agree on what that something is; otherwise the discussion or argument about the moral imperative will be meandering and confounded.
As such, I’d like to take a moment and clarify the meaning of self-driving driverless cars.
There are semi-autonomous cars that require a human driver who co-shares in the driving effort, typically referred to as Level 2 and Level 3 cars. I don’t consider those kinds of cars to be truly self-driving driverless cars because they require a human driver.
For me, I’m somewhat literal and believe that the phrase “self-driving” and the word “driverless” each suggest that the car drives itself entirely and solely, done exclusively by an AI system, with no human driver involved at all. This would be considered a Level 5 car, and somewhat a Level 4 car, though the Level 4 is limited in ways that keep it from being fully autonomous in an entirely unrestricted manner (it is constrained by whatever an automaker defines as its ODDs, or Operational Design Domains).
In terms of a moral imperative, most would agree that the presumed moral imperative they are alluding to pertains to achieving fully autonomous cars, those of the Level 5 and to some degree the Level 4.
I suppose there are some that might want to extend the moral imperative to encompass semi-autonomous cars too. You might press such a case by arguing that if the automation on a semi-autonomous car gets better and better, it will presumably make the human co-driving the car a “better” driver too, due to the augmentation by the automation.
As I’ve previously laid out (see my article here), none of us yet knows whether or not the augmentation by automation is going to turn out well. It could be that the advanced automation in, say, Level 3 cars leads to human co-drivers becoming lulled into drifting from the driving task, such that when push comes to shove and an urgency arises, the human driver might not respond promptly and thus actually be a worse driver than if driving unaided by the automation.
Nobody can yet say how it will go, and we are entering into a massive experiment on our public roadways as a society with the emerging Level 3 cars.
In any case, let’s focus our attention on the moral imperative as it applies to the advent of true self-driving driverless cars that are fully autonomous and for which there is no element of a human driver involved.
Moral Imperative Basis
Having defined the focus of the moral imperative, namely fully autonomous cars, we next ought to consider what a moral imperative itself consists of.
If you want to be somewhat philosophical, you could hark back to the writings of Immanuel Kant in the 1780s that attempted to define mankind’s sense of morality.
Generally, he suggested that a moral imperative would be a proposition that a particular action, or possibly an inaction, was a necessity for mankind. It would be a moral imperative either because it was supported by reasoning or logic, or because it might be a divine aspect of our creation and our special place in the world, possibly going beyond any discernible semblance of reasoning per se and instead simply being a de facto part of humanity.
Rather than getting stuck in the weeds of whether or not the “moral imperative” for the achievement of true self-driving driverless cars is a divine aspect, I’ll tackle herein the assumption that the moral imperative arises by some kind of logical basis or argument.
Okay, so then what is the logical basis that underlies the assertion that there is a moral imperative for the fully autonomous car emergence?
The logical basis appears to consist of these key tenets:
· Elimination of human fatalities and injuries due to cars
The biggest, boldest, and most oft-mentioned of the moral argument tenets is that self-driving driverless cars will eliminate all human fatalities and injuries that today arise from the use of conventional cars.
If this were indeed a valid assertion, it would pretty much argue vehemently for the need to get us to fully autonomous cars, saving the 40,000 lives lost each year in the United States alone and sparing us the approximately 2.5 million or more injuries in the U.S. due to car accidents (note that the numbers would be much higher in terms of lives saved and injuries avoided if based on worldwide counts).
You could then say that it has to be an “imperative” and one that we would seek to achieve as soon as possible, since seemingly any delay in trying to achieve this moral tenet would mean that lives are being needlessly lost.
Unfortunately, the argument is not so neatly simplified and there are valid counterpoints to be considered.
First, you can forget about the notion of zero fatalities and zero injuries in an era of self-driving driverless cars. If a pedestrian darts into the street from between two parked cars, and the self-driving driverless car is coming down the street at the posted speed limit of, say, 40 miles per hour, the physics of the situation precludes the self-driving car from magically stopping in time. The same could be said about a bicyclist that suddenly veers into the path of a self-driving car. And so on.
In addition, there is an implied assumption that self-driving driverless cars are going to be perfect drivers that will always be infallible in their driving efforts. This overlooks the possibility of latent bugs or errors in the AI driving system. It also overlooks the possibility of AI system failures that arise and leave the system out of whack, no longer driving the car as prescribed.
And another twist is that we are going to have human-driven cars mixing with AI-driven fully autonomous cars, which I mention because there isn’t going to be a sudden day upon which all conventional cars and all semi-autonomous cars disappear from our roadways. The economics doesn’t make that feasible. As such, there will be ongoing chances of human-driven cars crashing into, or being hit by, self-driving driverless cars.
For those various reasons, you cannot claim carte blanche that self-driving driverless cars will eliminate all deaths and injuries resulting from the use of cars.
I suppose that far off in the future, in a science-fiction kind of way, we might someday have switched over entirely to self-driving driverless cars and completely done away with human driving, and maybe have some force fields or flying cars, but that’s a Utopian idea and not really a practical way to think about this topic.
Thus, I hope you might agree that instead of using the moral tenet that all fatalities and injuries will be eliminated, the more reasonable argument is that some amount of fatalities and injuries will still occur, and one hopes or guesses that the number will be fewer than the count due to today’s conventional cars.
We don’t yet know, though, how much lower the fatality and injury counts will be.
As such, this tenet is based on a speculative belief that hopefully there will be fewer fatalities and injuries. You can propose mathematical models to try to guess what the reduction might be, but overall no one knows; it could turn out to be only a small dent in the numbers, or one supposes it could actually raise the number of deaths and injuries, depending upon how safe the fully autonomous cars are and how society responds to these systems.
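To make the speculative nature of this tenet concrete, here is a minimal sketch of the kind of scenario model one might build. The baseline figure comes from the U.S. statistics cited above; the adoption shares and relative-risk values are purely illustrative assumptions, not predictions.

```python
# Toy scenario model: projected annual U.S. road fatalities under
# partial adoption of self-driving cars. All parameters except the
# baseline are illustrative assumptions, not predictions.

BASELINE_FATALITIES = 40_000  # approximate annual U.S. road deaths

def projected_fatalities(av_share, av_relative_risk):
    """av_share: fraction of driving done by autonomous cars (0..1).
    av_relative_risk: AV fatality rate relative to human drivers
    (1.0 = same as humans, 0.1 = ten times safer)."""
    human_part = (1 - av_share) * BASELINE_FATALITIES
    av_part = av_share * BASELINE_FATALITIES * av_relative_risk
    return human_part + av_part

# The outcome swings widely depending on the assumptions:
for share, risk in [(0.2, 0.5), (0.5, 0.5), (0.5, 1.5), (0.9, 0.1)]:
    print(f"share={share:.0%}, relative risk={risk}: "
          f"{projected_fatalities(share, risk):,.0f} deaths/year")
```

Note how a relative risk above 1.0 (AVs turning out less safe than humans) actually increases the projected deaths, which is exactly the open question the tenet glosses over.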
· Eliminate the tedium of driving
A somewhat lower-priority item on the list of moral imperatives for achieving self-driving driverless cars involves the suggestion that humans will be relieved of having to drive, driving being labeled as a kind of tedious task.
Yes, one can say that many human drivers find driving to be boring or tedious, and therefore a fully autonomous car by definition does away with that element (since it is the AI doing solo driving).
One counterargument is that some people actually enjoy the act of driving. Presumably, true self-driving driverless cars will deny those people the preference or joy of being able to drive (will we tell them they can drive only on closed tracks or special set-asides?).
As a society, we have yet to ascertain whether the good to society of not having human drivers is “fair” to those human drivers that want to be able to drive. It is going to be a thorny topic.
Once again, this moral tenet is not so clearly indisputable.
· Eliminate the stress of driving
I’d wager that almost everyone would agree that driving is stressful, even for those that say they love to drive.
By definition, the stress of driving would no longer exist when the AI is doing the driving solo.
Yet, there’s the potential stress of being a passenger inside a self-driving driverless car and hoping that the AI will be able to safely drive the car.
Plus, you the human passenger have little control over the AI driving, other than presumably an ability to speak voice commands to the AI system and request that it drive in some other manner, though the AI might or might not comply (if you tell the AI to go 10 miles per hour on a freeway, and there’s no apparent basis for doing so, the AI would likely not abide by your command).
Some might argue that you are trading the stress of being a driver for the stress of being a rider. You might counter that, if that’s the case, you already garner stress by getting into a taxi or a ridesharing car as a passenger, though the counterpoint is that doing so involves a human driver at the wheel rather than the AI.
Will we eventually reach a point that the passenger in a self-driving driverless car has less stress than if they were in a human driven car? Presumably, but we don’t know when or if that will occur.
· Eliminate the logistics of finding a driver
Some would say that the beauty of a self-driving driverless car is that there is always a driver ready to go, the AI system, and thus you don’t need to deal with the logistics aspects of finding a driver and getting the driver to come and drive a car.
Yes, this seems pretty airtight.
What we don’t yet know is the cost associated with having the always-on always-available AI driver.
Suppose the cost turns out to be higher than using a human driver, including the logistics costs associated with having to find and make use of a human driver.
As a society, are we willing to incur a higher cost for being able to go in a self-driving driverless car, and if so, how much will that be?
I’m not saying the cost will be higher, and it might ultimately be less than the cost of using a human driver, but we just don’t yet know which way it will go.
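One way to frame this open cost question is a simple break-even comparison. Every dollar figure below is a hypothetical placeholder, since real per-mile costs for an always-available AI driver are not yet known; the point of the sketch is only that the comparison flips depending on the assumed inputs.

```python
# Hypothetical break-even comparison of per-mile ride cost with a
# human driver versus an always-available AI driver. All dollar
# figures are invented placeholders, not real market data.

def cost_per_mile(fixed_cost_per_year, variable_cost_per_mile,
                  miles_per_year):
    """Amortize annual fixed costs over the miles driven per year,
    then add the per-mile variable cost."""
    return fixed_cost_per_year / miles_per_year + variable_cost_per_mile

# Assumed human-driven ride: driver wages dominate the variable cost.
human = cost_per_mile(fixed_cost_per_year=5_000,
                      variable_cost_per_mile=1.50,
                      miles_per_year=50_000)

# Assumed AI-driven ride: pricey sensor/compute hardware up front,
# but no driver wage folded into the per-mile cost.
ai = cost_per_mile(fixed_cost_per_year=60_000,
                   variable_cost_per_mile=0.30,
                   miles_per_year=50_000)

print(f"human: ${human:.2f}/mile, AI: ${ai:.2f}/mile")
```

Which side comes out cheaper hinges entirely on the assumed hardware cost and vehicle utilization, which is exactly why nobody can yet say which way the cost comparison will go.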
· Provides expanded access to cars for mobility
Another moral imperative voiced is that the advent of self-driving driverless cars will provide expanded access to cars for those that might be mobility-marginalized or otherwise not able to readily utilize cars today.
Yes, that seems to make sense in that if self-driving driverless cars are roaming around 24×7 and readily available, it would reduce the friction of seeking to be a passenger in a car and leveraging that mobility.
Per the earlier point about costs, we don’t know what the cost of that added mobility is going to be.
Suppose the cost is so high that it turns out the promised mobility expansion is not viable and therefore self-driving driverless cars are only economically affordable by some. Indeed, there are those that are worried that fully autonomous cars will be used nearly exclusively by the elite and not be available for the rest of us.
Though I personally doubt that kind of scenario will emerge, and I do believe that the advent of autonomous cars is going to increase mobility, the point is that we don’t know what will happen, and thus one cannot argue that expanded access will unequivocally occur.
This discussion has tried to show that the “moral imperative” is not quite so obvious nor so indisputable.
Am I then arguing that we should not be pursuing self-driving driverless cars?
I am trying to clarify that when someone wraps themselves in the “moral imperative” cloak, you have to be careful not to become blinded to the reality that we don’t yet know how the advent of self-driving driverless cars will turn out.
I say this because there is an ongoing debate about whether or not we should be allowing the emerging autonomous cars, consisting right now of (barely) Level 4 and not yet anywhere near to Level 5, onto our public roadways.
The “moral imperative” clamor can seemingly hide the ugly truth that we don’t yet know how safe the existing tryouts are, nor how long they will need to run, nor whether these tryouts are actually necessary and sufficient to arrive at true Level 4 and true Level 5.
As I’ve analyzed in my research (see this article here), it could be that we might incur deaths and injuries now with the emerging Level 4’s that would be considered a “trade-off” against the future deaths and injuries if we stick with conventional cars (using Linear Non-Threshold or LNT thinking).
I don’t think that society is currently contemplating this as a kind of trade-off; instead, it tends to assume there won’t be any deaths or injuries from the existing public roadway tryouts. Instances such as the Uber pedestrian death in the Phoenix area last year and the Tesla Autopilot-related deaths and injuries that some have cited are indicative of how society doesn’t seem to be contemplating a deaths-and-injuries trade-off methodology.
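The LNT-style trade-off reasoning mentioned above can be sketched numerically. Every figure below is invented solely to illustrate the shape of the argument; none is an estimate of actual risk during tryouts or actual lives saved later.

```python
# Illustrative sketch of the deaths trade-off argument: accept some
# fatalities during public-roadway tryouts now in exchange for
# reduced fatalities once true self-driving cars arrive. All the
# numbers fed in below are invented assumptions for illustration.

def net_lives_effect(tryout_years, tryout_deaths_per_year,
                     post_years, annual_lives_saved_later):
    """Lives saved after deployment minus lives lost during tryouts.
    A positive result means the trade-off 'pays off' under the
    stated assumptions; a negative result means it does not."""
    lost_now = tryout_years * tryout_deaths_per_year
    saved_later = post_years * annual_lives_saved_later
    return saved_later - lost_now

# A decade of tryouts costing 100 lives/year, followed by two
# decades saving 5,000 lives/year: the trade-off looks favorable.
print(net_lives_effect(10, 100, 20, 5_000))

# Same tryouts, but the technology only ever makes a small dent
# (40 lives/year saved): the trade-off comes out negative.
print(net_lives_effect(10, 100, 20, 40))
```

The sketch makes the uncomfortable part of the argument explicit: whether the tryouts are justified depends on payoff assumptions that nobody can currently verify, which is precisely why society ought to be debating the trade-off rather than assuming it away.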
In any case, the next time that someone plays the “moral imperative” ace card, I trust that you’ll be mindful that there is no free lunch, and that getting to self-driving driverless cars is a more ambiguous “imperative” and a less clear-cut “above all else” moral doctrine (one presumed to eclipse all other concerns or considerations) than it might be made out to be.
We’ve seen throughout history the sometimes-untoward aspects that can arise as a result of so-called noble-cause crusades.
I just want to add a dose of reality to these pursuits. Meanwhile, yes, I am indeed a proponent of achieving self-driving driverless autonomous cars, and I’m working stridently, daily, toward that desired goal.