Advances in Artificial Intelligence (AI) will continue to spur widespread adoption of robots into our everyday lives. Robots that once seemed so expensive that they could only be afforded for heavy-duty manufacturing purposes have gradually come down in cost and equally been reduced in size. You can consider that Roomba vacuum cleaner in your home to be a type of robot, though we still do not have the ever-promised home butler robot that was supposed to take care of our daily routine chores.
Perhaps one of the most well-known facets of robots is the legendary set of three rules proffered by writer Isaac Asimov. The Three Laws first appeared in his 1942 science fiction story "Runaround" and have seemingly been unstoppable in terms of ongoing interest and embrace.
Here are the three rules that he cleverly devised:
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
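Asimov's ordering can be read as a strict priority hierarchy: each rule yields to the ones above it. As a purely illustrative sketch (the `Action` fields and the selection logic are hypothetical, not drawn from any real robotics system), the prioritization might look like this:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action, scored on Asimov's three concerns (hypothetical fields)."""
    harms_human: bool        # Rule 1: would this action injure a human?
    obeys_human_order: bool  # Rule 2: does it follow the order a human gave?
    preserves_robot: bool    # Rule 3: does it keep the robot intact?

def choose(actions):
    """Pick an action by applying the three rules in strict priority order."""
    # Rule 1 is absolute: discard anything that would harm a human.
    safe = [a for a in actions if not a.harms_human]
    # Rule 2: among safe actions, prefer those that obey the human's order.
    obedient = [a for a in safe if a.obeys_human_order] or safe
    # Rule 3: self-preservation matters only after rules 1 and 2 are satisfied.
    surviving = [a for a in obedient if a.preserves_robot] or obedient
    return surviving[0] if surviving else None
```

Note that the lower rules act only as tie-breakers: disobedience or self-sacrifice is tolerated whenever a higher rule forces it.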
Though he referred to these as laws, please know that they ostensibly are not actual laws per se.
One viewpoint about laws is the notion of laws of nature. Those three rules are not laws of nature, as there is nothing about them that is somehow determinable as being inexorably imbued via nature or required by the course of nature. Those rules are manmade and discretionary to humanity. They can be observed and obeyed, or they can be completely disregarded, whichever path we choose.
Another perspective on laws is the aspect of human-derived regulations and stipulations. Our revered Constitution is an instance of laws, something that we as a society have opted to try and live within. Seen that way, you could argue that Asimov was proposing laws that we ought to put onto our legal books as sensible to have in an era of robots that might be wandering amongst us.
His three rules are not yet embodied directly into our laws, and we'll have to see whether it eventually makes sense to codify them explicitly. You might find of interest that there is a great deal of angst and attention right now on how to best govern the latest AI systems, and we might find ourselves leaning heavily into Asimov's three rules (for my discussions about AI Ethics and governance, see the link here).
When you read Asimov's remarks about robots, you might want to substitute the word "robot" with the overarching moniker of AI. I say this because you are likely to otherwise narrowly interpret his three rules as though they apply only to a robot that happens to look like us, conventionally having legs, arms, a head, a body, and so on.
Not all robots are necessarily so arranged.
Some of the latest robots look like animals. Perhaps you’ve seen the popular online videos of robots that are four-legged and appear to be a dog or a similar kind of creature. There are even robots that resemble insects. They look kind of creepy but nonetheless are important as a means to figure out how we might utilize robotics in all manner of possibilities.
A robot doesn’t have to be biologically inspired. A robotic vacuum cleaner does not particularly look like any customary animal or insect. You can expect that we will have all sorts of robots that look quite unusual and do not appear to be based solely on any living organism.
Anyway, amongst the variety of robots that we are going to see emerging, the Asimov three rules are quite helpful, regardless of what kind of robot it is and what it might look like. I know it seems perhaps a bit of a stretch of the imagination, but the lowly robotic vacuum can be a candidate for abiding by the three rules. Yes, presumably, your swirling robotic vacuum in your home should not try to harm you and ought to do what it can to avoid doing so.
Some robots are readily in front of our eyes and yet we do not think of them as robots.
One such example is the advent of AI-based true self-driving cars.
A car that is being driven by an AI system can be said to be a type of robot. The reason you might not think of a self-driving car as a robot is that it does not have that walking-talking robot sitting in the driver’s seat. Instead, the computer system hidden in the underbody or trunk of the car is doing the driving. This seems to escape our attention and thus the vehicle doesn’t readily appear to be a kind of robot, though indeed it is.
In case you are wondering, there are encouraging efforts underway to create walking-talking robots that would be able to drive a car (see my coverage at this link here).
Imagine how that would shake up our world.
Right now, the crafting of a self-driving car involves modifying the car to be self-driving. If we had robots that could walk around, sit down in a car, and drive the vehicle, this would mean that all existing cars could essentially be considered self-driving cars (meaning that they could be driven by such robots, rather than having a human drive the car). Instead of gradually junking conventional cars for the arrival of self-driving cars, there would be no need to devise a wholly-contained self-driving car and we would rely upon those meandering robots to be our drivers.
At this time, the fastest or soonest path to having self-driving cars is the build-it-into-the-vehicle approach. Some believe there is a bitter irony in this approach. They contend that these emergent self-driving cars are going to inevitably be usurped by those walking-talking robots. In that sense, the self-driving car of today will become outdated and outmoded, giving way to once again having conventional driving controls so that the vehicle can be driven either by a human or by a driving robot.
As an added twist, some hope that by then we will be so far along in adopting self-driving cars that we will never bother using independent robots to drive our cars.
Here’s the logic.
If a robot driver is sitting at the wheel, this suggests that the conventional driving controls are still going to be available inside a car. This also implies that humans will still be able to drive a car, whenever they wish to do so. But the belief is that the AI driving systems, whether built-in or as part of a walking-talking robot, will be better drivers and reduce the incidences of drunk driving and other adverse driving behaviors. In short, a true self-driving car will not have any driving controls, precluding a walking-talking robot from driving (presumably) and precluding (thankfully, some assert) a human from driving.
This leads to the thinking that maybe the world will have completely switched to true self-driving cars and though a walking-talking driving robot might become feasible, things will be so far along that no one will turn back the clock and reintroduce conventional cars.
That seems somewhat like wishful (or possibly wistful) thinking.
One way or another, the central goal seems to be to take the human driver out of the equation.
This brings up another crucial and sometimes overlooked point about robots, namely that they can be put into positions of effort that entail life or death activities. Your home robotic vacuum cleaner is unlikely to be a life-or-death decision-maker. Meanwhile, a self-driving car, one that has the AI driving system built-in, or even a robot driver, would undeniably be in a life-or-death posture of ascertaining the fate of humans.
Each day that you get behind the wheel of a car, you are a life-or-death decision-maker, whether you realize it or recognize that this is so. With a bad turn of the steering wheel, you can get killed. Plus, you can end up killing others, whether by veering into another car or possibly hitting pedestrians. Life-or-death stakes entirely surround the act of driving.
That might seem rather doom-and-gloom, but it is a harsh reality that needs to be emphasized.
This is also why the moment when you help your teenager learn to drive is more than just some simple bonding time. The instant that you put your beloved newbie driver at the driving controls, the specter of life-or-death suddenly becomes quite pronounced. The teenage driver usually also senses this heavy duty, and you can see that some are reluctant to take on such a hefty burden.
Since it is life-or-death on the line, it is conceivable that we should consider applying Asimov’s three rules in the use case of self-driving cars (it would seem inconceivable to not do so, as suggested by Vizzini in The Princess Bride).
Here is today’s intriguing question: Do the Asimov three laws of robots apply to AI-based true self-driving cars, and if so, what should be done about it?
Let’s unpack the matter and see.
Understanding The Levels Of Self-Driving Cars
As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own, without any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
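The level numbers come from the SAE driving-automation taxonomy, which can be summarized compactly. The sketch below is an illustrative encoding only; the class and function names are my own, not an official SAE artifact, and the one-line descriptions are abbreviations of the standard:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels (descriptions heavily abbreviated)."""
    NO_AUTOMATION = 0       # Human does all the driving
    DRIVER_ASSISTANCE = 1   # A single assist feature, e.g. adaptive cruise control
    PARTIAL = 2             # Multiple ADAS features; human must supervise constantly
    CONDITIONAL = 3         # System drives in some conditions; human must take over on request
    HIGH = 4                # System drives fully within a bounded operational domain
    FULL = 5                # System drives anywhere a human could; no controls needed

def is_true_self_driving(level: SAELevel) -> bool:
    # Per the usage in this article: Levels 4 and 5 are "true" self-driving,
    # while Levels 2 and 3 co-share the driving task with a human.
    return level >= SAELevel.HIGH
```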
There is not yet a true self-driving car at Level 5; we don't yet know whether this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).
For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that's been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.
Self-Driving Cars And Asimov’s Laws
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
Let’s briefly take a look at each of Asimov’s three rules and see how they might apply to true self-driving cars.
First, there is the rule that a robot, or in this case an AI driving system, shall not injure a human, whether by overt action or by inaction.
That’s a tall order when sitting at the wheel of a car.
A self-driving car is driving down a street and keenly sensing the surroundings. Unbeknownst to the AI driving system, a small child is standing between two parked cars, hidden from view and hidden from the sensory range and depth of the self-driving car. The AI is driving at the posted speed limit. All of a sudden, the child steps out into the street.
Some people assume that a self-driving car will never run into anyone since the AI has those state-of-the-art sensory capabilities and won't be a drunk driver. Unfortunately, in the kind of scenario that I've just posited, the self-driving car is going to ram into that child. I say this because the laws of physics are paramount over any dreamy notions of what an AI driving system can do.
If the child has appeared seemingly out of nowhere and is now, say, 15 feet from the moving car, and the self-driving car is going at 30 miles per hour, the stopping distance is around 50 to 75 feet, which means that the child could readily get struck.
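The arithmetic behind that stopping-distance figure is simple kinematics. The values below are illustrative assumptions, not measurements from any real vehicle: a half-second of sensing-plus-actuation delay before braking begins, and hard braking at roughly 0.7 g.

```python
# Back-of-the-envelope stopping distance at 30 mph, computed in feet.
speed_mph = 30
speed_fps = speed_mph * 5280 / 3600            # 30 mph = 44 ft/s

reaction_time_s = 0.5                          # assumed sensing + actuation latency
deceleration_fps2 = 0.7 * 32.2                 # assumed 0.7 g of braking, in ft/s^2

reaction_distance = speed_fps * reaction_time_s              # ~22 ft traveled before braking
braking_distance = speed_fps ** 2 / (2 * deceleration_fps2)  # v^2 / (2a), ~43 ft
stopping_distance = reaction_distance + braking_distance     # ~65 ft total

child_distance = 15  # the child steps out a mere 15 ft ahead
print(f"Stopping distance: {stopping_distance:.0f} ft; child at {child_distance} ft")
```

Under these assumptions the car needs roughly 65 feet of road and has only 15, so the impact is unavoidable regardless of how quickly the AI reacts.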
No two ways about that.
And this would mean that the AI driving system has just violated Asimov’s first rule.
The AI has injured a human being. Keep in mind that I’m stipulating that the AI would indeed invoke the brakes of the self-driving car and do whatever it could to avoid the ramming of the child. Nonetheless, there is insufficient time and distance for the AI to avoid the collision.
Now that we've shown the impossibility of always strictly abiding by Asimov's first rule, you could at least argue that the AI driving system attempted to obey it. By having used the brakes, it would seem that the AI driving system tried to keep from hitting the child, plus the impact might be somewhat less severe if the vehicle was nearly stopped at the time of impact.
What about the other part of the first rule that states there should be no inaction that could lead to the harm of a human?
One supposes that if the self-driving car did not try to stop, this kind of inaction might fall within that realm, namely once again being unsuccessful at observing the rule. We can add a twist to this. Suppose the AI driving system was able to swerve the car, doing so sufficiently to avoid striking the child, but meanwhile, the self-driving car goes smack dab into a redwood tree. There is a passenger inside the self-driving car and this person gets whiplash due to the crashing action.
Okay, the child on the street was saved, but the passenger inside the self-driving car is now injured. You can ponder whether the action to save the child was worthy in comparison to the result of injuring the passenger. Also, you can contemplate whether the AI failed to take proper action to avoid the injury to the passenger. This kind of ethical dilemma is often depicted via the infamous Trolley Problem, an aspect that I have vehemently argued is very applicable to self-driving cars and deserves much more rapt attention as the advent of self-driving cars continues (see my analysis at this link here).
All told, we can seemingly agree that the first rule of Asimov’s triad is a helpful aspirational goal for an AI-based true self-driving car, though the fulfillment of that aspiration is going to be pretty tough to achieve and will forever likely remain a conundrum for society to wrestle with.
The second of Asimov’s laws is that the robot or in this case the AI driving system is supposed to obey the orders given to it by a human, excluding situations whereby such a human-issued command conflicts with the first rule (i.e., don’t harm humans).
This seems straightforward and altogether agreeable.
Yet, even this rule has its problems.
I covered the story of a man who used a car to run over a shooter who was randomly shooting and killing people on a bridge (see my discussion posted June 1, 2020). According to authorities, the driver was heroic by having stopped that shooter.
If the Asimov second law were programmed into the AI driving system of a self-driving car, and a passenger ordered the AI to run over a shooter, presumably the AI would refuse to do so, since the instruction would harm a human. But we know that this was a case that seems to override the hard-and-fast, always-to-be-obeyed convention that you should not use your car to ram into people.
You might be complaining that this is a rare exception, which I totally concur is absolutely an oddity.
Furthermore, if we were to open the door to allowing passengers in self-driving cars to tell the AI to run over someone, the resulting chaos and mayhem would be untenable. In short, there is certainly a basis for arguing that the second rule ought to be enforced, even if it means that on those rare occasions it would lead to harm due to inaction.
The thing is, you don't have to reach far beyond the everyday world to find situations in which it would be nonsensical for an AI driving system to unquestioningly obey a passenger. Suppose a rider in a self-driving car tells the AI to drive up onto the sidewalk. There are no pedestrians on the sidewalk, thus no one will get hurt.
I ask you, should the AI driving system obey this human-uttered command?
No, the AI should not, and we are ultimately going to have to cope with what types of utterances from human passengers the AI driving systems will consider, and which commands will be rejected (see my analysis of the barking of commands problem posted on December 13, 2020).
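One way to frame this sorting of commands is a whitelist-plus-safety-check filter. This is a toy illustration only; the command strings and categories are hypothetical stand-ins for what would in reality be a vastly harder natural-language and planning problem:

```python
# Toy filter for passenger commands to an AI driving system.
# The commands and categories below are illustrative, not from any real vehicle stack.

ALLOWED_COMMANDS = {"change destination", "pull over", "adjust cabin temperature"}

UNSAFE_COMMANDS = {"drive on the sidewalk", "run over that person", "exceed the speed limit"}

def handle_command(command: str) -> str:
    command = command.lower().strip()
    if command in UNSAFE_COMMANDS:
        # First-rule territory: refuse anything that endangers humans or breaks
        # traffic law, even if no one happens to be on the sidewalk right now.
        return "refused"
    if command in ALLOWED_COMMANDS:
        return "accepted"
    # Anything unrecognized is deferred rather than blindly obeyed.
    return "needs clarification"
```

The key design choice is the default: an unrecognized command is neither obeyed nor rejected outright, but flagged for clarification, so that obedience (Asimov's second rule) never silently outranks safety (his first).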
The third law that Asimov has postulated is that the robot or in this case the AI driving system must protect its own existence, doing so as long as the first and second rules are not countermanded.
Should a self-driving car attempt to preserve its existence?
In a prior column, I mentioned that some believe self-driving cars will have about a four-year existence, ultimately succumbing to wear-and-tear after just four years of driving (see my posting on September 3, 2019). This seems surprising since we expect cars to last much longer, but the difference with self-driving cars is that they will presumably be operating nearly 24x7 and accumulate far more miles than a conventional car (a conventional car sits unused about 95% to 99% of the time).
Okay, so assume that a self-driving car is nearing its useful end. The vehicle is scheduled to drive itself to the junk heap for recycling.
Is it acceptable that the AI driving system might decide to avoid going to the recycling center and thus try to preserve its existence?
I suppose if a human told it to go there, the second rule wins out and the self-driving car has to obey. The AI might get tricky and find some sneaky means to abide by the first and second rules and nonetheless find a bona fide basis to seek its continued existence (I leave this as a mindful exercise for you to mull over).
Based on the aforementioned logic, it would seem that Asimov’s three rules have to be taken with a grain of salt.
The AI driving systems can generally be devised with those rules as part of the overarching architecture, but as might be abundantly evident from this discussion, the rules are aspirations and not ironclad, irrefutable, and immutable laws.
Perhaps the most important point of this mental workout about Asimov's rules is to shed light on something to which few are giving due attention. In the case of AI-based true self-driving cars, there is a lot more to devising and deploying these autonomous vehicles than merely the mechanical facets of driving a car.
Driving a car is a huge ethical dilemma that humans oftentimes take for granted. We need to sort out the reality of how AI driving systems are going to render life-or-death decisions. This must be done before we start flooding our streets and byways with self-driving cars.
Asimov said it best in a lament that still applies today, some eighty years after he implored us with this: "The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom."
True words that are greatly worth revisiting.