When Law & Ethics Collide
Guest Post By Stephen Wu, Shareholder, Silicon Valley Law Group - San Jose, CA
From time to time, busy practicing lawyers face ethical issues of the kind taught in professional responsibility law school classes and continuing legal education courses. However, they do not often discuss the kinds of general ethical issues that academics and professional moral philosophers take up. Recent developments in artificial intelligence and robotics, and autonomous driving in particular, have rekindled interest in ethics throughout the world, and especially in the United States.
Autonomous vehicles (AVs) have captured the imagination of writers in popular media. Living close to the garage where Waymo (the new Google affiliate) houses its AVs in Mountain View, California, I feel like I am living in the AV capital of the world, as I frequently see AVs navigating the streets around my home in Los Altos. Nearby, Tesla has deployed a driver assistance system in its cars and intends to deploy fully automated vehicles in two years. Companies are also working on freight truck automation, and their work eventually will result in fully automated trucks.
AV manufacturers will rely on sophisticated algorithms to control AVs. Software implementing such algorithms depends on inputs from sensors, such as light detection and ranging (LiDAR), radar, cameras, and GPS. The software analyzes the AV’s location, position relative to the road, and upcoming obstacles. These algorithms then determine the best path to follow and cause the AV’s throttle, brake, and steering to follow the planned path. A group of moral philosophers has raised ethical questions about these algorithms. In particular, this group asks how AVs should behave when accidents are about to occur. What is the moral way to design AV algorithms?
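To make the sense-plan-act pipeline concrete, the following is a minimal, purely illustrative sketch in Python. The class names, thresholds, and control values are invented for illustration only and bear no relation to any manufacturer’s actual software.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Obstacle:
    distance_m: float        # distance ahead of the vehicle, in meters
    lateral_offset_m: float  # offset from the planned path, in meters

@dataclass
class VehicleState:
    x: float            # position from GPS/localization
    y: float
    heading_deg: float  # orientation relative to the road
    speed_mps: float    # current speed, in meters per second

def plan_and_control(state: VehicleState, obstacles: List[Obstacle]) -> Dict[str, float]:
    """Pick throttle, brake, and steering commands for the next control cycle."""
    command = {"throttle": 0.3, "brake": 0.0, "steering_deg": 0.0}
    for obstacle in obstacles:
        # If something is close and roughly in our lane, brake and steer around it.
        if obstacle.distance_m < 30.0 and abs(obstacle.lateral_offset_m) < 1.5:
            command["throttle"] = 0.0
            command["brake"] = 1.0
            command["steering_deg"] = 10.0 if obstacle.lateral_offset_m <= 0 else -10.0
    return command

# Example: one pedestrian 20 meters ahead, slightly to the left of the path.
state = VehicleState(x=0.0, y=0.0, heading_deg=0.0, speed_mps=10.0)
print(plan_and_control(state, [Obstacle(distance_m=20.0, lateral_offset_m=-0.5)]))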
Should they try to preserve the maximum number of lives (assuming they are sophisticated enough to engage in such a calculation)? Or should they avoid doing harm to innocent pedestrians, bystanders, and passengers? Does the manufacturer owe any special ethical duties to the purchaser of the AV or the AV occupants, as opposed to occupants of other vehicles or those outside the AV? Many of the media stories raising these ethical issues rely on the work of Professor Patrick Lin of California Polytechnic State University.
Professor Lin likes to use “thought experiments” to explain ethical dilemmas. Thought experiments are “similar to everyday science experiments in which researchers create unusual conditions to isolate and test desired variables” and are similar to the hypotheticals law professors use to teach legal subjects. Thought experiments can be used to study ethical issues involving AV algorithms. Indeed, the last administration’s Department of Transportation policy on highly automated vehicles specifically mentions ethical issues in programming AVs: “Manufacturers and other entities, working cooperatively with regulators and other stakeholders (e.g., drivers, passengers and vulnerable road users), should address these situations to ensure that such ethical judgments and decisions are made consciously and intentionally.”
Of Trolleys and Autonomous Vehicles
Perhaps the most famous thought experiment is the so-called “trolley problem.” As the name suggests, the trolley problem involves a runaway trolley. British philosopher Philippa Foot introduced the trolley problem in 1967. American philosopher Judith Jarvis Thomson expanded on it in a 1985 Yale Law Journal comment, which presents the more common formulation of the thought experiment: a runaway trolley is heading down the track toward five workers and will soon run over them if no intervention occurs. A spur of track leads off to the right, but a single worker is standing on the spur. A bystander is standing by a switch. If the bystander throws the switch, the trolley will turn onto the spur, saving the five workers but killing the single worker on the spur.
If the bystander does nothing, the bystander would not be killing anyone; the bystander would merely be “allowing” the five to die. Throwing the switch would involve killing just one person. Some philosophers, such as Jarvis Thomson, take the view that it is better to maximize the number of lives saved in situations like this. Others, such as Foot, disagree, holding that it is ethically worse to cause harm than to allow harm to happen, even if allowing the harm leads to worse consequences. The trolley problem teases out the moral philosopher’s dilemma: is it better to throw the switch and save more lives (five versus one), or is it better (for the bystander) to do nothing in order to avoid causing harm to anyone?
Professor Lin has applied the trolley problem to AVs by posing the following thought experiment:
You are about to run over and kill five pedestrians. Your car’s crash-avoidance system detects the possible accident and activates, forcibly taking control of the car from your hands. To avoid this disaster, it swerves in the only direction it can, let’s say to the right. But on the right is a single pedestrian who is unfortunately killed.
News writers have (with or without crediting Professor Lin) repeated this and similar scenarios in numerous recent news articles. Philosophers continue to debate the question of whether it is better to save more lives or avoid doing harm. To the extent there is any consensus, a recent survey showed that philosophers favored throwing the switch in the trolley problem. Thus, if an AV manufacturer hires professional philosophers to advise it on how to design AV algorithms, they are likely to advise the manufacturer to program the AV to steer away from a large group at the cost of running over a single individual.
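In software terms, that advice amounts to a casualty-minimizing selection rule: when every remaining maneuver ends in a collision, choose the maneuver whose path contains the fewest people. The sketch below is purely hypothetical and only illustrates the logic such advice implies; the maneuver names and counts are invented.

def choose_maneuver(people_in_path):
    """Given a mapping from maneuver name to the number of people in that
    maneuver's path, return the maneuver that harms the fewest people."""
    # Ties are broken arbitrarily.
    return min(people_in_path, key=people_in_path.get)

# The trolley-style scenario from the article: straight ahead are five
# pedestrians; the only available swerve strikes one.
options = {"straight": 5, "swerve_right": 1}
print(choose_maneuver(options))  # prints "swerve_right"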
The Legal Trolley Problem Dilemma
As a practicing lawyer, I was curious. What would the legal consequences be if an AV manufacturer followed a philosopher’s advice and tried to “do the right thing” in trolley problem situations? What would happen if the manufacturer programmed its AVs to steer away from a large group and toward a single individual or small group when a crash appears inevitable? For the remainder of this article, I imagine that a hypothetical manufacturer (the Manufacturer) has implemented just such an algorithm. And I imagine that an accident occurs in which the AV steers away from five people (the Five), but at the cost of striking and killing a single individual (the One). I assume that the One was an innocent bystander or pedestrian, rather than a jaywalker or someone engaging in wrongful conduct. I also assume that if the AV had attempted to avoid the collision altogether, it might have made things worse: it might have killed all six people. Finally, I imagine that a representative of the One files a complaint against the Manufacturer.
The most common causes of action in a suit claiming a defect in a product include strict products liability, negligence, breach of warranty, and statutory violations for unfair or deceptive trade practices. With each claim, counsel for the representative of the One would contend that the feature of swerving toward the One made the AV defective. Even worse for the Manufacturer, its conduct appears intentional. Indeed, the Manufacturer made a deliberate decision to cause the AV to swerve toward the One (or someone similarly situated to the One). The representative may even assert a cause of action for battery, the essence of which is intentionally causing harmful contact. On its face, the representative seems to have a strong case.
The Manufacturer would fare no better if it programmed the AV to do nothing, allowing the AV to run over the Five. If the AV killed the Five, representatives of the Five could file suit against the Manufacturer, contending that the Manufacturer had a safer alternative design: it could have programmed the AV to run over the One. Thus, it appears the Manufacturer is in a no-win situation.
Possible Defenses
The Manufacturer might turn to traditional defenses recognized in the law to avoid the dilemma. For instance, it could assert a necessity defense, saying that running over the One was necessary to save lives. Under the necessity doctrine, “it has long [been] recognized that ‘[n]ecessity often justifies an action which would otherwise constitute a trespass, as where the act is prompted by the motive of preserving life or property and reasonably appears to the actor to be necessary for that purpose.’” The private necessity defense thus serves as a justification for a non-governmental defendant’s conduct where the defendant’s act causes harm, but the defendant acted to prevent an even worse harm. However, necessity is likely to be unavailing as a defense for the Manufacturer. In its traditional form, the necessity defense justifies acts of trespass or damage to personal property, but not bodily injury. In our hypothetical case, the AV killed the One, so the defense does not apply.
Another possible defense is the defense of third persons. Similar to self-defense, the Manufacturer might try to argue that its use of force against the One is justified in order to defend the Five against harm. The Restatement (Second) of Torts provides that an actor can defend any third person from wrongful injury by the use of force. However, the Manufacturer’s argument will fail because in our hypothetical case the One was not acting wrongfully. To the contrary, we have assumed that the One was an innocent actor. There is no wrongful conduct for the Manufacturer to defend against, and thus the defense does not apply.
A third defense the Manufacturer could try to assert is the “sudden emergency” doctrine, also known as the “imminent peril” doctrine. “[I]f an actual or apparent emergency is found to exist the defendant is not to be held to the same quality of conduct after the onset of the emergency as under normal circumstances.” Cases involving the sudden emergency doctrine in the car accident context involve split-second decisions of drivers in a difficult position. The facts of some of these cases sound like real-world trolley problems. The defense recognizes that an actor in such situations cannot be held to the same standard of care as when an actor is calm in normal circumstances. However, the problem for the Manufacturer is that the Manufacturer is considering how to program an AV in the ordinary course of the design process, far from any imminent accident. The sudden emergency doctrine applies only when, at the time of the actor’s conduct causing the accident, the actor faced a sudden choice between two or more actions. Here, the Manufacturer’s programming decision occurred long before the accident. The Manufacturer was not facing a sudden decision. To the contrary, we have assumed that the Manufacturer undertook a careful and deliberate analysis of how to design its AV algorithms and made a choice to program the AV to steer toward the One. No sudden emergency was occurring during the design process. Accordingly, the defense does not apply.
Resolving the Liability Dilemma
Because the traditional defenses offer no protection, the Manufacturer has no easy way out of the liability dilemma. As the law currently stands, I believe the only way for the Manufacturer to limit its legal liability in the trolley problem scenario is to program its AVs to attempt to avoid collision. It should neither steer toward the One nor allow the AV to run over the Five. Rather, it should try to maximize collision avoidance.
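By way of contrast with the casualty-counting rule sketched earlier, the collision-avoidance approach can be thought of as ranking maneuvers by their estimated chance of avoiding any collision at all, rather than by whom they would strike. Again, this is only an illustrative sketch with invented maneuver names and probabilities, not a description of any real system.

def choose_maneuver(avoidance_probability):
    """Given a mapping from maneuver name to the estimated probability of
    avoiding a collision entirely, return the maneuver with the best chance
    of harming no one."""
    return max(avoidance_probability, key=avoidance_probability.get)

# Invented estimates for the hypothetical: braking hard while steering for
# the largest gap gives the best, if still small, chance of avoiding everyone.
options = {
    "straight": 0.05,               # very likely strikes the Five
    "swerve_right": 0.10,           # very likely strikes the One
    "brake_and_steer_for_gap": 0.30,
}
print(choose_maneuver(options))  # prints "brake_and_steer_for_gap"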
I recognize three problems with this approach. First, we have assumed that collision avoidance might make things worse and the AV may end up hurting or killing all six people. Nonetheless, this approach is more legally defensible and would, as a practical matter, sit better with a jury: the Manufacturer did all it could to save everyone’s life. If the accident ended up killing all six, then at least the Manufacturer tried to save lives.
Second, my position is at odds with the trolley problem thought experiment: I am implicitly rejecting what appears to be a false choice between running over the Five and running over the One.
Finally, I recognize that my choice of collision avoidance as the “legal” solution is not the one philosophers would consider “moral.” Law and morality sometimes diverge. Conduct we consider immoral may be legal, and some conduct considered to be morally permissible may be illegal. This is one more case in which law and morality may come to different conclusions. Given the liability dilemma, the only way to immunize the Manufacturer trying to “do the right thing” and allow it to program AVs to steer toward the One is to change the law through legislation or regulations.
Trolley problems are useful starting points for analyzing the ethical issues of programming AVs, if nothing else because they spark discussion among the media and their audience. Some people reject the real-world relevance of the trolley problem, but the principles gleaned from it will aid manufacturers in deciding how to program AVs. More generally, injecting discussions of ethics raises awareness of the ethical dimensions of AV design and manufacturers’ decisions, and that is a good thing.
Originally Published in The SciTech Lawyer, Volume 14, Number 1, Fall 2017. © 2017 American Bar Association.