When the robot overlords take control, there will still be plenty for PI attorneys to do.
At least that seems to be the summary of most speculation on the liability repercussions of the self-driving car. Through analyzing the likely legal consequences of this one up-and-coming technology, we get a peek at how tort law will confront an increasingly automated world.
High-speed Computer Crash
As Wired puts it: “At this point, self-driving cars are futuristic in the way next Thursday is futuristic: not here yet, but definitely coming.”
The US government has already invested $4 billion in research and infrastructure improvements, and companies like Google, Tesla, Uber, GM, and a slew of others are revving up for the moment their cars go fully automated.
But while we’re geeking out on the travel tech, there have been sobering moments. Like last May, when a self-driving Tesla Model S failed to detect a white tractor-trailer truck against a white sky, resulting in a collision that killed the 40-year-old driver.
Though this is the first known fatality from automated-driving software, it’s not the only accident. Other crashes include a Valentine’s Day collision between Google’s little self-driving car (or ‘car-like nubbin,’ as I think it should be labeled) and a city bus. In the aftermath the tech company conceded: “In this case, we clearly bear some responsibility, because if our car hadn’t moved there wouldn’t have been a collision.”
The big hope behind driverless technology is that it will greatly improve upon the old-fashioned computer currently driving our cars: the human brain. Tech enthusiasts note that giving the reins to robots could save tens of thousands of lives every year.
But accidents are still bound to happen from manufacturing defects or design flaws. As a member of the National Transportation Safety Board worried:
“the theory of removing human error by removing the human assumes that the automation is working as designed; so the question is what if the automation fails. Will it fail in a way that is safe? If it cannot be guaranteed to fail safe, will the operator be aware of the failure in a timely manner, and will the operator then be able to take over to avoid a crash?”
Even before automated cars became a possibility, Professor James Reason observed: “in their efforts to compensate for the unreliability of human performance, the designers of automated control systems have unwittingly created opportunities for new error types that can be even more serious than those they were seeking to avoid.”
It can feel like apocalyptic doom when your computer crashes — but the stakes are at a whole new level in a computer-car crash. Though we can hope technology will save lives, when the robot fails, liability lawsuits will not only help injured parties find justice, but will be key in developing safety standards for the cars of the future.
Who’s at the Wheel?
In the days before that Google-mobile was getting up close and personal with a bus, the National Highway Traffic Safety Administration made an important determination: the driver of a driverless car is the software, not the human sitting inside.
This statement is about interpretations of safety standards rather than liability, but it’s an important first step. What this means in tort law is: if a Google car is responsible for a crash, the one to sue is Google.
These clear-cut distinctions smudge a bit when one considers Tesla-style technology, which lets humans keep control of their car, toggling automated systems on and off according to driver preference. In these cases, determining who was actually driving will be a thornier factual dispute.
But either way, robot-drivers are making waves in our current understanding of products liability frameworks, already one of law’s most dynamic and adaptable areas. Law and tech scholars Gary Marchant and Rachel Lindor noted back in 2012 that the area of auto accident liability was on the cusp of rapid change. They discussed how, when a tech malfunction causes an accident, the list of those who could be held liable includes:
- the vehicle manufacturer
- the manufacturer of a component used in the autonomous system
- the software engineer who programmed the code for the autonomous operation of the vehicle
- and the road designer in the case of an intelligent road system that helps control the vehicle.
Of course, the one with the deepest pockets and greatest likelihood for actual liability is the car manufacturer. In our current world, car manufacturers are typically only sued by the driver in an accident — but once that manufacturer becomes the driver (or has partial responsibility), everyone injured in a collision is likely to sue them. The upshot will be a Gordian knot of connections between insurance companies, plaintiffs, drivers and owners listed as defendants, and car manufacturers.
Add to the mix a general anxiety that technology is developing faster than regulators can regulate, with only four states (California, Florida, Michigan, and Nevada) having passed legislation on automated driving, and we’ve got something of the Wild West of product liability. On robot horses.
Blaming the Bot
But maybe we don’t have to worry too much about re-inventing the (self-steering) wheel. After all, cars are already significantly automated. One need look no further than cruise control lawsuits, in which plaintiffs have successfully demonstrated that the automated system caused cars to speed up unexpectedly and fail to brake (see Cole v. Ford Motor Co., 900 P.2d 1059 (Or. Ct. App. 1995)).
We’ve already seen driverless people-movers collide at Miami International Airport in 2008, and a tragic automated train crash in Washington, DC in 2009 that killed the operator and 8 passengers and injured 80 more. In the DC case, a train became ‘electronically invisible,’ its symbol disappearing from the display board in the dispatch center. An alarm sounded, but since it sounded several hundred times a day, it was ignored. The operator of the train behind had no way of knowing the ‘invisible’ train was there, and by the time the stopped train came into view on the tracks ahead, it was too late. Clearly, manufacturing or design defects in the electronics involved must be part of the considerations in determining liability for situations like these.
The Brookings Institution argues that though legal institutions are often altered by new tech, product liability law is sufficiently flexible to deal with driverless cars. They set out seven areas of that law that are bound to play out in driverless litigation:
1. Negligence: Brookings gives the example of a car company testing braking systems only on dry road surfaces. “If the braking systems then prove unable to reliably avoid frontal collisions on wet roads, a person injured in a frontal collision on a rainy day could file a negligence claim.”
2. Strict Liability: Though state courts read strict liability differently, the general idea is that a manufacturer can be found liable for selling a product with an “unreasonably dangerous” defect — even if that manufacturer is found to have “exercised all possible care in the preparation and sale” of the product. Strict liability can even apply to third-party users who never bought the product.
This means if a software glitch hurts a passenger who doesn’t own the robot-car, that passenger could still sue with a strict liability claim against the manufacturer.
Marchant and Lindor note that courts are backing off from an “absolute form” of applying strict liability, considering standards of “reasonableness,” which aligns strict liability more closely to negligence.
3. Manufacturing defects: Even if a manufacturer has exercised ‘all possible care’ in preparing a product, if it contains a dangerous defect that then causes an injury (as with faulty software in a car), the manufacturer can be found liable. Modern manufacturing practices will likely keep the ratio of defective products low — especially if car makers face significant legal consequences for failure.
4. Design defects: On July 14th, Consumer Reports published a warning about Tesla’s Autopilot feature, asserting it was “too much autonomy too soon” and calling for the manufacturer to disable its hands-free system until it could iron out the safety kinks. The main culprit appears to be a design defect.
Tesla warns drivers to keep their hands on the steering wheel, and even includes an alert if the driver has failed to touch the wheel for a certain amount of time. Companies using this style of automation are likely to argue, when accidents arise, that they have fulfilled their responsibility. But Consumer Reports argues that their ass-covering assertion that the driver “is still responsible for, and ultimately in control of, the car” is contradicted by their aggressive PR campaign touting that the car drives itself.
The two conflicting messages result in drivers who aren’t alert enough at the wheel to avoid accidents, but who apparently are expected to take full blame for the consequences. The “Handoff Problem” — the moment when the car switches from automated to personal control — is likely to be debated in design defect cases for cars on the Tesla model, which try to combine automation with traditional driving. Consumer Reports found it takes test subjects from 3 to 17 seconds to regain control of a semi-autonomous vehicle once they’re alerted that the car is no longer under the computer’s control. At 65 mph, that’s roughly 285 feet on the short end and nearly a third of a mile on the long end. Plaintiffs injured during the ‘handoff’ could reasonably argue the car is designed to be dangerous.
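(A quick back-of-the-envelope check of those distances; this is my own arithmetic applied to Consumer Reports’ 3-to-17-second reaction times, not a figure from their report:)

$$
65 \text{ mph} = 65 \times \frac{5280 \text{ ft}}{3600 \text{ s}} \approx 95 \text{ ft/s}, \qquad 3 \text{ s} \times 95 \text{ ft/s} \approx 285 \text{ ft}, \qquad 17 \text{ s} \times 95 \text{ ft/s} \approx 1{,}615 \text{ ft} \approx 0.31 \text{ mi}.
$$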
Law and engineering scholar Bryant Walker Smith asks whether legally and definitionally ‘safe’ behavior would even be attainable for these cars:
“Would a human who is not actively providing input to her vehicle be willing or even able to maintain the level of vigilance that is assumed?” [. . .] “Would a human who is not even monitoring the roadway be willing to stay in an optimal driving position and able to stay awake? Would she be willing and able to acquire and then to retain the driving skills necessary to safely maneuver the vehicle when actually required?”
Smith notes that many states have laws against “distracted driving,” but argues that those provisions may not apply in the same way to a driver who’s sharing the load with a robot.
In defending against these and other liability claims, manufacturers are likely to counter that, in general, they have increased safety compared to human-operated cars. Plaintiffs’ experts will have to argue that the defendants should nevertheless have known about the design problem and found a remedy.
5. Failure to warn: Smith notes that “[e]ven today, claims regarding this duty are among the most common allegations in products liability litigation.” To fulfill their point-of-sale warning obligations, manufacturers are likely to include a whole host of warnings about the risks of automated driving.
Some interesting legal territory concerns manufacturers’ post-sale responsibilities, as new risks are discovered. Section 10 of the Restatement (Third) of Torts, used by many states to guide their law, states:
“One engaged in the business of selling or otherwise distributing products is subject to liability for harm to persons or property caused by the seller’s failure to provide a warning after the time of sale or distribution of a product if a reasonable person in the seller’s position would provide such a warning.”
Smith believes technological developments could mean an expansion of this duty. The Restatement further provides that a post-sale warning is reasonable if those who need the warning can be identified and are reasonably assumed to be unaware of the risk, if a warning can be effectively communicated to them, and if the risk of harm justifies the burden of communicating it. As vehicles join the ‘internet of things,’ manufacturers effectively stay in constant contact with the drivers who bought their cars, greatly simplifying that communication. Smith continues:
“The potential recipients of a warning could also conceivably include other vehicle occupants, other road users, and perhaps even law enforcement. And unless broader interests of privacy or consumer preference are considered, the actual burden to the manufacturer of supplying such a warning might be small in comparison to more traditional mailings or media campaigns.”
When risks become apparent, questions arise about software upgrades. Companies will face a balancing act between rapidly deploying fixes and properly testing them before releasing them on the world. Other questions include whether to push upgrades automatically or require drivers to approve each change, as well as how to deal with cybersecurity risks.
In particularly dire cases, manufacturers may wish to disable their technology remotely, though this could open them up to other kinds of liability.
6. Misrepresentation: Liability for misrepresentation occurs when a company gives false or misleading information, and a person who reasonably relies on that information suffers harm. This misrepresentation can be fraudulent, meaning the lie is intentional (as when Volkswagen cheated on emissions tests), negligent (meaning a company should have known), or through strict liability (whether or not the company knew or should have known, they still bear responsibility for the consequences).
One example could go back to Tesla’s supposed ‘Autopilot,’ which still requires manual piloting and full vigilance. As Consumer Reports warns:
“By marketing their feature as ‘Autopilot,’ Tesla gives consumers a false sense of security. [. . .] [W]e’re deeply concerned that consumers are being sold a pile of promises about unproven technology. ‘Autopilot’ can’t actually drive the car, yet it allows consumers to have their hands off the steering wheel for minutes at a time. Tesla should disable automatic steering in its cars until it updates the program to verify that the driver’s hands are on the wheel.”
7. Breach of Warranty: This form of liability occurs when the product a user purchases simply doesn’t match the express or implied warranty. Defects and disappointments in driverless cars could be considered a breach of warranty.
Further considerations for anyone engaging in driverless lawsuits include possible emotional responses by juries. Marchant and Lindor note that some prejudices held by ordinary people are likely to be on the plaintiff’s side:
“Some jurors may value the effort made by manufacturers in producing a complex technology product that provides overall safety and other benefits. Alternatively, jurors could perceive autonomous vehicles as a premature, and even reckless, foray that deserves to be soundly punished and deterred. The latter reaction may be even stronger in the context of a lawsuit over an accident allegedly caused by an autonomous vehicle. There is some evidence that lay persons composing a jury are suspicious of unfamiliar and exotic-edge technologies, regardless of their actual probability of causing harm.”
They also mention “betrayal aversion,” a strong emotional reaction people have when an innovation that was intended to make things safer (such as driverless tech) winds up causing harm.
Finally, the extensive information-gathering of fully computerized cars will enrich the discovery process. Back in the 60s, when computers were starting to be a thing, a visionary noted:
“There is a very interesting facet of the problem that is unique to the computer. Unlike a man who would be likely to forget some detail or make some inadvertent change in repeating a process, the computer can recreate the process exactly. The program of instruction to the computer is spelled out in every detail in a separate write-up. Assuming that all the information is available after the accident, the process could be rerun through the computer up to the critical point, thus reconstructing the conditions preceding the accident. The computer might indeed testify — even testify against itself without benefit of the Fifth Amendment.”
Do Liability Lawsuits Stunt Innovation?
Some voices worry that the threat of liability lawsuits will significantly delay roll-out of the self-driving car. They even go so far as to call for limitations on robot liability, similar to the way Bill Clinton reduced plane manufacturer liability for accidents in 1994. Others point to the selective immunity extended to firearm manufacturers as a model.
But other voices argue that broad new liability statutes to protect driverless car techies are unnecessary. As the Brookings Institution puts it: “While this will raise complex new liability questions, there is no reason to expect that the legal system will be unable to resolve them.”
Further, automation will likely mean a reduction in liability claims for manufacturers. Volvo has studied its City Safety system, an automated system that uses a windshield-mounted sensor to apply the brakes and avoid, or reduce the severity of, a crash. It found the system reduced property damage liability claim frequency by 15%, and bodily injury liability claim frequency by a third. A Mercedes-Benz automatic braking system reduced property damage liability claim frequency by 14%.
Last October, the president of Volvo announced the company would accept full liability when its cars are in autonomous mode. “If we made a mistake in designing the brakes or writing the software, it is not reasonable to put the liability on the customer . . . We say to the customer, you can spend time on something else, we take responsibility.” Google and Mercedes-Benz have since made similar assurances.
Of course, these companies are certain to quibble over questions of fact in specific collisions. One broader PR campaign they’ve already engaged in is arguing that the real problem in the crashes they’ve seen isn’t their software, but the non-softwared dummies who expect to share the road with them. If all the cars were simply automated and on the same system, they argue, these collisions would disappear.
Media theorist Douglas Rushkoff notes that this argument is similar to the campaign established by the automobile industry in its early days. When complaints were growing over pedestrian deaths from car accidents, the industry invested in massive victim-blaming PR, which resulted in “jay walker” becoming a household term. Old newspapers announced, rather ominously: “Automobile clubs all over the country, it was decided today, will be asked to aid in exterminating ‘Mr. and Mrs. Jay Walker and all the little Walkers.’”
A broader task for those interested in road safety will be pushing back against this kind of victim-blaming of regular drivers hit by malfunctioning robots.
More than Just Cars
Of course, these basic principles of robot liability can be used in all kinds of settings, as bots grow ever more pervasive in our daily lives. In the workplace, industrial robots can cause injuries to employees (we haven’t moved far from The Jungle). Though their manufacturers have been named in a number of lawsuits, liability has been attributed instead to employees’ failures to take proper safety precautions or to workers’ decisions to disable the machines’ safety features.
Another space rife with robots is the home: home automation companies could be found liable when their systems fail to warn of intruders. A recent lawsuit against Comcast alleged that a home automation system malfunctioned, allowing two men to break through a basement window and nearly kill an 18-year-old boy. The judge ruled in favor of Comcast, finding the fault was user error, but the case opens the question of liability for home automators.
And the list is longer and will continue growing, as we put ourselves in the hands (or circuit boards) of robots. But even if the world starts seeming a little Jetson-esque, that’s no reason to expect a radical overhaul in the way our court system determines liability for injuries.