Two weeks ago the unthinkable and the inevitable happened: An Uber self-driving vehicle struck and killed a pedestrian. The incident revived concerns about self-driving cars, particularly questions of safety and liability.
Autonomous vehicle safety features
Many automobiles currently on the road feature autonomous capabilities, such as automatic braking and lane-keeping assist. However, these systems supplement rather than replace the driver, who remains in full control of the automobile and bears ultimate responsibility for its operation.
Looking to the future, established car companies, like Mercedes-Benz, and tech companies, like Uber and Waymo, are testing fully autonomous vehicles. The ultimate goal is to replace human drivers with software and advanced electronic controls. Last fall Uber touted its advanced autonomous systems, which include radar and lidar, a laser scanning system. Uber claimed these systems detect objects in the roadway, like jaywalkers, as far as 100 yards away in time to avoid collisions.
Contemporary risk versus future safety
According to a Business Insider article, companies developing autonomous driving systems claim the technology is safer than human drivers. However, based on video of the incident released by Uber, those systems do not always work as claimed, and a system that fails to detect a pedestrian in its path arguably has no place on the road.
The incident also raises issues of beta-testing autonomous systems on public roads. On the one hand, developers claim testing on public roads is necessary: their systems “learn” based on data accumulated over millions of driving miles. Controlled test conditions, they say, are not enough; only public roads provide the variety, unpredictability and sheer volume of situations that these systems will need to handle when deployed on a broader scale. A number of states, including Hawaii, have passed legislation or issued executive orders allowing and setting conditions for autonomous vehicle testing.
Critics, however, point out that testing on public roads carries obvious risk to the public at large. Technology companies are used to beta-testing their programs with the public, but beta-testing software controlling a 4,500-pound SUV on public roads presents an entirely different set of risks from beta-testing a photo app for a mobile device.
Allocating risk and liability for driverless vehicles
In the meantime, accidents are bound to happen, and, counterintuitively, removing the human factor might make fault and liability harder to prove. The unfortunate and deadly accident raises several questions about liability. As the technology develops, a number of issues may also be raised in the courts:
>> If an autonomous system is, in fact, better than a reasonably competent human driver, does that mean its creator is never at fault in a fatal accident?
>> Can the creator of such an automated system demonstrate that, in every circumstance, it is better than a reasonably competent human driver?
>> Is it possible to prove the safety of a self-driving vehicle system while also protecting proprietary intellectual property?
>> If all autonomous driving systems are better than human drivers, but one autonomous system is superior, does that make the others deficient?
While Uber and some other developers have temporarily suspended their autonomous driving programs, it seems inevitable that such systems will someday be in wide use. As has been the case in recent years, new technologies likely will lead to new developments in the law. Questions of protecting the public and allocating the risks and liability for these systems will work their way through legislatures, regulators and, ultimately, the courts.
William Harstad is a partner in the litigation and alternative dispute resolution practice group at Carlsmith Ball LLP. He can be reached at wharstad@carlsmith.com.