Autonomous vehicle detractors say the recent pedestrian fatality involving an Uber autonomous vehicle compels us to pump the brakes on the rollout of driverless cars.
However, the testing and regulation of the technology involve competing companies at different stages of development and federal and state regulatory authorities with varying levels of oversight and sophistication. That complicated landscape requires a more nuanced response.
On March 18, Elaine Herzberg was struck and killed by an autonomous vehicle being tested by Uber on the streets of Tempe, Ariz. The accident occurred at approximately 10 p.m., when Herzberg was walking her bicycle across the street in an area with no crosswalks. The vehicle being tested, a Volvo XC90 SUV, had lidar sensors and a “safety driver” behind the wheel who, according to video, was not watching the road until the moment of impact and made no attempt to take control of the vehicle.
Lidar sensors use a pulsed laser light to detect objects, measure distances and create digital 3D representations used by the car’s computer, in combination with radar and other navigation systems, to categorize and avoid contact with other objects.
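The core of lidar ranging is a simple time-of-flight calculation. The sketch below is a deliberately simplified illustration of that principle only; it is not Uber's or Volvo's actual implementation, which adds beam steering, noise filtering and point-cloud processing.

```python
# Simplified illustration of lidar time-of-flight ranging.
# Real automotive lidar is far more complex; this shows only
# the core distance formula: a pulse travels out and back,
# so range is (speed of light * round-trip time) / 2.

C = 299_792_458.0  # speed of light in m/s

def lidar_range(round_trip_seconds: float) -> float:
    """Estimate distance to a target from a pulse's round-trip time."""
    return C * round_trip_seconds / 2

# A pulse returning after 200 nanoseconds indicates a target
# roughly 30 meters away.
print(round(lidar_range(200e-9), 1))  # ~30.0 m
```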
Following the accident, Uber temporarily suspended testing of its driverless cars, and Arizona Gov. Doug Ducey suspended Uber’s self-driving car testing privileges. Volvo released a statement advising that the vehicle’s standard accident-avoidance system had been disabled.
While Uber has yet to release crash data, video of the accident suggests Herzberg put herself in danger by crossing the street illegally in the dark. And although Uber was apparently not at fault under the liability standards that would be applied to a human driver, there is widespread speculation that one or more of the vehicle’s driverless systems failed.
Adding to the suspicion that a system failure caused the accident, Waymo CEO John Krafcik told attendees of the National Automobile Dealers Association’s annual convention this past March that he was confident Waymo’s technology would have avoided the collision.
Whether the failure was the lidar, the communications system that instructs the brakes to engage, or other crash avoidance technology is unknown. And Uber’s recent settlement with Herzberg’s estate for claims arising from the accident will likely further delay public access to the data underlying the accident.
But Arizona is the Wild West of driverless cars. And with reports surfacing that Uber had been having problems and was rushing to log test miles to impress executives, an accident like this was inevitable, even if ultimately determined not to be Uber’s fault. In California, considered one of the most closely regulated states currently hosting driverless car testing, companies must report not only all accidents but also every time a human takes control of the vehicle. It’s the only state with that requirement.
This raises some questions: Will driverless cars be held to a higher standard of care than human drivers? And if so, should that higher standard be imposed during the technology’s testing phase?
Consider the Data
Yes, every loss of human life is tragic. However, the pedestrian death rate for cars piloted by humans is remarkably low: just 1.6 deaths for every 100 million miles driven, or one death per 62.5 million miles. While it’s difficult to quantify the total number of miles driven by driverless cars, it is undisputed that the total is nowhere near the 62.5 million miles conventional cars travel per pedestrian death.
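The per-death mileage figure follows directly from the quoted rate; a quick back-of-the-envelope check, using only the numbers above:

```python
# Back-of-the-envelope check of the pedestrian-fatality figures quoted above.
deaths_per_100m_miles = 1.6

miles_per_death = 100_000_000 / deaths_per_100m_miles
print(f"{miles_per_death:,.0f} miles driven per pedestrian death")
# 62,500,000 -- i.e., 62.5 million miles per death for conventional cars
```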
By way of example, Waymo, which is considered the leader in testing of this technology, began testing its vehicles on public roadways in 2015. To date, the company has logged just over five million total miles — although Waymo is quick to point out that it has logged many more miles on its private test track and in computer simulations.
Some detractors of the technology are using this first pedestrian fatality to validate their position that the technology is unsafe. Overall, however, the data says otherwise.
According to a 2016 Virginia Tech study, human-driven vehicles are involved in 4.2 crashes per million miles, while self-driving cars reduced that number to 3.2 crashes per million miles. The technology should see a further reduction in accidents because, when new automated systems are introduced, the rate of adverse events initially spikes but then decreases as the technology matures. A further caution against extrapolating the overall safety of the technology from this isolated event is that the accident occurred on the roads of Arizona, which has one of the most relaxed regulatory systems for driverless cars: test cars there need no special permit, just a standard vehicle registration.
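To put the Virginia Tech figures in relative terms, a short calculation of the implied reduction (using only the two rates quoted above):

```python
# Relative crash-rate reduction implied by the Virginia Tech study
# figures quoted above (crashes per million miles driven).
human_rate = 4.2         # human-driven vehicles
self_driving_rate = 3.2  # self-driving vehicles

reduction = (human_rate - self_driving_rate) / human_rate
print(f"{reduction:.1%} fewer crashes per million miles")
# roughly a 24% reduction
```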
Starting April 2, California began permitting the testing of fully driverless cars on its roadways, the first time cars could legally drive on California roads without a human present in the car. The new regulations do not entirely remove humans from the equation: a fully driverless car must be remotely monitored by a specially trained and certified “remote operator” who is capable of communicating with the occupants of the car and, if necessary, taking over control of the driving functions.
Similar remote control technology is already used by NASA and the military, and is seen by many industry insiders as the quickest path to a commercial rollout of self-driving cars. And ride-hailing services, which are among the most prominent early adopters of autonomous vehicle technology, are particularly keen on the efficiencies and resulting profits that could be realized by removing the need for paid drivers and increasing the number of potential passengers.
It remains to be seen how quickly AV developers take advantage of the opportunity. A spokesperson for Uber confirmed that it would not seek to continue testing in California in light of the crash in Arizona.
“We proactively suspended our self-driving operations, including in California, immediately following the Tempe incident,” the spokesperson said. “Given this, we decided to not reapply for a California DMV permit with the understanding that our self-driving vehicles would not operate on public roads in the immediate future.”
Meanwhile, on the national level, the U.S. House easily passed in September 2017 the SELF DRIVE Act, which would allow up to 100,000 fully autonomous vehicles to be tested on our nation’s roadways. However, the Senate version of the bill, called the AV START Act, was blocked from a chamber vote this past January due in part to California Sen. Dianne Feinstein’s criticism that the technology is unproven and the bill, as written, does not adequately address safety concerns.
When it’s taken into account that 94% of all auto accidents are caused by human error, the enhanced safety benefits touted by proponents of driverless car technology remain a credible and worthwhile end game. Regulatory bodies play a huge role in establishing confidence in the vehicles, and with these new regulations in place, California should assert a leadership role in the development and deployment of this fledgling but potentially behemoth technology.
Far from the proverbial canary in the coal mine for driverless car technology, the Arizona accident demonstrates the importance of high standards and uniform regulations in the rollout of this technology.
Christian Scali and John Swenson are attorneys with Scali Rasmussen. This article includes reporting by Jack Schaedel and Jennifer Woo Burns. Contact the authors at [email protected] and [email protected].