Published on: March 26, 2018 by Kevin Coupe
There has been a lot of coverage of last week’s crash of an Uber self-driving car in Arizona, in which a Volvo XC90 SUV killed a 49-year-old woman as she was walking her bike across the street. The evidence to this point indicates that the car did not slow down at any point as it approached her, and that the car’s “safety driver” - who is supposed to take control of the vehicle in such a circumstance - wasn’t looking at the road.
In short order, Uber said that it was suspending autonomous car tests that it was conducting in Arizona, Pittsburgh, San Francisco and Toronto. There were stories about how to some folks, this wasn’t a surprise; the New York Times wrote that Uber’s “robotic vehicle project was not living up to expectations months before” the accident, that “the cars were having trouble driving through construction zones and next to tall vehicles, like big rigs. And Uber’s human drivers had to intervene far more frequently than the drivers of competing autonomous car projects.”
In addition, it didn’t take long for the CEO of Waymo, a competing self-driving car company, to say, essentially, that it wouldn’t have happened with one of his cars.
Wired had a good piece, I thought, about the questionable practice of having humans teach robots how to drive safely, and having human safety drivers in the self-driving vehicles, “just in case.” An excerpt:
“Along with the entire notion that robots can be safer drivers than humans, the crash casts doubt on a fundamental tenet of this nascent industry: that the best way to keep everyone safe in these early years is to have humans sitting in the driver’s seat, ready to leap into action.
“Dozens of companies are developing autonomous driving technology in the United States. They all rely on human safety drivers as backups. The odd thing about that reliance is that it belies one of the key reasons so many people are working on this technology. We are good drivers when we’re vigilant. But we’re terrible at being vigilant. We get distracted and tired. We drink and do drugs. We kill 40,000 people on US roads every year and more than a million worldwide. Self-driving cars are supposed to fix that. But if we can’t be trusted to watch the road when we’re actually driving, how did anyone think we’d be good at it when the robot’s doing nearly all the work?”
That’s actually a pretty good point.
It is an Eye-Opening statistic, when you think about it - 40,000 people dying each year in automobile-related crashes. That works out to almost 110 people a day.
A couple of other stats from the Association for Safe International Road Travel: more than 1,600 children under 15 years of age die each year in car crashes, and nearly 8,000 people are killed in crashes involving drivers ages 16-20.
And yet, nobody - to my knowledge - ever has suggested that we suspend all automobile traffic until this serious problem is solved.
I’m not making light of the Uber crash, or the death that it caused. In fact, I think that suspending the tests was absolutely the right move … though I don’t think we should delude ourselves into thinking that this technology has no future.
The Times explains why: “Tech companies like Uber, Waymo and Lyft, as well as automakers like General Motors and Toyota, have spent billions developing self-driving cars in the belief that the market for them could one day be worth trillions of dollars.”
In other words, money.
But there’s also another good reason. As Leo McGarry says in an episode of “The West Wing,” in words given him by the great Aaron Sorkin, “There's been a time in the evolution of everything that works when it didn't work.”
Which is probably the Eye-Opening way we ought to view most stuff.
(There is, by the way, another great line from that episode, this one uttered by President Josiah Bartlet: “They say a statesman is a politician who's been dead for 15 years. I'd like us to be statesmen while we are still alive.”)