Tough Questions? Not.
The above video shows clearly what happened: Human error.
Seth Tyler, a spokesperson for the Chandler Police Department, said, “Waymo and the driver of the vehicle won’t get cited for anything because she didn't do anything wrong.”
Even if the Waymo car had been in autonomous mode at the time, self-driving tech can't make up for errors made by other human drivers.
"This crash is indeed pretty much unavoidable for AVs," says Raj Rajkumar, who researches autonomous driving at Carnegie Mellon University. "As can be seen in this video, when a vehicle out there goes out of control, no one really knows or controls what happens next—that vehicle can swerve in one of many different ways, roll over, etc. The AV in turn must make sure that, in an attempt to evade the situation, it does not make the situation worse given the speed at which events like this unfold."
“The images simplify the story and look like terrible accidents,” says Bart Selman, an artificial intelligence expert at Cornell University. “Lots of mistakes are made by human drivers. We have gotten used to that and don’t even report that anymore.”
Indeed, in 2016, human drivers in Arizona averaged nearly 350 crashes and two deaths a day. That’s why the promoters of autonomous technology harp on the facts that nearly 40,000 people die on US roads every year, and that human error causes more than 90 percent of crashes. Letting robots—which don’t get drunk, distracted, sleepy, or ragey—take the wheel could put a serious dent in those figures.
Sensational News in Perspective
- Over 37,000 people die in road crashes each year
- An additional 2.35 million are injured or disabled
- Over 1,600 children under 15 years of age die each year
- Nearly 8,000 people are killed in crashes involving drivers ages 16-20
- Road crashes cost the U.S. $230.6 billion per year, or an average of $820 per person
- Road crashes are the single greatest annual cause of death of healthy U.S. citizens traveling abroad
This crash comes fresh on the heels of Uber's arguably negligent testing. New facts have since surfaced regarding the fatal Uber crash.
When Uber decided in 2016 to retire its fleet of self-driving Ford Fusion cars in favor of Volvo sport utility vehicles, it also chose to scale back on one notable piece of technology: the safety sensors used to detect objects in the road.
That decision resulted in a self-driving vehicle with more blind spots than its own earlier generation of autonomous cars, as well as those of its rivals, according to interviews with five former employees and four industry experts who spoke for the first time about Uber’s technology switch.
In scaling back to a single lidar on the Volvo, Uber introduced a blind zone around the perimeter of the SUV that cannot fully detect pedestrians, according to interviews with former employees and Raj Rajkumar, the head of Carnegie Mellon University’s transportation center who has been working on self-driving technology for over a decade.
Uber's decision to knowingly introduce blind spots was, at best, a piss-poor one. A case can be made for criminal negligence on the part of Uber.
Technology Will Improve
The technology will improve. The skills, drunkenness, and sleep-deprived behavior of human drivers won't.
Mike "Mish" Shedlock