I’m curious how this is going to work. Does it depend on tele-operation? What we’ve heard from Waymo riders is that the safety drivers disengage every 5 rides or so. We’ve also heard about Waymo vans phoning home to get remote assistance.
Unless Waymo has made huge progress in the last year, I don’t see how this is going to be safe without someone remotely monitoring the car.
Furiously curious to hear from the first Waymo One riders who are able to try the fully driverless service. Particularly, I want to know if riders are aware whether remote assistance of any kind is used and, if so, how often and under what kinds of circumstances.
I suppose it is possible that remote assistance could occur without the rider ever knowing, but I hope the riders will be able to tell us this.
I was hopeful in 2018 when Waymo announced it was going to launch a fully driverless commercial taxi service. Everything that has come out since then has been incredibly disappointing: safety drivers are still used, and disengagements and other problems remain frequent. I would love to believe it’s really going to happen this time, but I’m worried about getting my hopes up.
Is it conceivable that Waymo is at superhuman safety even with such a high disengagement rate? Yes, I think so. It’s conceivable that safety drivers are over-cautious about when they disengage and that in most cases the vehicle would have recovered from its error with no harm done. This may be the methodology Waymo is using:
When they have a safety driver disengage, they play back the situation in simulator and see what would have happened. In particular, they track if the simulator says they would have something as light as a bad user experience, a failure to complete a route, and most importantly, safety-critical events at four levels.
That quote is from this article by Brad Templeton:
Waymo may use simulation — and possibly also structured tests (e.g. at Castle) — to attempt to determine whether a disengagement was over-cautious. If they determine that only a very small percentage of disengagements are necessary for safety reasons, then that could be enough to push them over the top. For example, if their safety-critical disengagement rate is once per 22,000 miles (2x what was reported for California in 2018) but they determine only 4% of those disengagements were actually necessary to avoid an unsafe event, then Waymo’s rate of unsafe events is 25x lower than the safety-critical disengagement rate: once per 550,000 miles. Since the average human crash rate is once per 500,000 miles, that would make Waymos safer than the average human driver.
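The arithmetic above can be sketched as a few lines of Python. To be clear, every number here is an assumption from this post (the 22,000-mile disengagement rate, the 4% figure, the rough once-per-500,000-mile human crash rate), not anything Waymo has disclosed:

```python
# Back-of-the-envelope check of the disengagement arithmetic above.
# All inputs are this post's assumptions, not Waymo-reported figures.

MILES_PER_SAFETY_CRITICAL_DISENGAGEMENT = 22_000  # assumed: 2x the 2018 California rate
FRACTION_TRULY_NECESSARY = 0.04                   # assumed: 4% of disengagements avoided a real unsafe event
MILES_PER_AVG_HUMAN_CRASH = 500_000               # rough average human crash rate

# If only 4% of safety-critical disengagements were necessary, the true
# unsafe-event rate is 1/0.04 = 25x rarer than the disengagement rate.
miles_per_unsafe_event = MILES_PER_SAFETY_CRITICAL_DISENGAGEMENT / FRACTION_TRULY_NECESSARY

print(miles_per_unsafe_event)                             # 550000.0
print(miles_per_unsafe_event / MILES_PER_SAFETY_CRITICAL_DISENGAGEMENT)  # 25.0
print(miles_per_unsafe_event > MILES_PER_AVG_HUMAN_CRASH)  # True: better than average human
```

The whole conclusion is only as good as the 4% assumption, which is exactly the kind of figure simulation replays or structured testing would have to justify.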
I sincerely hope this is the case. This would mean the rationale for rolling out driverless rides would be to test if the predicted rate of unsafe events from simulation (and possibly also structured testing) is indeed superhuman. It’s always possible that the simulation (and structured tests) is wrong. If the results are good, then Waymo has truly cracked autonomy.
This is the most optimistic interpretation of Waymo’s decision to test driverless rides.
Alphabet Inc.’s self-driving car service Waymo has increased the frequency of “rider-only” service for customers in its public pilot program in suburban Phoenix since August and aims to make that the standard as quickly as possible, CEO John Krafcik told reporters ahead of the Forbes 30 Under 30 Summit in Detroit.
I hope Oliver is right. But I posted about my reservations on Twitter:
I would like to believe more than anything that Waymo has solved Level 4 (or is on the cusp of it), but Waymo removing the safety driver for some rides is just a proxy for whatever evidence Waymo believes demonstrates it’s safe to do so. What are the stats Waymo is looking at?
I would be a lot more convinced if Waymo also disclosed metrics from its simulated tests, structured tests, and public road driving, and/or gave us a deeper, more detailed explanation of why it believes it’s now safe to do rides without safety drivers. So far, they haven’t gone into any specifics on that.
Basically, I think confidence can be misplaced, so I’m skeptical about treating Waymo’s confidence that it’s safe as evidence that it’s safe without knowing Waymo’s justification for that confidence.
So, right now I’m unsure what to think and I’m waiting for further developments, such as:
Safety metrics (or other performance metrics) getting leaked out or disclosed by Waymo.
Waymo scaling up driverless rides to more members of the public who can report back or document their rides.
On multiple occasions in the past, Waymo and other companies such as Drive.ai (recently acquired by Apple after it ran out of cash) have said things to the effect that self-driving is done and it’s here today. Afterward it turned out that nothing had fundamentally changed: tests were still tests, demos were still demos, and incremental progress was still ticking along. Before declaring victory for self-driving cars, I want to be sure the declaration isn’t premature.
I’m a person with a strong sense of hope and optimism — I want to believe good things are true — so sometimes to prevent myself from getting out over my skis I have to check myself and ask skeptical questions.