Mobileye’s camera-first approach to full autonomy


Mobileye’s CEO and CTO, Amnon Shashua, describes Mobileye’s approach to fully autonomous driving in a talk from May 2018:

What do we mean by “true redundancy”? We have to achieve a comprehensive, end-to-end — so end-to-end is the sensing, it is the mapping, localizing the vehicle inside the map, the planning, the action — everything is based just on cameras, and it’s one comprehensive solution.

Another comprehensive solution and independent is to detect all road users using radar and lidars.

Skip to 13:48 to hear his comments:

It’s very interesting to me that Mobileye’s position is that fully autonomous driving is achievable in the near term using cameras alone, and that radar and lidar are required only for redundancy.

Aurora Innovation (a startup co-founded by Chris Urmson from Waymo, Sterling Anderson from Tesla, and Drew Bagnell from Uber ATG) has similar reasoning:

We believe it will ultimately be entirely possible to build a self-driving car that can get by on, for instance, cameras alone. However, getting autonomy out safely, quickly, and broadly means driving down errors as quickly as possible. Crudely speaking, if we have three independent modalities with epsilon miss-detection-rates and we combine them we can achieve an epsilon³ rate in perception. In practice, relatively orthogonal failure modes won’t achieve that level of benefit, however, an error every million miles can get boosted to an error every billion miles. It is extremely difficult to achieve this level of accuracy with a single modality alone.

Different sensor modalities have different strengths and weaknesses; thus, incorporating multiple modalities drives orders of magnitude improvements in the reliability of the system. Cameras suffer from difficulty in low-light and high dynamic range scenarios; radars suffer from limited resolution and artifacts due to multi-path and doppler ambiguity; lidars “see” obscurants.
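To make the combination math concrete, here is a minimal sketch of the idealized calculation Aurora describes. The per-detection miss rate below (1e-3) is a made-up illustrative number, not a figure from Aurora, and the multiplication only holds if the modalities fail independently:

```python
# Idealized version of Aurora's redundancy argument (illustrative numbers only).
# If each modality misses a detection independently with probability eps,
# a combined miss requires every modality to fail at once, so the rates multiply.

eps = 1e-3  # hypothetical per-detection miss rate for a single modality

one_modality     = eps        # e.g. cameras alone
two_modalities   = eps ** 2   # e.g. cameras + radar, fully independent failures
three_modalities = eps ** 3   # cameras + radar + lidar, fully independent failures

print(f"1 modality:   {one_modality:.0e}")      # 1e-03
print(f"2 modalities: {two_modalities:.0e}")    # 1e-06
print(f"3 modalities: {three_modalities:.0e}")  # 1e-09

# Aurora's caveat: real failure modes are only *relatively* orthogonal, so the
# practical gain is smaller than this perfect-independence product -- their
# example is an error every 1e6 miles improving to one every 1e9 miles.
```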

Tesla is sometimes portrayed as iconoclastic for not using lidar (just cameras, radar, and ultrasonics), but Tesla’s position doesn’t seem too different from Mobileye’s or Aurora’s.

If the goal for autonomous vehicles is to reach a crash rate of once per 1 million miles, about half the crash rate of human drivers (who crash roughly once per 530,000 miles), then the level of redundancy Aurora is talking about isn’t necessary.

Suppose that two sensor modalities, cameras and radar, are enough to bring the perception error rate down to once per 10 million miles. That would be sufficient as long as all other types of errors had a combined rate of no more than 9 per 10 million miles (about once per 1.1 million miles). The total would then be 10 errors per 10 million miles, or one error per 1 million miles, about 2x safer than human drivers. This is a conservative figure, too, since fewer than 100% of errors lead to a crash. A perception error rate of once per 1 billion miles isn’t necessary.
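Here is that arithmetic worked through as a quick sketch; the per-mile rates are the hypothetical ones from the paragraph above, not measured figures:

```python
# The arithmetic above, with all rates expressed per mile (hypothetical values).
human_crash_rate      = 1 / 530_000      # humans: roughly one crash per 530,000 miles
perception_error_rate = 1 / 10_000_000   # supposed rate with cameras + radar alone
other_error_rate      = 9 / 10_000_000   # budget for every other kind of error

total_error_rate = perception_error_rate + other_error_rate
print(1 / total_error_rate)                 # ~1,000,000 -> one error per million miles
print(human_crash_rate / total_error_rate)  # ~1.9 -> about 2x safer than human drivers
```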

A perception error rate below once per 10 million miles would leave even more room for other types of errors, or lower the overall error rate further.