Levandowski flips the table on AV narrative


So Levandowski seems to have come around to using NNs, no LIDAR, and starting with ADAS rather than transportation as a service.

This seems to me to be a pretty big defection from the “LIDAR Taxi” camp, coming from one of the plain-spoken bad boys of the field. Perhaps drama will ensue, but my interest is in whether Levandowski’s public revisionism will lead to changes in public perception, and from there to changes in how these programs are structured.

Full disclosure: I agree wholeheartedly with Levandowski’s thinking as expressed in this article, but can’t say I’m a fan of the man himself.


Key quotes from Levandowski’s Medium post.

On Level 4 and 5 autonomy:

…the reason why nobody has achieved this level of functionality is because today’s software is not good enough to predict the future. It’s still nowhere close to matching the instincts of human drivers, which is the single most important factor in road safety.

On lidar:

…traditional self-driving stacks attempt to compensate for their software’s predictive shortcomings through increasingly complex hardware. Lidar and HD maps provide amazing sensing and localization of the present moment but this precision comes at great cost (with respect to safety, scalability and robustness) while yielding limited gains in predictive ability.

Overall assessment of the industry:

Put simply, the self-driving industry has gotten two key things wrong: it’s been focused on achieving the dream of fully-autonomous driving straight from manual vehicle operation, and it has chased this false dream with crutch technologies.

On Pronto AI’s approach:

Over the past three years, amazing gains in machine learning and a new breed of tensor processing hardware have made it possible to pursue a different, ultimately much more promising, path toward solving the self-driving challenge. That’s what my new company — Pronto — is all about.

…Our approach? Much better software. After all, the best and safest drivers don’t necessarily have the best eyes. They have the best brains and the most experience. We are building neural networks from the ground up that combine experience-based AI, end-to-end deep learning, and crowdsourced data with advanced computer vision to deliver a highly scalable and flexible driving stack. Nobody else is doing this.

We are not building technology that tells vehicles how to drive. Instead, our team of engineers is building tech that can learn how to drive the way people do.

Our tech does not shy away from the rich complexities of real-world driving. Through better prediction and decision-making software, we are able to navigate previously vexing “edge cases,” such as very low light, direct sunlight glare, heavy rain, snow, construction zones, etc. in a safe, scalable, and repeatable manner on a wide variety of highways without mapping them.

On Pronto AI’s first product, Copilot:

The first step is to deliver a commercially-viable ADAS product that makes driving safer for everyone. It augments the driving experience by reducing the cognitive workload for drivers, allowing them to focus their full attention on monitoring the road ahead. The market that we believe makes the most sense to engage first is the commercial trucking industry…

Demo video:


Yes, the problem with most autonomous driving systems isn’t perception. It isn’t even the control algorithms for driving. It is predicting what all the other autonomous agents in the world (people, cars, bicyclists, police, etc.) are doing and are going to do. Add in previously learned knowledge of the detailed foibles of the particular road you are driving on, and you’ve got a real self-driving platform.

That’s why, when I see Karpathy of Tesla tout the latest perception advance on his Twitter feed, I groan inwardly. You need neural nets (plural), one for each autonomous entity you can see, to model their behavior and get a good prediction of what conditions will be in a few seconds.
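To make the point concrete, here’s a minimal sketch of per-agent prediction in plain Python. This is not anyone’s actual stack: the “per-agent neural net” is stood in for by a constant-velocity rollout, and the `AgentState` interface is entirely hypothetical. The structure is the point: one predictor invocation per tracked agent, producing a forecast of the scene a few seconds out.

```python
from dataclasses import dataclass

@dataclass
class AgentState:
    """Last observed state of one tracked agent (positions in m, velocities in m/s)."""
    kind: str   # "car", "pedestrian", "cyclist", "police", ...
    x: float
    y: float
    vx: float
    vy: float

def predict_position(agent: AgentState, horizon_s: float) -> tuple[float, float]:
    """Stand-in for a per-agent neural net: a constant-velocity rollout.
    A real predictor would condition on agent type, road context, and history."""
    return (agent.x + agent.vx * horizon_s, agent.y + agent.vy * horizon_s)

def predict_scene(agents: list[AgentState], horizon_s: float) -> list[tuple[float, float]]:
    """Forecast where every agent currently in view will be horizon_s seconds from now."""
    return [predict_position(a, horizon_s) for a in agents]
```

A planner would then pick a trajectory that stays clear of all the predicted positions, which is exactly the step a pure perception advance doesn’t buy you.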


The Guardian article provides lots of extra detail (including a picture of the hardware, which is just a big PC). In particular:

“One network recognizes lane markings, signs, obstacles and other road users, and extracts information about their position and speed. The second takes that information and controls the driving, using digital signals and mechanical actuators for the throttle, brake and steering.”

So, so much for modeling other autonomous agents. Sigh.


Hmm, isn’t Tesla doing, at least to some degree, what Levandowski proposes?


Yeah! Like Pronto AI, Tesla is going without lidar and starting with an ADAS product, Autopilot. Pronto AI’s “Copilot” is unoriginally named. (Even Nissan’s ADAS is called… ProPilot.)

We don’t know many details of Pronto AI’s or Tesla’s software, but the Levandowski comments I quoted above hint to me that Pronto AI is using supervised imitation learning or reinforcement learning for path planning. Karpathy has also said he wants all of Tesla’s software to be Software 2.0, i.e. neural network-based, and Amir Efrati at The Information reported that Tesla is using imitation learning for path planning.
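To unpack what “imitation learning for path planning” means in its simplest form: fit a policy to logged human driving, treating the human’s commands as supervised targets (behavioral cloning). The toy below uses a linear policy and synthetic data I made up for illustration; real systems would use deep networks on raw sensor input, and neither Tesla nor Pronto has published its method.

```python
import numpy as np

# Toy behavioral cloning: learn a steering policy from logged human driving.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))               # features: [lane_offset, heading_error]
true_w = np.array([-0.8, -1.2])             # the "human policy" being imitated
y = X @ true_w + rng.normal(scale=0.01, size=200)  # logged steering commands (noisy)

# Supervised fit to the human's commands -- this is the imitation step.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# The cloned policy maps a new observation to a steering command.
steer = X[0] @ w
```

The appeal of this framing is that every mile of human driving becomes labeled training data; the classic weakness is that small errors compound once the learned policy drifts into states the human demonstrations never covered.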

Levandowski says “Nobody else is doing this,” but I don’t know exactly what that means or whether it’s true. Companies want to boast about their technology, but they also 1) want to make their blog posts accessible to a general audience and 2) want to keep their trade secrets secret. So :man_shrugging: