Tesla from an Investor Perspective

I largely approach Tesla from an investor perspective, and I can say that most of the sophisticated investors that I hear from (and by sophisticated investors, I mean equity analysts from investment banks and large hedge funds / asset management funds, and not 99% of the stuff you might see on SeekingAlpha) are not looking at this topic of self-driving cars closely enough. They are simply too busy with other near-term issues (demand, production estimates, etc.) and lack the background knowledge to really go more deeply into it. I find this forum really helpful and interesting because it provides a differentiated deep dive into an area of Tesla that is just not talked about enough, but that could prove to be one of the wildcards that sends the company's valuation soaring.

Within that sophisticated investor community I referenced above, they mostly seem to take the AI community consensus that Waymo is ahead of Tesla/Cruise/everyone else, and leave it at that. They assign no value to Tesla’s self-driving, which is probably prudent for now, but could perhaps change in the next several years.

I'm curious to hear from the posters here about 1) whether Waymo is actually ahead right now or not, 2) your thoughts on Elon's confidence in reaching "feature-complete" self-driving by the end of the year, and 3) who will ultimately reach the end goal first (and I'll leave it to you to define what that goal is, whether it's level 4/5 or something else).

Apologies if you've already expressed your views elsewhere. I know @strangecosmos has written several times on this topic on SeekingAlpha (but I would obviously still appreciate hearing his views). I'm curious what everyone else thinks too.

Thanks all and thanks for contributing to this forum


I created a new topic category for posts touching on financial or regulatory aspects of autonomous cars: Self-driving law and economics. That said, asking about the competitiveness of different autonomous car companies is also a technical question, so feel free to change the category if you want.

Someone from Waymo contacted me about my Seeking Alpha article and I asked them for a statement I could append to the article. Maybe I got something wrong. (They weren’t super specific in their initial message.) Hopefully they will give me something I can publish.

I totally agree that this aspect of Tesla’s future is underestimated. There’s an unfortunate tendency, in economics and in investing, to give a value of zero to any parameter that is hard to evaluate. I think that’s one thing that leads to this underestimation. Another, perhaps more important one, is that the “experts” are telling the public and the investment community that Tesla’s chance of successfully deploying FSD is low. They believe it is almost certainly lower than Waymo’s chances or Cruise’s chances. This creates an environment where investors feel comfortable ignoring the possibility that Tesla will succeed.

This “expert bias” issue reminds me a lot of the issue in oil economics where all the pundits and analysts, virtually without exception, constantly underestimate the rate of growth of solar power and wind power. Whether you’re talking about the IEA, the EIA, BP, or BNEF - every single one of them has a long history of grossly underestimating future solar and wind deployments. And yet, strangely, none of them feels the need to change their methodology to fix these errors. They keep publishing figures that they have to know are too low, and they keep expressing confidence in those numbers. It’s very strange.

I think that the self-driving vehicle "experts", in a similar fashion, are going to continue to underestimate Tesla's chances, even to the point of denying it is happening after it starts. There's some kind of cognitive dissonance at work where Tesla succeeding simply doesn't make sense to them. It violates the conventional wisdom that has taken hold of their field, and no amount of evidence is going to get them to shift their position.

I myself make my living as an investor. And I have to admit that while it's annoying to have your investment thesis roundly and persistently denied by the majority of market participants, the important thing here is knowing an investable truth that the market has not yet woken up to. That's where money is made, after all.

Do you guys both think that Tesla will get to some sort of FSD by year end? How do you view Waymo relative to Tesla right now as well as a couple years from now?

I think comparing Waymo and Tesla is pretty hard because their development approaches and business models are so different. I don’t see them actually competing for a long time and prior to that they will have little impact on each other. They don’t even share their basic technical underpinnings so they can’t learn anything from seeing the other’s progress. The press loves to turn everything into a horse race and I’m sure they’ll keep doing that but IMO it makes no sense here.

Comparison aside:

Tesla is pretty close to a black box on this. So is Waymo. So is Cruise. On the latter two there's almost nothing that would give us a fine-grained understanding of how close they are. Press accounts are vapid, the PR output is free of information, and there's nothing being published in industrial or scientific outlets. I see the vehicles a lot in SF but that doesn't tell me about failure rates or rate of progress. I can tell you that they drive cautiously, but I can't tell you how much of an issue that is or whether it implies anything about their capabilities. It's also not clear how much tolerance they have for public failure, which could be a very large factor in how long it takes them to put out a commercial service. I could see them starting a real service now, and I could also see it taking them ten years to have any appreciable presence in the cities of the U.S.

As for Tesla, the best indication of progress that can be independently verified is to watch how the capabilities of AP2 have improved over time and try to extrapolate that forward. So you drive the car a lot and watch the capability evolve. It's also occasionally possible to see detailed technical data and software extracted from the cars. From those two sources I see that development is proceeding rapidly and that the pace of improvement is substantial. I can easily believe that they could have a "feature complete" implementation in use internally this year and that it could be good enough a year later to allow for supervised use by their customers. That's not a very common opinion AFAIK, but I haven't seen any credible refuting evidence. OTOH it's clear that Tesla's timetables on this stuff are aspirational, and so far they have lagged badly behind the timetables they have targeted. Which is also true of all the other groups developing self-driving vehicles. On top of this there are no self-driving vehicles today, so we don't know where the finish line is or what success looks like.

So in summary things are moving very fast but we still don’t know how far we have to go. I think claims of progress by Tesla are credible but it could still take quite a while.

After I get a chance to put some miles on HW3 optimized networks I’ll know a lot more. I hope that happens this summer. One big wildcard right now is how much improvement in perception accuracy comes out of running a 10x bigger NN. It could be decisive and if so we’ll know a lot more in six months.


ARK Invest estimates the net present value of autonomous ride-hailing at $2 trillion. If you assign a 5% chance to a) this analysis being correct and b) Tesla capturing 10% market share, then the net present value of Tesla's autonomous ride-hailing opportunity is $10 billion, or $58 a share. To his credit, Adam Jonas actually includes more than that amount for autonomy in his price target for Tesla. But I think he's the only one.
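For concreteness, here's that arithmetic as a quick sketch. (The ~172M share count is inferred from the $10B / $58-per-share figures in the post, so treat it as an assumption rather than a sourced number.)

```python
# Expected-value sketch of the ARK-style calculation described above.
market_npv = 2_000_000_000_000    # ARK's NPV estimate for autonomous ride-hailing
p_analysis_correct = 0.05         # 5% chance the analysis is right
tesla_market_share = 0.10         # 10% market share for Tesla
shares_outstanding = 172_000_000  # assumed; implied by $10B -> ~$58/share

tesla_npv = market_npv * p_analysis_correct * tesla_market_share
per_share = tesla_npv / shares_outstanding
print(f"${tesla_npv / 1e9:.0f}B total, ${per_share:.0f} per share")  # $10B total, $58 per share
```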

(I did a few similar calculations for my article “Valuing Tesla Based On Vehicle Sales And Autonomy”. I think you can get behind the paywall via the author’s picks on my profile.)

I’ve been reading David Deutsch’s book The Beginning of Infinity lately and it has a lot of powerful ideas. One of them is an argument about why we can’t predict the content of future scientific discoveries: predicting the content of a discovery is equivalent to making that discovery. For example, predicting in 1800 that future scientists would discover natural selection is either a) a guess, and you can’t know whether it’s true or b) the discovery of natural selection, and not a prediction about a future discovery.

(Our inability to predict the content of future knowledge is a phenomenon I call the cosmic mystery, using Franciscan mystic Richard Rohr’s definition: “Mystery is not something you can’t know. Mystery is endless knowability.” Knowledge creation is an endless process — presumably not as it concerns physics but seemingly as it concerns technology and future biology — and so, no matter how much we learn, the permanent condition of intelligent beings in the universe is standing in front of the vast unknown.)

So, we can’t predict the content of scientific discoveries, but can we predict the timing? In some cases, yes! Say that a foreign object enters our solar system and we send a probe to investigate what it is. We can predict when the probe will arrive, even if we can’t predict what it will find—a space rock or an alien solar sail.

We can’t know in advance the content of the scientific discoveries required for full self-driving. To know that, we would need to have already invented full self-driving—and made those discoveries already. But can we predict the timing?

Unlike the astronomical example, predicting the timing of the discoveries required for full self-driving depends on knowledge about the content of those discoveries. Trivially, if you think full self-driving is a straightforward supervised learning problem, you can predict that it will come earlier than if you think it is an AGI problem. Or, less trivially, a hierarchical reinforcement learning problem.

I am agnostic about the timing of the solution insofar as I am agnostic about the content of the solution. For instance, if I knew the content of the solution were just labelling X many frames of video (to solve perception) and throwing Y many miles of state-action pairs into a neural network built off of existing architectures (to solve action), then I could estimate how long it would take to do that and predict when we will get full self-driving. I could try to find out the cost of data labelling, look at Tesla’s R&D budget, and project HW3 miles driven — the details of implementing the solution — and then give a prediction down to the month. But I don’t know what the content of the solution is. Not for sure.

Uncertainty about the timing of the solution = Uncertainty about the content of the solution + Uncertainty about the implementation of the solution

So, I’ve given up trying to think about the timing of self-driving independent of the exact content of the solution. Now I just devote all my thoughts to the solution first, and then any predictions about the timing will be derived from that.


Not to disagree with anything you say, but I have a slightly different take on the nature of predictability with respect to technological progress.

It's quite true that breakthroughs are inherently unpredictable, individually. But the pace of progress in many domains is predictable most of the time. Moore's law is the most famous example of an industrial 'learning rate', but learning rates abound. And any industry with a reasonably predictable learning rate extending backwards will probably have a predictable learning rate going forward too. At this point in time the major elements of self-driving have been progressing long enough that we can see their learning rates, which makes them fairly predictable. That's not true of everything, but it's true of enough things that the statement stands up.
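To illustrate the extrapolation idea (the numbers below are invented for illustration, not measured learning rates for any company), a constant multiplicative improvement rate lets you solve directly for time-to-threshold:

```python
import math

def years_to_target(current, target, annual_factor):
    """Years until `current` grows to `target`, improving by `annual_factor` per year."""
    return math.log(target / current) / math.log(annual_factor)

# Hypothetical: 1,000 miles/disengagement today, 100,000 needed, doubling yearly.
print(round(years_to_target(1_000, 100_000, 2.0), 1))  # 6.6
```

The catch, as the next paragraph notes, is that this only works when you know where the target threshold actually is.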

The timing element that is missing is that we don’t know where the threshold of commercial utility is right now. There was a time when the threshold seemed to be “not directly causing damage with the vehicle” and during that time it seemed like we knew when it would happen and that it would happen soon. But that turned out to be wrong as Google’s initial deployment showed that not directly causing accidents was not sufficient to allow for deployment. There was a much higher hurdle which required smoothly integrating with other users of the streets. The point at which we satisfy this newer hurdle is immensely harder to predict because we don’t yet know the criteria needed to satisfy it. Humans can’t fully and completely describe how they do what they do with respect to other road users. We don’t have conscious access to that process. On top of that the emergent properties of a distribution of users with varying attributes complicates things much further. It’s this element that provides the major uncertainty right now.

That we don’t know where the finish line is actually plays to Tesla’s advantage right now. Unlike most of the other players in the space they haven’t decided to hold off deploying their system until they cross this difficult to know threshold. Instead they deploy what they have as it becomes useful enough to warrant sending to customers. Not having a well specified goal is an advantage in a domain where the satisficing criteria aren’t known.


I can’t speak too much to the details of self-driving, but I can speak pretty confidently on how the financial analysts are modeling Tesla, and what is going on in their thought process.

Just about all of them are not incorporating Tesla’s autonomous business in their valuation (aside from Morgan Stanley). I know this because I can see what they’re writing and what they’re modeling, and as someone who used to work in that industry, I know what they’re thinking.

I personally think this is the correct approach. Analysts often do not bring estimates into their models until there is much more visibility. As you guys have mentioned, there are a ton of unknowns here, and the portion that is public and visible still seems a ways away from commercialization (as a transportation service). Much of forecasting is already guesswork, but assigning a valuation to Tesla based on this business would be even more out there. Note that in the VC world, investors do this constantly, since they're buying and selling businesses that are 10 steps removed from that stage. In the public equities world, though, investors often won't give much credit to something until it's much closer to being a full-fledged business.

In this case, it's VERY easy to get huge valuation numbers with a top-down approach: look at the total size of the transportation industry, take a % of that for autonomy, and then give Tesla a small portion of that. But it is all dependent on Tesla getting to full self-driving, and as you guys have discussed, it's extremely difficult to have confidence that they will get there, and to convince a money manager to invest on that low-confidence idea. In the startup world, they joke about all the pitch decks that use this methodology to value a business and suddenly get to billions of dollars when they don't even have a dollar in revenue yet. Investors need to see more execution and progress before it becomes tangible to them.

The other element to consider is that most of the large buyside investors - hedge funds, asset managers, etc - take a shorter-term view on investing and so they won’t want to consider things that are too far away or too uncertain on whether it will ever become a full-fledged business of its own. ARK is a bit of an outlier here with their own views. But that also informs why those analysts won’t bring it into their price targets - because their clients don’t even consider it.

Ultimately though, while I agree that there is too little visibility to bring it into price targets and valuation, I think the street is missing a portion of the story by not even attempting to analyze this part more closely and instead taking the conventional wisdom that Waymo is ahead.

Edit: I don’t mean to say that you can’t invest in Tesla on the autonomy thesis. This is just why institutional investors, and sellside analysts that produce those targets and ratings, don’t incorporate it into their models or invest on it. As an individual, I think you can definitely do it. Tesla will move in the near-term on other factors, but in the medium-term or longer-term, I can see this portion becoming a bigger part of the story over time.


I don’t think Waymo is in the lead. I think they will fold up shop in 3+ years and go home or sell out. Leaders:

  1. Tesla
  2. Mobileye
  3. Uber

My main criterion for this ranking is willingness to take risk. Waymo has zero ability to take risk; they will not tolerate blood on their hands. On the other side, Tesla is o.k. with people dying as long as more lives are saved than lost.
Tesla is currently full of misinformation perpetrated by Elon. Elon has been wrong every time on this subject since 2015, when he said it was a solved problem and that there would be a coast-to-coast demo within two years. Every time Elon is asked, he says fully driverless operation, where you can sleep in the car, is two years away; he said this in 2016, 2017, 2018, and 2019. Guess what he will say next year?

Having said that Tesla is full of misinformation, there is great value in level 3 autonomous driving, sometimes called "eyes off." You can work in the car, but you need to be ready to take over within 10 seconds according to the spec. Reality will be different, but if you can do emails and stuff and still take over within a 6-second adjustment period, then that will be a big win. I'm confident Tesla will win level 3 on the freeway before anybody else. Hopefully in a couple of years.

Waymo could win easily if they were not under Google. VCs would make Waymo win easily with a simple ability to take risk. Google could license their tech for Level 3 on the freeway immediately if they had any basketballs. Elon has basketballs made of steel. :slight_smile:

More on Tesla and misinformation: the way Elon defines full self driving is absurd. Having a few features is not even 10% self driving, let alone “full”.

I like how Mobileye has demonstrated an aggressive driving style. Kind of the opposite of pussy cat Waymo. There are already implementations of Mobileye level 3 in traffic jam situations (Audi in Germany), but perhaps this doesn't work as well as expected, since we haven't heard much.

The part that seems inconsistent is that several analyst firms have assigned large valuations to Waymo, but not to Tesla.

UBS: $25 billion to $135 billion

RBC: $119 billion

Morgan Stanley: $175 billion

Jefferies: $250 billion

Do you think these valuations are serious? Maybe they aren’t, and that explains the apparent inconsistency.

Have any of these firms actually added a Waymo valuation to their price target for Alphabet? If not, then in practice, they are not really assigning these values to Waymo.

Timothy B. Lee (a reporter at Ars Technica, previously at Vox) summed up what was also my experience of disillusionment with Waymo:

Waymo started in 2009. Any work on computer vision from 2009 to 2011 was probably thrown out after deep neural networks took the computer vision world by storm in 2012. Imagine that deep learning had never happened. Self-driving cars would probably be hopeless. So, in retrospect, what Waymo was developing from 2009 to 2011 was unworkable.

Today, it seems like Waymo is replacing some of its classical motion planning code with imitation learning — or at least trying to. I suspect that the same is true here as for computer vision. Classical motion planning is as hopeless as classical computer vision, and the only way to solve it is with neural networks.

My point is you can make a public, visible demo of a solution even if the solution is totally unworkable, and if it’s impossible to solve the problem given current technology. It’s the difference between building a fusion reactor and commercializing fusion power.

From the outside (or even from the inside), how do you tell the difference between an unworkable solution and an R&D project with a path to commercialization? Both can produce demos. Both can show progress. In 2010, Waymo had driven 140,000 autonomous miles — with an unworkable solution. What’s the difference between 140,000 miles then and 15 million miles now?

I don’t think looking at demos and then trying to make conclusions based on that is a good idea. In 2010, this could have tricked you into thinking what Google had was workable, and it was only by luck that deep learning happened 2 years later. (If it hadn’t, the project might have been cancelled by now.)

The only approach that makes sense to me is to try to think about the problem space and the solution space in a systematic, first principles way: what can neural networks do, and what needs to be done?


Do you think these valuations are serious? Maybe they aren’t, and that explains the apparent inconsistency.

I think there’s a couple things going on here:

  1. the analysts producing these Waymo valuations are tech analysts who cover tech companies, and are somewhat used to valuing businesses that are further away from monetization/profitability. They also believe (rightly or wrongly) that Waymo is well ahead and closer to commercialization than anyone else. It’s a possibility that Google could spin out Waymo into its own public company. So in a way, it actually IS closer to monetization for Google (in the form of a spinout or a sale). To @DanCar’s point, this would give Waymo a different investor base and allow the business to take more risk. So it kinda makes sense to assign a value to it.
  2. the analysts producing Tesla targets are auto analysts who have spent their careers analyzing GM, Ford, Toyota, and other mature businesses. The thought process behind valuation, and what analysts need to see in order to value something, is pretty different. Self-driving is seen as just a feature that adds to gross margin, not as a completely separate business, given how far away self-driving is perceived to be by auto analysts. And Tesla does not have a mechanism (nor the desire) to immediately recognize its full value, like Google potentially could.
  3. both groups of analysts are primarily finance guys who know very little about machine learning and how it applies to self-driving. They cover multiple companies (oftentimes over 20) and have finite time. Even with Alphabet, the lion's share of revenue comes from search, so they will spend the majority of their time looking at the ad business and not Waymo. From what I have seen, most analysts have not gone further than simply taking the conventional wisdom in the AI community that Waymo is ahead.

From the outside (or even from the inside), how do you tell the difference between an unworkable solution and an R&D project with a path to commercialization? Both can produce demos. Both can show progress. In 2010, Waymo had driven 140,000 autonomous miles — with an unworkable solution. What’s the difference between 140,000 miles then and 15 million miles now?

From what I have seen, the analysts are primarily looking at:

  1. Waymo's lead in disengagement rates (fewer disengagements per mile)
  2. some progress in commercialization (to Tim Lee’s points, this has been pushed back, but they still appear to be further along than any other company)
  3. Google’s army of AI experts and computing power
  4. the seeming consensus among the AI experts that Waymo is ahead (Benedict Evans and others have said that this is the consensus)

It seems like a fairly compelling argument without really diving into the details, and so I can see why analysts don’t really question this too much. Without the background in AI, they are going to rely heavily on what the experts are saying, and it seems that they are saying that Waymo is ahead.

There are a few resources that I’ll need to dig up again that might shed some more light on the topic. GM had a very detailed (and public) presentation on Cruise and their path to commercialization in late 2017. Here’s a link to the deck. I have the transcript as well if anyone is curious. It is fairly high level, but it showed management’s commitment to getting this out, the resources they would devote to it, and the thought they put into commercialization. They also followed this presentation up with live demos for investors afterwards in San Francisco. There are some other interesting slides in there as well on transportation as a service.

I’ll do some more digging on Waymo as well - I am sure there are some analysts that have done a deeper dive on Waymo’s progress to commercialization.


A lot of people (including me, and including ARK Invest) made the reasonable mistake of taking these numbers at face value. But increasingly folks have soured on them. From The Verge:

Without strict definitions of when a safety driver should take over and more granular information from every company about where and when testing is happening, critics say there is no basis for comparison. … This broad interpretation of the rule has led some to dismiss the disengagement reports all together. “They are all utterly meaningless,” said Sam Abuelsamid, a senior analyst at Navigant Research.

From Jalopnik:

…the reports this week again show there’s the potential for serious gaps in what even qualifies as an event that required a human to manually take control of the car.

From Amir Efrati, a reporter at The Information:

“I wouldn’t pay attention to any of it.”

— Person who's worked for Cruise re the "disengagement" reports it files to regulators (and that everyone writes about).

As I understand it from Jeff Schneider’s talk, companies are only required to report safety-critical disengagements. A safety-critical disengagement is one where a collision would have occurred if the safety driver didn’t take over. So, if 99% of disengagements are not safety-critical, and only safety-critical disengagements are reported, a company’s true disengagement rate is 100x higher than its reported disengagement rate.

For 2018, Waymo reported a rate of one disengagement per 11,000 miles. For all we know, the true rate could be once per 1,100 miles, or once per 110 miles. I’m aware of two Waymo riders who have talked publicly about their experience. One talked to Ars Technica:

He initially told me that he saw the safety drivers grabbing the wheel on multiple occasions over the course of his four Waymo rides. But Waymo says their records show that this actually happened only once during the four rides.

One did an AMA on Reddit:

[Disengagements] used to be at least once per ride, but it is getting to be less and less common. I would say the majority of rides are 100% self driven now and it appears that the safety drivers are instructed to avoid engaging if at all possible. If I had to put a number on it, I would say they disengage the auto drive mode once in every five rides or so, and even then it is only for a few seconds before they put it back into auto drive mode.

The average Uber ride in Phoenix is allegedly 8 miles. A disengagement every 5 trips would be one per 40 miles. That's 275x Waymo's reported disengagement rate.
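A quick sanity check of that arithmetic, using the figures quoted above:

```python
# Back-of-envelope comparison of the anecdote vs. Waymo's reported rate.
avg_trip_miles = 8                  # alleged average Uber trip length in Phoenix
trips_per_disengagement = 5         # from the Reddit AMA anecdote
anecdotal_miles_per_diseng = avg_trip_miles * trips_per_disengagement  # 40 miles

reported_miles_per_diseng = 11_000  # Waymo's 2018 California filing
print(reported_miles_per_diseng / anecdotal_miles_per_diseng)  # 275.0
```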

This is just anecdotal evidence from two people, but it would be a big coincidence if both of the Waymo riders to talk publicly so far were such outliers to a once per 11,000 miles average.

The only difference between Waymo One today and the Waymo early rider program 2 years ago is that Waymo One riders are released from their NDAs and have to pay. Waymo One is otherwise exactly the same, as far as I know. If the launch of Waymo One reflects any meaningful progress in the underlying technology, that isn’t evident from the outside.

It doesn’t seem to me like there is any technological purpose in launching a private beta test like the early rider program or Waymo One. You can do the testing just as well with no one in the back seat. It’s only relevant from a customer experience standpoint, but it seems premature to focus on that if the disengagement rate is indeed something like once every 5 trips.

A private beta program isn’t really evidence of progress. A cynical person would say its main purpose is good publicity. I’m more inclined to think Waymo got ahead of its skis and thought it could crack this nut sooner. Or it just saw no real downside to letting people ride in the cars — not much added cost to let people ride in vans that are going to be testing anyway — and the upside is that it would help get other aspects of the product and business ready to go whenever the technology is ready.

I agree these are genuine advantages if they can be leveraged. For example, if the solution is just reinforcement learning in simulation, then Waymo stands the best chance to solve it because it seems to have the best RL experts and it has massive compute.

If the solution requires massive amounts of unlabelled real world data (i.e. billions of miles’ worth), then for now Tesla is best positioned to solve it.

I wonder what the consensus among AI experts really is. Not just on who has the best technology today, but who has the best path forward. I would love to hear experts weigh in on my hypothesis about Tesla and imitation learning.

I suspect there is a difference between the real consensus and the perceived consensus. For example, Benedict Evans seems to think that Tesla is counter-consensus on lidar. He says, “Every significant autonomy project is using LIDAR.” But even if that is true, strictly speaking, I think it overlooks a more complicated reality. For example, Mobileye is trying to develop full autonomy using only cameras:

During this initial phase, the fleet is powered only by cameras. In a 360-degree configuration, each vehicle uses 12 cameras, with eight cameras providing long-range surround view and four cameras utilized for parking. The goal in this phase is to prove that we can create a comprehensive end-to-end solution from processing only the camera data. We characterize an end-to-end AV solution as consisting of a surround view sensing state capable of detecting road users, drivable paths and the semantic meaning of traffic signs/lights; the real-time creation of HD-maps as well as the ability to localize the AV with centimeter-level accuracy; path planning (i.e., driving policy); and vehicle control.

Radar and lidar will be added for redundancy after the camera-only system is complete. Mobileye agrees with Elon that cameras alone can do it all — not just eventually, but in the near term.

Anthony Levandowski, who was once an influential engineer at Waymo, has a new autonomy startup called Pronto AI and it sounds like it isn’t using lidar. Levandowski wrote:

… traditional self-driving stacks attempt to compensate for their software’s predictive shortcomings through increasingly complex hardware. Lidar and HD maps provide amazing sensing and localization of the present moment but this precision comes at great cost (with respect to safety, scalability and robustness) while yielding limited gains in predictive ability.

Put simply, the self-driving industry has gotten two key things wrong: it’s been focused on achieving the dream of fully-autonomous driving straight from manual vehicle operation, and it has chased this false dream with crutch technologies.

Levandowski did a cross-country highway drive in a prototype Pronto car without lidar.

I’m interested in the transcript.

From the perspective of a tech guy, these kinds of decks are maddeningly free of usable information. But talks by execs do often seem to leak tidbits.

You can find it here. They also had a high level update in 2018 as well but did not provide as much detail as the 2017 discussion.

They talked about a scaled launch in urban areas in 2019; I’d be surprised if they meet all of those targets this year.

Awesome. Thanks!

I looked at a few of those brokers and saw that most of them had not incorporated those Waymo estimates into their actual price targets for Google. It’s a simple estimate they produce to make a larger point: Waymo is worth a lot and could drive up the price of Google shares. But they are not including it (at least not completely) in their valuation of what Google is worth.

Here’s some snippets on what they’re saying on Waymo and its position in the autonomy race:

Atlantic Equities:

Waymo ahead, GM Cruise appears closest competitor: There is clear evidence that Waymo has the lead in the race to develop a fully functioning autonomous vehicle. In terms of total autonomous miles driven, frequency of human intervention during these miles, the pace of simulation-related testing and plans for launching the first commercially available service, Waymo is ahead of the competition, arguably unsurprising given the company has been focused on the self-driving opportunity for longer than anyone else.

However, competitors are also making progress, with GM Cruise in particular showing an impressive reduction in the frequency with which humans have to take control of their vehicles. Furthermore, the amount of data being collected by Tesla suggests that the electric carmaker should certainly not be dismissed as a potential competitor.

The note goes on to talk a bit more about each of those reasons. They also briefly mention Tesla’s miles, but note that Tesla “only recently gained permission to actually collect the data from these vehicles, and the data being collected is less comprehensive than that available to Waymo” (on the basis that they do not have LIDAR).

Morgan Stanley:

At current levels, we do not believe that GOOGL is getting any credit for Waymo’s value and we see this launch (as well as the fact investors will soon be able to ride in these cars) as a potential catalyst for investors to analyze and value the opportunity.

I didn’t see anything talking about Waymo and why they are considered in the lead; MS simply took it as a given and also assumed that they would eventually solve self-driving.

Aegis Capital:

  • First Mover Advantage - First To Complete A Fully Self-Driving Trip On Public Roads
  • Most Miles Driven - Over 4M Miles Driven, Far Exceeding All Competitors
  • Fully Autonomous, Level 4 SAE, Cars Are On The Road - Previously, Waymo vehicles had been test-driving on public roads with a driver at the wheel. Only service doing so at this level
  • Engineering Talent - Deep bench of software and hardware engineers; over 200
  • Technology - Waymo designed and built their self-driving sensors from the ground up and now have a fully integrated self-driving car system.
  • Early Rider Program - Waymo began offering families rides in the self-driving cars. Only service testing at this scale.

There are others as well, but they all say the same stuff. No one is analyzing this more deeply or looking into the paths forward. They’re all sizing up the market and assuming that Waymo will eventually solve it first.


Speaking of Waymo. https://medium.com/waymo/expanding-our-footprint-in-arizona-waymos-technical-service-center-in-mesa-a00cfe7dbc34

I listened to this discussion from GM/Cruise around the time it happened.

Kyle Vogt, the CTO and former CEO of Cruise, was recently on Lex Fridman’s AI podcast. (As is usually the way, there wasn’t enough technical information to really be that interesting — so I wouldn’t recommend listening.) Vogt predicted that Cruise would get to “superhuman” autonomous driving in 2019. I don’t see what basis there is for this statement, though. I don’t see why Cruise would succeed at this before Waymo.

Meanwhile, John Krafcik (the CEO of Waymo) has come across to me as much more pessimistic. This is a comment from an interview in November:

It’s going to be a really long time — I think decades — before you see this technology everywhere in the world.

Another comment from that interview:

The trucking shortage is now. … The use case here is fairly straightforward. Moving goods on freeways from hub to hub is a fairly straightforward application of our technology. It’s much easier than the initial problem we’re trying to solve using Waymo technology in a ride-sharing service. So this is something you could anticipate a material contribution to the world from Waymo over the next couple of years.

The implication is seemingly that Waymo will not make “a material contribution to the world” via ride-sharing in “the next couple of years”.


I wrote a detailed post on the Tesla Motors Club forum restating points I’ve made here and elsewhere. Here it is, reproduced in full:

Algorithms, compute, data: a mental model for thinking about autonomous vehicle competition

Autonomous driving can be split into two kinds of task:

Perception: object detection, depth mapping, semantic segmentation of driveable roadway, etc. In short, computer vision tasks that are done with supervised learning. (When people say “deep learning”, they’re typically referring to deep supervised learning.)

Action: driving policy (i.e. making high-level decisions like whether to overtake a slow car) and path planning (i.e. the exact trajectory and speed of a car).

Supervised learning is universally used for camera-based perception. Historically, action has been handled by traditional, hand-coded software, not neural networks or machine learning. But increasingly it seems like engineers are talking about using machine learning to solve the action part of the problem.

Types of machine learning

As far as I know, there are just two machine learning approaches to action:

One form of imitation learning, called “behavioural cloning”, is just supervised learning applied to action. A neural network learns to map the environmental cues that prompt human actions to the actions themselves (e.g. maps stop signs to stopping).

Another form of imitation learning is inverse reinforcement learning, which attempts to derive a reward function (i.e. a points system used in reinforcement learning) from human actions.

Reinforcement learning is essentially trial and error over a massive number of iterations, with an agent taking actions that increase reward and avoiding actions that decrease reward.
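To make the behavioural cloning idea concrete, here is a toy sketch. It is entirely my own construction, not any company’s actual system: the “cue” is distance to a stop sign, the “action” is brake or don’t brake, and a one-feature logistic-regression “policy” is fit to synthetic human demonstrations by gradient descent.

```python
import math
import random

# Toy behavioural cloning sketch (illustrative only). A "demonstration"
# is a (cue, action) pair logged from a human driver: the cue is distance
# to a stop sign in metres, the action is 1 (brake) or 0 (don't brake).
# We fit a logistic-regression policy that imitates the demonstrations.

random.seed(0)

def demonstrate(distance):
    # Hypothetical human policy: brake when within ~20 m of the sign.
    return 1 if distance < 20.0 else 0

# Collect synthetic demonstrations.
data = [(d, demonstrate(d)) for d in [random.uniform(0, 60) for _ in range(500)]]

# Logistic regression trained with plain full-batch gradient descent.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(2000):
    gw = gb = 0.0
    for d, a in data:
        x = (d - 30.0) / 30.0                      # crude feature scaling
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))   # predicted P(brake)
        gw += (p - a) * x                          # cross-entropy gradients
        gb += (p - a)
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

def policy(distance):
    x = (distance - 30.0) / 30.0
    p = 1.0 / (1.0 + math.exp(-(w * x + b)))
    return 1 if p > 0.5 else 0
```

A real system would map high-dimensional camera input to steering and acceleration with a deep network, but the structure of the problem is the same: supervised learning from logged human state-action pairs.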

What follows is strictly my own opinion.

Types of competitive advantage

We can boil down competitive advantage in machine learning into three main things: algorithms, compute, and data. Andrej Karpathy, among other experts, has identified these as fundamental inputs to AI progress:

I broadly like to think about four separate factors that hold back AI:

  1. Compute (the obvious one: Moore’s Law, GPUs, ASICs),
  2. Data (in a nice form, not just out there somewhere on the internet - e.g. ImageNet),
  3. Algorithms (research and ideas, e.g. backprop, CNN, LSTM), and
  4. Infrastructure (software under you - Linux, TCP/IP, Git, ROS, PR2, AWS, AMT, TensorFlow, etc.).

(In discussing competition between autonomous vehicle companies, I’ll ignore infrastructure since it doesn’t seem relevant — these things seem to be either open source or commoditized.)

If imitation learning turns out to be the solution to action for autonomous vehicles, then I believe algorithms and training data will be what gives companies a competitive advantage.

Similar to supervised learning for perception, the performance of imitation learning will be a function of the neural network architecture and the training dataset.

If reinforcement learning turns out to be the solution, then I believe algorithms and possibly compute will be what gives companies a competitive advantage.

Reinforcement learning (RL) will most likely take place in simulation. It will depend on RL algorithms that can learn to drive in a simulator and transfer that learning to the real world. Since simulation and training are computationally intensive, computing resources may confer a competitive advantage.
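For intuition about trial-and-error learning in a simulator, here is a minimal tabular Q-learning loop on a toy environment of my own construction (real AV work would use far richer simulators and deep RL): the agent learns, purely from reward, to steer toward the centre lane.

```python
import random

# Minimal tabular Q-learning sketch on a toy "driving" simulator
# (illustrative only). State: lane index 0-4. Actions: move left,
# stay, move right. Reward: +1 for being in the centre lane, else 0.

random.seed(0)
N_LANES, CENTRE = 5, 2
ACTIONS = (-1, 0, 1)

def step(lane, action):
    # Deterministic transition: move and clamp to the road edges.
    new_lane = min(N_LANES - 1, max(0, lane + action))
    reward = 1.0 if new_lane == CENTRE else 0.0
    return new_lane, reward

Q = {(s, a): 0.0 for s in range(N_LANES) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.2

for episode in range(2000):
    lane = random.randrange(N_LANES)
    for t in range(20):
        # Epsilon-greedy exploration.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(lane, x)])
        new_lane, r = step(lane, a)
        best_next = max(Q[(new_lane, x)] for x in ACTIONS)
        Q[(lane, a)] += alpha * (r + gamma * best_next - Q[(lane, a)])
        lane = new_lane

def greedy_action(lane):
    return max(ACTIONS, key=lambda x: Q[(lane, x)])
```

The hard part for driving isn’t this loop; it’s building a simulator faithful enough that the learned policy transfers to the real world, which is exactly the sim-to-real question discussed later in this post.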

Competitive advantage in imitation learning

In a scenario where imitation learning turns out to be the right solution, I predict Tesla will be in the best competitive position.

As I understand it, the operative metric for training data is the number of unique examples for each semantic class. In image classification, a semantic class would be an image type like images of great white sharks. In imitation learning, a semantic class would seem to be (this is my own guess) pairings of actions and environmental cues. If some environmental cues are incredibly rare — such as the conditions that lead to a deadly accident on average every 100 million miles — then it would take an incredibly large amount of data to collect a significant number of examples for each semantic class.

Drago Anguelov, Waymo’s head of research, recently said that Waymo doesn’t have the ability to collect data on the rare corner cases that form the “long tail” of human driving behaviour.

This suggests the scale of data needed to capture long tail events is not millions of miles, but perhaps billions of miles. For example, to capture 1,000 examples of deadly crashes, you would need 100 billion miles of data.

When Tesla has about 1.1 million HW3 cars on the road, the HW3 fleet will be driving 1 billion miles a month. If Tesla reaches 1.1 million HW3 cars in 2020 and builds 750,000 cars a year from 2020 on, the HW3 fleet will reach 100 billion miles cumulatively by the end of 2023.
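The arithmetic behind these figures can be sanity-checked in a few lines. The assumptions are mine, derived from the stated numbers: 1 billion miles a month from 1.1 million cars implies roughly 909 miles per car per month, and 750,000 cars a year is 62,500 a month.

```python
# Sanity-checking the fleet-mileage arithmetic above. Assumptions (mine,
# derived from the post's figures): ~1 billion miles/month from 1.1 million
# cars, i.e. ~909 miles per car per month; the fleet starts 2020 at
# 1.1 million HW3 cars and grows by 750,000 cars/year (62,500/month).

MILES_PER_CAR_PER_MONTH = 1e9 / 1.1e6   # ~909 miles

# 1,000 examples of a one-in-100-million-mile event needs 100 billion miles.
miles_needed = 1_000 * 100e6
assert miles_needed == 100e9

fleet = 1.1e6
cumulative = 0.0
for month in range(48):                  # 2020 through the end of 2023
    cumulative += fleet * MILES_PER_CAR_PER_MONTH
    fleet += 750_000 / 12

print(f"{cumulative / 1e9:.0f} billion miles")   # prints "112 billion miles"
```

Under these assumptions the cumulative total crosses 100 billion miles a few months before the end of 2023, consistent with the claim above.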

As far as I know, no other company yet has the ability to collect the data Tesla can collect at the scale of billions of miles.

Even if other companies have superior algorithms for imitation learning, it doesn’t matter if they have no long tail data to train them on.

Competitive advantage in reinforcement learning

In a scenario where reinforcement learning turns out to be the right solution, I predict Waymo will be in the best competitive position.

DeepMind and Google are under the Alphabet umbrella with Waymo. Taken together, DeepMind and Google seem to have the highest number of world-class RL researchers and engineers of any company. This makes me think that, assuming Waymo can fully tap into this expertise, it is the best positioned to develop and apply RL algorithms for autonomous driving.

Google also has massive compute, which could turn out to be an advantage.

Competitive advantage in a hybrid approach

What if imitation learning is required to bootstrap reinforcement learning? This is an idea suggested by two researchers at Waymo:

…extensive simulations of highly interactive or rare situations may be performed, accompanied by a tuning of the driving policy using reinforcement learning (RL). However, doing RL requires that we accurately model the real-world behavior of other agents in the environment, including other vehicles, pedestrians, and cyclists. For this reason, we focus on a purely supervised learning approach in the present work, keeping in mind that our model can be used to create naturally-behaving “smart-agents” for bootstrapping RL.

In this case, it depends on how much training data is needed for imitation learning to bootstrap RL. If it’s on the scale of millions of miles, then Waymo is in the best competitive position. If it’s on the scale of billions of miles, then Tesla is in the best competitive position.


Given this framing — imitation learning vs. reinforcement learning, and competitive advantage as algorithms, compute, and data — the favourite to win in either scenario seems straightforward. The fundamentally uncertain part is what the solution to driving policy and path planning is going to be: imitation learning, reinforcement learning, a hybrid, or none of the above.

It’s possible that Tesla will lose its position as favourite if another company gains, or discloses, the ability to collect the same kind of data at a greater scale. Mobileye, for instance, collects some data at large scale, but the details aren’t clear. Its stated aim is compiling HD maps rather than imitation learning, although that doesn’t necessarily preclude the data from being used for both purposes. A forward-thinking car manufacturer could also copy Tesla’s approach and equip its cars with the hardware, software, and wifi connectivity to upload data for imitation learning.

To date, it feels to me like there has been little in the way of principled theories of autonomous vehicle competition. The discourse seems to largely revolve around demos, which convey basically no information about a system’s reliability over millions of miles, and the rate of safety-critical disengagements in California, which is a misleading metric.

Some common mistakes in analysis (a few of which I’ve made):

  • Conflating performance in a demo with a system’s reliability over millions of miles.

  • Taking California DMV disengagements data at face value (a mistake I made too).

  • Making apples-to-oranges comparisons between secretive prototype autonomous vehicles and production ADAS features used by thousands of people.

  • Treating time spent on development as a competitive advantage either a) on the assumption that hand-coded software is the solution, or b) without being specific about how much time has been spent on IL or RL algorithms versus hand-coded stuff that was thrown out.

  • Talking about data without being specific about what kind of data it is, how exactly it’s useful for machine learning, where the supervisory signal comes from (e.g. human labelling or human driver input), the quality, diversity, and rarity of the data (and why that matters), and the cost of collecting data vs. the cost of labelling it (if applicable).

  • Related to the above: conflating testing a vehicle and training a neural network.

That’s why it’s important to me that the above mental model doesn’t rely on:

  • Demos.

  • California DMV disengagements data.

  • Performance of ADAS features.

  • Brute time (i.e. time, without a deeper theory about why time matters).

  • Training data bottlenecked by human labelling, such as raw sensor data used to train perception neural networks.

  • Miles of testing in autonomous mode.

On reinforcement learning:

It seems to me that the essential difference between applying RL to driving versus other activities like Dota is the challenge of representing the driving task in simulation. How do you make sure what the agent learns is driving, and not a video game that closely resembles driving, but not closely enough?

With supervised learning, at least you don’t have that problem, because the data comes from the real world. The problem you do have is getting the neural network to learn all the correct mappings between action and state.

If you capture state-action pairs from long tail scenarios, in theory, the neural network should be able to learn the correct mappings.

If you don’t capture data from long tail scenarios, and you rely on interacting with other virtual agents in simulation, then I don’t see how your agent learns to handle those long tail scenarios. How do you generate them in simulation without observing them in the world? And if you don’t generate them in simulation, how does your agent learn to handle them?

What are the odds that policies learned outside of the long tail, in the chunky middle, will generalize to the long tail?
