Bootstrapping Tesla FSD 4D rewrite

Elon quote at 11:16 in his first Lex Fridman interview (from April 2019):

Well, there’s a lot of things that are learnt. There are certainly edge cases where say somebody’s on Autopilot and they take over. And then, okay, that’s a trigger that goes into our system that says, okay, did they take over for convenience, or did they take over because the Autopilot wasn’t working properly.

There’s also like, let’s say we’re trying to figure out what is the optimal spline for traversing an intersection. Then, the ones where there are no interventions are the right ones. So you then say okay, when it looks like this, do the following. And then you get the optimal spline for navigating a complex intersection.

FSD beta testers are finding that the car has a hard time picking the correct spline for left turns at intersections, but we can surmise that their interventions are helping label the correct one.
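A minimal sketch of that labeling loop, assuming logged traversals carry an intervention flag (the `Traversal` class and field names are illustrative, not anything from Tesla's actual pipeline):

```python
from dataclasses import dataclass

@dataclass
class Traversal:
    spline: list[tuple[float, float]]  # (x, y) waypoints driven through the intersection
    intervened: bool                   # did the driver take over during the maneuver?

def label_splines(traversals: list[Traversal]):
    """Per the quote: traversals with no intervention are treated as correct splines;
    traversals where the driver took over are flagged rather than used as-is."""
    correct = [t.spline for t in traversals if not t.intervened]
    flagged = [t.spline for t in traversals if t.intervened]
    return correct, flagged
```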

Seems like shadow mode would be just as useful as a disengagement.

If the vehicle's distance from the FSD-planned spline is greater than 12", Upload(DriverSpline). You would get virtual disengagements from every single car, regardless of whether it's an FSD customer car or not.
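A minimal sketch of that trigger, assuming the FSD planner runs in shadow mode while the human drives and we can measure the gap between the car and the planned spline (the 0.3048 m constant is just the 12" above converted to meters; the function and parameter names are made up for illustration):

```python
import math

# 12 inches from the suggestion above, converted to meters.
VIRTUAL_DISENGAGEMENT_THRESHOLD_M = 0.3048

def distance_to_spline(position, spline):
    """Approximate the lateral error as the distance to the nearest spline waypoint."""
    return min(math.dist(position, waypoint) for waypoint in spline)

def check_virtual_disengagement(position, fsd_spline, driver_spline, upload):
    """If the human-driven car strays more than the threshold from the spline
    the FSD planner would have taken, upload the driver's spline."""
    if distance_to_spline(position, fsd_spline) > VIRTUAL_DISENGAGEMENT_THRESHOLD_M:
        upload(driver_spline)  # a "virtual disengagement" from a shadow-mode car
```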


My intuition is that this sounds very reasonable and yet isn't fully correct, i.e. there is training signal you can extract from interventions that you can't extract from shadow-mode disagreements.

You also miss a lot of signal if you rely only on disengagements. For instance, the way Autopilot accelerates out of a green light might not be unsafe, but it can be incredibly irritating, just not irritating enough to make the driver disengage. By comparison, you get more human-expert training data when a human is actually driving and demonstrating the desirable, comfortable acceleration curve.
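To make that contrast concrete, a rough sketch (the `Event` structure and field names are assumptions for illustration, not Tesla's pipeline): filtering on disengagements alone drops the merely-irritating cases, while human-driven events carry the comfortable acceleration profile as a dense imitation target.

```python
from dataclasses import dataclass

@dataclass
class Event:
    disengaged: bool         # did the driver actually take over?
    human_driving: bool      # was the human driving (demonstrating)?
    speeds_mps: list[float]  # speed profile through the maneuver, e.g. off a green light

def disengagement_only_signal(events: list[Event]) -> list[Event]:
    """Keeping only disengagements drops cases that were merely irritating,
    like an uncomfortable but tolerated acceleration out of a green light."""
    return [e for e in events if e.disengaged]

def demonstration_signal(events: list[Event]) -> list[list[float]]:
    """Human-driven events carry the desirable, comfortable acceleration
    curve as a dense imitation target."""
    return [e.speeds_mps for e in events if e.human_driving]
```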
