Tesla AI Day

Starts at 46:50.

I rewatched the segment on “Auto-Labeling” today and I’m left no clearer about it.

Ashok goes rapid-fire through so many examples of auto-labeling, giving each so little time.

I’m left unclear whether “auto-labeling” refers to:

a) Distillation — in which a neural network trained on manually labeled data then labels new data itself, meaning the training signal ultimately comes from human labels.

b) Self-supervised learning (SSL) — in which the training signal is somehow automatic or inherent in the data, as with Tesla’s depth networks, which use self-consistency to generate lidar-like depth maps.
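To make the distinction concrete, here is a toy NumPy sketch (my own illustration, not Tesla’s pipeline): in distillation the target comes from a teacher model that was itself trained on human labels, so the signal traces back to humans; in self-supervised learning the target is computed from the data itself, with no labels anywhere.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))  # a batch of toy inputs

# -- (a) Distillation signal --
# This vector stands in for a teacher model trained on human labels.
teacher_weights = rng.normal(size=(4,))
teacher_targets = x @ teacher_weights  # pseudo-labels produced by the teacher

def distillation_loss(student_weights):
    # The student is trained to reproduce the teacher's outputs,
    # so the supervision ultimately derives from human labels.
    return np.mean((x @ student_weights - teacher_targets) ** 2)

# -- (b) Self-supervised signal --
def self_supervised_loss(weights):
    # Toy "self-consistency": predictions on a slightly perturbed view of
    # the input should agree with predictions on the original view.
    # No labels, human or teacher-generated, appear anywhere.
    x_aug = x + rng.normal(scale=0.01, size=x.shape)
    return np.mean((x @ weights - x_aug @ weights) ** 2)

student = np.zeros(4)
d_loss = distillation_loss(student)       # nonzero until student matches teacher
s_loss = self_supervised_loss(student)    # computed purely from the data
```

Both produce a scalar loss to minimize; the question is only where the target comes from.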

My credence for (b) would be lower if not for a recent proof of concept of self-supervised learning.

Some of both is no doubt at play: distillation (Karpathy said so explicitly during the Q&A) and self-supervised learning (e.g. the depth networks). But I’m unsure of the extent or significance of the SSL currently implemented in the training servers.

@jimmy_d thoughts?