Operation Vacation is cool. Model training is automated. Data labeling seems to be the key. I remember from previous talks that Dojo addresses the automation of data labeling.
@jimmy_d thinks that Dojo is custom hardware for training neural networks that Tesla will keep at its HQ (where Tesla currently keeps GPUs for training neural networks). So, it’s sort of like Tesla’s version of Google’s TPUs, which accelerate neural network training. In Jimmy’s words:
Dojo isn’t going to be a training computer deployed into the car, it’s going to be training infrastructure that is optimized to perform unsupervised learning from video at scale. Tesla is probably going to produce custom silicon to enable this because available commercial hardware is inadequate to the task.
In addition to unsupervised learning (also known as self-supervised learning), I’m curious whether other learning approaches could make use of the same training hardware. What about semi-supervised learning, which mixes human-labeled data with unlabeled data? What about weakly supervised learning using weak labels from human drivers? What about end-to-end reinforcement learning or end-to-end imitation learning?
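To make the distinction concrete, here’s a toy sketch (my own illustration, nothing from the talk) of why self-supervised learning from video wouldn’t need human labelers at all: the “label” comes from the data itself. One common pretext task is next-frame prediction, where the training target for frame t is simply frame t+1.

```python
# Toy illustration of self-supervised "labels" (my own example, not
# Tesla's pipeline): for next-frame prediction, the target of each
# training pair is just the following frame, so no human labeling occurs.

def make_next_frame_pairs(frames):
    """Turn a frame sequence into (input, target) training pairs."""
    return [(frames[t], frames[t + 1]) for t in range(len(frames) - 1)]

# Toy "video": each frame is an integer stand-in for pixel data.
video = [0, 1, 2, 3]
print(make_next_frame_pairs(video))  # [(0, 1), (1, 2), (2, 3)]
```

Semi-supervised or weakly supervised variants would still follow this shape, just with some pairs coming from human or driver-derived labels instead of the data itself.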
No need to speculate about whether Dojo is in-house. The first slide of their stack lists it as in-house.
And it’s discussed as the in-house training equivalent of the on-board inference (HW3) computer.
Two interesting things I’m seeing in the slides. First, Smart Summon is demonstrating SLAM.
Second, confirmation that they are working in a bird’s-eye, top-down view, projecting features there rather than working in image space.
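For anyone wondering what “projecting features to a top-down view” means in practice, here’s a minimal sketch of the underlying geometry (my own simplified math, assuming a pinhole camera at a known height with its optical axis parallel to a flat ground plane; the focal length, principal point, and camera height below are all hypothetical values, and this is certainly not Tesla’s actual method):

```python
# Inverse perspective mapping under a flat-ground assumption: a feature
# detected at pixel (u, v) below the horizon maps to a ground-plane
# position (X lateral, Z forward), both in meters.

def pixel_to_birds_eye(u, v, f=1000.0, cx=640.0, cy=360.0, h=1.5):
    """f: focal length in pixels; (cx, cy): principal point;
    h: camera height above the ground in meters (all hypothetical)."""
    if v <= cy:
        raise ValueError("pixel at or above the horizon; no ground intersection")
    Z = f * h / (v - cy)   # forward distance: farther points sit nearer the horizon
    X = (u - cx) * Z / f   # lateral offset, scaled up with distance
    return X, Z

# A feature 100 px below the horizon, centered horizontally:
print(pixel_to_birds_eye(u=640.0, v=460.0))  # (0.0, 15.0) -> 15 m straight ahead
```

A learned network can of course do this projection far more robustly than flat-ground geometry, but the goal is the same: reason about features in metric top-down coordinates instead of pixels.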
I wrote an article on “Operation Vacation”. It will go behind a paywall in 8 days.