Is there anything about Nvidia's decision-making approach? They have different networks for perception, but I cannot find anything about their decision-making. Do they use RL or IL?
I’m not sure. Nvidia is a special case because their business model is selling computing hardware to car manufacturers. They’re a direct competitor to Intel/Mobileye in that regard, although unlike Mobileye, which is planning to deploy its own robotaxis, it seems like Nvidia is content to just keep supplying the hardware. I don’t think Nvidia has any robotaxis of its own.
So, Nvidia may not have any actual decision-making software, at least not one intended for a commercial application. Nvidia demonstrated a proof of concept for end-to-end imitation learning, but I don’t think Nvidia plans to sell this as a product or do anything with it commercially themselves.
In this talk, Urs Muller shows an end-to-end diagram and then a video for the Nvidia self-driving car.
But as you said, I cannot find much information about their decision-making part. They mostly have perception networks.
I believe BB8, the car shown in those demo videos, uses end-to-end imitation learning, as the diagram you posted shows. But this basically just seems like a fun proof of concept. I don’t think Nvidia is selling this end-to-end system as a product. I also don’t think they are working on robotaxis.
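For concreteness, end-to-end imitation learning in this setting means behavioral cloning: a network maps a front-camera image directly to a steering command and is trained by supervised regression on logged human driving. Below is a minimal sketch of that idea in PyTorch. The architecture, layer sizes, and input shape are illustrative assumptions, not Nvidia's actual network (their published demo used a small CNN on 66x200 camera frames, but the exact details here are hypothetical).

```python
# Sketch of end-to-end imitation learning (behavioral cloning):
# a CNN regresses a steering angle from a camera frame, fit with MSE
# against a human driver's recorded steering. All shapes/layers are
# illustrative, not Nvidia's real architecture.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # collapse spatial dims
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(48, 50), nn.ReLU(), nn.Linear(50, 1)
        )

    def forward(self, x):
        return self.head(self.features(x))

# Stand-in batch: camera frames plus steering angles recorded from a
# human driver (random tensors here, in place of a real driving log).
frames = torch.randn(8, 3, 66, 200)
angles = torch.randn(8, 1)

model = SteeringNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for _ in range(3):                          # a few illustrative gradient steps
    opt.zero_grad()
    loss = loss_fn(model(frames), angles)
    loss.backward()
    opt.step()
```

The key property, and the usual criticism, of this approach is that the policy only sees states a human driver visited, so small errors can compound once the car drifts away from the training distribution; that is one reason a proof of concept like this is a long way from a product.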
This video helps explain how Nvidia handles path planning:
Apparently Nvidia is competing quite directly with Mobileye in that they are shipping computing hardware along with ADAS software.