In his first public lecture of 2019, MIT’s Lex Fridman says that beyond perception, there is very little machine learning in autonomous cars today. But that is starting to change:
- Tesla is reportedly using imitation learning for autonomous driving.
- Mobileye is openly using reinforcement learning for autonomous driving.
- Waymo says it may incorporate components of an imitation learning system into its autonomous driving software.
- Anthony Levandowski (a famous self-driving car engineer who formerly worked at Waymo, Otto, and Uber ATG, and who competed in the DARPA Grand Challenge) recently announced a new startup called Pronto and said it will use “end-to-end deep learning”. That could mean imitation learning or reinforcement learning. Or both.
- An honourable mention goes to Wayve, a small startup in the UK founded by Cambridge machine learning researchers. Wayve’s CEO says: “Rather than hand-engineering our solution with heavily rule-based systems, we aim to build data-driven machine learning at every layer of our system, which would learn from experience and not simply be given if-else statements.” Wayve’s website mentions reinforcement learning.
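To make the contrast with if-else rules concrete: imitation learning in its simplest form (behavior cloning) is just supervised learning on logged human driving data, fitting a function from observations to the actions a human took. Here is a minimal sketch with a linear model and synthetic data; the toy "lane offset and heading error to steering angle" setup and all names are illustrative, not any company's actual pipeline:

```python
import numpy as np

# Toy "logged driving data": observation = [lane_offset, heading_error],
# action = the steering angle a human driver chose. Synthetic for illustration.
rng = np.random.default_rng(0)
observations = rng.normal(size=(1000, 2))
# Pretend the human steers to correct both errors, plus a little noise.
human_actions = (-1.5 * observations[:, 0]
                 - 0.8 * observations[:, 1]
                 + 0.05 * rng.normal(size=1000))

# Behavior cloning is ordinary supervised learning: fit observations -> actions.
# Here a linear least-squares fit stands in for training a neural network.
weights, *_ = np.linalg.lstsq(observations, human_actions, rcond=None)

def policy(obs):
    """The learned driving policy: map an observation to a steering command."""
    return obs @ weights

# The recovered weights land close to the "human" coefficients [-1.5, -0.8],
# without anyone hand-writing a steering rule.
print(weights)
```

The point of the sketch is the shape of the approach, not the model class: no engineer wrote a rule for when to steer; the mapping was recovered from demonstrations. Real systems replace the linear fit with a deep network and raw sensor inputs.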
I feel pessimistic about the hand-coded, rule-based approach to driving. (That is, the non-perceptual, action-related parts of driving.) There are a few reasons:
- It hasn’t worked all that well for much of anything: not computer vision, not getting bipedal robots to walk or open doors, not board games or video games. Is there an example of it achieving human-level performance on any complex task?
- A beaver can build a dam, but it has no idea how to tell you to build one. Similarly, for many tasks that humans perform easily, effortlessly, mindlessly, we don’t actually know how we do them, and we don’t know how to tell a robot to do them. We might need a better scientific understanding of how humans drive before we can get robots to do it. The introspection of software engineers might not do the trick.
- If we are stuck with hand-coding robots, I worry that engineers will keep chipping away at the problem, inching ahead year by year, making only as much progress this year as they made last year. There is a wide chasm between today’s robot drivers and human drivers, and crossing it inch by inch would take quite a while. To get across in a few years, progress needs to move a lot faster.
In sum, I worry that hand-coding will only make slow, linear progress, and may at some point hit a ceiling where engineers simply don’t know how to solve the next problems.
By contrast, machine learning has given us a few examples of fast, exponential progress, going from subhuman to superhuman performance in a few years: ImageNet, AlphaGo, and arguably Dota.
If all these companies try various machine learning approaches to driving for a few years, and they don’t get any traction… I will feel pretty pessimistic about self-driving cars. If that happens, I think I might feel that the problem can’t be solved with the current machine learning paradigm, and that hand coding is unlikely to solve it either. So self-driving cars would be indefinitely on hold. Instead of being an engineering problem, self-driving cars would become (in my eyes) a science problem.
That’s a bleak place to be. Scientific progress in AI has happened in fits and starts. Prior to 2012, there was a long period of stagnation.
So, as a fan of self-driving cars — or the idea of self-driving cars — I am watching imitation learning and reinforcement learning because I think one or both of those techniques could be the key to all of it.