Ilya Sutskever: OpenAI’s progress in 2018


#1

A new milestone for reinforcement learning:

I wonder if designing a reward function aimed at maximizing novelty would be useful for any practical application. It makes sense that this works for video games like Montezuma’s Revenge, Mario, and Breakout, since beating the current level is how you advance to the next one. But are there real-world problems structured like that?
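For anyone wondering what “maximizing novelty” looks like mechanically, here is a minimal sketch of a count-based exploration bonus layered on top of the ordinary environment reward. This is not OpenAI’s exact method (their large-scale work uses learned prediction-error bonuses rather than raw counts); it just assumes a small discrete state space where visit counts are practical, and the function names are hypothetical.

```python
import math
from collections import defaultdict

# Count-based novelty bonus: rarely visited states get a larger bonus,
# so the agent is nudged toward exploring new parts of the environment.
visit_counts = defaultdict(int)

def shaped_reward(state, extrinsic_reward, bonus_scale=0.1):
    """Return the extrinsic reward plus a 1/sqrt(count) novelty bonus."""
    visit_counts[state] += 1
    novelty_bonus = bonus_scale / math.sqrt(visit_counts[state])
    return extrinsic_reward + novelty_bonus

# Usage inside a training loop (env and agent are placeholders):
#   next_state, reward, done, info = env.step(action)
#   reward = shaped_reward(next_state, reward)
```

The point of the bonus is that it decays as states become familiar, so it helps most in games like Montezuma’s Revenge where the extrinsic reward is sparse.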


#2

Thanks for sharing - I love this talk.

I feel bad for the OpenAI guys at times: people outside the field have a hard time understanding the argument being made, and people inside the field are hostile to the idea that rapid advancement will continue. It seems like a lonely quest they are on.

I hadn’t thought about it recently, but Sutskever’s comment rings true: experts in 2016 would have been quite confident that what happened two years later, in 2018, was not going to happen. The same is true of experts in 2015 wrt 2017 developments, experts in 2014 wrt 2016 developments, and so forth. The field is in a strange place right now where the people who should know best are having a hard time assimilating reality. The net effect ends up looking a lot like a state of denial.


#3

I’m a bit puzzled by the comment that more hardware capability is going to bring more advancements. The datacenters are already huge. If it were that easy, then Google and others would be doing it.


#4

Yeah, I can see why that comment would be confusing - the datacenters are indeed huge.

The ‘advancement’ Sutskever is referring to is, in the long term, the performance improvement that comes with new computer hardware. Future hardware will be faster than today’s, so if the primary obstacle to AI improvement is hardware limitations, then it’s just a matter of time.

But Sutskever is probably also referring to something particular to neural network processing right now: existing chip architectures - mainly discrete CPUs and GPUs designed to work well across a wide range of historically significant workloads - are a poor fit for NN processing, and the near-future introduction of NN-specific accelerators is going to provide dramatic increases in the capacity for training NNs over the next several years.

The first generation of NN-specific silicon is just arriving on the scene, and it’s giving 10x improvements from only the lowest-hanging of the potential optimizations. So while von Neumann-style computation capability has been improving about 30% per year for the last couple of decades, we could see 100% or 150% annual improvement in NN-training-specific hardware over the next decade or so. These anticipated capacity improvements are likely to have a big impact on what is doable with NNs even if there were no further progress at the software end.
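To put those growth rates in perspective, here is a quick back-of-the-envelope compounding check. The 30%, 100%, and 150% figures come from the paragraph above; the ten-year horizon is just an illustrative assumption.

```python
# Rough compounding comparison of the annual growth rates mentioned above,
# over an assumed ten-year horizon.
years = 10
for label, annual_gain in [("von Neumann ~30%/yr", 0.30),
                           ("NN hardware ~100%/yr", 1.00),
                           ("NN hardware ~150%/yr", 1.50)]:
    total = (1 + annual_gain) ** years
    print(f"{label}: about {total:.0f}x after {years} years")

# Prints roughly 14x, 1024x, and 9537x respectively - the gap between
# faster general-purpose chips and dedicated NN accelerators compounds quickly.
```

Even the conservative end of that range dwarfs what general-purpose hardware improvement alone would deliver, which is the crux of the argument.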

Sutskever is trying to make the case to a skeptical audience that big advances in AI are likely in the next several years, and that the advances could be big enough that we get qualitatively different systems in the near future.