The Epistemology of Deep Learning - Yann LeCun

#1
Konrad Kording: Why we need more/better scientific theory in neuroscience and in deep learning alike
#2

A fundamental question raised by this lecture:

Where does the knowledge to invent things like the telescope, the steam engine, etc. come from, given that their inventors didn’t understand the scientific principles (e.g. optics, thermodynamics) underlying these inventions?

Expanding on that:

When can an invention precede scientific understanding of the principles that make the invention work? When is a scientific understanding of the relevant underlying principles required for an invention?

These questions are important for thinking about AGI. It feels more obvious where scientific understanding comes from — maybe it actually isn’t, but it feels that way. It feels less obvious how knowledge is created in the process of invention or engineering innovation — like the knowledge required to invent the telescope or the steam engine. It is counterintuitive to think that engineering can discover new, fundamental knowledge before science (e.g. the embodiment of thermodynamic principles in the steam engine) and then spur science to make new discoveries (e.g. the discovery of thermodynamic principles by studying the steam engine).

My theory has long been that the engineering of intelligence is limited by the science of intelligence, and that the engineering lags behind the science. Neural networks are inspired by neuroscience, and they seem like low-hanging fruit: remarkably powerful, just as many ostensibly simple systems in nature are remarkably powerful, but limited. They implement only the low-level principles of animal neurons, and don’t attempt to implement any of the (still hazily understood) higher-level architectural principles of human brains that give us what we call “general intelligence”. So, the answer looks clear: do more science, so engineering will have more to copy.

But then there is hierarchical reinforcement learning. As far as I know, it’s only biologically inspired insofar as it’s inspired by first-hand experience of how we, as humans, learn, rather than by neuroscientific discovery. Maybe reinforcement learning is inspired by the work of people like Skinner and Pavlov in psychology.

If hierarchical reinforcement learning ends up working — if it allows robots and virtual agents to efficiently learn a broad array of complex tasks with many steps — then this seems like an example of an engineering innovation that didn’t depend on any scientific discovery. It came out of the process of trying to build better systems. This could be an example of how engineering doesn’t need science to make progress.

With AGI and AGI safety, there are two fundamental factual questions:

  1. How will AGI get made? (For example: through scientific discovery, or through engineering innovation?)

  2. Will the AI’s implementation of “general intelligence” be fundamentally the same, or fundamentally different, from the way the human brain implements “general intelligence”? (For example: will AGI have consciousness, personality, selfhood, personhood, agency, autonomy, independent thought, critical thinking about its goals, etc.?)

If the answer to (1) is that AGI will be developed by studying the human brain and copying it, that implies that the answer to (2) is that AGI will implement “general intelligence” in fundamentally the same way as the human brain. This has different implications for AGI safety than an AGI that implements general intelligence in a fundamentally different way.

If the answer to (1) is that AGI will be developed through engineering innovation, then the answer to (2) could either be that AGI will be fundamentally the same as or fundamentally different from human GI. This depends on (among other things) whether there is more than one possible way to implement GI. We don’t know that, and I’m not sure how we would find out. In 500 years it might be obvious, but right now it isn’t. There are some good theories, but nothing so definitive that we should bet the future of humanity on it. I could expand on this topic, but I won’t.

The uncertainty around (2) as a standalone question makes (1) all the more important. How AGI is developed will, in large part, determine the nature of AGI. And the nature of AGI will determine how safe or dangerous it is, and what the best approach to AGI safety is.

My view is that human-like or hominin minds are something we understand well, have thousands of years of experience with, and know how to make safe. Hominin AGI would also negate or reduce the sense that AGI is a matter of humans vs. non-human AI, an alien interloper (or invader). For these reasons, I think it would be better if AGI were built using a scientific understanding of the human brain and a close copying of how the human brain works. But I’m no longer sure that this is the more likely way for it to happen.

If AGI is developed through engineering innovation, there is the possibility that its design will converge on how the human brain works anyway. Sometimes unintentional similarities arise between biology and technology. Sonar was invented before echolocation in bats was discovered. AI engineers will also inevitably be influenced — at least subconsciously — by ideas both from science and from their own first-hand experience of how human minds work. However, convergence is not guaranteed. (Unless, as per the above, there is only one way to make a general intelligence.)

So, one way to reduce risk from AGI is to accelerate the advancement of hominin GI. This is Neuralink’s approach. Besides brain-computer interfaces, another way to do this would be human brain emulation: the most direct form of copying the brain, requiring the most advanced experimental tools but the least theoretical understanding. The eventual logical endpoint of Neuralink-like technologies may be human brain emulation and mind uploading.

Accelerating the advancement of hominin GI becomes a more pressing priority the more you think that 1) hominin AGI is the safest, best option and 2) there is a chance that engineering innovation will create AGI without converging on the cognitive architecture of the human brain.

#3

@jimmy_d you might like this. At 56:50, Yann explains why you should make your neural network 5x or 10x bigger than you actually need to solve your problem. The extra headroom allows for more exploration in training.

Maybe this is common knowledge but it was my first time hearing it.
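
For concreteness, here is a minimal sketch of what that overprovisioning might look like in PyTorch. The input size, the notional “minimum” width, and the 5x multiplier are made-up placeholders, not anything from the talk; the point is just that the network is deliberately sized well past what the problem notionally needs.

```python
import torch.nn as nn

# Hypothetical sizes: suppose ~64 hidden units would nominally be enough
# for the task. Following the 5x-10x advice, deliberately overprovision.
INPUT_DIM = 32
MIN_HIDDEN = 64
OVERPROVISION = 5  # try 5 or 10

hidden = MIN_HIDDEN * OVERPROVISION

model = nn.Sequential(
    nn.Linear(INPUT_DIM, hidden),
    nn.ReLU(),
    nn.Linear(hidden, hidden),
    nn.ReLU(),
    nn.Linear(hidden, 1),
)

# The extra width is the "headroom" that allows more exploration during training.
print(sum(p.numel() for p in model.parameters()), "parameters")
```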

#4

I don’t think it’s common knowledge, or common practice. I tend to go big on network parameters to allow for exploration and then rely more heavily on regularization to avoid overfitting, but I’m doing RL these days. Network processing is not the limiter in RL, so you can afford to be generous with parameters. If you’re doing something like BERT it’s a harder tradeoff, since 5x or 10x the parameters can exceed what a single computer can handle, forcing you to go to a cluster. That can be a hard decision to make. I think I agree that, in general, starting out on the high side for parameters is more likely to discover interesting behavior, but it’s the kind of thing that is probably very domain dependent.
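
As a concrete illustration of that combination — a generously sized network plus heavier regularization — here is a minimal sketch assuming a plain supervised setup rather than the RL case above; the layer widths, dropout rate, and weight-decay value are illustrative placeholders, not tuned recommendations.

```python
import torch
import torch.nn as nn

# Oversized network plus regularization: dropout inside the model and
# weight decay (L2) in the optimizer keep the extra capacity in check.
model = nn.Sequential(
    nn.Linear(32, 1024),
    nn.ReLU(),
    nn.Dropout(p=0.3),   # illustrative rate, not a recommendation
    nn.Linear(1024, 1024),
    nn.ReLU(),
    nn.Dropout(p=0.3),
    nn.Linear(1024, 1),
)

# AdamW applies decoupled weight decay; the value here is a placeholder.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
```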

#5

Thanks for always sharing these links BTW. I often know about this stuff independently, but your commentary helps me to decide what is worth watching.
