Nature Communications: A critique of pure learning and what artificial neural networks can learn from animal brains

Artificial neural networks (ANNs) have undergone a revolution, catalyzed by better supervised learning algorithms. However, in stark contrast to young animals (including humans), training such networks requires enormous numbers of labeled examples, leading to the belief that animals must rely instead mainly on unsupervised learning. Here we argue that most animal behavior is not the result of clever learning algorithms—supervised or unsupervised—but is encoded in the genome. Specifically, animals are born with highly structured brain connectivity, which enables them to learn very rapidly. Because the wiring diagram is far too complex to be specified explicitly in the genome, it must be compressed through a “genomic bottleneck”. The genomic bottleneck suggests a path toward ANNs capable of rapid learning.

https://www.nature.com/articles/s41467-019-11786-6 (Open Access)
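The “genomic bottleneck” maps onto ANNs fairly naturally: instead of storing every weight, store a short label per neuron plus one small shared wiring rule, and expand those into the full connectivity matrix. Here is a toy NumPy sketch of that reading (my own illustration, not the model from the paper; the label size, layer sizes, and the bilinear rule are all arbitrary choices) just to show the compression arithmetic:

```python
import numpy as np

# Toy illustration of one reading of the "genomic bottleneck" (not the
# paper's actual model): rather than storing an n_post x n_pre weight
# matrix directly, each neuron carries a short identity vector (think
# gene-expression label), and a single small shared rule maps (pre, post)
# label pairs to a connection weight. Far fewer parameters are stored
# than weights are generated.

rng = np.random.default_rng(0)

n_pre, n_post = 1000, 1000      # "brain-sized" layer (toy scale)
k = 8                           # label (identity) dimensionality

pre_id = rng.normal(size=(n_pre, k))    # per-neuron labels
post_id = rng.normal(size=(n_post, k))
rule = rng.normal(size=(k, k))          # shared wiring rule (the "genome")

# Expand the compact description into the full connectivity matrix.
W = post_id @ rule @ pre_id.T           # shape (n_post, n_pre)

stored = pre_id.size + post_id.size + rule.size
expanded = W.size
print(f"stored parameters: {stored:,}")      # 16,064
print(f"generated weights: {expanded:,}")    # 1,000,000
print(f"compression ratio: {expanded / stored:.0f}x")
```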


One of my favourite things I’ve read about AI and cognitive science in a long time. This and “The Bitter Lesson” are probably my two favourite essays on the subject.

If this essay is right, then we should be thankful that the human genome isn’t much larger. If it were much larger and could specify more connections in the human brain, the brain’s wiring wouldn’t have to lean as much on generic, generalizable circuits (e.g. the visual cortex and auditory cortex share similar wiring). The bigger the genome, the more wiring it can encode directly, the more innate behaviours, and the less dependence on learning.

If you keep genome size fixed, then the bigger a species’ brain gets, the more general its brain’s wiring has to be, since the same number of genes must specify a greater number of neurons and connections between them. Human general intelligence may have evolved thanks to this genomic constraint. If genome size could have evolved as fast as brain size, maybe we would just have had a huge number of innate behaviours wired into our big brains.
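To put rough numbers on that constraint (a back-of-envelope calculation with commonly quoted figures, not numbers taken from the paper):

```python
import math

# Back-of-envelope version of the bottleneck argument, using rough,
# commonly quoted estimates rather than figures from the paper.

genome_bp = 3e9                      # ~3 billion base pairs in the human genome
genome_bits = genome_bp * 2          # 2 bits per base pair (A/C/G/T)

neurons = 1e11                       # ~10^11 neurons in the human brain
synapses = 1e14                      # ~10^14 synapses

# Naive cost of listing the connectome explicitly: each synapse must at
# least name its target neuron, which takes about log2(neurons) bits.
bits_per_synapse = math.log2(neurons)
connectome_bits = synapses * bits_per_synapse

print(f"genome capacity:     ~{genome_bits:.1e} bits")
print(f"explicit connectome: ~{connectome_bits:.1e} bits")
print(f"shortfall:           ~{connectome_bits / genome_bits:.0e}x")
```

The explicit wiring list comes out several orders of magnitude larger than the genome can hold, which is why the wiring has to be compressed into rules rather than enumerated.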

I think it would be exciting to design new artificial neural networks based on a newer, better understanding of the circuitry of the neocortex, especially its cortical columns. The neocortex is supposed to be generalizable, generic, and homogeneous in its circuitry, and responsible for what’s unique about human intelligence. If we can figure out how cortical columns and layers work, then maybe we can implement functionally similar columns and layers of artificial neurons in a new neural network architecture.
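On that last point, the “homogeneous, repeated circuitry” story already has a rough ANN analogue in weight sharing, where one small module is reused at every location. A toy sketch just to make the analogy concrete (this is essentially convolutional-style weight sharing, not a proposal for a new architecture, and all shapes here are arbitrary):

```python
import numpy as np

# One shared "column" module applied at many locations: the parameter
# count is fixed by the module, not by how many times it is tiled.

rng = np.random.default_rng(0)

patch_size = 16          # input each "column" sees
column_units = 32        # units inside one column
n_columns = 100          # how many times the same column is tiled

# A single set of weights defines the column and is reused everywhere.
W_column = rng.normal(size=(column_units, patch_size))

x = rng.normal(size=(n_columns, patch_size))   # one input patch per column
h = np.maximum(0, x @ W_column.T)              # the same circuit at every location

print(h.shape)                                 # (100, 32)
print("shared parameters:", W_column.size)     # 512, independent of n_columns
```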