The post above presents a different way of thinking about AI progress than the way I thought about it until very recently. My strong hunch was that progress in AI would trickle down from neuroscience: a pipeline from basic natural science to engineering application. I’ll name this the pipeline model. But then Yann LeCun showed in his talk that in many historical examples a technology was developed before its underlying scientific principles were explicitly understood (the steam engine, for instance, preceded thermodynamics).
The big question for AI progress is: where does the knowledge of intelligence come from? With physics, since the whole world is physical, there is ample room for observation, experimentation, exploration, and discovery in all parts of life. You can discover things by accident or through cultural evolution while crafting weapons, or you can deliberately roll bowling balls down ramps to see what happens.
With biology, yes, life is all around us, but while everything is physical (or everything empirical is physical), some things are living and some things are non-living. Only interaction specifically with biological things can yield biological knowledge. You won’t get it from bowling balls.
With the cognitive sciences, an even narrower subset of things in the world exhibit intelligence. Especially if by “intelligence” we specifically mean the kind of intelligence found in birds and mammals that enables an animal to invent or discover a new behaviour that solves a problem. (I don’t know a good, short name for this. Maybe “originative intelligence” would be a good name, if one doesn’t already exist.*) The subset is even smaller if we mean the kind of intelligence found in humans that we call “general intelligence”.
A possible model is that AI progress will come purely from engineering machine learning systems, and not at all from reverse engineering the brain. The “signal” that transmits the knowledge of intelligence to human engineers will essentially be trial and error. AI systems will get progressively better through a design process or evolutionary process (like how biological intelligent systems evolved in the first place). The trial-and-error model is the opposite of the pipeline model. In the pipeline model, all progress comes from neuroscience. In the trial-and-error model, no progress does.
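The trial-and-error model can be caricatured in a few lines of code. The sketch below is purely illustrative (the task, the `fitness` function, and all parameters are made up): the “designer” has no theory of the problem at all, it just blindly mutates a candidate and keeps whatever scores better, the way an evolutionary process would.

```python
import random

def fitness(params):
    # Toy task: the "environment" rewards parameters close to a hidden target.
    # The loop below never inspects this target; it only sees scores.
    target = [0.3, -1.2, 0.8]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def evolve(generations=2000, seed=0):
    rng = random.Random(seed)
    best = [rng.uniform(-2, 2) for _ in range(3)]
    for _ in range(generations):
        # Blind variation plus selection: mutate, keep the mutant
        # only if it scores better than the current best.
        candidate = [p + rng.gauss(0, 0.1) for p in best]
        if fitness(candidate) > fitness(best):
            best = candidate
    return best
```

The point of the sketch is that knowledge of the task ends up embodied in `best` without anyone ever articulating a principle, which is exactly the sense in which the trial-and-error model says AI could advance with no input from neuroscience.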
The model of AI progress I described in my previous post I’ll name the loop model. Rather than a pipeline running one way from neuroscience to AI, the loop model treats engineering work in AI as experimental neuroscience that feeds back into the discipline of neuroscience. A lot of engineering work in deep learning is inspired by neuroscience, so effectively neuroscience feeds ideas to AI, AI tests them, and the results of these tests feed back into neuroscience. Conversely, original, endogenous ideas in AI inspire reverse engineering work in neuroscience. It’s a two-way loop in which ideas and experimental results flow both ways.
*What we are currently trying to develop in AI is mostly non-originative intelligence, like you find in reptiles. The training process is originative, analogous to the evolutionary process for reptiles. But a robot or a virtual agent in the wild, like a reptile in the wild, doesn’t discover or invent new behaviours to solve previously unsolved problems. It simply carries out the behaviours it has already learned in training.
Continual learning/lifelong learning in AI is an attempt to endow AI systems with originative intelligence. AI researchers want to take the training process and allow it to happen in real time in an individual AI (or perhaps in a set of AIs sharing information). Continual learning is analogous to the evolutionary leap that occurred in the brains of birds, mammals, and especially humans. The information flow of the evolutionary process got transported into the brain, and started running in real time rather than in multi-generational time.
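The frozen-versus-continual distinction can be made concrete with a toy sketch. This is not a real continual-learning method, and the `Learner` class and its numbers are invented for illustration: an agent is “trained” on a world centred at 1.0, the world then drifts to 5.0, and we compare an agent whose weights are frozen at deployment against one that keeps updating in real time.

```python
import random

class Learner:
    """Tracks a scalar signal with a running estimate."""
    def __init__(self, lr=0.1):
        self.estimate = 0.0
        self.lr = lr

    def update(self, observation):
        # Online update: nudge the estimate toward each new observation.
        self.estimate += self.lr * (observation - self.estimate)

def deploy(continual, seed=0):
    rng = random.Random(seed)
    agent = Learner()
    # "Training": the world produces values around 1.0.
    for _ in range(200):
        agent.update(1.0 + rng.gauss(0, 0.1))
    # "Deployment": the world drifts; values are now around 5.0.
    errors = []
    for _ in range(200):
        obs = 5.0 + rng.gauss(0, 0.1)
        errors.append(abs(agent.estimate - obs))
        if continual:
            agent.update(obs)  # the continual learner keeps adapting
    return sum(errors) / len(errors)

frozen_err = deploy(continual=False)
continual_err = deploy(continual=True)
```

The frozen agent, like the trained-then-deployed reptile-level robot above, keeps executing what it learned and accumulates large errors after the drift; the continual learner adapts within a few dozen steps. Whether that adaptation is worth its cost is exactly the question raised below.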
Just because it was a leap forward for biology doesn’t necessarily mean it will be for AI, though. For example, if a robot is trained for 100,000 years in simulation and then deployed in the real world, the continual learning that occurs in the real world will be much slower than what occurred in simulation. This is the reverse of what happened with biological evolution and originative learning in animals. For AI, unlike animals, “evolution” might be much faster than real-time originative learning.