Numenta: Artificial neural networks should be sparser, like the neocortex

Fascinating podcast. Numenta has found that giving artificial neural networks sparser activations and sparser weights lets the networks retain their accuracy while becoming more robust to random noise in their inputs. This holds for MNIST, CIFAR-10, and the Google Speech Commands Dataset, and across three different neural network architectures.
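To make the idea concrete, here is a minimal sketch (not Numenta's actual implementation) of the two kinds of sparsity mentioned: keeping only the top-k activations in a layer, and zeroing out a random fraction of the weights. The function names and parameters are illustrative assumptions; only NumPy is assumed.

```python
import numpy as np

def k_winners(x, k):
    """Sparse activations: keep only the k largest values, zero the rest."""
    out = np.zeros_like(x)
    top = np.argsort(x)[-k:]  # indices of the k largest activations
    out[top] = x[top]
    return out

def sparsify_weights(w, fraction, rng):
    """Sparse weights: randomly zero out roughly `fraction` of the entries."""
    mask = rng.random(w.shape) >= fraction
    return w * mask

# Example: a dense activation vector reduced to 2 active units
x = np.array([0.1, 0.9, 0.3, 0.7])
sparse_x = k_winners(x, k=2)  # only the 0.9 and 0.7 entries survive

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))
sparse_w = sparsify_weights(w, fraction=0.5, rng=rng)
```

The intuition for noise robustness: with only a few units active, a small random perturbation of the input rarely changes which units win, so the downstream representation stays stable.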

Audio version:

Video version:

The Numenta paper referenced in the podcast: