Fascinating podcast. Numenta has found that by giving artificial neural networks sparser activations and sparser weights, the networks retain their accuracy and become more robust to random noise in their inputs. This holds for MNIST, CIFAR-10, and the Google Speech Commands dataset, and across three different neural network architectures.
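For a rough feel of the idea (a toy NumPy sketch of my own, not Numenta's implementation — the function names and the density/k values here are illustrative assumptions): sparse weights can be imposed with a fixed random mask, and sparse activations with a k-winners-take-all rule that keeps only the strongest units per sample.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_weights(shape, density, rng):
    """Dense random weights zeroed out by a fixed random mask,
    so only about `density` of the entries are nonzero."""
    w = rng.standard_normal(shape)
    mask = rng.random(shape) < density
    return w * mask

def k_winners(x, k):
    """Keep the k largest activations in each row and zero the rest
    (a simple k-winners-take-all sparse activation)."""
    out = np.zeros_like(x)
    top = np.argpartition(x, -k, axis=1)[:, -k:]
    rows = np.arange(x.shape[0])[:, None]
    out[rows, top] = x[rows, top]
    return out

# One sparse layer: sparse weights, then ReLU, then sparse activations.
w = sparse_weights((784, 128), density=0.3, rng=rng)
x = rng.standard_normal((4, 784))
h = k_winners(np.maximum(x @ w, 0.0), k=16)
```

After this layer, each of the 4 samples has at most 16 nonzero activations out of 128 units, regardless of the input — which is the kind of enforced sparsity the paper studies.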
The Numenta paper referenced in the podcast: