Scaling of information in large neural populations reveals signatures of information-limiting correlations

Nov 1, 2021

Author(s)

MohammadMehdi Kafashan, Anna Jaffe, Selmaan N. Chettih, Ramon Nogueira, Iñigo Arandia-Romero, Christopher D. Harvey, Ruben Moreno-Bote

Uploader

Jan Drugowitsch

How is information distributed across large neuronal populations within a given brain area? One possibility is that information is distributed roughly evenly across neurons, so that total information scales linearly with the number of recorded neurons. Alternatively, the neural code might be highly redundant, meaning that total information saturates. Here we investigated how information about the direction of a moving visual stimulus is distributed across hundreds of simultaneously recorded neurons in mouse primary visual cortex (V1). We found that information scales sublinearly, due to the presence of correlated noise in these populations. Using recent theoretical advances, we compartmentalized noise correlations into information-limiting and nonlimiting components, and then extrapolated to predict how information grows in even larger neural populations. We predict that tens of thousands of neurons are required to encode 95% of the information about visual stimulus direction, a number much smaller than the number of neurons in V1. Overall, these findings suggest that the brain uses a widely distributed, but nonetheless redundant, code that allows most information to be recovered from smaller subpopulations.
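The extrapolation described above rests on the standard saturating scaling of linear Fisher information in the presence of information-limiting (differential) correlations, I(N) = I0·N / (1 + ε·I0·N), which grows roughly linearly for small N and approaches the asymptote 1/ε set by the information-limiting component. A minimal sketch of this logic is below; the parameter values i0 and eps are purely hypothetical illustrations, not fitted values from the study.

```python
import numpy as np

def fisher_information(n_neurons, i0, eps):
    """Saturating scaling I(N) = i0*N / (1 + eps*i0*N).

    i0  : information contributed per neuron in a small, effectively
          uncorrelated population
    eps : strength of information-limiting correlations; the asymptotic
          information for an infinite population is 1/eps
    """
    return i0 * n_neurons / (1.0 + eps * i0 * n_neurons)

def neurons_for_fraction(frac, i0, eps):
    """Population size N at which I(N) reaches `frac` of the asymptote 1/eps.

    Solving frac/eps = i0*N / (1 + eps*i0*N) gives
    N = frac / (eps * i0 * (1 - frac)).
    """
    return frac / (eps * i0 * (1.0 - frac))

# Hypothetical parameters, chosen only to illustrate the shape of the curve.
i0 = 1.0       # per-neuron information at small N
eps = 5e-4     # information-limiting correlation strength (asymptote = 2000)

print(f"Asymptotic information: {1 / eps:.0f}")
print(f"I(100)  = {fisher_information(100, i0, eps):.1f}")
print(f"I(1000) = {fisher_information(1000, i0, eps):.1f}")
print(f"Neurons needed for 95% of asymptote: {neurons_for_fraction(0.95, i0, eps):.0f}")
```

With these illustrative numbers the 95% threshold falls in the tens of thousands of neurons, matching the qualitative claim in the abstract: information keeps growing with population size but with strongly diminishing returns once the information-limiting component dominates.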
