Breakthrough Study Reveals How the Brain Optimises Neural Coding

A detailed drawing of various neurons on paper, with some connected by thin lines and accompanying text.
Jeffrey Morgan

A new study has introduced a groundbreaking approach to understanding neural coding. Instead of focusing on direct neural activity, researchers have shifted their attention to representational similarity, revealing a wide range of neural coding problems that can be cast and solved as convex optimisation problems, a class for which efficient algorithms and global-optimality guarantees exist.

The work, led by a team at University College London, offers fresh insights into how the brain processes information, particularly in visual systems. It also provides neuroscientists and AI researchers with a more powerful analytical toolkit than previously available.

The research demonstrates that many neural coding problems, from linear to complex non-linear networks, can be framed as convex optimisation tasks. By concentrating on representational dot-product similarity matrices, the team uncovered a surprisingly large family of problems that become mathematically tractable.
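The article does not give the study's actual objective functions, but the flavour of a convex problem posed over dot-product similarity matrices can be sketched in a few lines. The example below is a hypothetical NumPy illustration, not the authors' method: it projects a target similarity (Gram) matrix `T` onto the positive-semidefinite cone, i.e. it solves the convex problem min ||G - T||_F subject to G being PSD, which has a closed-form solution via eigenvalue clipping.

```python
import numpy as np

def nearest_psd_gram(T):
    """Project a target similarity matrix T onto the PSD cone.

    Minimising ||G - T||_F over PSD matrices G is a convex problem
    with a closed-form solution: symmetrise T, eigendecompose, and
    clip negative eigenvalues to zero.
    """
    S = (T + T.T) / 2.0
    vals, vecs = np.linalg.eigh(S)
    return vecs @ np.diag(np.clip(vals, 0.0, None)) @ vecs.T

# Hypothetical target similarity matrix (illustrative, not from the study)
T = np.array([[ 1.0,  0.9, -0.5],
              [ 0.9,  1.0,  0.2],
              [-0.5,  0.2,  1.0]])
G = nearest_psd_gram(T)
print(np.linalg.eigvalsh(G).min())  # no eigenvalue below ~0: G is a valid Gram matrix
```

Any valid Gram matrix of a neural code is PSD, which is why formulating objectives over similarity matrices keeps the feasible set convex.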

One key finding explains why dense retinal codes and sparse cortical codes efficiently split a single variable into ON and OFF channels. This division leverages the nonlinearity that arises naturally in the convex formulation, offering a clearer picture of how visual information is processed.

Under specific conditions, the study confirms that neural tuning curves are uniquely tied to optimal representational similarity, validating the long-standing neuroscience practice of analysing individual neuron responses. Additionally, the framework provides the first complete set of conditions for solving semi-nonnegative matrix factorisation, a critical step in understanding modularity and mixed selectivity in the entorhinal cortex.

This methodological shift allows researchers to study complex models with greater ease and interpretability. Unlike earlier approaches, which often relied on overly simplistic or computationally heavy models, the method balances mathematical clarity with biological realism.
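The ON/OFF split can be illustrated with a standard rectification construction (a textbook sketch, not necessarily the study's exact model): a signed variable is carried by two nonnegative channels whose difference recovers the original signal, mirroring how neurons can only fire at nonnegative rates.

```python
import numpy as np

def on_off_split(x):
    """Split a signed signal into nonnegative ON and OFF channels.

    ON carries the positive part, OFF the (rectified) negative part;
    their difference reconstructs the original signal exactly.
    """
    on = np.maximum(x, 0.0)    # responds when x > 0
    off = np.maximum(-x, 0.0)  # responds when x < 0
    return on, off

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])  # hypothetical signed stimulus values
on, off = on_off_split(x)
print(on - off)  # recovers the original signal x
```

At every point at most one channel is active, which is one way a single variable can be carried by two nonnegative, sparsely active populations.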

The findings present a versatile new tool for both neuroscientists and AI specialists. By optimising representational similarity, the framework bridges gaps between different coding schemes and deepens understanding of neural processes.

The study’s conditions for semi-nonnegative matrix factorisation also open doors for further exploration of brain modularity. Its impact is expected to extend across neuroscience and artificial intelligence research.
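The article does not spell out the study's optimality conditions, but the underlying problem is standard: semi-nonnegative matrix factorisation approximates a mixed-sign data matrix `X` by `F @ G.T`, with only `G` constrained to be nonnegative. The sketch below uses the classic multiplicative updates of Ding, Li and Jordan (2010) as a generic solver; it illustrates the factorisation itself, not the authors' new conditions, and the data matrix is hypothetical.

```python
import numpy as np

def semi_nmf(X, k, iters=200, seed=0):
    """Semi-NMF: X ~ F @ G.T with G >= 0 and F unconstrained.

    Alternates a least-squares update for F with the multiplicative
    update for G from Ding, Li & Jordan (2010).
    """
    rng = np.random.default_rng(seed)
    G = rng.random((X.shape[1], k)) + 0.1   # positive init keeps G >= 0
    pos = lambda A: (np.abs(A) + A) / 2.0   # elementwise positive part
    neg = lambda A: (np.abs(A) - A) / 2.0   # elementwise negative part
    for _ in range(iters):
        F = X @ G @ np.linalg.pinv(G.T @ G)  # unconstrained least-squares step
        XtF, FtF = X.T @ F, F.T @ F
        G *= np.sqrt((pos(XtF) + G @ neg(FtF)) /
                     (neg(XtF) + G @ pos(FtF) + 1e-12))
    return F, G

# Tiny hypothetical mixed-sign data matrix
X = np.array([[ 1.0, -2.0,  0.5],
              [ 0.3,  1.5, -1.0]])
F, G = semi_nmf(X, k=2)
print(np.linalg.norm(X - F @ G.T))  # reconstruction error
```

Because `X` may contain negative entries (like signed neural data) while `G` stays nonnegative, semi-NMF is a natural lens for modularity and mixed selectivity, which is why complete solvability conditions for it matter.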