Marc W Howard (npub12jg…0d87)
2024-01-17 01:57:03
in reply to nevent1q…04nm


npub1qtkdu9s8mnygvkcvycw0wtmp85t95v25v8eyavtkx2q0nhnuxmnq9txqrw
Still reading the paper, so take this with a grain of salt. But my summary of the paper (and the available literature) is that populations of neurons tile continuous dimensions smoothly. Longer explanation of this reasoning follows.

Suppose you can order the neurons in your sample so that the covariance matrix is diagonal with a little blur on the off-diagonal terms. For instance, suppose you have a set of place cells on a linear track and you index them by the location of their place fields. In this case, the number of meaningful principal components goes up linearly with the number of neurons you record from, which is the claim in this paper.
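
To make that concrete, here is a minimal sketch in Python (not the paper's analysis; the field width, position grid, and the 90%-variance criterion are assumptions I chose for illustration):

import numpy as np

def place_cell_rates(n_cells, n_positions=2000, width=0.005):
    """Rows index positions on a unit-length track, columns index cells."""
    x = np.linspace(0, 1, n_positions)        # positions along the track
    centers = np.linspace(0, 1, n_cells)      # tile the track evenly
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

for n in (25, 50, 100):
    R = place_cell_rates(n)
    C = np.cov(R, rowvar=False)               # cells x cells covariance
    evals = np.linalg.eigvalsh(C)[::-1]       # descending eigenvalues
    k = int(np.searchsorted(np.cumsum(evals) / evals.sum(), 0.9)) + 1
    print(f"{n} cells -> {k} components for 90% of the variance")

The component count tracks the number of cells recorded, even though the underlying variable is one-dimensional.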

In this case, there *is* a meaningful low-dimensional manifold, but you wouldn't find it with PCA or linear dimensionality reduction, because the covariance matrix is full rank! By construction, the neurons tile a continuous dimension, here position along the linear track.
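
Continuing the sketch above (the window size is my own choice), the same data are locally one-dimensional, as you'd expect for a smooth curve \(r(x)\) traced out by a single latent variable:

R = place_cell_rates(100)                # positions x cells, from the sketch above
mid = R.shape[0] // 2
local = R[mid - 4 : mid + 4]             # under one field width of track
local = local - local.mean(axis=0)       # center the local cloud of states
s = np.linalg.svd(local, compute_uv=False)
print("normalized local singular values:", np.round(s[:4] / s[0], 3))

The leading singular value dominates: over a short stretch of track the population vectors line up along a single direction, even though the global covariance is full rank.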

So what about the experiments where you *do* get something that looks sensibly "low dimensional" using linear dimensionality reduction techniques (Murray et al., 2017, PNAS is a really nice one)? Does this result contradict those findings? No.

Rather than using radial basis functions (like place fields) to tile the position axis, suppose we had exponential basis functions, \(e^{-sx}\), with a continuum of \(s\) values. In this case you still have a full-rank covariance matrix, so it's perfectly consistent with the result in this paper. But the first component is monotonic in \(s\), so the "low dimensional" projection looks sensible. (Full disclosure: we're writing this up.)
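
Here's the same kind of sketch for the exponential case (the range and spacing of \(s\) are assumptions):

import numpy as np

x = np.linspace(0, 1, 2000)                  # positions
s_vals = np.geomspace(1, 100, 100)           # one cell per s value
R = np.exp(-np.outer(x, s_vals))             # positions x cells

C = np.cov(R, rowvar=False)
evals, evecs = np.linalg.eigh(C)
evals, evecs = evals[::-1], evecs[:, ::-1]   # sort descending

print(f"variance in PC1: {evals[0] / evals.sum():.2f}")
pc1 = (R - R.mean(axis=0)) @ evecs[:, 0]     # PC1 score at each position
mono = np.all(np.diff(pc1) > 0) or np.all(np.diff(pc1) < 0)
print("PC1 projection monotonic in position:", bool(mono))

Distinct exponentials are linearly independent, so the covariance is full rank in principle (numerically it is very ill-conditioned), yet the top component soaks up most of the variance and its projection is monotonic in position, so the linear readout looks sensible.
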
Author Public Key
npub12jgnul3990mkty0myc22jeq7sv5ker6jqyldg4vsmwyu2wddxq7sfx0d87