

Questions from TK 2020-04-01

The calculations are repeated for 8 random sets for each sub-sample size.

My current understanding is that the mean $m_i$ and correlation $C_{i,j}$ are more universal (over different samplings of the system) than the inferred $h_i$ and $J_{i,j}$ in determining the thermodynamic behavior of the system.

Can you show evidence and analysis to support this claim? Also, how does the average $m$ for your sub-samples compare to that of the original sample? Do we see this effect in all the data sets we got from them? Those data sets have different numbers of ROIs. Is the assumption proven? — This would establish that even though we may not have single-neuron resolution as Bialek does, the data is not sensitive to it.

This comes from sub-sampling at different sizes: we get similar $m_i$ and $C_{i,j}$ distributions but very different $h_i$ and $J_{i,j}$ distributions.
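The sub-sampling comparison above can be sketched as follows. This is a minimal illustration with a synthetic stand-in raster (`spikes` is random data here; a real binary spike matrix of ROIs × time bins would replace it), not the actual analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a binary spike raster: rows = ROIs, columns = time bins.
spikes = rng.random((128, 5000)) < 0.05

def moments(raster):
    """Mean activity m_i and connected correlations C_ij of a 0/1 raster."""
    s = raster.astype(float)
    m = s.mean(axis=1)
    C = np.cov(s)            # C_ij = <s_i s_j> - <s_i><s_j>
    return m, C

# Draw random sub-samples of different sizes and compare moment distributions.
for size in (32, 64):
    idx = rng.choice(spikes.shape[0], size=size, replace=False)
    m, C = moments(spikes[idx])
    off_diag = C[np.triu_indices(size, k=1)]
    print(size, round(m.mean(), 4), round(off_diag.mean(), 6))
```

In practice one would repeat this for the 8 random sets per sub-sample size mentioned above and compare the resulting $m_i$ and $C_{i,j}$ histograms against those of the full sample.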

The PCA in your Fig. 1 shows good conservation across different sizes for both cases, although the two cases have different slopes. So could these two mice be in different states?

However, whether the distributions of $m_i$ and $C_{i,j}$ are sufficient, or whether the network topology also plays an important role, still needs to be checked. Shuffling or re-sampling $m_i$ and $C_{i,j}$ from the observed distributions is one way of checking this. This is tricky, however, since the result may not be a valid combination, i.e. one that can be generated by some distribution over system configurations. This issue is mentioned in http://arxiv.org/abs/0912.5409; they coped with it by checking the consistency of all marginal distributions of spin pairs and redrawing from the distributions whenever any pair marginal was invalid. I have improved their method so that redrawing is not necessary and all pairwise marginal distributions are valid by construction. However, this does not guarantee that triplet or higher-order marginals are all valid, so the shuffled $m_i$ and $C_{i,j}$ can still be unphysical. This makes me wonder if this is one of the reasons why that paper was not published. We also need to think of different ways of perturbing the system to find the minimal criteria for when the collective properties of the system are preserved.
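The pairwise validity check described above has a simple closed form for 0/1 spins: given $m_i$, $m_j$ and the connected correlation $C_{i,j}$, the implied pair marginal is $p_{11} = C_{i,j} + m_i m_j$, $p_{10} = m_i - p_{11}$, $p_{01} = m_j - p_{11}$, $p_{00} = 1 - m_i - m_j + p_{11}$, and the combination is valid only if all four are non-negative. A sketch of this check (function name is mine, not from the paper):

```python
import numpy as np

def pair_marginals_valid(m, C):
    """Check that every pairwise marginal implied by (m_i, C_ij) is a
    proper distribution for 0/1 spins.  C holds connected correlations."""
    m = np.asarray(m, float)
    p11 = C + np.outer(m, m)            # <s_i s_j>
    p10 = m[:, None] - p11
    p01 = m[None, :] - p11
    p00 = 1.0 - m[:, None] - m[None, :] + p11
    mask = ~np.eye(len(m), dtype=bool)  # only off-diagonal pairs matter
    return all(np.all(p[mask] >= -1e-12) for p in (p11, p10, p01, p00))

# Toy example: a mild correlation is fine, an overly large one is unphysical.
m = np.array([0.2, 0.3])
C_ok = np.array([[0.0, 0.01], [0.01, 0.0]])
C_bad = np.array([[0.0, 0.5], [0.5, 0.0]])   # forces p10 < 0
print(pair_marginals_valid(m, C_ok))   # True
print(pair_marginals_valid(m, C_bad))  # False
```

A shuffling scheme that only ever proposes combinations passing this test never needs to redraw at the pair level, which is the improvement described above; triplet and higher-order consistency remains unchecked.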

What do you mean by marginal distribution? I thought you just arbitrarily selected 64 or 32 neurons but kept the experimental data of $C_{i,j}$ and $m_i$. I thought that is what Kay did by cutting off the first 8 neurons or the last 4 neurons.

In the paper you quoted, they enlarge the system to 120 neurons, as quoted on page 10: “We thus generated several synthetic networks of 120 neurons by randomly choosing once more out of the distribution of $h_i$ and $C_{ij}$ observed experimentally.” — Have you done this to study a larger system?
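The enlargement procedure quoted above amounts to resampling, with replacement, from the empirically observed values. A rough sketch of how that could look (the observed `h_obs` and `C_obs` here are synthetic placeholders, and this ignores the validity issue discussed earlier):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical observed parameters from a small recording (stand-in values).
h_obs = rng.normal(-1.0, 0.5, size=40)
C_obs = rng.normal(0.0, 1e-3, size=(40, 40))

def synthetic_network(h_obs, C_obs, n_new, rng):
    """Build parameters for an enlarged network by redrawing, with
    replacement, from the observed h_i and off-diagonal C_ij values."""
    h_new = rng.choice(h_obs, size=n_new, replace=True)
    pool = C_obs[np.triu_indices_from(C_obs, k=1)]
    C_new = np.zeros((n_new, n_new))
    iu = np.triu_indices(n_new, 1)
    C_new[iu] = rng.choice(pool, size=len(iu[0]), replace=True)
    C_new += C_new.T                     # symmetric, zero diagonal
    return h_new, C_new

h120, C120 = synthetic_network(h_obs, C_obs, 120, rng)
print(h120.shape, C120.shape)
```

This discards any correlation between a neuron's $h_i$ and its row of $C_{ij}$, which is exactly the kind of network structure the shuffling tests are probing.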

We can still try to follow our previous method: keep the rank order and shuffle $C_{i,j}$ only within a small interval, and similarly for $m_i$. If we increase the size of the interval, we eventually arrive at a complete reshuffle. Then we can check how sensitive our result is to the network.
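One way to realize this interpolation between "no shuffle" and "full shuffle" is to sort the values and permute them only within blocks of consecutive ranks; the block width is the interval size. A sketch (my own implementation of the idea, applied to any flattened set of values such as the off-diagonal $C_{i,j}$ or the $m_i$):

```python
import numpy as np

rng = np.random.default_rng(1)

def windowed_shuffle(values, width, rng):
    """Shuffle a 1-D array only within blocks of `width` consecutive ranks.
    width=1 leaves the data unchanged; width=len(values) is a full shuffle."""
    v = np.asarray(values, float)
    order = np.argsort(v)                 # positions sorted by value
    shuffled = v[order].copy()            # values in ascending order
    for start in range(0, len(v), width):
        block = shuffled[start:start + width]
        rng.shuffle(block)                # permute values inside one block
    out = np.empty_like(v)
    out[order] = shuffled                 # return values to original positions
    return out

x = rng.normal(size=10)
print(windowed_shuffle(x, 1, rng))   # identical to x
print(windowed_shuffle(x, 4, rng))   # locally perturbed, ranks nearly kept
```

Sweeping `width` from 1 up to the full length then traces out how the collective properties degrade as more and more of the network structure is destroyed.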

In this Bialek paper for 40 neurons, a statement is different from what I saw before: when the fields $h_i$ and interactions $J_{ij}$ are randomized, the distribution of mean spike probabilities $\sigma_i$ changes dramatically, and as a result everything else about the network also changes (heat capacity, entropy, …).
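For a system small enough to enumerate exactly, this sensitivity is easy to see: shuffling $h_i$ and the off-diagonal $J_{ij}$ independently changes the exact $\langle s_i \rangle$. A toy demonstration with 8 spins and made-up parameters (not the 40-neuron data):

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
N = 8
h = rng.normal(-1.0, 0.5, size=N)
J = rng.normal(0.0, 0.3, size=(N, N))
J = np.triu(J, 1) + np.triu(J, 1).T      # symmetric, zero diagonal

def mean_spins(h, J):
    """Exact <s_i> for a small 0/1 Ising model
    P(s) ∝ exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j)."""
    states = np.array(list(itertools.product([0, 1], repeat=len(h))), float)
    E = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
    p = np.exp(E - E.max())
    p /= p.sum()
    return p @ states

m_orig = mean_spins(h, J)
# Randomize: permute h and the off-diagonal J entries independently.
h_shuf = rng.permutation(h)
iu = np.triu_indices(N, 1)
J_shuf = np.zeros_like(J)
J_shuf[iu] = rng.permutation(J[iu])
J_shuf += J_shuf.T
m_shuf = mean_spins(h_shuf, J_shuf)
print(np.round(m_orig, 3))
print(np.round(m_shuf, 3))
```

The randomized model preserves the marginal distributions of $h_i$ and $J_{ij}$ exactly, yet the resulting $\langle s_i \rangle$ values differ, which is consistent with the quoted statement that everything downstream (heat capacity, entropy) changes too.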

This is the same as our result: the distributions alone are not good enough! But I thought the 78-neuron paper only cared about the distributions, not the network. I cannot find where I saw this, but I should have mentioned it to you. Do you know where to find that statement?

TK